Fernando Laudares Camargos: PostgreSQL Monitoring: Set Up an Enterprise-Grade Server (and Sign Up for Webinar Weds 10/10…)
PostgreSQL Monitoring

This is the last post in our series on building an enterprise-grade PostgreSQL setup using open source tools, and we’ll be covering monitoring.

The previous posts in this series discussed aspects such as security, backup strategy, high availability, connection pooling and load balancing, extensions, and detailed logging in PostgreSQL. Tomorrow, Wednesday, October 10 at 10AM EST, we will be reviewing these topics together and showcasing them in practice in a webinar format: we hope you can join us!

Register Now

 

Monitoring databases

The importance of monitoring the activity and health of production systems is unquestionable. When it comes to the database, with its high number of customizable settings, the ability to track its various metrics (status counters and gauges) allows for the maintenance of a historical record of its performance over time. This can be used for capacity planning, troubleshooting, and validation.
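
To make those counters concrete, here is a minimal sketch (Python with the psycopg2 driver; the DSN and the 60-second interval are placeholders for your own setup) that samples a few cumulative counters from the pg_stat_database view and turns them into per-interval figures, which is essentially what a monitoring agent does on a schedule:

    # Minimal sketch: sample cumulative counters from pg_stat_database twice
    # and compute per-interval deltas. DSN and interval are placeholders.
    import time
    import psycopg2

    QUERY = """
        SELECT xact_commit, xact_rollback, blks_read, blks_hit
        FROM pg_stat_database
        WHERE datname = current_database();
    """

    def sample(conn):
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchone()

    conn = psycopg2.connect("dbname=postgres")  # adjust for your environment
    before = sample(conn)
    time.sleep(60)                              # sampling interval
    after = sample(conn)
    conn.close()

    # The counters only ever grow, so the deltas are the useful signal.
    commits = after[0] - before[0]
    rollbacks = after[1] - before[1]
    reads, hits = after[2] - before[2], after[3] - before[3]
    hit_ratio = hits / max(reads + hits, 1)
    print(f"commits={commits} rollbacks={rollbacks} cache_hit={hit_ratio:.2%}")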

When it comes to capacity planning, a monitoring solution helps you assess how the current setup is faring and, at the same time, predict future needs based on trends, such as the increase of active connections, queries, and CPU usage. For example, an increase in CPU usage might be due to a genuine increase in workload, but it could also be a sign of unoptimized queries growing in popularity. In that case, comparing CPU usage with disk access might provide a more complete view of what is going on.

Being able to easily correlate data like this helps you to catch minor issues and to plan accordingly, sometimes allowing you to avoid the easier but more costly solution of scaling up to mitigate such problems. And having the right monitoring solution is truly invaluable when it comes to investigative work and root cause analysis: trying to understand a problem that has already taken place is a complicated, and often unenviable, task unless you have kept a continuous, watchful eye on the setup the whole time.

Finally, a monitoring solution can help you validate changes made to the business logic in general or to the database configuration in particular. By comparing results from before and after for a given metric, or for overall performance, you can observe the impact of such changes in practice.

Monitoring PostgreSQL with open source solutions

There are a number of monitoring solutions for PostgreSQL, and postgresql.org’s Wiki provides an extensive, albeit slightly outdated, list. It categorizes the main monitoring solutions into two distinct groups: those that can be identified as generic solutions—and can be extended to cover different technologies through custom plugins—and those labeled as Postgres-centric, which are specific to PostgreSQL.

In the first group, we find venerated open source monitoring tools such as Munin, Zabbix, and Cacti. Nagios could have also been added to this group, but it was instead indirectly included in the “Checkers” group. That category includes monitoring scripts that can be used both in stand-alone mode and as feeders (plugins) for “Nagios-like software”. Examples of these are check_pgactivity and check_postgres.

One omission from this list is Grafana, a modern time series analytics platform conceived to display metrics from a number of different data sources; it supports PostgreSQL through a native data source plugin. Percona has built its Percona Monitoring and Management (PMM) platform around Grafana, using Prometheus as its data source. Since version 1.14.0, PMM supports PostgreSQL. Query Analytics (QAN) integration is coming soon.

An important factor that all these generic solutions have in common is that they are widely used for the monitoring of a diverse collection of services, like you’d normally find in enterprise-like environments. It’s common for a given company to adopt one, or sometimes two, such solutions with the aim of monitoring their entire infrastructure. This infrastructure often includes a heterogeneous combination of databases and application servers.

Nevertheless, there is a place for complementary Postgres-centric monitoring solutions in such enterprise environments too. These solutions are usually implemented with a specific goal in mind. Two examples we can mention in this context are PGObserver, which has a focus on monitoring stored procedures, and pgCluu, with its focus on auditing.

Monitoring PostgreSQL with PMM

We built an enterprise-grade PostgreSQL setup for the webinar, and we use PMM for monitoring. We will be showcasing some of PMM’s main features, and highlighting some of the most important metrics to watch, during our demo. You may want to have a look at this demo setup to get a feel for how our PostgreSQL Overview dashboard looks:

You can find instructions on how to set up PMM for monitoring your PostgreSQL server in our documentation space. And if there’s still time, sign up for tomorrow’s webinar!

Register Now

 

The post PostgreSQL Monitoring: Set Up an Enterprise-Grade Server (and Sign Up for Webinar Weds 10/10…) appeared first on Percona Database Performance Blog.


CRON PLA Filament, 1kg, 1.75mm, Skin
Skin PLA Filament

Skin PLA filament for open source 3D printers. High quality, constant diameter, bright constant colours for accurate, repeatable prints. The filament comes on a plastic spool and is sealed in a plastic bag with desiccant and packaged in a box (sometimes..

Price: R299.00


CRON PLA Filament, 1kg, 1.75mm, Graphite
Graphite PLA Filament

Graphite PLA filament for open source 3D printers. High quality, constant diameter, bright constant colours for accurate, repeatable prints. The filament comes on a plastic spool and is sealed in a plastic bag with desiccant and packaged in a box (somet..

Price: R299.00


Admin/Presales Support Specialist at Virtual Nigeria

Virtual Nigeria is a Red Hat training partner. Red Hat is an American software company and is the world's most trusted provider of Linux and open source technology. We provide enterprises with an open stack of trusted, high-performing technologies and services made possible by the open source model. Currently the only certified training partner (CTP) in West Africa, we have an unwavering…


Business Development Representative at Virtual Nigeria

Virtual Nigeria is a Red Hat training partner. Red Hat is an American software company and is the world's most trusted provider of Linux and open source technology. We provide enterprises with an open stack of trusted, high-performing technologies and services made possible by the open source model. Currently the only certified training partner (CTP) in West Africa, we have an unwavering…


Open Space Makers opens its collaborative platform for space projects
The Open Space Makers platform lets users join project groups. DR. We had been waiting for it for weeks, if not months. The Open Space Makers association, a product of the Fédération initiative of the Centre National d'Etudes Spatiales, has just launched its collaborative platform dedicated to the development and democratization of open source space projects. Still in … mode... See the article
PDFCreator 3.3.0

PDFCreator is an open source program with which you can create PDF files from practically any Windows application. It can be integrated as a printer in Microsoft Word, StarCalc, or any other application. Supported output file formats: PDF, PNG, JPEG, BMP, PCX, TIFF, PS, EPS. Working as a printer driver, PDFCreator lets you create PDF files from any […]

The article PDFCreator 3.3.0 was first published on kaldata.com.


Comment on Shotcut, an excellent open source cross-platform video editor, by Juan
Hi, I'm having trouble changing the language; it comes up in English by default. I go to settings, change it to Spanish, and it asks me to restart; I restart, but it stays in English. I'm using elementary OS 0.4.1 Loki.
Comment on Shotcut, an excellent open source cross-platform video editor, by Aldo Castro
Yeah!! I've been using it for a little while now and it's really good. I tried it on Ubuntu and everything went fine, but on Windows I wasn't so lucky ;D
KDE Plasma 5.14 Released: What’s New In The Popular Linux Desktop

Plasma is one of the most popular Linux desktop environments around; it’s loved by new open source enthusiasts and veterans alike. To bring a fresh and updated experience to users, the KDE Project keeps releasing newer versions of the Plasma desktop. The latest Plasma release, 5.14.0, has just been pushed and […]

The post KDE Plasma 5.14 Released: What’s New In The Popular Linux Desktop appeared first on Fossbytes.


RE: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development? (no replies)
Thanks to everyone who made it to the call.
I'll be sharing the minutes and the action plan.

Also, I'll be reaching out to the community seeking volunteers on some of the action items and your feedback.

Excited to be part of this community and hoping to gain more participation going ahead.

Regards,
Ashwin Krishna
ashwin.krishna@betsol.com

-----Original Message-----
From: Ashwin Krishna
Sent: Monday, October 08, 2018 11:03 AM
To: 'amanda-users@amanda.org' <amanda-users@amanda.org>
Subject: RE: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development?

Gentle reminder: We have started the call. Meeting details are below.

We propose to have the call on Oct 8th at 11 AM Mountain Time.

Agenda:
* Zmanda's Acquisition by BETSOL
* Attendee Introductions
* Existing Governance Model of Amanda Community
* Suggested changes to the Governance Model
* BETSOL's Commitment to Open Source Community

We have taken a note of all the suggestions received on the mailing list and we will go through the same on the call.

Meeting Details:
Amanda Open Source Community Discussion Mon, Oct 8, 2018 11:00 AM - 12:00 PM MDT Please join my meeting from your computer, tablet or smartphone.
https://global.gotomeeting.com/join/438069045
You can also dial in using your phone.
United States: +1 (786) 535-3211
Access Code: 438-069-045
First GoToMeeting? Let's do a quick system check: https://link.gotomeeting.com/system-check

-----Original Message-----
From: Ashwin Krishna
Sent: Tuesday, October 02, 2018 10:35 AM
To: amanda-users@amanda.org
Subject: RE: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development?

Hi All,

We propose to have the call on Oct 8th at 11 AM Mountain Time.

Agenda:
* Zmanda's Acquisition by BETSOL
* Attendee Introductions
* Existing Governance Model of Amanda Community
* Suggested changes to the Governance Model
* BETSOL's Commitment to Open Source Community

We have taken a note of all the suggestions received on the mailing list and we will go through the same on the call.

Meeting Details:
Amanda Open Source Community Discussion Mon, Oct 8, 2018 11:00 AM - 12:00 PM MDT Please join my meeting from your computer, tablet or smartphone.
https://global.gotomeeting.com/join/438069045
You can also dial in using your phone.
United States: +1 (786) 535-3211
Access Code: 438-069-045
First GoToMeeting? Let's do a quick system check: https://link.gotomeeting.com/system-check

Regards,
Ashwin Krishna

-----Original Message-----
From: Nathan Stratton Treadway <nathanst@ontko.com>
Sent: Thursday, September 27, 2018 9:01 PM
To: Ashwin Krishna <ashwin.krishna@betsol.com>
Cc: amanda-users@amanda.org
Subject: Re: Zmanda acquired from Carbonite by BETSOL -- future of Amanda development?

Ashwin, thanks very much for getting in contact with the Amanda mailing list.

On Thu, Sep 27, 2018 at 06:16:02 +0000, Ashwin Krishna wrote:
> We are 100% committed to the open source community and will be
> contributing to the code base to the best of our abilities.
>
[...]
> I want to assure you that we are actively investing in growing Amanda
> and we have young enthusiastic engineers in the team.
>
> You can expect the next Amanda releases to include support for newer
> versions of operating systems, defect fixes, security enhancements
> etc.
>
[...]
> We have retained the team members that we could of previous Zmanda team.
> I can tell you that it's not easy without support from the community members.
> We encourage the community members to guide and contribute as much as you can.
> If you need commit access to the code base, please don't hesitate to reach out to us.
> You can expect our commitment and support to you.

On Thu, Sep 27, 2018 at 22:54:02 +0000, Ashwin Krishna wrote:
> We are planning to host a conference call and would like all the
> active admins and community members to join to have a discussion with
> the Zmanda team at BETSOL regarding future collaborations.
>
> Will be sending out the meeting details (US time) with the agenda later.

It sounds like getting the new BETSOL team in direct contact with the admins for the mailing list and other amanda.org-related resources is an important step at this point.


However, I would say that for many of us here on the list, the most notable change in the past 7 months is not related to those things (which have continued to chug along as before), but rather the lack of "a developer" to move things along here on the public lists and in the public source repo.

A decade or two ago it sounds like there were a number of developers involved, but more recently it's just been one or two Zmanda people who have served that role.

Obviously this could be a good time to reconsider this arrangement if there are in fact other people ready to jump in, but off hand I'm guessing that what's likely to work going forward is for there to be a small number of BETSOL developers back in that role.

As an Amanda user who has tried to contribute back a few improvements to the code line, I'm not really looking to have direct commit access myself, but rather hope to get back to someone (hanging out here on the mailing lists) who can take the patches I came up with hacking around on my own system and understand whether or not they will really work for everyone, and who will know which branches should have that change pushed onto them, and what tweaks are needed to make the patch apply to some older branch, etc.

So, here's hoping you all at BETSOL are soon able to identify someone/a few people to take over that function, and patches and discussions can start flowing again....

Nathan

p.s. Personally I'd say that, rather than a new major release with support for newer versions of operating systems and whatnot, more urgent would be a minor release to gather up the handful of bugfixes which have already been discussed since 3.5.1 came out and get them published as part of an official release....


----------------------------------------------------------------------------
Nathan Stratton Treadway - nathanst@ontko.com - Mid-Atlantic region
Ray Ontko & Co. - Software consulting services - http://www.ontko.com/
GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt ID: 1023D/ECFB6239
Key fingerprint = 6AD8 485E 20B9 5C71 231C 0C32 15F3 ADCD ECFB 6239
Python Online Training
Open Source Technologies has been one of the best in the market since its inception. Some of the main features of Open Source Technologies are excellent infrastructure, real-time assignments, and top-notch faculty. One of the best training courses provided by Open Source Technologies is Python Training.
The Design of NetNewsWire’s Timeline

I had some principles in mind for the timeline:

  • It should be appealing
  • It should include some graphics, to cut down on the wall of text
  • It should be easily scannable
  • It should scroll really, really fast

The last one of those — super-fast scrolling — is important. (High performance is a principle of the app in general.)

Luckily, I’ve written so many timelines over the years that I know how to make them fast. Even on my five-year-old MacBook Air — where I spend about 20% of my coding time — scrolling in the app is as fast as it could be.

So let’s set that aside.

The first one of these — “it should be appealing” — is also super-important. But, as a principle, it’s too general, and the answer is subjective.

So that leaves me with two things to think about: including graphics to cut down on the wall of text, and making it easily scannable. If I do a good job designing those two things, then hopefully the result is just naturally appealing — on the grounds that something well-designed for scannability and legibility tends to be appealing.

(Before I go any further: if you want to go get the app — it’s free and open source — you can.)

What Scannability Means

A timeline cell should have enough info that you can quickly figure out, when you glance at it, what it’s likely to be about, where it comes from, and how old it is.

It should be large enough to be able to include that much context, but not so large that you find you have to scroll constantly (even though scrolling is fast).

It should include the title — hopefully the entire title — and perhaps some of the text, for further context. It should include the feed name so you know where it comes from.

Each cell should be the same height as other cells, since this also helps scannability. (I tried a variable-height table early in the development process, and found that a regular height was better.)

Layout

After trying a bunch of different layouts — my design process often requires me to actually code a thing and try it (for better or worse) — I settled on the following layout:

Title
Title line 2 or first line of body
Date
Feed name

Examples:

The title is bold and dark, since it should stand out the most. The first line of body text, when displayed, is lighter in color and is regular text.

In the case of a title-less article, the layout is this:

Article text
Article text continued
Date
Feed name

Example:

For title-less articles, the article text weight is in between the title and body text: not as bold as the title, not as light as the body text. This tells you, at a glance, that you’re reading the first line of an article and not an actual title.

The date and feed name are set in smaller type. The date is bold, which sets it apart from the article title/text above it and the feed name below it.

All text is ellipsized when needed, which lets you know when you’re not seeing the whole thing.

I decided against horizontal grid lines in favor of vertical spacing. This means less visual noise in an area of the app that could easily be too noisy. And, well, it’s a news reader, not a spreadsheet, and using spacing instead of gridlines seemed more publication-like.

This all might seem obvious. I hope so! Of course it wasn’t obvious — but I like it if it seems that way.

Graphics

I knew I wanted graphics too, in part because they brighten up the UI and keep it from being too text-heavy (and thus hard to scan).

But graphics can serve another purpose: they can quickly tell you something about the article, which helps with scannability.

In NetNewsWire Lite 4.0 I added graphics to the timeline, so this wasn’t a new idea for me. For each article, it looked at the HTML text and found the first image, and assumed it was a featured image. It downloaded those images and made thumbnails to show in the timeline.
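
As a rough illustration of that heuristic, here is a stdlib-only Python sketch (not NetNewsWire’s actual code); the “find the first image” step amounts to something like this:

    # Hypothetical sketch of the Lite 4.0 heuristic: treat the first <img>
    # found in the article HTML as the featured image.
    from html.parser import HTMLParser

    class FirstImageFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.src = None

        def handle_starttag(self, tag, attrs):
            if tag == "img" and self.src is None:
                self.src = dict(attrs).get("src")

    def first_image(article_html):
        finder = FirstImageFinder()
        finder.feed(article_html)
        return finder.src  # may turn out to be a share icon or tracking pixel

    print(first_image('<p>Hi <img src="/photos/cat.jpg" alt=""> there</p>'))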

This looked pretty cool, brightened up the UI, and cut down on wall-of-text. But it had one major drawback: sometimes the results were a bit risible. Because it was making square thumbnails, it had to crop — which meant sometimes you’d get a nose but no eyes.

And sometimes in trying to find the featured image it would just be a social media sharing icon. Or an image of an emoji. Or a blank tracking image. There was no set of rules that was going to make this work correctly all the time.

And then of course there are plenty of feeds that rarely use images at all. This blog, for instance, or Daring Fireball.

In practice I found that those images — while attractive when they worked — didn’t add that much to scannability (except in the case of photo feeds, which are a minority of feeds, surely).

So, for NetNewsWire 5, I decided to go with feed icons. These are like big favicons, basically, and they serve the purpose of providing a super-fast visual indication of which feed an article comes from.

(Finding these is a bit of a chore. There’s a property for this in JSON Feed, but there are plenty of RSS and Atom feeds out there. So, for those, it downloads the HTML for the home page and attempts to find the feed icon by looking at various metadata properties: the apple touch icon, for instance. I’ll write more about the technical side another day.)
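
A sketch of that icon hunt, in the same stdlib-Python style as the snippet above (the rel values checked here are common web conventions, my assumption rather than the app’s actual list):

    # Hypothetical sketch: fetch the home page and collect icon-ish <link>
    # tags, such as the apple touch icon.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    CANDIDATE_RELS = {"apple-touch-icon", "apple-touch-icon-precomposed", "icon"}

    class IconFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hrefs = []

        def handle_starttag(self, tag, attrs):
            if tag == "link":
                a = dict(attrs)
                rels = set((a.get("rel") or "").lower().split())
                if rels & CANDIDATE_RELS and a.get("href"):
                    self.hrefs.append(a["href"])

    def find_feed_icon(home_page_url):
        html = urlopen(home_page_url).read().decode("utf-8", errors="replace")
        finder = IconFinder()
        finder.feed(html)
        # Resolve relative paths; a real implementation would rank candidates.
        return urljoin(home_page_url, finder.hrefs[0]) if finder.hrefs else None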

Initially I put them on the left side and moved the text to the right. I was thinking of Twitter and chat and similar apps, where the avatar goes on the left.

The problem there, though, was that not every feed has one of these. So I could either leave the left side blank for those feeds, or move the text all the way to the left — but that made it so sometimes the text was indented (when there’s a graphic) and sometimes not, which looked weird.

So I put them on the right — to my surprise, because I never pictured them there — and it works just fine.

Single-feed selection

The above all works great for the Today, All Unread, and Starred pseudo-feeds. It works great when a folder is selected, or when multiple feeds are selected.

But when you have a single feed selected, it looked weird to have the feed name and the feed icon repeated a whole bunch of times.

So, in that case, the layout removes the feed icon and the feed name. The row height becomes correspondingly shorter.

It does mean the timeline is a bit more wall-of-text in this case — but it’s also a quick reminder that the selection is a single feed. I think it’s fine — though I could imagine revisiting this.

Future

I could also revisit the idea of using thumbnails of images from article text. There are problems to solve with that, of course.

One idea would be to put a larger, non-cropped version of the image below the rest of the cell — as you see in Twitter and various unfurls. This would bring back variable-height cells, but that’s not necessarily the worst thing. (People deal with variable heights just fine in social media apps.)

Another idea for the timeline — one I’m pretty keen on — is to render microblog posts in full in the timeline. (Microblog in the generic sense, not just posts from Micro.blog.) These would have to include clickable links and basic HTML formatting.

The thing is, I don’t want to just use a web view. To make this work well would mean writing a small HTML layout engine that handles the basics (bold, italics, links, blockquotes, images). While this sounds like a fun challenge, it’s also probably not a one-day project. It would take some time. And right now I’ve got enough other things — syncing, especially — to do, so I’ve put off thinking about this until after 5.0 ships.

Another thing put off until after 5.0 ships: an alternate high-density timeline. I’ve heard from a number of people that they really prefer one-line cells with multiple columns: title, date, feed name. For them this is the most scannable UI, and they liked this exact feature in years-ago versions of NetNewsWire. So it’s something to consider — but, again, I’m not thinking about it till after 5.0 ships.

Dark Mode

Though I don’t run in Dark Mode normally — even though it’s beautiful — sometimes I switch to it just to look at NetNewsWire. :)


Forge Your Future with Open Source: Build Your Skills. Build Your Network. Build the Future of Technology, in print

Software Design Engineer - Video Research (Entry, Intermediate & Sr.) - Evertz Microsystems Limited - Burlington, ON
Experience with open source codecs, e.g. x264, JM, HM. Senior or Junior Software Design Engineer....
From Evertz Microsystems Limited - Fri, 03 Aug 2018 19:41:33 GMT - View all Burlington, ON jobs
Components Database, available for Cooperations

Do you mean corporations?

Most public part data is either open source or proprietary. Open source data tends to have strings attached that companies do not like, and proprietary data sources (e.g. Samacsys) tend to charge for their work.

Obviously there are manufacturer data sheets you can gather data from, but you're probably looking for data ready to use in KiCad.

tldr; there is no magic part data tree.


Scientists propose blasting space junk out of orbit with powerful plasma beam


  • Japanese researchers have developed a magnetic nozzle plasma thruster that could help clean up Earth's orbit.
  • The satellite solves a long-standing issue by including two thrusters to ensure junk is pushed out of orbit.
  • While the project is not without risks, it marks a potential solution for the growing problem of space debris.



Humans have some crazy ideas. On occasion, a radical vision becomes commonplace, such as speaking a few words into a device in your palm and immediately connecting to someone on the other side of the planet. We forget what an incredible moment we're living through when the algorithms are invisible. It's taken quite a bit of work to get here.

Other ideas, however, seem more a band-aid to a solution that should be solved by other means. Sucking carbon dioxide out of the atmosphere and into space is a recurrent answer to climate change, instead of accepting the reality of finite resources and adjusting accordingly.

Space too has plenty of pollution. Addressing this problem, researchers at Tohoku University in Japan might have stumbled across a solution that appeals to Star Wars fantasies while providing real-world utility: shooting a plasma beam at space debris to push it further into space.

Thankfully space is vast; otherwise, every night we'd see the more than 8,000 tons of junk floating out there—a problem that is only getting worse. In just over six decades of space exploration, humans have launched over 42,000 tracked objects into space. Of the 23,000 still out there, roughly 1,200 remain operational.



While this problem will likely never affect most of us, it does limit our ability to send more satellites into orbit. If you think texting drivers are bad, just wait until you're trying to reach Mars when free-floating space junk veers into your lane.

Blasting junk sounds simple, yet until now researchers have run into the same issue: by necessity (aka Newton's third law of motion), the satellite shooting the beam would itself be pushed in the opposite direction, drifting away from the debris and reducing the force it can impart to push the junk out of the way.

The answer? That resides in one of the greatest sentences in any research paper ever:

By employing a magnetic nozzle plasma thruster having two open source exits, bi-directional plasma ejection can be achieved using a single electric propulsion device.

By sending an equally powerful beam in the opposite direction, the satellite could hover in place, and even accelerate and decelerate as needed. Remote controllers would manage the trajectory of the junk.
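
In momentum terms (notation mine, not the paper's), the idea is easy to state: if each exit ejects plasma with mass flow rate \(\dot{m}_i\) and exhaust velocity \(v_i\), the net reaction force on the satellite is

    \[
    F_{\mathrm{net}} = \dot{m}_1 v_1 - \dot{m}_2 v_2
    \]

With the plumes matched, \(F_{\mathrm{net}} = 0\): the satellite holds position while the debris-facing plume keeps pushing on the junk. Throttling one exit makes \(F_{\mathrm{net}}\) nonzero, which is what gives the acceleration and deceleration modes mentioned later in the article.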

Plasma, which comes from the Greek word for "moldable substance," is (alongside solid, liquid, and gas) a fundamental state of matter. Plasma does not freely exist on Earth, but is generated by energetic events such as lightning, or in neon lights and plasma televisions. Plasmas produce electric currents and magnetic fields. The sun's interior is one example of fully ionized plasma.



Plasma makes for a perfect remedy for combating space debris because, as the team, led by associate professor Kazunori Takahashi in the Department of Electrical Engineering, writes, the problem is not going to correct itself. In fact, it's going to get worse.

If remedial action is not taken in the near future, it will be difficult to prevent the mass of debris increasing, and the production rate of new debris resulting from collisions will exceed the loss rate due to natural orbital decay.

Takahashi says that his helicon plasma thruster will be able to "undertake long operations performed at a high-power level." The technology is not without risks, however. A developmental risk involves mounting two propulsion systems on a single satellite. Ejecting plasma through ion-gridded thrusters is also a concern, as the force of the beams could quickly erode the entire structure.

In laboratory experiments, the thruster is operational in acceleration and deceleration modes, as well as space debris removal mode. While the map is not the territory, it is an exciting time to be orbiting the planet. Now if only we could blast the garbage from our oceans, we might make serious progress.

--

Stay in touch with Derek on Twitter and Facebook.


Open Eye Gallery, Open Source

DEADLINE: Rolling

The post Open Eye Gallery, Open Source appeared first on Artinliverpool.com.


encryptic
An encryption-focused open source note-taking application.
Update - Rambox v0.6.1
Rambox is an open source, cross-platform messaging and emailing application that combines many common web applications into a single interface. Instead of using multiple apps....
Reddit: Would making open source software pretty and intuitive attract more users?
submitted by /u/mabasic
LXer: 6 tips for receiving feedback on your open source contributions
In the free and open source software world, there are few moments as exciting or scary as submitting your first contribution to a project. You've put your work out there, and now it's subject to review and feedback by the rest of the community. Not to put it too lightly, but feedback is great. Without feedback we keep making the same mistakes. Without feedback we can't learn and grow and evolve. It's one of the keys that makes free and open source collaboration work.
Alternative CDN Sources Besides RawGit
As we know, RawGit is one of the open source CDN services for serving static files from GitHub links. A few years ago, after Google Code was officially shut down in early 2016, I started using GitHub to store the script files I usually add to the templates I share here and on the Idntheme site. I converted the links to those script files through the RawGit site so they could be loaded in Blogger templates.


However, a few days ago I was startled by two pieces of news. The first was that Alphabet will shut down the Google+ service because of a suspected bug that exposed the personal data of 500,000 users, and the second was that RawGit no longer converts new links. That will eventually cut off direct access from GitHub to the external links added to templates, causing errors on blogs because the bridge (RawGit) will be gone. The RawGit service will remain active at least until October 2019 - Read more.
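
As a concrete illustration of what migrating away from RawGit involves, here is a tiny Python helper using jsDelivr, one commonly suggested alternative (my example, not from the original post):

    # Hypothetical helper: map a GitHub file reference to jsDelivr's gh CDN
    # scheme, https://cdn.jsdelivr.net/gh/<user>/<repo>@<ref>/<path>.
    def jsdelivr_url(user, repo, ref, path):
        return f"https://cdn.jsdelivr.net/gh/{user}/{repo}@{ref}/{path}"

    # A file once served as https://rawgit.com/user/repo/master/js/widget.js
    # (user, repo, and path are placeholders) becomes:
    print(jsdelivr_url("user", "repo", "master", "js/widget.js"))
    # -> https://cdn.jsdelivr.net/gh/user/repo@master/js/widget.js
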
Read more »
ExTiX 18.10 Is the First Linux Distro Based on Ubuntu 18.10 (Cosmic Cuttlefish)
GNU/Linux and open source software developer Arne Exton announced over the weekend the release of what appears to be the first Linux distro based on the upcoming Ubuntu 18.10 (Cosmic Cuttlefish) operating system.
Databricks Launches First Open Source Framework for Machine Learning

Databricks recently announced a new release of MLflow, an open source, multi-cloud framework for the machine learning lifecycle, now with R integration.
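
For a sense of what a "framework for the machine learning lifecycle" means in practice, here is a minimal tracking example against MLflow's core Python API (the release in question adds R bindings; the values below are placeholders):

    # Minimal MLflow tracking sketch: record one run's hyperparameter and
    # evaluation metric so they can be compared across runs later.
    import mlflow

    with mlflow.start_run():
        mlflow.log_param("alpha", 0.5)
        mlflow.log_metric("rmse", 0.27)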

The post Databricks Launches First Open Source Framework for Machine Learning appeared first on RTInsights.


Redis Labs and the "Commons Clause"

So, the short version is that, with the recent licensing changes making several Redis Labs modules no longer free and open source, GNU/Linux distributions such as Debian and Fedora are no longer able to ship Redis Labs' versions of the affected modules to their users.

As a result, we have begun working together to create a set of module repositories forked from the point just prior to the license change. We will maintain changes to these modules under their original open source licenses, applying only free and open fixes and updates.

We are committed to making these available under an open source license permanently, and welcome community involvement.

You can find more background info here:


Re: SEB (Safe Exam Browser) Pros & Cons
by Joseph Liaw.  

Marcus is correct--nothing replaces old-fashioned human boots on the ground and face-to-face proctoring...


...but technology tools can help with discouraging mindless cheating, and that's where the SEB integration is a great idea with open source tools like Moodle....


I think this is what you are looking for:

https://moodle.org/plugins/quizaccess_onesession


Works really well but you should still have a teacher / proctor in the room in case something goes wrong so that you can override the settings, answer student questions, etc.


How to Install and Configure OpenLiteSpeed Server on Ubuntu 18.04 along with MariaDB

HowToForge: OpenLiteSpeed is a lightweight and open source version of the popular LiteSpeed Server


Technical Security Auditor / Pentester (M/F)

Sopra Steria, with 40,000 employees in more than 20 countries, offers one of the most comprehensive portfolios on the market: consulting, systems integration, business solution software, infrastructure management, and business process services.

Growing rapidly, the Group will welcome 3,100 new talents in France in 2018 to take part in its large-scale projects across all of its businesses.

Join us, too, and take part in tomorrow's digital world!

A strategic activity for Sopra Steria, Cybersecurity has its own dedicated entity which, in France, brings together nearly 250 experts in the Paris region and in Toulouse.

At a time when cyber threats are ever more numerous and complex, our employees actively help protect major public- and private-sector players and their ecosystems, while supporting and accelerating their digital transformation.

Our teams notably support major accounts in an international environment.

Mission

Within our Cybersecurity offering, you will work in an international environment on high-value assignments in the information systems security audit team for our major-account clients.

You will carry out penetration tests and technical security audits:

  • configuration audits
  • compliance audits
  • network security audits
  • code audits

You will be expected to provide real added technical value in the field of information security, applied through sound methodology and well-developed client relationships.

Profile sought

With a four- or five-year university degree (bac+4/5) in computer science, you have an established background as a security auditor. You are able to carry out technical audits and black-box / white-box penetration tests, and you have solid investigation experience.

You have good knowledge of the Windows and networking worlds, and a strong command of web servers and databases (SQL, Oracle). Tools such as Metasploit and Arachni are familiar to you, and you know the vulnerabilities listed by OWASP. You are familiar with the open source world.

Curious and proactive, you are recognized for your listening skills, technical rigor, writing ability, analytical and synthesis skills, and capacity for innovation. Your interpersonal skills, diplomacy, adaptability, and sense of client service are also essential assets for succeeding in your assignments.

You are fluent in English, both written and spoken.

 


How should you use funding for your open source project?

I think the consensus is that sustaining open source software takes more than just money. But money is often one part of what an open source project needs to sustain itself AND thrive. So, if that's the case... how should you use funding for your open source project? Brenna Heaps writes on the Tidelift blog:

We’ve been speaking with a lot of open source maintainers about how to get paid and what that might mean for their project, and the same question keeps popping up: What do I do with the money?

The tldr?

Fund the project, community engagement, and pay it forward...

But, it's a short read and worth it — so go read this and then share it with your fellow maintainers.


Chairs around the world: designer investigates cultural differences through furniture
Chairs around the world: designer investigates cultural differences through furniture (Photo: publicity)

The similarities and differences between societies reveal themselves through customs, of course, but also through objects. With that in mind, Matteo Guarnaccia, a designer from Sicily who now lives in Barcelona, created the Cross Culture Chairs project, which analyzes different cultures using the most essential piece of furniture as its starting point: the chair.

+ 10 classic armchairs by famous Brazilian designers
+ 10 dining rooms with mismatched chairs
+ Artist grows trees in the shape of chairs
+ Poltrona Mole: 15 times it shone in decor

Chairs around the world: designer investigates cultural differences through furniture (Photo: publicity)

Traveling through 8 of the world's most populous countries, including Brazil, Mexico, India, and China, Guarnaccia plans to immerse himself in the habits and histories of each one, reflect on why one designs a chair, and, with the help of a local designer, conceive a new chair in each of these countries.

With this in mind, the Brazilian invited to collaborate was Rio de Janeiro designer Bruno Jahara, who was drawn to the project by its anthropological character. "We are going to develop something new based on a chair model of Brazilian origins. I will probably involve artisans and other designers in the process. But the result is unpredictable for now," the designer explains.

Chairs around the world: designer investigates cultural differences through furniture (Photo: publicity)

In the end, according to the plan published on the crowdfunding site Kickstarter, besides the chair itself, the whole process of research into local traditions and techniques, and into the ritual of gathering and sitting together, will be recorded on an "open source" website. Not to mention a 300-page book made to immortalize the research, and a traveling exhibition planned to open at the Barcelona Design Museum.



23 PHP Developers to Follow Online

NOTE: This is an update to a post originally published in May 2014. We encourage and welcome suggestions from our readers about whom to include on this list! Please send your ideas to @NewRelic on Twitter, using the hashtag #phpexperts.


23 PHP Developers to Follow Online

PHP is an incredibly powerful and versatile scripting language. PHP is also an essential skill for most web developers: it’s used today by four out of five websites and by more than half of the world’s top 1,000 sites. As a result, the PHP community is large, active, and vibrant, and it’s a magnet for smart developers and technologists.

The following list―in alphabetical order―includes some of the most prominent PHP source and framework committers, project leaders, teachers, visionaries, and entrepreneurs. Follow their blogs, Twitter feeds, and other communication channels to get a front-row view of what matters to the community today, where it wants to go, and how you might be able to get involved.

Be sure to check back occasionally: We plan to update this list in response to reader feedback and suggestions.


Rob Allen

Owner of Nineteen Feet, a UK-based Zend Framework consultancy. Current Zend Framework Education Advisory Board member, and a prolific contributor to Zend Framework as well as other PHP-related projects; author of Zend Framework in Action (2009). A regular speaker and presenter at PHP conferences and other public events.

Blog: http://akrabat.com/

Twitter: @akrabat

GitHub: https://github.com/akrabat

Stack Overflow: https://stackoverflow.com/users/23060/rob-allen


Sebastian Bergmann

Co-founder and Principal Consultant of The PHP Consulting Company, and a pioneer in PHP quality assurance. Creator of PHPUnit, an industry-leading testing tool integrated with most modern PHP frameworks and CMS platforms; author of nine English- and German-language books on PHP-related topics.

Website: http://sebastian-bergmann.de/

Twitter: @s_bergmann

GitHub: https://github.com/sebastianbergmann


Jordi Boggiano

Co-founder at Nelmio, a Switzerland-based web application development firm. Co-author of Composer, a widely used application-level package manager for PHP. Symfony2 core team member; lead developer for the Monolog PHP logger; and a frequent speaker and presenter at industry events.

Blog: http://seld.be/

Twitter: @seldaek

GitHub: https://github.com/Seldaek


Dries Buytaert

Co-founder and CTO of Acquia, an open source company that leverages the Drupal CMS. Creator and project lead for Drupal, as well as a co-founder and board member of the Drupal Association.

Blog: http://buytaert.net/

Twitter: @Dries


Angie Byron

Senior Director of Product Management and Community Development at Acquia. Current Drupal core co-maintainer and past Drupal Association board member; also Development Manager for Drupal Spark, a Drupal distribution focused on user experience enhancements. Co-author of Using Drupal, 3rd Edition (2016).

Blog: http://www.webchick.net/

Twitter: @webchick


Anthony Ferrara

CTO at enterprise language coaching firm Lingo Live, and a Zend-certified engineer with expertise in security, performance, and object-oriented programming. Developed the new, more secure password API in PHP 5.5; writes extensively about security in the PHP ecosystem and is a prolific creator of instructional videos and other resources for PHP developers.

Blog:
Release of ML.NET 0.6 (Machine Learning for .NET)
ML.NET is a cross-platform, open source machine learning framework for .NET developers. We want to enable every .NET developer to train and use machine learning models in their applications and services.

NetAidKit Standalone VPN/Tor router for journalists and activists.

Connect your computer, your mobile phone or tablet to your NetAidKit in any way you’re used to.


The NetAidKit accepts wired and wireless connections, allowing you to connect to the NetAidKit any way you want. It will also connect wired or wirelessly to the Internet.


After you’re connected, set-up your NetAidKit and choose your security preferences in the web interface.


The NetAidKit can Torify all your internet traffic, not just your Tor browser sessions. Best browsing practices still apply to ensure your anonymity.


The NetAidKit is built on top of tried and tested open source technologies and is open by design. All of our code can be found on GitHub.





Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/netaidkit-standalone-vpn-tor-router-for-journalists-and-activists/
InMoov Open Source 3D printed robot

Gael Langevin is a French modelmaker and sculptor. He has worked for the biggest brands for more than 25 years. InMoov is his personal project, initiated in January 2012. InMoov is the first open source (CC-BY-NC) 3D-printed life-size robot. Replicable on any home 3D printer with a 12×12×12 cm build area, it is conceived as a development platform for universities, laboratories, and hobbyists, but first of all for makers. Its concept, based on sharing and community, has earned it the honor of being reproduced in countless projects throughout the world.


You can find more details:


Twitter: @inmoov


Google+: Gael Langevin


http://www.inmoov.fr


http://www.myrobotlab.org


http://inmoov.blogspot.com



 


Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/inmoov-open-source-3d-printed-robot/
Industry Voices—Doyle: The promise of open source and the current state of telecom adoption
The adoption of open source software for NFV deployments by CSPs has largely failed to live up to industry expectations. Open source software has been installed in communication service providers' IT departments and in some tactical parts of the network, and is being widely tested in the labs of the leading CSPs.
Home Assistant - Open source Python3 home automation
Replies: 6872 Last poster: Hmmbob at 09-10-2018 23:44 Topic is Open

koelkast wrote on Tuesday 9 October 2018 @ 22:50:
Aha @Hmmbob. I'm trying to convert it from string to integer with this:

    alias: 'vaatwasser start'
    trigger:
      - platform: numeric_state
        value_template: "{{ state.sensor.vaatwasser_huidig.state | float }}"
        above: 10
    condition:
      condition: or
      conditions:
        - condition: state
          entity_id: input_select.vaatwasser_status
          state: uit
        - condition: state
          entity_id: input_select.vaatwasser_status
          state: klaar
        - condition: state
          entity_id: input_select.vaatwasser_status
          state: "bijna klaar"
    action:
      - service: input_select.select_option
        data:
          entity_id: input_select.vaatwasser_status
          option: draait

...but that doesn't seem to work either. Should I approach this differently?

I'm traveling, so I don't have the means to test right now. What is the state of input_select.vaatwasser_status? If it isn't uit / klaar / "bijna klaar", it naturally won't work, because the condition isn't correct.
‘Nearly All’ New Pentagon Weapons Vulnerable To Cyber Attacks, GAO Report Says

USAF Tests Weapons In Nevada Desert.

A new report by the United States Government Accountability Office (GAO) has revealed that “nearly all” of the weapon systems developed by the US military between 2012 and 2017 are vulnerable to cyber attacks.

In cybersecurity tests of major weapon systems being developed by the Department of Defense, testers were able to hack into some of the complex weapons systems and take control of them using relatively simple tools and techniques.

The GAO report released on Oct. 9 showed that in one case, a two-person team took only an hour to gain initial access to a weapon system, and just a day to gain full control of the system they were testing.

In some instances, weapon systems that used commercial or open source software were still running with the default password that came with the software. It only took a search on the internet to find the passwords and gain administrator privileges.



Software Developer - Java, JVM (m/f)
Development with modern open source frameworks and tools such as Spring, Spring Boot …
WinJS 3 – Windows, Phone and now Web too

Originally posted on: http://blog.geekypedia.net/archive/2014/10/05/winjs-3--windows-phone-and-now-web-too.aspx

At Build 2014, Microsoft announced that WinJS was going open source and that the first goal was to make it available to web sites, not just Windows and phone apps. On September 17th, the first official release of WinJS 3.0 was made available, bringing with it cross-browser compatibility. Setting it up to work on your site is no different than for any other JavaScript library. But extending a Universal, Windows, or Phone app to also include a web interface that uses WinJS is another thing entirely.

In this article I will demonstrate how to create a web app with WinJS 3 and then reuse that code in a Universal app. I will then show how you can extend the capabilities of your site to include native WinRT features without using Cordova to create a hybrid application.

Read the full article I have posted over on Code Project.


Armchair CEO: Windows Phone

Originally posted on: http://blog.geekypedia.net/archive/2013/09/14/armchair-ceo-windows-phone.aspx

So the Nokia thing finally happened, assuming the EU allows it.  Now with Nokia under the Microsoft umbrella, what should they focus on next?

Lumia Accessories

When people love their phones, they want to buy more stuff for their phones. This is one area where the iPhone has everyone beat by far, even Samsung. Microsoft should concentrate on more branded accessories by working with vendors, and stock more of the existing third party ones in their stores. Especially cases. They need a super rugged case that is not huge and bulky, and a great looking waterproof case, ASAP.

Free License

Stop charging what’s left of the device manufacturers a license for Windows Phone.  You’re in third place.  With the purchase of Nokia, this might not be salvageable.

Day One Updates

All carriers, all devices updated on the same day, just like Apple.  Microsoft ought to be able to at least get AT&T to agree to this and they certainly can make sure Nokia is ready to go.  These staggered rollouts have to stop.

Better XBOX Music, Video and Podcasts

Music has some issues with syncing and storage of your own music, but it's a great service.  Making it available on iOS and Android is a great idea.  Now make it perfect on Windows Phone.  Better playlists, like Spotify's, would be good.  You're so close.

Video sucks.  Videos purchased on the XBOX marketplace should sync offline or stream to Windows Phone.  And LOWER the price of the videos.  A huge win here would be to list Netflix, Hulu and other video services' queues and search right into XBOX video on the phone.  Combine all your services into one app.  It doesn't need to play them, it just needs to launch the other apps.  This should be extensible to all third party apps.  Finally, get the missing major apps: Amazon Video, HBO GO, Showtime Anytime, and somehow bring back YouTube.

Podcasts suck as much as video.  This is terrible.  There are so many things that need to be fixed here that it needs a complete overhaul.  At a minimum, show podcasts you still have to watch in their own queue.  Multiple playlists would be even better.  Back to the drawing board on this one.  Take some lessons from WPodder, my personal favorite of all the third party apps available on the platform.

Android Apps

Windows Phone has a LOT going for it over iOS and Android.  I am finally at the point where I try to get people to switch.  Average users who get this phone love it.  The only complaint is the app market.  Most of the big apps are there or have third party alternatives.  What you don't see are new apps and small market apps until much later.  How many times have you seen a poster or ad with the "now available" Apple App Store and Google Play logos without the Windows Store logo sitting right next to them?  The new Windows Phone App Studio could make a big difference for small apps, but not games.  Android is open source.  Build Android app integration into Windows Phone and make these apps available.  You can always try to push Windows Phone apps over the Android apps, but this fills a huge gap real fast.  Want to stick it back to Google?  Build in the Amazon marketplace over Google Play.  So when I search for an app like Instagram, it shows me the official app when it is available; otherwise it shows the Amazon app; and finally, if that's not available, it shows the Google Play app.  And alert me when the official app is available if I am running an Android app, or point me to great third party apps.

Windows RT Convergence

Finally, start the road to a single Phone / RT OS.  Someday the idea of the single device will be achievable for the average user.


          PinWorthy.com–Our New Windows Phone 7 Site      Cache   Translate Page      

Originally posted on: http://blog.geekypedia.net/archive/2011/04/09/pinworthy.comndashour-new-windows-phone-7-site.aspx

Know your audience.  I’m guessing more than a few Geeks With Blogs readers are also proud owners of Windows Phone 7 devices.  If you want to know more detail about the blog content itself, head on over to our launch post.  But for this readership crowd I’ll focus more on the technical.

Pin Worthy

  • We built the site on BlogEngine.NET 2.0
  • It uses a custom designed theme based on the Metro UI.  Want it?  Just ask in the comment section!  We have not open sourced it yet but if you pinky swear to share changes you make and not distribute it to other people (just point them to this post, we’ll get them a copy) you can have it. 
  • It also has a mobile theme with a similar Metro feel.  (see below)
  • There is an actual app in the works.  These are crazy easy to make using tools like AppMakr and Follow My Feed.

Please check us out and let us know what you think.

We’re also looking for contributors.

Pin Worthy Mobile


          Open Source Intelligence (OSINT) Analyst, JBLM WA Job - SAIC - Fort Lewis, WA      Cache   Translate Page      
For information on the benefits SAIC offers, see My SAIC Benefits. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $4.5 billion....
From SAIC - Fri, 05 Oct 2018 02:47:27 GMT - View all Fort Lewis, WA jobs
          Senior Back End Engineer, Axon Now - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in software engineering and open source technologies, and can intuit the fine line between promising new practice and overhyped fad....
From Axon - Fri, 05 Oct 2018 23:18:37 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Now - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in software engineering and open source technologies, and can intuit the fine line between promising new practice and overhyped fad....
From Axon - Fri, 05 Oct 2018 23:18:02 GMT - View all Seattle, WA jobs
          Software Engineering Manager, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Sat, 29 Sep 2018 05:17:37 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Sat, 22 Sep 2018 05:17:32 GMT - View all Seattle, WA jobs
          Full Stack Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 06 Aug 2018 23:18:47 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 16 Jul 2018 05:18:50 GMT - View all Seattle, WA jobs
          Back End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Tue, 12 Jun 2018 23:18:07 GMT - View all Seattle, WA jobs
          27 monthly most popular JS repositories      Cache   Translate Page      

Originally I shared this digest on the Syndicode blog.

There were too many cool projects popular this month, and it was extremely hard to select those worth your attention. Of course, it's unlikely you will use even half of them, but I will deliver them to you in the hope that they will still be somehow useful. Among these open source GitHub JavaScript repositories you'll find popular JS frameworks, useful resources to build apps, utility libraries, npm packages and data-driven animations, next-generation databases and PWA storefronts... And many other great open source JavaScript repositories you should know!

Here we are with the most interesting

Monthly most popular JavaScript repositories:

    1. Electron is a framework to build cross-platform desktop apps with JavaScript, HTML, and CSS. 65,322 stars by now.
    2. Create React App repository consists of useful resources that help you build apps with no build configuration. 57,074 stars by now.
    3. date-fns is a modern JavaScript utility library. It's the Moment.js competitor that provides a comprehensive, yet simple and consistent, toolset for manipulating JavaScript dates in a browser and Node.js. 14,200 stars by now.
    4. Husky is an npm package that lets you define npm scripts that correlate to local Git events such as a commit or push. It makes Git hooks easy. 9,896 stars by now.
    5. PostGraphile is an instant GraphQL API for PostgreSQL database. It pairs these two technologies together to build faster applications. 6,007 stars by now.
    6. React Move is a set of data-driven animations for React. It provides built-in support for interpolating of strings, numbers, colors, SVG paths and transforms, animating HTML, SVG and React-Native. 5,502 stars by now.
    7. AngularFire is the official library for Firebase and Angular. 4,204 stars by now.
    8. WatermelonDB is a Next-gen database for powerful React and React Native apps that scales to 10,000s of records and remains fast. It is a new way of dealing with user data in React Native and React web apps. 4,183 stars by now.
    9. Vue Storefront is a standalone PWA storefront for eCommerce, possible to connect with any eCommerce backend (eg. Magento, Pimcore, Prestashop or Shopware) through the API. 3,134 stars by now.
    10. Ky is a tiny and elegant HTTP client based on the browser Fetch API. It targets modern browsers and has a simpler API, a JSON option, timeout support, a URL prefix option, and other neat benefits (see the short sketch after this list). 2,771 stars by now.
    11. Pigeon Maps - ReactJS maps without external dependencies. 2,671 stars by now.
    12. BundlePhobia is a web service to find out how much an npm package will cost your project. It can measure the size of CSS/Sass libs too and report whether the module supports tree shaking. The same service is available via a CLI tool and an Atom plugin. 1,992 stars by now.
    13. Ring UI is a collection of UI components that aims to provide all of the necessary building blocks for web-based products built inside JetBrains, as well as third-party plugins developed for JetBrains' products. 1,920 stars by now.
    14. React-Proto is a React application prototyping tool for developers and designers. It allows the user to visualize/setup their application architecture upfront and eject this architecture as application files. 1,750 stars by now.
    15. NLP.js is an NLP library built in Node.js over Natural, with entity extraction, sentiment analysis, automatic language identity, and more. 1,501 stars by now.
    16. Sqorn is a Javascript library for building SQL queries. It's composable, intuitive (tagged template), concise (for common CRUD operations), fast and secure (it generates parameterized queries safe from SQL injection). 1,480 stars by now.
    17. Apify SDK is the scalable web crawling and scraping library for JavaScript that enables the development of data extraction and web automation jobs with headless Chrome and Puppeteer. 1,257 stars by now.
    18. WWWBasic is an implementation of BASIC (Beginner's All-purpose Symbolic Instruction Code) designed to be easy to run on the Web. 1,032 stars by now.
    19. Tabulator is interactive tables and data grids for JavaScript. It allows you to create interactive tables in seconds from any HTML Table, Javascript Array or JSON formatted data. 980 stars by now.
    20. tiptap is a renderless and extendable rich-text editor for Vue.js. 958 stars by now.
    21. worker-plugin automatically bundles and compiles Web Workers within Webpack. 839 stars by now.
    22. Express ES2017 REST API Boilerplate is a Boilerplate/Generator/Starter Project for building RESTful APIs and microservices using Node.js, Express, and MongoDB. 661 stars by now.
    23. d3-dag is a bunch of layout algorithms for visualizing directed acyclic graphs. 629 stars by now.
    24. Taiko is a Node.js library to automate chrome/chromium browser, works on Windows, MacOS, and Linux. 625 stars by now.
    25. low.js is a port of Node.JS with far lower system requirements. 609 stars by now.
    26. vuejs-wordpress-theme-starter is a true WordPress theme with the guts ripped out and replaced with Vue. Based on the BlankSlate WP starter theme. 557 stars by now.
    27. Lyo is the easiest way to transform Node.js modules into browser-compatible libraries. 504 stars by now.
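
As a quick taste of one of these, here is a minimal sketch of Ky's Fetch-based API (the endpoint URLs and response shapes are assumptions for illustration; run it inside an async function or a module with top-level await):

  import ky from 'ky';

  // GET a JSON payload; .json() parses the response body
  const user = await ky.get('https://api.example.com/users/1').json();

  // Defaults such as a URL prefix and a timeout (in ms) are plain options
  const api = ky.create({ prefixUrl: 'https://api.example.com', timeout: 5000 });
  const posts = await api.get('posts').json();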


Check my previous JS digests here.
          Games: Kingdom Rush Origins, TinyBuild, Openwashing, Niffelheim, Unleashed, AI War 2, A Gummy's Life, KURSK and Wine      Cache   Translate Page      
  • Kingdom Rush Origins to release October 18th, Linux support confirmed for release

    Ironhide Game Studio have announced today that Kingdom Rush Origins will release on Steam on October 18th. I've no doubt it will make it to other stores like GOG and Humble Store too, as previous games have; however, they've only mentioned Steam so far.

    I asked the developer on Twitter, if the Linux version would be released at the same time. They replied with "Yes!", so that's really great news for Linux gamers.

  • Humble are allowing you to build your own bundle of TinyBuild games and save some monies

    For those of you craving your latest Linux gaming fix, Humble are doing a build your own bundle with TinyBuild.

    The way it works is that a ton of games are on sale, and if you add at least three to your basket you get an additional discount. If you add four the discount is higher, and higher again if you add five. The savings you can get are kind of ridiculous.

  • Mojang to open source more of Minecraft with two libraries already on GitHub [Ed: This is openwashing; they just free a few bits here and there...]

    I have to admit, I am quite surprised by this. Mojang (owned by Microsoft) are to open source more of Minecraft and they've already started to do so.

  • Niffelheim, a dark survival RPG released recently with Linux support

    It seems we have a few readers interested in Niffelheim emailing it in: a dark survival RPG, drawing on elements of Norse mythology, that recently released with Linux support.

  • Looks like the 2D open-world sandbox RPG Unleashed is releasing soon

    Unleashed, a 2D open-world sandbox RPG that was funded on Kickstarter is looking pretty good and it's releasing soon with Linux support. I initially covered it back in March this year, as this promising RPG was emailed to us directly by the developer. I completely forgot about it, but thankfully they succeeded in getting funds on Kickstarter with around €10K being pledged. Not a lot, so hopefully the end result is still good.

  • Arcen Games grand strategy game 'AI War 2' to enter Early Access on October 15th

    Nearly two years after the Kickstarter, Arcen Games are ready to bring in more players. AI War 2 is going to enter Early Access on October 15th.

    The sequel to their 2009 hit AI War: Fleet Command, AI War 2 has you take on an overwhelming "inhuman" enemy that has underestimated you. Their current plan is to remain in Early Access until at least "Q2 2019", although that does depend on how feedback goes and what they need to work on.

  • The amusing multiplayer game A Gummy's Life has left Early Access with an overhauled movement system

    A Gummy's Life is a really fun multiplayer game that can be played with local players and online. It's now left Early Access with a major update.

    I've had quite a lot of fun with this, especially with my son, who adores it because it's completely silly. One thing that wasn't too great was the movement system, which they've actually overhauled as part of the 1.0 update. Movement seems smoother and more responsive, and you have a better amount of control with it now too, making it an even better experience.

  • First-person adventure about sunken Russian sub KURSK to have a delayed Linux release

    KURSK [Official Site] seems like it's going to be quite a compelling action-adventure game, following the story of the Russian Kursk submarine disaster back in 2000. I've been following it for years now as it sounds quite interesting, although native Linux gamers will have to wait a little longer.

    The developer, Jujubee S.A., has been sending us their usual press emails about it, which clearly mention Linux support. However, the Steam store page doesn't mention Linux. After trying to reach them for months over email, I decided to try Facebook today, and they actually responded with a clear "Yes, KURSK will be released on Linux.". Sadly though, the Linux version will come later than the Windows build, while they work to "provide the best possible results on Linux". I've been told the media folks will contact us sometime in regards to the Linux release.

  • Wine's Direct3D Code Will Now Default To OpenGL Core Contexts For NVIDIA GPUs Too

    Earlier this year with Wine 3.9 its Direct3D code changed to default to OpenGL 4.4 core contexts rather than the legacy/compatibility context. NVIDIA GPUs ended up being left at the older value but now that has changed.

    As of yesterday in Wine Git, CodeWeavers' Henri Verbeet has changed the WineD3D code now to also default to OpenGL core contexts for NVIDIA GPUs.

read more


          Multi-threaded write open source database      Cache   Translate Page      
I need an open source DB that allows multi-threaded parallel writes and signals after each write that the record can be read. It must allow parallel reads. (Budget: $250 - $750 AUD, Jobs: Database Development, Database Programming)
          Ubuntu does OpenStack      Cache   Translate Page      

Ubuntu does OpenStack

OpenStack, the open source cloud of choice for many businesses, has seen broad adoption across a large number of industries, from telco to finance, healthcare and more. It’s become something of a safe haven for highly regulated industries and for those looking to have a robust, secure cloud that is open source and enables them to innovate without breaking the bank.

For those of you that don’t know, Ubuntu does OpenStack.

In fact, Ubuntu is the #1 platform for OpenStack and the #1 platform for public cloud operations on AWS, Azure, and Google Cloud too, meaning that we know our stuff when it comes to building and operating clouds.

Which is great news, because Canonical, the company behind Ubuntu, helps to deliver OpenStack on rails, with consulting, training, enterprise support and managed operations that help your business to focus on what matters most: your applications, not the infrastructure.

Canonical has a pretty compelling story here, and not in a marketing sense where we've manufactured an 'aren't we great' story.

You see, Canonical and Ubuntu have been a part of the open source platform from the start: a founding member, the most widely used distribution, used across the largest operators, the Intel reference platform for telco OpenStack, and much more.

It’s not just about the heritage value of saying ‘we were there from the beginning’, because there is no value in that unless you can back it up with consistently delivering a valuable product to customers.

With Canonical, customers are able to get the most efficient infrastructure as a service on-premises, according to 451 Research. There is also the added benefit of Ubuntu being the most popular OS for both public cloud and OpenStack, making it a perfect fit for multi-cloud operations.

Then, there is the added bonus of a fully managed OpenStack. Whether it is a shortage of skilled personnel, time constraints, or any other reason for not wanting to build and manage your own deployment, Canonical can do that as well with BootStack, and we’re happy to hand back the keys once you’re ready to take over.

So, if you’re in the market for an OpenStack cloud, just remember Ubuntu does OpenStack.

Start your journey today


          How to Install and Configure OpenLiteSpeed Server on Ubuntu 18.04 along with MariaDB      Cache   Translate Page      

HowToForge: OpenLiteSpeed is a lightweight and open source version of the popular LiteSpeed Server.
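
For orientation, installation on Ubuntu typically goes through LiteSpeed's own repository script — a minimal sketch based on the vendor's published repo instructions (package names and paths may vary by release):

  # Add LiteSpeed's repository (their official convenience script)
  wget -qO - https://repo.litespeed.sh | sudo bash

  # Install the server and start it
  sudo apt-get install -y openlitespeed
  sudo /usr/local/lsws/bin/lswsctrl start

  # The WebAdmin console listens on port 7080 by default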


          How max_prepared_stmt_count can bring down the production MySQL system      Cache   Translate Page      
MySQL Adventures: How max_prepared_stmt_count can bring down production

We recently moved an on-prem environment to GCP for better scalability and availability. The customer's main database is MySQL. Due to the nature of the customer's business, it's a highly transactional workload (one of the hot startups in APAC). To deal with the scale and meet availability requirements, we have deployed MySQL behind ProxySQL — which takes care of routing some of the resource-intensive SELECTs to chosen replicas.

The setup consists of:

  - One master
  - Two slaves
  - One archive database server

Post migration to GCP, everything was nice and calm for a couple of weeks, until MySQL decided to start misbehaving, leading to an outage. We were able to quickly resolve it and bring the system back online, and what follows are the lessons from this experience.

The configuration of the database:

  - CentOS 7, MySQL 5.6
  - 32-core CPU, 120GB memory
  - 1TB SSD for the MySQL data volume
  - Total database size: 40GB (yeah, it is small in size, but highly transactional)
  - my.cnf configured using Percona's configuration wizard
  - All tables on the InnoDB engine
  - No swap partitions

The Problem

It all started with an alert that said the MySQL process was killed by Linux's OOM killer. Apparently MySQL was rapidly consuming all the memory (about 120GB) and the OOM killer perceived it as a threat to the stability of the system and killed the process. We were perplexed and started investigating.

  Sep 11 06:56:39 mysql-master-node kernel: Out of memory: Kill process 4234 (mysqld) score 980 or sacrifice child
  Sep 11 06:56:39 mysql-master-node kernel: Killed process 4234 (mysqld) total-vm:199923400kB, anon-rss:120910528kB, file-rss:0kB, shmem-rss:0kB
  Sep 11 06:57:00 mysql-master-node mysqld: /usr/bin/mysqld_safe: line 183: 4234 Killed nohup /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock < /dev/null > /dev/null 2>&1

Naturally, we started looking at the MySQL configuration to see if something was off.

InnoDB parameters:

  innodb-flush-method = O_DIRECT
  innodb-log-files-in-group = 2
  innodb-log-file-size = 512M
  innodb-flush-log-at-trx-commit = 1
  innodb-file-per-table = 1
  innodb-buffer-pool-size = 100G

Other caching parameters:

  tmp-table-size = 32M
  max-heap-table-size = 32M
  query-cache-type = 0
  query-cache-size = 0
  thread-cache-size = 50
  open-files-limit = 65535
  table-definition-cache = 4096
  table-open-cache = 50

We are not really using the query cache, and one of the heavy front-end services is PHP Laravel.

Here is the memory utilization graph. The three highlighted areas are the points at which we had issues in production. The second issue happened very shortly after, so we reduced innodb-buffer-pool-size to 90GB. Even so, the memory utilization never came down. So we scheduled a cron job to flush the OS cache, at least to give some additional memory back to the operating system, using the following command. This was a temporary measure until we found the actual problem.

  sync; echo 3 > /proc/sys/vm/drop_caches

But this didn't really help. The memory was still growing, and we had to look at what was really inside the OS cache.

Fincore:

There is a tool called fincore that helped me find out what the OS cache actually held. It uses Perl modules. Use the commands below to install it:

  yum install perl-Inline
  rpm -ivh http://fr2.rpmfind.net/linux/dag/redhat/el6/en/x86_64/dag/RPMS/fincore-1.9-1.el6.rf.x86_64.rpm

It never directly shows what files are inside the buffer/cache. Instead, we have to manually give it a path, and it checks what files from that location are in the cache. I wanted to check the cached files for the MySQL data directory:

  cd /mysql-data-directory
  fincore -summary * > /tmp/cache_results

Here is a sample of the cached-file results:

  page size: 4096 bytes
  auto.cnf: 1 incore page: 0
  dbadmin: no incore pages.
  Eztaxi: no incore pages.
  ibdata1: no incore pages.
  ib_logfile0: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
  ib_logfile1: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
  mysql: no incore pages.
  mysql-bin.000599: 8 incore pages: 0 1 2 3 4 5 6 7
  mysql-bin.000600: no incore pages.
  mysql-bin.000601: no incore pages.
  mysql-bin.000602: no incore pages.
  mysql-bin.000858: 232336 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
  mysqld-relay-bin.000001: no incore pages.
  mysqld-relay-bin.index: no incore pages.
  mysql-error.log: 4 incore pages: 0 1 2 3
  mysql-general.log: no incore pages.
  mysql.pid: no incore pages.
  mysql-slow.log: no incore pages.
  mysql.sock: no incore pages.
  ON: no incore pages.
  performance_schema: no incore pages.
  mysql-production.pid: 1 incore page: 0

  6621994 pages, 25.3 Gbytes in core for 305 files; 21711.46 pages, 4.8 Mbytes per file.

The highlighted points show the graph when the OS cache is cleared.

How we investigated this issue:

The first document everyone refers to is "How MySQL Uses Memory" from MySQL's documentation. So we started with all the places where MySQL needs memory. I'll explain that in a different blog post. Let's continue with the steps we took.

Make sure MySQL is the culprit:

Run the command below; it gives the exact memory consumption of MySQL.

  ps --no-headers -o "rss,cmd" -C mysqld | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'
  119808M

Additional tips: if you want to know the memory utilization of each of MySQL's threads, run the commands below.

  # Get the PID of MySQL:
  ps aux | grep mysqld
  mysql 4378 41.1 76.7 56670968 47314448 ? Sl Sep12 6955:40 /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock

  # Get all threads' memory usage:
  pmap -x 4378
  4378: /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock
  Address           Kbytes     RSS   Dirty Mode  Mapping
  0000000000400000   11828    4712       0 r-x-- mysqld
  000000000118d000    1720     760     476 rw--- mysqld
  000000000133b000     336     312     312 rw--- [ anon ]
  0000000002b62000 1282388 1282384 1282384 rw--- [ anon ]
  00007fd4b4000000   47816   37348   37348 rw--- [ anon ]
  00007fd4b6eb2000   17720       0       0 ----- [ anon ]
  00007fd4bc000000   48612   35364   35364 rw--- [ anon ]
  ...........................
  00007fe1f0075000    2044       0       0 ----- libpthread-2.17.so
  00007fe1f0274000       4       4       4 r---- libpthread-2.17.so
  00007fe1f0275000       4       4       4 rw--- libpthread-2.17.so
  00007fe1f0276000      16       4       4 rw--- [ anon ]
  00007fe1f027a000     136     120       0 r-x-- ld-2.17.so
  00007fe1f029c000    2012    2008    2008 rw--- [ anon ]
  00007fe1f0493000      12       4       0 rw-s- [aio] (deleted)
  00007fe1f0496000      12       4       0 rw-s- [aio] (deleted)
  00007fe1f0499000       4       0       0 rw-s- [aio] (deleted)
  00007fe1f049a000       4       4       4 rw--- [ anon ]
  00007fe1f049b000       4       4       4 r---- ld-2.17.so
  00007fe1f049c000       4       4       4 rw--- ld-2.17.so
  00007fe1f049d000       4       4       4 rw--- [ anon ]
  00007ffecc0f1000     132      72      72 rw--- [ stack ]
  00007ffecc163000       8       4       0 r-x-- [ anon ]
  ffffffffff600000       4       0       0 r-x-- [ anon ]
  ---------------- ------- ------- -------
  total kB         122683392 47326976 47320388

InnoDB buffer pool:

Initially we suspected InnoDB. We checked the InnoDB usage from the monitoring system, but the result was negative. It never utilized more than 40GB. That thickens the plot: if the buffer pool only has 40GB, who is eating all that memory? Is this correct? Does the buffer pool really only hold 40GB? What's inside the buffer pool, and what is its size?

  SELECT page_type AS page_type,
         sum(data_size) / 1024 / 1024 AS size_in_mb
  FROM information_schema.innodb_buffer_page
  GROUP BY page_type
  ORDER BY size_in_mb DESC;

  +-------------------+----------------+
  | Page_Type         | Size_in_MB     |
  +-------------------+----------------+
  | INDEX             | 39147.63660717 |
  | IBUF_INDEX        |     0.74043560 |
  | UNDO_LOG          |     0.00000000 |
  | TRX_SYSTEM        |     0.00000000 |
  | ALLOCATED         |     0.00000000 |
  | INODE             |     0.00000000 |
  | BLOB              |     0.00000000 |
  | IBUF_BITMAP       |     0.00000000 |
  | EXTENT_DESCRIPTOR |     0.00000000 |
  | SYSTEM            |     0.00000000 |
  | UNKNOWN           |     0.00000000 |
  | FILE_SPACE_HEADER |     0.00000000 |
  +-------------------+----------------+

A quick guide to this query:

  - INDEX: B-tree index
  - IBUF_INDEX: insert buffer index
  - UNKNOWN: not allocated / unknown state
  - TRX_SYSTEM: transaction system data

Bonus: to get the buffer pool usage by index:

  SELECT table_name AS table_name,
         index_name AS index_name,
         count(*) AS page_count,
         sum(data_size) / 1024 / 1024 AS size_in_mb
  FROM information_schema.innodb_buffer_page
  GROUP BY table_name, index_name
  ORDER BY size_in_mb DESC;

Then where was MySQL holding the memory?

We checked all of the places where MySQL uses memory. Here is a rough calculation of the memory utilization at the time of the crash:

  Buffer pool: 40GB
  Cache/buffer: 8GB
  performance_schema: 2GB
  tmp_table_size: 32M
  Open tables cache for 50 tables: 5GB
  Connections, thread_cache and others: 10GB

That almost reaches 65GB; round it to 70GB out of 120GB — and it is still only approximate. Something is wrong, right? My DBA mind started to wonder where the rest was going. So far, MySQL is the culprit consuming all of the memory. Clearing the OS cache never helped. Fine. The buffer pool is also in a healthy state. The other memory-consuming parameters look good.

It's time to dive into MySQL. Let's see what kind of queries are running:

  show global status like 'Com_%';
  +---------------------------+-----------+
  | Variable_name             | Value     |
  +---------------------------+-----------+
  | Com_admin_commands        | 531242406 |
  | Com_stmt_execute          | 324240859 |
  | Com_stmt_prepare          | 308163476 |
  | Com_select                | 689078298 |
  | Com_set_option            | 108828057 |
  | Com_begin                 | 4457256   |
  | Com_change_db             | 600       |
  | Com_commit                | 8170518   |
  | Com_delete                | 1164939   |
  | Com_flush                 | 80        |
  | Com_insert                | 73050955  |
  | Com_insert_select         | 571272    |
  | Com_kill                  | 660       |
  | Com_rollback              | 136654    |
  | Com_show_binlogs          | 2604      |
  | Com_show_slave_status     | 31245     |
  | Com_show_status           | 162247    |
  | Com_show_tables           | 1105      |
  | Com_show_variables        | 10428     |
  | Com_update                | 74384469  |
  +---------------------------+-----------+

The SELECT, INSERT and UPDATE counters are fine. But a huge number of prepared statements were running in MySQL.

One more tip: Valgrind

Valgrind is a powerful open source tool to profile any process's memory consumption by threads and child processes.

Install Valgrind:

  # You need C compilers, so install gcc
  wget ftp://sourceware.org/pub/valgrind/valgrind-3.13.0.tar.bz2
  tar -xf valgrind-3.13.0.tar.bz2
  cd valgrind-3.13.0
  ./configure
  make
  make install

Note: this is for troubleshooting purposes only; you should stop MySQL and run it under Valgrind.

Create a log file to capture the output:

  touch /tmp/massif.out
  chown mysql:mysql /tmp/massif.out
  chmod 777 /tmp/massif.out

Run MySQL with Valgrind:

  /usr/local/bin/valgrind --tool=massif --massif-out-file=/tmp/massif.out /usr/sbin/mysqld --defaults-file=/etc/my.cnf

Let it run for 30 minutes (or until MySQL takes up the whole memory). Then kill Valgrind and start MySQL as normal.

Analyze the log:

  /usr/local/bin/ms_print /tmp/massif.out

We'll explain MySQL memory debugging using Valgrind in another blog post.

Memory leak:

We had verified all the MySQL parameters and OS-level settings related to memory consumption, but with no luck. So I started to think about and search for MySQL's memory-leak areas. Then I found this awesome blog by Todd. Yes, the one parameter I hadn't checked was max_prepared_stmt_count.

What is this? From MySQL's documentation:

  This variable limits the total number of prepared statements in the server. It can be used in environments where there is the potential for denial-of-service attacks based on running the server out of memory by preparing huge numbers of statements.

Whenever we prepare a statement, we should close it at the end. Otherwise, the memory allocated to it is not released. Executing a single query this way takes three steps (prepare, run the query, close). And there is no visibility into how much memory is consumed by a prepared statement.

Is this the real root cause? Run this query to check how many prepared statements are running in the MySQL server:

  mysql> show global status like 'com_stmt%';
  +-------------------------+-----------+
  | Variable_name           | Value     |
  +-------------------------+-----------+
  | Com_stmt_close          | 0         |
  | Com_stmt_execute        | 210741581 |
  | Com_stmt_fetch          | 0         |
  | Com_stmt_prepare        | 199559955 |
  | Com_stmt_reprepare      | 1045729   |
  | Com_stmt_reset          | 0         |
  | Com_stmt_send_long_data | 0         |
  +-------------------------+-----------+

You can see there are 1045729 prepared statements running, and the Com_stmt_close variable shows that none of the statements have been closed. This query returns the maximum allowed number of prepared statements:

  mysql> show variables like 'max_prepared_stmt_count';
  +-------------------------+---------+
  | Variable_name           | Value   |
  +-------------------------+---------+
  | max_prepared_stmt_count | 1048576 |
  +-------------------------+---------+

It was set to the maximum value for this parameter. We immediately reduced it to 2000:

  mysql> set global max_prepared_stmt_count=2000;

  -- Make it persistent in my.cnf
  vi /etc/my.cnf

  [mysqld]
  max_prepared_stmt_count = 2000

Now MySQL is running fine and the memory leak is fixed; memory utilization has stayed normal ever since. The Laravel framework uses prepared statements almost everywhere — you can find plenty of "laravel + prepared statements" questions on StackOverflow.

Conclusion:

The very important lesson I learned as a DBA: before setting any parameter's value, check the consequences of modifying it and make sure it will not hurt production.

The MySQL side was now fine, but the application started throwing the error below:

  Can't create more than max_prepared_stmt_count statements (current value: 20000)

To continue this series, the next blog post will explain how we fixed this error using multiplexing, and how that helped dramatically reduce MySQL's memory utilization.

"How max_prepared_stmt_count can bring down the production MySQL system" was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.
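
To make the prepare/execute/close cycle from the article concrete, here is a minimal sketch using MySQL's SQL-level prepared statement commands (the table and values are invented for illustration; client libraries such as PDO in PHP expose the same cycle through their own APIs):

  -- Prepare once: this allocates server-side memory for the statement
  PREPARE order_total FROM 'SELECT SUM(amount) FROM orders WHERE customer_id = ?';

  -- Execute as many times as needed with different parameters
  SET @cid = 42;
  EXECUTE order_total USING @cid;

  -- Close when done: this releases the server-side memory
  -- (skipping this step is what lets Com_stmt_prepare race ahead of Com_stmt_close)
  DEALLOCATE PREPARE order_total;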
          Clustered Pi | CR 269      Cache   Translate Page      

A special guest and creator of PiCluster joins us to discuss the open source Docker cluster management project. PiCluster is a bit of a community hit & seems to strike a great balance compared to other solutions.

We’ll dig into the technologies they use and what it's all built on, what they love working with & thoughts about growing a community.

Plus some of our personal projects that are brewing & more!

          Summer of GitHub | CR 262      Cache   Translate Page      
We discuss the week’s developer hoopla & the beard joins us to share his insights. It's a fun episode with a range of topics, including the recent rush to GitHub by a number of open source projects.
          Hi-Tech Lady Tubes | CR 259      Cache   Translate Page      
The open source model has won, we discuss the impact that’s having on the development industry. Plus Swift gets a little more interesting, & Chris is ready for his lady in a tube!
          4k of Sin | CR 253      Cache   Translate Page      

Habitat promises full automation that travels with the app. Basically, it's a great way to have an extremely lightweight "environment + your app" (hence the name) that has everything you need except the OS or OS-related bits. But is this a layer of abstraction too far for Mike?

Plus the chronicles of one developer's journey of getting started with Open Source, some cool dark matter development Chris spotted at Dell & more!

          Botpocalypse Now | CR 217      Cache   Translate Page      

Special guest Ryan Sipes from Mycroft joins us to discuss his ambitious projects & fulfilling the mission of an open source project.

Plus our thoughts on the impending Bot revolution, the “Internet of APIs” it all depends on & the massive shift that bots could cause in the industry.

We start it all off with a new Coding Challenge!

          Cyber Tests Showed 'Nearly All' New Pentagon Weapons Vulnerable To Attack, GAO Says       Cache   Translate Page      

Passwords that took seconds to guess, or were never changed from their factory settings. Cyber vulnerabilities that were known, but never fixed. Those are two common problems plaguing some of the Department of Defense's newest weapons systems, according to the Government Accountability Office.

The flaws are highlighted in a new GAO report, which found the Pentagon is "just beginning to grapple" with the scale of vulnerabilities in its weapons systems.

Drawing data from cybersecurity tests conducted on Department of Defense weapons systems from 2012 to 2017, the report says that by using "relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected" because of basic security vulnerabilities.

The GAO says the problems were widespread: "DOD testers routinely found mission critical cyber vulnerabilities in nearly all weapon systems that were under development."

When weapons program officials were asked about the weaknesses, the GAO says, they "believed their systems were secure and discounted some test results as unrealistic."

The agency says the report stems from a request from the Senate Armed Services Committee, asking it to review the Pentagon's efforts to secure its weapons systems. The GAO did so by going over data from the Pentagon's own security tests of weapon systems that are under development. It also interviewed officials in charge of cybersecurity, analyzing how the systems are protected and how they respond to attacks.

The stakes are high. As the GAO notes, "DOD plans to spend about $1.66 trillion to develop its current portfolio of major weapon systems." That outlay also comes as the military has increased its use of computerized systems, automation and connectivity.

Despite the steadily growing importance of computers and networks, the GAO says, the Pentagon has only recently made it a priority to ensure the cybersecurity of its weapons systems. It's still determining how to achieve that goal — and at this point, the report states, "DOD does not know the full scale of its weapon system vulnerabilities."

Part of the reason for the ongoing uncertainty, the GAO says, is that the Defense Department's hacking and cyber tests have been "limited in scope and sophistication." While they posed as hackers, for instance, the testers did not have free rein to attack contractors' systems, nor did they have the time to spend months or years to focus on extracting data and gaining control over networks.

Still, the tests cited in the report found "widespread examples of weaknesses in each of the four security objectives that cybersecurity tests normally examine: protect, detect, respond, and recover."

From the GAO:

"One test report indicated that the test team was able to guess an administrator password in nine seconds. Multiple weapon systems used commercial or open source software, but did not change the default password when the software was installed, which allowed test teams to look up the password on the Internet and gain administrator privileges for that software. Multiple test teams reported using free, publicly available information or software downloaded from the Internet to avoid or defeat weapon system security controls."

In several instances, simply scanning the weapons' computer systems caused parts of them to shut down.

"One test had to be stopped due to safety concerns after the test team scanned the system," the GAO says. "This is a basic technique that most attackers would use and requires little knowledge or expertise."

When problems were identified, they were often left unresolved. The GAO cites a test report in which only one of 20 vulnerabilities that were previously found had been addressed. When asked why all of the problems had not been fixed, "program officials said they had identified a solution, but for some reason it had not been implemented. They attributed it to contractor error," the GAO says.

One issue facing the Pentagon, the GAO says, is the loss of key personnel who are lured by lucrative offers to work in the private sector after they've gained cybersecurity experience.

The most capable workers – experts who can find vulnerabilities and detect advanced threats – can earn "above $200,000 to $250,000 a year" in the private sector, the GAO reports, citing a Rand study from 2014. That kind of salary, the agency adds, "greatly exceeds DOD's pay scale."

In a recent hearing on the U.S. military's cyber readiness held by the Senate Armed Services Committee, officials acknowledged intense competition for engineers.

"The department does face some cyberworkforce challenges," said Essye B. Miller, the acting principal deputy and Department of Defense chief information officer. She added, "DOD has seen over 4,000 civilian cyber-related personnel losses across our enterprise each year that we seek to replace due to normal job turnover."

Copyright 2018 NPR. To see more, visit http://www.npr.org/.

          Economics Nobel Laureate Paul Romer Is a Python Programming Convert      Cache   Translate Page      
Economist Paul Romer, a co-winner of the 2018 Nobel Prize in economics, uses the programming language Python for his research, according to Quartz. Romer reportedly tried using Wolfram Mathematica to make his work transparent, but it didn't work, so he converted to a Jupyter notebook instead. From the report:

Romer believes in making research transparent. He argues that openness and clarity about methodology are important for scientific research to gain trust. As Romer explained in an April 2018 blog post, in an effort to make his own work transparent, he tried to use Mathematica to share one of his studies in a way that anyone could explore every detail of his data and methods. It didn't work. He says that Mathematica's owner, Wolfram Research, made it too difficult to share his work in a way that didn't require other people to use the proprietary software, too. Readers also could not see all of the code he used for his equations.

Instead of using Mathematica, Romer discovered that he could use a Jupyter notebook for sharing his research. Jupyter notebooks are web applications that allow programmers and researchers to share documents that include code, charts, equations, and data. Jupyter notebooks allow for code written in dozens of programming languages. For his research, Romer used Python -- the most popular language for data science and statistics. Importantly, unlike notebooks made from Mathematica, Jupyter notebooks are open source, which means that anyone can look at all of the code that created them. This allows for truly transparent research. In a compelling story for The Atlantic, James Somers argued that Jupyter notebooks may replace the traditional research paper typically shared as a PDF.
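
For anyone wanting to try the workflow described above, a local notebook server takes only a couple of commands — a minimal sketch assuming Python 3 and pip are already installed:

  # Install Jupyter into the current Python environment
  python3 -m pip install jupyter

  # Start the notebook server and open the web UI in a browser
  jupyter notebook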


          Living Open and Proud: “Once I came out, everything changed.”      Cache   Translate Page      

At Intel we support employees being their full authentic selves, because that’s when true creativity and innovation is possible. Meet Miki Demeter, a security researcher for Open Source in the Intel Product Assurance and Security (IPAS) group, and an openly transgender employee. When Miki first joined Intel in 2007, she was not out. Starting at [...]

Read More...

The post Living Open and Proud: “Once I came out, everything changed.” appeared first on We Are Intel.



          LXer: 4 best practices for giving open source code feedback      Cache   Translate Page      
Published at LXer: In the previous article I gave you tips for how to receive feedback, especially in the context of your first free and open source project contribution. Now it's time to talk...
          LXer: How to Install and Configure OpenLiteSpeed Server on Ubuntu 18.04 along with MariaDB      Cache   Translate Page      
Published at LXer: OpenLiteSpeed is a lightweight and open source version of the popular LiteSpeed Server. In this tutorial, we will learn about how to Install OpenLiteSpeed Server on Ubuntu 18.04...
          LXer: 6 tips for receiving feedback on your open source contributions      Cache   Translate Page      
Published at LXer: In the free and open source software world, there are few moments as exciting or scary as submitting your first contribution to a project. You've put your work out there and now...
          Episode 336: Equihax | TechSNAP 336      Cache   Translate Page      
Equifax got hacked, some top tips for staying safe & a debate over just who's to blame for vulnerable open source software. Then Google's breaking up with Symantec & we take a little time for Sysadmin 101, this time, ticketing systems.
          Episode 328: LetsEncrypt is a SNAP | TechSNAP 328      Cache   Translate Page      

The recent ‘Devil’s Ivy’ vulnerability has caused quite a rash in the security journalism community. Is it as bad as poison ivy or just a bunch of hyperbole? We discuss. Plus you’ve heard of public key encryption, but what lies beyond? We cover some possible alternatives and the problem of identity.

Then Dan’s got the latest on his Let's Encrypt setup, including a brand new open source tool you too can use!

          Episode 287: Open Source Botnet | TechSNAP 287      Cache   Translate Page      

The source code for a historic botnet has been released; plus the tale of a DNS packet & four ways to hack ATMs.

Plus your hard questions, our answers, a rockin' roundup & more!

          Episode 257: Fixing the Barn Door | TechSNAP 257      Cache   Translate Page      

We’ll tell you about the real world pirates that hacked a shipping company, the open source libraries from Mars Rover found being used in malware & Microsoft’s solution for that after-hack hangover.

Plus great questions, a packed round up & much more!

          Episode 231: Leaky RSA Keys | TechSNAP 231      Cache   Translate Page      

Red Hat highlights how leaky many open source RSA implementations are, Netflix releases Sleepy Puppy & the Mac is definitely under attack.

Plus some quick feedback, a rockin' roundup & much, much more!

          multidasher      Cache   Translate Page      

What is MultiDasher?
MultiDasher is an open source admin dashboard for MultiChain blockchains, based on Drupal 8. Drupal and the Adminimal theme provide a GUI for managing and interacting with permissioned MultiChain blockchains, and the content management capabilities of Drupal allow for a more visual and intuitive experience with chains, and for other extensions to be added to the system.

MultiDasher is developed, tested and designed to be run on Ubuntu 18.04.


          ownCloud Delivers Easy-to-Use Collaboration Solution Through Partnership with ONLYOFFICE      Cache   Translate Page      
Nuremberg, Germany – October 18, 2018 — /BackupReview.info/ — ownCloud, the world's leading open source Enterprise File Sync and Share (EFSS) software, and ONLYOFFICE announce a close future partnership to provide their users with the easiest combination of file sharing and document collaboration. The ONLYOFFICE integration enables access and collaborative editing of Microsoft Office file formats from [...]

           Comment on WordPress Accessibility Team Lead Resigns, Cites Political Complications Related to Gutenberg by Peter Knight       Cache   Translate Page      
Maybe this will be the wake-up call that was needed. Decisions made by core developers in the name of immediate productivity, in pursuit of goals that pervert the project (50% market share), and under arbitrary timelines have tons of unintended consequences. Inclusiveness, open source ideals, democratising development... these are still ideals of the WP project, are they not? There's a big gap between those ideals and the direction WP has been going in. Gutenberg has raised barriers to entry for anyone looking to contribute or work with WordPress code itself, and this seems like a prime example. We're now leaving it up to a narrow minority of people, enlarging the gap between contributors, users, and 3rd-party developers on one hand and core developers on the other.

Nothing signifies this more to me than the fact that Gutenberg, in a departure from standard practice, has been shipping with code that can't be easily debugged on the spot (with only built/minified code on offer) without numerous extra steps. It has made a significant part of the codebase the preserve of people who are both comfortable with React and have the inclination and confidence to set up a development environment. Effectively, the signal this sends is that working on that code is for core developers, or those with their abilities. It alters how people behave when they run into problems.

I can't help but note the gap between what well-meaning core team members say and what is actually the case. When React was chosen, one of the arguments was that it was easy enough to learn and that React being so popular would be an advantage in courting attention from the wider JS community. That certainly doesn't seem to have been the case here, with highly motivated contributors struggling to get productive and struggling to find people who do have the required expertise to help. It was also said that React would only be a layer underneath, with WordPress providing its own more agnostic API layer in between, playing down the need to basically master React to be productive working on core and to develop stuff on top of Gutenberg. This just isn't true, and that is echoed in this story. If you want to be productive, you need to know React, full stop.

It's been said that Gutenberg has been in "stable release" for a good while now, but every release we see the roll-out of new features and major tweaks, while many people routinely encounter bugs that result in lost work and time. It's one thing to break some eggs in pursuit of a better WordPress experience for the whole, but I find it very hard to think this is the way to go about it. WordPress became a success because the barrier to entry was low and invited people in. We now seem to have switched to a model that is preoccupied with having a small number of makers produce an end product that is going to wow everyone, particularly the new users who would be choosing between WordPress and hosted solutions like Wix. In that model, development happens more like in a company and users are more like customers who do not need to know how the sausage is made. The real brilliance happens in the core team, who divine their own solutions and parse what customers say with the understanding that the customer doesn't really know what they are talking about. The response to feedback so far tells the story. You get the same response a company might give. Bad reviews can be treated as noise. It can be labeled as "interesting data" to the CEO.

Or it can be taken as a signal that the coding rockstars just need to work a little harder to get it right. This further grows the mental divide between core developers and the wider community, while many community members feel even more divorced from how WordPress evolves. It's not by design, but the dominant influence of those with high-level JS skills and React familiarity working in core means all other people are effectively obstacles that slow things down. It's happening between all categories of WordPressers. Even experienced people who've done tons of core work or are noted contributors in other ways have adopted an overly deferential position because they don't feel as competent as other JS-centric devs. Contributors with different expertise (like accessibility or privacy) feel like they are at a disadvantage because they are not the first-class citizens in the core development world. One of the worst things about this is that the culture just might continue to change to match this dynamic. Users may start treating WordPress more and more like some product produced by a technical team of workers and less like a co-owned community project in which everyone is a potential contributor. The most bewildering thing is how Matt is helping drive this, by pointing at market share ambitions and being preoccupied with competitors that most in the WP community don't even care about. There's no profound vision unfolding here; if there were, we'd be building the best editor possible for the sake of users, not for the sake of "survival" concerns or growth of market share, and certainly not under the pressure of artificial time limits while deprioritising other important things, like accessibility and the approachability of WordPress in general.
          Microsoft StyleCop is now Open Source      Cache   Translate Page      

Originally posted on: http://blog.crip.ch/archive/2010/04/24/microsoft-stylecop-is-now-open-source.aspx

I have previously talked about Microsoft StyleCop. For those that might not know about it, StyleCop is a source analysis tool (different from the static analysis that FxCop performs) that analyzes the source code directly. As a result, it focuses more on design (or style) issues such as layout, readability, and documentation.

In an interesting move (and one that I am happy to see), Microsoft has decided to make StyleCop an open source project (under the MS-PL license) available on CodePlex. (The project site isn’t up yet, but should be in the next few weeks.) This will give the development community the opportunity to help contribute to the project, expanding support to other languages (currently it only supports C#), adding features, and fixing bugs. Shortly after the CodePlex site is up, StyleCop 4.4 will be released, which includes support for C# 4.0 as well as a large number of bug fixes and other improvements.

 

          Microsoft open-sources Infer.Net machine learning      Cache   Translate Page      

Microsoft has released through open source its Infer.Net cross-platform framework for model-based machine learning.

Infer.Net will become part of the ML.Net machine learning framework for .Net developers, with Infer.Net extending ML.Net for statistical modeling and online learning. Several steps toward integration already have been taken, including the setting up of a repo under the .Net Foundation.

Microsoft cited the applicability of Infer.Net to three use cases:



          Georgine Knicely      Cache   Translate Page      
If you are in need of WordPress support, please take a look here. You can get instant WordPress support for your website and have anything fixed that is not working. Get access to a WordPress help desk to troubleshoot your WordPress issue and get WordPress help fast. WP Fix It has been providing WordPress support since 2009 and is always open, ready to give you WordPress help when you need it. Our owner, Jarrett Gucci, also known in the WordPress support world as Quicksilver, has the superhuman ability to troubleshoot WordPress issues at great speeds. He is a mutant that was born in the darkest depths of open source with superhuman support powers. He is the product of a genetic experiment with the goal of solving WordPress issues as fast as possible. Jarrett Gucci, Quicksilver, is most commonly known as the owner and founder of WP Fix It, where he and his superhero team have serviced over 72,000 WordPress issues since 2009. We have a saying at WP Fix It that goes "We do not just provide WordPress support, WE CHANGE LIVES". There is nothing more exciting than providing WordPress help to those in need and doing this as fast as possible.
          Senior Technology Architect JEE / Open Source and Cloud - Axius Technologies - Plano, TX      Cache   Translate Page      
Axius Technologies was started in the year 2007 and has grown at a rapid rate of 200% year on year. This opportunity is with a multibillion $ Global giant based...
From Axius Technologies - Fri, 05 Oct 2018 03:25:30 GMT - View all Plano, TX jobs
          Grant Evaluation Analyst and Coordinator - Lawrence University of Wisconsin - Appleton, WI      Cache   Translate Page      
Proficiency with web-based survey software (Qualtrics), open source web content management systems (Drupal), and web enabled enterprise reporting systems...
From Lawrence University of Wisconsin - Sat, 06 Oct 2018 06:49:40 GMT - View all Appleton, WI jobs
          Assistant Director of Research Administration - Lawrence University of Wisconsin - Appleton, WI      Cache   Translate Page      
Proficiency with web-based survey software (Qualtrics), open source web content management systems (Drupal), and web enabled enterprise reporting systems...
From Lawrence University of Wisconsin - Fri, 14 Sep 2018 06:52:02 GMT - View all Appleton, WI jobs
          6 tips for receiving feedback on your open source contributions      Cache   Translate Page      

opensource.com: Receiving feedback can be hard


          Best Angular js online training || Angular js online course - iteducationalexperts.com (Hyderabad)      Cache   Translate Page      
AngularJS is an open source framework. Developing dynamic web applications with AngularJS is a strong path to technical competency. We are offering the best AngularJS online training with real-time experts. We are going to start a new batc...
          The best open source software 2018      Cache   Translate Page      
Not all 'free' software is the same. A program's developer might have chosen to distribute it free of charge, but that doesn't necessarily...
          Open Source Runtime Developer- Internship (Markham, ON) - IBM - Markham, ON      Cache   Translate Page      
Through several active collaborative academic research projects with professors and graduate students from a number of Canadian and foreign universities....
From IBM - Tue, 18 Sep 2018 10:49:47 GMT - View all Markham, ON jobs
          Software Systems Engineer - IV      Cache   Translate Page      
NJ-Piscataway, Responsibilities: As a Senior Java Middleware Developer, you will design and implement a highly scalable, highly available application. You will implement a next-gen platform using open source / open systems, which is cloud enabled and based on a microservices architecture. You will design and develop an application to handle ~1 billion transactions per day. Design, architect and create solutions usi
          Penguin: Open Source Office Alternatives to Microsoft UPDATE 1      Cache   Translate Page      
Open-source office software suites for the enterprise that can rival MS Office 2019. An open-source office suite allows users to update to its newer version without a one-time licence fee. Best of the best: LibreOffice, Apache OpenOffice, OnlyOffice, NeoOffice, WPS Office. UPDATE 1: Alert Reader writes in: There is SoftMaker’s FreeOffice (German), …
          Cloud Foundry expands its support for Kubernetes      Cache   Translate Page      
Not too long ago, the Cloud Foundry Foundation was all about Cloud Foundry, the open source platform as a service (PaaS) project that’s now in use by most of the Fortune 500 enterprises. This project is the Cloud Foundry Application Runtime. A year ago, the Foundation also announced the Cloud Foundry Container Runtime that helps businesses […]
          Home Assistant - Open source Python3 home automation      Cache   Translate Page      
Replies: 6875 Last poster: TrailBlazer at 10-10-2018 09:32 Topic is Open JayOne wrote on Wednesday 10 October 2018 @ 09:13: [...] That is rather odd, that you can still import the groups; as of version 0.79 the groups are no longer available in HA (source). In my configuration the gateway's groups disappeared with the upgrade to 0.79. I'll try throwing the whole tradfri component out and then adding it back again.
          Selenium WebDriver is being Way out - Selenium versus Cypress      Cache   Translate Page      
There's a new kid on the block among open source test automation tools, and everybody's talking about how Cypress may be an alternative to Selenium. However, while some have suggested that Cypress signals the end of an era, we're not entirely sure that Selenium is going anywhere anytime soon.
          Admin/Presales Support Specialist      Cache   Translate Page      

Virtual Nigeria is a Red Hat training partner. Red Hat is an American software company and is the world’s most trusted provider of Linux and open source technology. We provide enterprises with an open stack of trusted, high-performing technologies and services made possible by the open source model. Currently the only certified training partner (CTP) in West Africa, we have an unwavering commitment to equip your organisation with the knowledge, skills, people resources and tools to succ

          BlockScout: The first full featured open source Ethereum blockchain explorer      Cache   Translate Page      
Both what you need and what you want at your fingertips.
          Senior Application Developer      Cache   Translate Page      
NY-New York, RESPONSIBILITIES: Kforce has a client in search of a Senior Application Developer in New York, NY. Scope of Services: Perform detailed application design, coding and unit/integration/performance testing; develop and integrate responsive design-based web applications using full-stack Ruby on Rails with other open source technologies such as Ruby, RVM, HTML5, CSS3, JavaScript, LeafletJS, jQuery, Data
          ODUG lands DotNetNuke guru Nik Kalyani as a speaker      Cache   Translate Page      

Originally posted on: http://bobgoedkoop.nl/archive/2010/04/22/odug-lands-dotnetnuke-guru-nik-kalyani-as-a-speaker.aspx

 

If you are in the Orlando, FL area during the first week of May then you should head over to the Orlando DotNetNuke user group meeting.
Nik Kalyani will be the speaker and you will learn a great deal from him.

DotNetNuke Module Development with the MVP Pattern

This session focuses on introducing attendees to the Model-View-Presenter pattern, support for which was recently introduced in the DotNetNuke Core. We'll start with a quick overview of the pattern, compare it to MVC, and then dive right into code. We will start with fundamentals and then develop a full-featured module using this pattern. In order to do justice to the pattern, we will use ASP.NET WebForms controls minimally and implement most of the UI using jQuery plug-ins. Finally, to increase audience participation (both present at the meeting and remote), we will use a hackathon-style model and allow anybody, anywhere to follow along with the presentation and code their own MVP-based solution that they can share online during or after the session. A URL with full instructions for the hackathon will be posted online a few days prior to the meeting.

About Our Speaker

Nik Kalyani is Co-founder and Strategic Advisor for DotNetNuke Corp., the company that manages the DotNetNuke Open Source project. Kalyani is also Founder and CEO of HyperCrunch. He is a technology entrepreneur with over 18 years of experience in the software industry. He is irrationally exuberant about technology, especially if it has anything to do with the Internet. HyperCrunch is his latest startup business that builds on his knowledge and experience from prior startups, two of them venture-funded.

Kalyani is a creative tinkerer perpetually interested in looking around the corner and figuring out new and interesting ways to make the world a better place. An experienced web developer, he finds the business strategy and marketing aspects of the software business more exciting than writing code. When he does create software, his primary expertise is in creating products with compelling user experiences. Kalyani is most proficient with Microsoft technologies, but has no religious fanaticism about them.

Kalyani has a bachelor’s degree in computer science from Western Michigan University. He is a frequent speaker at technology conferences and user group meetings. He lives in Mountain View, California with his wife and daughters. He blogs at http://www.kalyani.com and is @techbubble on Twitter.


          Need ideas for a DotNetNuke Christmas gift?      Cache   Translate Page      

Originally posted on: http://bobgoedkoop.nl/archive/2009/11/26/136554.aspx

 

I thought I would give a list of DotNetNuke Christmas gifts you could possibly buy for a client, co-worker or yourself.

You can start with some Christmas skins that you could put on your site. The folks at All DNN Skins have some skins for you to look at. Snowcovered has an array of holiday skins as well. If you only want a Christmas skin for a period of time and want your skins to change automatically, then you can take a look at PageChameleon, which will do that for you. You can set the special day, the recurrence frequency, skin, and container. PageChameleon will then change the skin and container on the special day and change back to the normal skin after that day. And users of your web site can select their favorite skins.

Subscribe or renew to DNN Creative Magazine. There are many good tutorials listed here and what I like is that the information is very current.

There are several very good DotNetNuke books that you can get. Here’s a list:

Professional DotNetNuke 5: Open Source Web Application Framework for ASP.NET (Wrox Programmer to Programmer) by Shaun Walker, Brian Scarbeau, Darrell Hardy, and Stan Schultes

DotNetNuke 5 User's Guide: Get Your Website Up and Running (Wrox Programmer to Programmer) by Christopher J. Hammond, Patrick Renner, and Shaun Walker

Professional DotNetNuke Module Programming by Mitchel Sellers and Shaun Walker

Beginning DotNetNuke Skinning and Design (Programmer to Programmer) by Andrew Hay and Shaun Walker

You could consider a gift certificate to a co-worker with the words Come With Me to OpenForce 2010. You do have two options to consider for this gift. Will it be in Europe or Las Vegas? Gift certificates could also be used for consulting with top DotNetNuke professionals like Mitch Sellers, Michael Washington, SEO specialist Tom Kraak, or skinning pro Ryan Morgan, to name a few.

Finally, get out of the winter cold and come to an Orlando DotNetNuke User Group meeting. What a great gift to give. We have several good topics and some great speakers too! DotNetNuke Corp. Co-Founder/Technical Fellow, Joe Brinkman will be here in Orlando on January 12th.

Merry Christmas and Happy New Year!

 

Technorati Tags: DotNetNuke

          Kudos ODUG      Cache   Translate Page      

Originally posted on: http://bobgoedkoop.nl/archive/2009/07/01/133199.aspx

Today marks the day when the Orlando DotNetNuke User group officially has 200 members.


It was a year and a half ago that I started this group with about 7 members who came and met on a Saturday morning to discuss issues relating to the best open source web portal, DotNetNuke. Will Strohl was there along with a co-worker. Will has been a great member and is now a great leader of the group.

More recently, Will was in charge of putting together the Day of DotNetNuke held last month at the Microsoft Office in Tampa, FL.

Many members are benefitting from this user group and I wish it continued growth and success.

 

Technorati Tags: ODUG,DotNetNuke

          Free DotNetNuke Webinars      Cache   Translate Page      

Originally posted on: http://bobgoedkoop.nl/archive/2009/05/19/132269.aspx

DotNetNuke Co-Founder Nik Kalyani has added some new webinars to attend. The most popular is a demonstration of DotNetNuke. Here's the webinar information on that one:

DotNetNuke is the leading open source solution for website content management and web application development on Microsoft ASP.NET. Nik will demonstrate how easy it is to create and maintain your website using DotNetNuke. He will discuss the flexible open framework and show how easy it is to add functionality to your website by adding modules. He will also talk about the concept of skinning and how easy it makes customizing the look of a website for novices or website professionals alike.

Tom Kraak, from Seablick Consulting, is doing one on Search Engine Optimization. Tom was recently at the Orlando DotNetNuke User Group meeting and he knows his stuff about search engines and DotNetNuke, so you won't want to miss this webinar.

Join us as we discuss DNN search engine optimization with expert Tom Kraak. We'll start with the basics of SEO, then learn how search engines work and the difference between on-page and off-page SEO. Next we'll dive into a discussion on specific aspects impacting SEO in DNN and what you can do to ensure that your DNN-based website gets high ranking in search results. The session will last for one hour followed by a 30-min Q&A segment.

Nik is doing a good one on Module Development, Skinning, and an Introduction to the Professional Version of DotNetNuke.

Module Development:

In this DNN Fundamentals session, we describe the process of creating a DNN module. Targeted to ASP.NET developers who want to get started with DNN module development, this session will help you understand how modules fit into the overall DNN framework, how they are developed, packaged and deployed. The session will use C# as the development language, but the concepts can also be used for developing in VB.NET or any other CLR-compliant language.

Skin Design:

In this DNN Fundamentals session, we describe the process of creating a DNN skin. Targeted at web designers who want to get started with DNN skin design, this session will help you understand the role of skins in the overall DNN framework, how they are designed, packaged and deployed. We'll also visit some great-looking DNN-based websites and discuss the skins used on these sites.

Professional Edition Overview:

In this DNN Fundamentals session, we review DNN Professional Edition. We'll start with an overview of DNN's capabilities and architecture, then explore features specific to the Professional Edition. We'll explain why Professional Edition is important for businesses that want to use DNN for mission-critical scenarios and answer your questions about this edition of DNN.

Nik is a DotNetNuke pro and does a great job in his webinar training. All these are for free!

Technorati Tags: DotNetNuke, Webinar

          How We Took the Certified Kubernetes Administrator Exam      Cache   Translate Page      
Last year the CNCF (Cloud Native Computing Foundation), the organization that helps Open Source projects such as Kubernetes and Prometheus grow, launched the CKA (Certified Kubernetes Administrator) certification program. At the beginning of this summer we decided to take part in it and obtained the first certificates for our employees. What it is, what it is for, and how it works, we are happy to tell everyone curious […]
          12 Open Source Intelligence (OSINT) Information Gathering Techniques      Cache   Translate Page      

Introduction

Open source intelligence (OSINT) is an intelligence-gathering discipline, famously used by the US Central Intelligence Agency (CIA), that finds and acquires valuable intelligence from publicly available information sources.

All kinds of data can be classified as OSINT data, but not all of it matters from a penetration tester's point of view. As penetration testers, we are more interested in data that we can actually put to use. For example:

Information that expands the attack surface (domains, netblocks, etc.);

Credentials (email addresses, usernames, passwords, API keys, etc.);

Sensitive information (customer details, financial reports, etc.);

Infrastructure details (technology stack, hardware in use, etc.).

12 OSINT information-gathering techniques

1. SSL/TLS certificates usually contain domain names, subdomains, and email addresses. That makes them a treasure trove in an attacker's eyes.

Certificate Transparency (CT) is an open auditing technology, championed by Google, intended to keep the certificate system secure: Chrome will only display the green organization name for EV SSL certificates that were issued with CT support.

The new issuance process requires that a certificate be recorded in publicly verifiable, tamper-proof, append-only logs before a user's web browser will treat it as valid. Because certificates must be recorded in these public CT logs, any interested party can inspect all certificates issued by a certificate authority, which means anyone can publicly fetch and examine the logs. To take advantage of this, I wrote a script that extracts subdomains from the SSL/TLS certificates found in the CT logs for a given domain.

[Figure: script that extracts subdomains from CT logs]
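
As a rough illustration of the same idea, here is a minimal sketch that queries crt.sh, a public Certificate Transparency search front end. This is not the script linked above; the crt.sh endpoint and its JSON output format (in particular the name_value field) are assumptions about that service, so verify them before relying on this.

import json
import urllib.request

def ct_subdomains(domain):
    # crt.sh exposes CT log search results; %25 is a URL-encoded '%' wildcard
    url = "https://crt.sh/?q=%25." + domain + "&output=json"
    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)
    names = set()
    for entry in entries:
        # name_value can hold several names separated by newlines
        for name in entry.get("name_value", "").splitlines():
            names.add(name.lstrip("*."))
    return sorted(names)

if __name__ == "__main__":
    for name in ct_subdomains("icann.org"):
        print(name)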

In addition, I can recommend a tool called SSLScrape. It takes a netblock (CIDR range) as input, queries each IP address for its SSL/TLS certificate, and extracts hostnames from the certificates that come back.

sudo python sslScrape.py TARGET_CIDR

[Figure: SSLScrape output]

2. During a penetration test, WHOIS services are typically used to query information about the registered users of an Internet resource, such as a domain name or IP address (block). WHOIS enumeration is especially effective against organizations with a large Internet footprint.

Some public WHOIS servers support advanced queries, which we can leverage to gather all kinds of information about the target organization.

Taking icann.org as an example, the query below asks the ARIN WHOIS server for all entries that contain an email address at the target domain, then extracts the email addresses:

whois -h whois.arin.net "e @ icann.org" | grep -E -o "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | uniq

[Figure: extracted email addresses]

Query the RADB WHOIS server for all netblocks belonging to an autonomous system number (ASN):

whois -h whois.radb.net -- '-i origin AS111111' | grep -Eo "([0-9.]+){4}/[0-9]+" | uniq

[Figure: netblocks returned by the RADB WHOIS server]

Query the ARIN WHOIS server for all POCs, ASNs, organizations, and end users matching a given keyword:

whois -h whois.arin.net "z wikimedia"

[Figure: all records for the given keyword]

3. Autonomous system (AS) numbers can help us identify the netblocks belonging to an organization, and then discover the services running on the hosts inside those netblocks.

Resolve the IP address of a given domain using dig or host:

dig +short google.com

Get the ASN for a given IP:

curl -s http://ip-api.com/json/IP_ADDRESS | jq -r .as

We can use WHOIS services or an Nmap NSE script to identify all netblocks belonging to an ASN:

nmap --script targets-asn --script-args targets-asn.asn=15169

[Figure: netblocks belonging to the ASN]

4. More and more businesses use cloud storage nowadays, especially object/block storage services such as Amazon S3, DigitalOcean Spaces, and Azure Blob Storage. Over the past few years, however, data breaches caused by misconfigured S3 buckets have occurred again and again.

In our experience, people store all kinds of data on poorly secured third-party services, from plaintext credentials to their private pet photos.

Tools such as Slurp, AWSBucketDump, and Spaces Finder can be used to hunt for publicly accessible object storage instances. Tools like Slurp and Bucket Stream combine Certificate Transparency log data with permutation-based discovery to identify publicly accessible S3 buckets.

[Figure: identifying publicly accessible S3 buckets]
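
The core check these tools perform can be sketched in a few lines: request a candidate bucket name over unauthenticated HTTPS and interpret the status code. The candidate name patterns below are made-up examples, and the response-code interpretation is an assumption about S3's behavior rather than a guarantee.

import urllib.request
import urllib.error

# Hypothetical bucket-name permutations for a target keyword
CANDIDATES = ["{0}", "{0}-backup", "{0}-assets", "{0}-dev", "{0}-prod"]

def check_bucket(name):
    # 200 -> bucket is publicly listable, 403 -> exists but private,
    # 404 -> no such bucket (how S3 answers unauthenticated requests)
    url = "https://{}.s3.amazonaws.com".format(name)
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    keyword = "example"
    for pattern in CANDIDATES:
        bucket = pattern.format(keyword)
        print(bucket, check_bucket(bucket))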

5. The Wayback Machine is an archive of the Internet, a "time machine" for the web: since 1996 it has archived more than 435 billion web pages.

Just enter a website address on its site and you can see what that site looked like at different points in the past. The Wayback CDX Server API makes it easy to search the archive, and waybackurls is a tool for pulling archived data about a target site.

Mining the Wayback Machine archive is very useful for identifying a given domain's subdomains, sensitive directories, sensitive files, and application parameters.

go get github.com/tomnomnom/waybackurls
waybackurls icann.org

[Figure: mining the Wayback Machine archive]
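
For the CDX Server API route, a minimal sketch follows. The query parameters (output=json, fl=original, collapse=urlkey, and the *.domain wildcard) reflect the public CDX documentation as I understand it; treat them as assumptions and check the docs if the call misbehaves.

import json
import urllib.request

def wayback_urls(domain, limit=50):
    # Ask the CDX server for de-duplicated original URLs seen for the domain
    api = ("http://web.archive.org/cdx/search/cdx"
           "?url=*.{}&output=json&fl=original&collapse=urlkey&limit={}"
           ).format(domain, limit)
    with urllib.request.urlopen(api) as resp:
        rows = json.load(resp)
    # The first row is the header row ("original"); the rest are data rows
    return [row[0] for row in rows[1:]]

if __name__ == "__main__":
    for url in wayback_urls("icann.org"):
        print(url)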

6. Common Crawl provides a corpus of more than 5 billion web pages, and anyone can access and analyze the data for free.

Through the Common Crawl API we can search the indexed crawl data for the sites we are interested in. There is a small script (cc.py) that will do the searching for you.

python cc.py -o cc_archive_results_icann.org icann.org

[Figure: Common Crawl search results]

7. Censys is a newer search engine for information about Internet-connected devices. Security professionals can use it to assess the security of their own deployments, while attackers can use it as a powerful tool for early-stage reconnaissance and target information gathering.

Censys divides its data into three dataset types: IPv4 hosts, websites, and SSL/TLS certificates. If you master Censys's search techniques, it can be every bit as powerful as Shodan.

Censys provides an API that we can use to query the datasets. I wrote a Python script that calls the Censys API to look up the SSL/TLS certificates for a given domain and extract the subdomains and email addresses belonging to that domain (click here to get it).

[Figure: Censys certificate search]
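
A bare-bones version of that lookup can be sketched against the Censys v1 search API, which was current when this article was written. The endpoint, the parsed.names field, and the basic-auth scheme are assumptions tied to that API generation; the service has since evolved, so adapt this to whatever the current Censys API expects.

import requests  # third-party: pip install requests

API_URL = "https://censys.io/api/v1/search/certificates"
UID, SECRET = "YOUR_API_ID", "YOUR_API_SECRET"  # from your Censys account

def cert_names(domain):
    # Search certificates mentioning the domain; yield matching names
    query = {"query": "parsed.names: {}".format(domain),
             "fields": ["parsed.names"]}
    resp = requests.post(API_URL, json=query, auth=(UID, SECRET))
    resp.raise_for_status()
    for result in resp.json().get("results", []):
        for name in result.get("parsed.names", []):
            if name.endswith(domain):
                yield name.lstrip("*.")

if __name__ == "__main__":
    print(sorted(set(cert_names("icann.org"))))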

8. Censys collects SSL/TLS certificates from multiple data sources.

One of the techniques it uses is probing all machines in the public IPv4 address space on port 443 and aggregating the SSL/TLS certificates they return. Censys provides a way to correlate an SSL/TLS certificate with the IPv4 hosts that served it.

By correlating SSL/TLS certificates with the IPv4 hosts serving them, you can find the origin servers of domains protected by services such as Cloudflare.

[Figure: correlating collected SSL/TLS certificates with hosts]

9. Source code is a treasure trove of information for a security assessment.

From source code we can obtain a wealth of information, such as credentials, potential vulnerabilities, and infrastructure details. GitHub is a very popular version control and collaboration platform that hosts source code for a great many organizations; GitLab and Bitbucket are very popular as well. In short, don't overlook any possible location.

GitHubCloner can automatically clone all the repositories under a GitHub account:

$ python githubcloner.py --org organization -o /tmp/output

Besides GitHubCloner, you can also try tools such as Gitrob, truffleHog, and git-all-secrets.

10. The Forward DNS dataset is published as part of the Rapid7 Open Data project.

The data comes as gzip-compressed JSON files. Although the dataset is large (20+ GB compressed, 300+ GB uncompressed), we can still parse it to find the subdomains of a given domain. Recently the dataset was split into multiple files according to the DNS record type the data contains.

[Figure: Forward DNS dataset]
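
Because the dump is far too large to load at once, the practical approach is to stream it line by line; each line is one JSON object with name and value keys in the published format. A minimal sketch, assuming a locally downloaded file named fdns_a.json.gz (a placeholder name):

import gzip
import json

def fdns_subdomains(path, domain):
    # Stream the gzip'd dump and collect names under the target domain
    suffix = "." + domain
    found = set()
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip any malformed lines
            name = record.get("name", "")
            if name == domain or name.endswith(suffix):
                found.add(name)
    return sorted(found)

if __name__ == "__main__":
    for name in fdns_subdomains("fdns_a.json.gz", "icann.org"):
        print(name)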

11. Content Security Policy (CSP) defines the Content-Security-Policy HTTP header, which lets a site create a whitelist of trusted content sources and instructs the browser to execute or render resources only from those trusted sources.

In other words, the Content-Security-Policy header hands us a list of sources (domains) that may well interest an attacker. I wrote a simple script to parse the domain names listed in CSP headers (click here to get the script).

[Figure: Content Security Policy (CSP) parsing]
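
The parsing itself is simple enough to sketch; the version below is an illustration rather than the script mentioned above, and it assumes the target answers a HEAD request with its CSP header attached.

import urllib.request
from urllib.parse import urlparse

def csp_domains(url):
    # Fetch the CSP header and pull out host names, skipping keywords
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        csp = resp.headers.get("Content-Security-Policy", "")
    domains = set()
    for directive in csp.split(";"):
        for source in directive.split()[1:]:  # first token is the directive
            if source.startswith("'") or "." not in source:
                continue  # skip 'self', 'none', hashes, and bare schemes
            host = urlparse(source if "//" in source else "//" + source).hostname
            if host:
                domains.add(host)
    return sorted(domains)

if __name__ == "__main__":
    print(csp_domains("https://github.com"))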

12. SPF is a DNS record type designed to prevent email spam; it is implemented as a TXT record.

The essence of an SPF record is to announce to recipients that mail from this domain sent by the IPs on the list is legitimate, not forged spam. It prevents others from spoofing your domain to send mail and is an anti-forgery solution for email.

In short, an SPF record lists all the hosts authorized to send email for a domain. Sometimes, SPF records also leak internal netblocks and domain names.

Services such as SecurityTrails provide historical snapshots of DNS records. By reviewing historical SPF records, we can discover internal netblocks and domain names that a given domain has listed.

[Figure: SecurityTrails historical DNS records]

Here is a script we wrote that helps you extract the netblocks and domain names from a given domain's SPF record. When run with the -a option, the script also returns ASN details (click here to get the script).

python assets_from_spf.py icann.org -a | jq .

[Figure: ASN details returned by the script]
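
The extraction step can be approximated with dnspython, shown below. This is a sketch of the idea, not the assets_from_spf.py script itself (and it leaves out the ASN lookup that the -a option adds).

import dns.resolver  # third-party: pip install dnspython

def spf_assets(domain):
    # Find the SPF TXT record and split out netblocks and domains
    netblocks, domains = [], []
    for answer in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(answer.strings).decode()
        if not txt.startswith("v=spf1"):
            continue
        for term in txt.split()[1:]:
            mechanism, _, value = term.partition(":")
            if mechanism in ("ip4", "ip6"):
                netblocks.append(value)
            elif mechanism in ("include", "a", "mx", "ptr") and value:
                domains.append(value)
    return netblocks, domains

if __name__ == "__main__":
    blocks, names = spf_assets("icann.org")
    print("netblocks:", blocks)
    print("domains:", names)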

Summary

In this article I walked through some of the information-gathering methods we use all the time in our day-to-day security assessments. Broad as the coverage is, there is still plenty to add, so if you have better tips and techniques, feel free to share them with me in the comments!

References:

https://blog.appsecco.com/open-source-intelligence-gathering-101-d2861d4429e3

https://blog.appsecco.com/certificate-transparency-part-3-the-dark-side-9d401809b025

https://blog.appsecco.com/a-penetration-testers-guide-to-sub-domain-enumeration-7d842d5570f6

https://www.certificate-transparency.org

https://www.arin.net/resources/services/whois_guide.html

https://index.commoncrawl.org/

https://www.upguard.com/breaches/cloud-leak-accenture

https://www.0xpatrik.com/censys-guide/

https://www.0xpatrik.com/osint-domains/

https://opendata.rapid7.com/sonar.fdns_v2/

*Source: appsecco; compiled by FreeBuf editor secist. Please credit FreeBuf.COM when reposting.


          Python Application and Platform Developer - Plotly - Montréal, QC      Cache   Translate Page      
Engage our Open Source communities and help take stewardship of our OSS projects. Plotly is hiring Pythonistas!...
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
          Open Source Intelligence (OSINT) Analyst, JBLM WA Job - SAIC - Fort Lewis, WA      Cache   Translate Page      
For information on the benefits SAIC offers, see My SAIC Benefits. Headquartered in Reston, Virginia, SAIC has annual revenues of approximately $4.5 billion....
From SAIC - Fri, 05 Oct 2018 02:47:27 GMT - View all Fort Lewis, WA jobs
          Salary Survey Extra: Deep Focus on Red Hat Certified System Administrator      Cache   Translate Page      

Red Hat is quite possibly the largest tech corporation in the world to focus entirely on open source technology, with Red Hat Enterprise Linux (RHEL) forming the cornerstone of its open source empire. ...

The post Salary Survey Extra: Deep Focus on Red Hat Certified System Administrator appeared first on Certification Magazine.


          Can Linux developers revoke the GPLv2 if they disagree with the Code of Conduct?      Cache   Translate Page      

By IT lawyer Arnoud Engelfriet.

Uproar in the Linux camp: a new Code of Conduct in the project has caused a lot of commotion, including threats that contributions to the kernel will be withdrawn. These were contributed by volunteers under the GPLv2 open source license, but according to parties who disagree with the Code it is now possible to revoke this license again... Read more

The post Can Linux developers revoke the GPLv2 if they disagree with the Code of Conduct? appeared first on Ius Mentis.


          Digital Experience Delivery Manager - West Bend Mutual Insurance - West Bend, WI      Cache   Translate Page      
Understanding of UX; Must be familiar with evolving trends in digital UX, open source software and best practices. Summary of Responsibilities....
From West Bend Mutual Insurance - Mon, 10 Sep 2018 23:23:43 GMT - View all West Bend, WI jobs
          Comment on Sony PS3 Emulators by HYPERTiZ      Cache   Translate Page      
Don't sign up - download it at their official open source website https://rpcs3.net "OPEN-SOURCE PLAYSTATION 3 EMULATOR RPCS3 is an experimental open-source Sony PlayStation 3 emulator and debugger written in C++ for Windows and Linux. RPCS3 began development in May of 2011 by its founders, DH and Hykem."
           Comment on New Part Day: The RISC-V Chip With Built-In Neural Networks by Adam       Cache   Translate Page      
Thanks for your response! 30mW looks good, but on the Kendryte site it is <300mW. Have you done any tests on what the minimal power consumption is without losing memory? Can you disable one core, slow down the clock, or stop it completely? If it is really static RAM, it should theoretically be possible to stop all clocks completely. An open source APU is great news.
          The Indian Express Script | Firstpost Script      Cache   Translate Page      
Our India Times clone is mainly developed for people who want to take their news-portal business online. It provides a brand-new professional news-portal script with advanced features and functionality to enhance the business and give users easier access to the latest trends. This script will also help new entrepreneurs who would like to do online business and provide the latest trending, trusted news service with a reliable and robust script. This India Times script makes it much easier for users to access the site without any technical knowledge, because our script is made user-friendly. The Indian Express Script is designed on an open source PHP platform to make the script as efficient as possible for the user. The script can be customized as global or local to extend its reach worldwide, and a new user can simply register an account with a valid mail id and password to create an authenticated account.
          The evolution of open source contributors: from hobbyists to professionals      Cache   Translate Page      

The most recent episode of Command Line Heroes is all about the process of contributing to open source, from the point of view of a contributor and a maintainer. It's a great episode - don't just take my word for it, you can listen here - but you can only tackle so much in one episode! One thing that the episode sparked for me is just how much the nature of open source contribution has changed since Netscape first took the plunge with Mozilla in 1998.


          Committed to open source game development      Cache   Translate Page      

This is the third post in our series on open source game development.


          Moving Forward with Eclipse GlassFish at Jakarta EE      Cache   Translate Page      

In our last blog on Jakarta EE we discussed that moving Java EE technologies to Jakarta EE was a huge undertaking. We have made enormous progress over the past few months, contributing all of Oracle GlassFish sources to Jakarta EE, as well as contributing sources for the Java EE Technology Compatibility Kits (TCKs).   Completion of this step in the process represents a significant milestone for the Jakarta EE initiative, so we thought we'd take some time to review why we're doing this and what we've done.

We're contributing this software for a number of reasons.   First, we want Jakarta EE to evolve based on community directions, without the perception of control by any single vendor.   Moving Oracle GlassFish and its components into Eclipse projects enables these technologies to evolve in a vendor-neutral manner.   Second, as we establish Jakarta EE as a set of standards and a brand, we want to demonstrate that we have real implementations that deliver on our compatibility goals.   Eclipse GlassFish, built from the Oracle GlassFish sources contributed to the Eclipse Foundation, will not be the reference implementation of Jakarta EE - there will not be a single reference implementation - but it is likely to become one of the first Jakarta EE implementations, and it will be a tangible demonstration that Jakarta EE technologies are compatible with Java EE.   Finally, we intend to establish a well-defined set of compatibility tests for Jakarta EE implementations.   We felt the best place to start was with the Java EE 8 TCKs, which are the latest version of the Java EE compatibility tests that have been used to establish Java EE compatibility for years.   With the contribution of these TCKs, we in Jakarta EE are empowered to immediately define a first set of compatibility tests for Jakarta EE 8, and to evolve these tests as evolution of the standards dictates, by evolving the TCK sources.

To deliver these sources, Oracle first organized all of the Oracle GlassFish 5.0 sources into a set of components that could be mapped to specific Eclipse projects under the umbrella EE4J project.   There are 39 EE4J projects, including recognizable components such as Jersey, JAX-RS, Mojarra, JSON-P, a project for the TCKs, and the Krazo MVC project - a new Jakarta EE technology not included in GlassFish.   For each of these projects we created a set of GitHub repositories at Eclipse to host contributed sources.  Oracle went through a process of reviewing Oracle GlassFish sources and associated licenses to ensure readiness to contribute, and then executed the Eclipse process for contributions, including review of sources by the Eclipse Foundation.  As contributions completed this review, they have been published to the corresponding GitHub repositories.   Per the summary status page we have published contributions to all of the project repositories, except for Krazo and the Platform project which is intended to host Jakarta EE specifications.  So we now have open sources that can be used for Jakarta EE implementations, hosted at the Eclipse Foundation.   Some of these projects have already completed the Eclipse release review process.

We have begun creating Eclipse GlassFish builds using these sources and will be posting downloads of nightly builds.   We will run the Java EE 8 TCKs against these nightly builds with the goal of creating a release of Eclipse GlassFish that is Java EE 8 compatible.  We hope to complete this process per the schedule posted here.  Become a committer for EE4J and reach out to project leads if you'd like to help.

You will have the opportunity to hear more about this at EclipseCon Europe and Oracle Code One.   See the Cloud Native Java track at EclipseCon Europe and the Jakarta EE sessions in the Java Server-Side Development and Microservices track at Oracle Code One.  Hope to see you at one or the other!


          IBM And NVIDIA Collaborate To Expand Open Source Machine Learning Tools For Data Scientists      Cache   Translate Page      


          MRS 065: Nathan Kontny      Cache   Translate Page      

Panel: Charles Max Wood

Guest: Nathan Kontny

This week on My Ruby Story, the panel talks with Nathan Kontny who has been in the Ruby community since 2005. He once was a chemical engineer, and then got into programming after a broken ankle incident; after that...the rest is history! Today, Nathan and Chuck talk about Ruby, how to begin a startup company, Rockstar Coders, balancing life, and much more!

In particular, we dive pretty deep on:

1:05 – Chuck: E365 is the past episode you’ve been featured on.

1:14 – Nathan comments.

1:20 – Chuck.

1:56 – Nathan: Been in the community since 2005. I am a developer and entrepreneur. I do a lot of YouTube and videos nowadays.

2:50 – Chuck: How did you get into this field?

2:55 – Guest: It’s weird. I was a chemical engineer in the past. Back in the day, in 1996, I was learning...

My love for it started through an internship. It was kind of a scary place, dealing with harmful materials: make sure you aren't carrying uranium with you, and wear multiple gas masks at all times. There was an acid leak through someone's shoulder. I didn't love it, but something fortunate happened. I broke my ankle one summer, and when I showed up they made me go to this trailer where I couldn't be near the chemicals. Well, the director had computer problems and asked me to help him. I put in code and out came results. In the chemical industry it was/is: "Maybe the chemicals will react to this chemical in this way...?" It was this dopamine rush for me. After that summer, I wanted to do programming.

7:16 – Chuck: Same thing for me. This will manifest and then boom. I had a friend change to computer major – and this led me to the field.

8:45 – Guest: Yeah, I had a different career shown to me and then I had a choice.

9:02 – Chuck: How did you find Ruby?

9:05 – Guest: I got a job but they wouldn’t let me program because I didn’t have enough experience. I had to teach myself. I taught myself Java – 9 CDs back in the day. I stayed up late, and did anything I could to teach myself. I got promoted in the business and became a Java developer. After 5 years of that I started doing freelance work.

I love Ruby’s language and how simple it was to me. I have flirted with other languages, but I keep coming back to Ruby.

13:00 – Chuck: The same for me, too. Oh, and this makes this so much easier, and it extends so much easier. I have questions about being an entrepreneur. Anyways, you get into Ruby and Rails, you’ve done a bunch of things. What are you proud of and/or interested in with Rails? How do you feel like Rails helps with building things?

14:00 – Guest shares his past projects. 

I was proud of just hosting Rails, because there were so many changes back in the day. I have helped with open source contributions back in 2009. There was a security problem and I discovered this. Nothing happened and I just went in and fixed the bug; an infamous contribution. I am proud of my performance work. I made a plug-in for that, etc. Also, work with Highrise.

17:23 – Chuck: Yep, Highrise people will know. I’ve used Highrise in the past.

17:38 – Nathan: Yeah.

17:50 – Chuck and Nathan go back and forth.

17:58 – Chuck: You’ve done all these different things. So for a start-up what advice would you give? People are doing their own thing – what’s your advice on an incubator, or doing it alone or raising capitol?

18:41 – Nathan: I take a middle road approach. You do what makes sense with your business. What works for you? I would do that. It’s hard to pick-on what incubators could be.

Ownership is everything – once you don’t own it, you lose that control. Don’t lose your equity. I wanted more control over my box. I would be careful raising money – do that as a last resort. Keep your ownership as far as you can. But if you are up against the wall – then go there.

22:29 – Chuck: Now I have 2 jobs: podcasting and developing this course. I guess my issue is how do you find the balance there between your fulltime job and your new fulltime job?

23:01 – Nathan: Yeah, it’s tough. I do, too. Now I am building something and trying to balance between that and Rockstar Coders. Clients have meetings and there are fires. There is no magic to it. I thought bunching your days into clusters would help me with focus, but it’s not good for the business. I don’t think the batch thing is working for me. A little bit on, a little bit off. I think MT on Rockstar. Wednesday I take a half-day. Thursday all start-up, etc. It’s just balance. It can’t be lopsided one way or the other. Just living with my girlfriend and now wife was easy, but having a kid in the evening is tricky. I create nice walls that don’t interfere. I don’t know, that’s it.

25:55 – Chuck: It sounds like they are completely separate. What I am building affects my people at work. I find the balance hard, too.

26:21 – Nathan: It’s also good to have partners who support you.

27:19 – Chuck: Do you start looking for help with marketing, or...?

27:27 – Nathan: Yeah that’s hard, too. Maybe? Some people aren’t in the US and they might be more affordable. My friend found someone in Europe who is awesome and their fees are cheaper. Their cost of living is cheaper than the U.S. There are talented folks out there.

28:50 – Chuck: Yeah, I had help with a guy from Argentina. I am in Utah and he was an hour ahead. So scheduling was easy.

29:27 – Nathan: I have a hard time giving that up, too. It’s hard to hire someone through startup work. Startup work needs to be done quickly, etc. BUT when things solidify then get help.

30:28 – Chuck: They see it as risky proposition. It seems like the cost is getting better so the risk is there.

30:48 – Nathan: There is tons of stops and goes if I look back into my career. In the moment they feel like failures, but really it was just a stepping-stone. It was just a source for good ideas, and writings, and things to talk at podcasters about, etc. I just feel like short-term they feel risky but in the long-term you can really squeeze out value from it. I am having trouble, right now, finding customers, it could be risky, and there might not be a market for this. But I am learning about x, y, and z. Everything is a stepping-stone for me now. I don’t feel like it’s a failure anymore to me.

32:50 – Chuck: What are you doing now?

32:55 – Guest: Rockstar.

3 out of 4 teenagers want to be YouTubers! That’s just crazy and that will keep going. I want to be a part of that. I am making programs so people can make their own videos. That’s what I am fooling around with now.

35:06 – Chuck: Yeah we will have a channel. There is album art. I’m working on it. I will start recording this week.

35:43 – Nathan: It is hard to get traction there. I don’t know why? Maybe video watchers need quicker transitions to keep interested.

36:12 – Chuck: I could supply some theories but I don’t know. I think with YouTube you actually have to watch it. Podcasts are gaining traction because you can go wherever with it.

36:51 – Nathan: Right now commuting can only be an auditory experience. When we get self-driving cars then videos will take off.

37:14 – Chuck: Picks!

37:19 – Advertisement! 

Links:

Sponsors:

Picks:

Charles

Nathan


          System Strengthening/EMIS Officer at Education Development Center (EDC)      Cache   Translate Page      
Education Development Center (EDC) is one of the world's leading nonprofit research and development firms. EDC designs, implements and evaluates programs to improve education, health, and economic opportunity worldwide. Collaborating with both public and private partners, we strive for a world where all people are empowered to live healthy, productive lives. We are recruiting to fill the position below:

Job Title: System Strengthening/EMIS Officer

Location: Sokoto State.

Description

The Northern Education Initiative Plus invites applications from suitably qualified candidates for the post of System Strengthening/EMIS Officer in its Sokoto office. The System Strengthening/EMIS Officer is responsible for establishing and managing all system strengthening/EMIS related activities through tracking project progress, results, indicators and targets. The position requires all system strengthening/EMIS activities to meet program mandates as well as track progress. The System Strengthening/EMIS Officer will provide technical assistance on education information management systems project-wide and will be responsible for overseeing policy-related activities. This position is based in Sokoto, Nigeria, and will report to the Sokoto State Team Leader.

Primary Responsibilities

The System Strengthening/EMIS Officer is expected to:

Support the development of a common web-based EMIS using existing open source software that can be customized by state and LGEAs to meet unique information needs;

Provide ongoing technical support to ministries, departments, and agencies staff to review progress, "trouble shoot" problems, and support use of EMIS data;

Provide support for inter-state training for SMOE, SUBEB planners, analysts, and policy makers focusing on gathering, analyzing and using data from the EMIS, assessments, evaluations and research to make decisions and plan education improvement initiatives;

Support planners and policy makers to use data for quality decision-making;

Provide assistance in setting up systems to gather data for access to quality education at the state level;

Facilitate a process with state governments and other stakeholders to map and review existing education policy frameworks, especially around systems, access, reading instruction and assessment;

Work with FMOE, SMOEs, UBEC, NMEC, SUBEB, SAME, and MORA to provide support to sustain existing policy initiatives and facilitate development and implementation of new policies, particularly in the areas of systems, access, reading instruction, assessment, and accountability;

Facilitate policy review meetings with regard to reading and access to track progress in implementing new policies, identify corrective actions to speed implementation, and develop tools to assess impact;

Improve the capacity of government education officials to develop leadership, managerial, and supervisory skills to effectively implement policies and regulations with regard to access and reading;

Provide coaching on various issues including data analysis and evidence-based decision making, monitoring, and staff mentoring;

Support dissemination of new policies and opportunities for bottom-up feedback at all levels through state-level workshops involving government education officials, community meetings, IT, paper publications, and ongoing media campaigns.

Apply at https://ngcareers.com/job/2018-10/system-strengthening-emis-officer-at-education-development-center-edc-520/


          RE: igNumericEditor percentage      Cache   Translate Page      
Hello Karthik, Thank you for contacting Infragistics support! The 'includeKeys' option was indeed removed in the new editors. In order to meet your scenario, I can suggest two approaches: you may use the igTextEditor instead, in order to handle the input; or, what I think is better, you can update the numeric editor's source code in order to allow the '%' sign. As the editors are included in the open source Ignite UI package, you can get the latest version from GitHub ( github.com/.../infragistics.ui.editors.js ). Inside the '_initialize' internal method of the igNumericEditor you may find the 'numericChars' variable: var numericChars = " 0123456789". You would add the percent sign to the list of digits. I hope this would be suitable for your scenario. Please let me know if you face any trouble implementing this or in case you have any other questions. Thank you once more for using Ignite UI!
          Comment on the post "Google officially unveils the Pixel 3 and Pixel 3 XL" by "Just Me"      Cache   Translate Page      
Can you explain your claim about the match between the software and the hardware, and of course give explanations proving why the match between software and hardware in this device is better than in any other Android device? The drivers for the 845 are released by Qualcomm to all manufacturers through CAF. The other drivers are released by the component makers, and not in a way that gives one company or another a worse driver. The kernel is open source anyway and everyone uses the same kernel; maybe they make a few changes for one optimization or another at the software level. I keep seeing people reply with this argument and get the feeling that whoever makes it simply doesn't understand what he is talking about (or I don't understand what he is talking about). Yes, Google's clean Android is faster than an Android stuffed with bloatware from Samsung, Xiaomi or Huawei, for the very simple reason that it isn't stuffed with bloatware.
          Block me, Amadeus: Falco to perform in CNCF sandbox      Cache   Translate Page      

Sysdig's container runtime security project gets solid foundation

Falco, Sysdig's open source project for monitoring container runtimes, is slated to join the Cloud Native Computing Foundation on Wednesday, becoming the first runtime security tool to be added to the Cloud Native Sandbox project, a home for early-stage projects…


          Domoticz - open source home automation system - part 3      Cache   Translate Page      
Replies: 11335 Last poster: RoTeK70 at 10-10-2018 12:37 Topic is Open vwtune wrote on Wednesday 10 October 2018 @ 11:35: [...] That 'off' status doesn't need to be there, I think? Indeed, I tried that as well, but the 'off' function has no benefit as far as I could see.
          rambox (0.6.1)      Cache   Translate Page      
Free, Open Source and Cross Platform messaging and emailing app that combines common web applications into one. ALL WILL BE DIFFERENT...

          tribler (7.1.0)      Cache   Translate Page      
Tribler is an open source anonymous decentralized BitTorrent client.

          Open Source Runtime Developer- Internship (Markham, ON) - IBM - Markham, ON      Cache   Translate Page      
Through several active collaborative academic research projects with professors and graduate students from a number of Canadian and foreign universities....
From IBM - Tue, 18 Sep 2018 10:49:47 GMT - View all Markham, ON jobs
          (USA-CO-Aurora) Web Developer (includes ~ 25% Profit Sharing)      Cache   Translate Page      
Job Description

Who we are and what you get to do: The FADE program designs state-of-the-art analysis applications, providing game-changing, mission-critical capabilities to our customers. We achieve this through empowering our engineers, maintaining an operational focus, embracing open standards, utilizing rapid development strategies, and minimizing process overhead.

More about your Web Developer role: In this position, you will research, design, and develop computer software systems used for geospatial and temporal data analysis. Software development in this position adheres to the Agile process and includes planning and estimation of task complexity, development of new capabilities and bug fixes, and demonstrations of completed capabilities. Development uses modern web technologies, including JavaScript, AngularJS, HTML5, CSS, Bootstrap, Cesium, and OpenLayers, among others, with the candidate producing high-quality code. Code quality is measured by automated processes and human review, and no code is accepted into the baseline without passing these checkpoints. Success for newly developed capabilities is measured in part using unit, functional, and regression testing strategies. Upon completion, all software changes are submitted for review by at least one other team member, and individuals are responsible for reviewing submitted changes from other members of the team. In addition to new capabilities, candidates for this position participate in the maintenance of deployed capabilities, aiding support personnel in diagnosing, proposing, and implementing fixes to reported problems. Maintaining software requires occasional contact with end users as part of a coordinated support strategy.

+ Perform in all phases of the application lifecycle, including systems engineering and requirements analysis, technical design, system integration, implementation, and deployment.
+ Use of industry-proven design patterns and open source tools is encouraged, along with a dedication to staying educated on current technology trends.
+ Provide software design and development expertise in support of both new application development tasks and maintenance.
+ You will be part of an Agile team where communication skills and the ability to execute within the established development process are paramount to your and the team’s success.

You’ll Bring These Qualifications:

+ Broad technical job knowledge, typically obtained through advanced education.
+ Completion of a selected Codility assessment.
+ Typically a Bachelor’s degree or Associate’s/Vocational/Technical education and a minimum of 5 years of related full-time work experience.
+ MUST BE A US CITIZEN and MUST BE ABLE TO OBTAIN a Top Secret/SCI clearance.
+ Strong JavaScript skills, including use of Angular, Bootstrap, Cesium, OpenLayers, etc.
+ Strong understanding of Linux and/or Windows environments.
+ Familiarity with Agile software development processes and methodologies.
+ Experience designing user interfaces and user experiences (UI/UX).
+ Familiarity with Git source control.
+ Excellent technical acumen; highly self-directed and motivated.

These Qualifications Would Be Nice to Have:

+ OpenGL/WebGL development experience.
+ Experience with Geospatial Information Systems (GIS) and OGC WMS/WFS standards.
+ Continuous integration and automated testing tools such as Jenkins/Hudson, Selenium, JUnit, etc.
+ Experience with build automation tools, including NPM, Nexus, and/or Archiva.
+ Experience with ticketing and Agile process management tools (e.g., JIRA).
+ Experience with UI/UX design through Balsamiq or similar applications.

COMPANY DESCRIPTION: BIT Systems Inc. is a CACI company, which provides all the benefits of a small company with the stability of a multi-billion-dollar company. The commitment of our employees to "Engineering Results" is the catalyst that has propelled BITS to becoming a leader in software development, R&D, sensor development, and signal processing. Our engineering teams are highly adept at solving complex problems with the application of leading-edge technology solutions, empowering our employees to make better mission-critical decisions. We provide the tools you need to succeed. Our work environment is mission-focused but relaxed, professional but casual, and requires individual responsibility within team constructs. We offer an environment where business casual and shorts/flip-flops work side by side successfully; we let you be you.

Additional BITS Information: BITS benefits are quite unique; they equate to roughly 50% of salary on top of your base salary. The first part is a tax-qualified profit-sharing retirement plan to which BITS annually contributes up to 25% of your base salary (not more than applicable IRS limits) to your retirement account under the plan. The second part consists of BITS' Individual Benefit Account Plan (the IBA), which is used for premiums, medical reimbursements, dependent care, education, and BITS' Paid Time Off (PTO) Policy. Both components of the BITS benefit package are paid for by BITS in addition to your base salary and potential performance bonuses. DACOHP

Other CACI Highlights:

+ We’ve been named a Best Place to Work by the Washington Post.
+ Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives.
+ We offer competitive benefits and learning and development opportunities.
+ We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities.
+ For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success.

Job Location: US-Aurora-CO-DENVER

CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          (USA-VA-Chantilly) Deployment Systems Engineer
Job Description

What You’ll Get to Do: CACI is hiring a Deployment Systems Engineer to support our DHS customer. You are a recognized subject matter expert in systems engineering and will assist in the integration, customization, implementation, testing, and deployment of innovative cyber security solutions in support of DHS and according to DHS Systems Engineering Life Cycle (SELC) requirements.

More About the Role:

+ Provision or decommission servers and virtual machines based on pre-defined, as-built documentation
+ Recognized subject matter expert who manages large projects with limited oversight from manager and can coach, review, and delegate work to lower-level technical team members
+ Problems faced are difficult and often complex
+ Influences others regarding policies, practices, and procedures
+ Install, configure, tailor, and deploy tools and sensors based on operational and functional requirements
+ Works to achieve operational targets with major impact on the departmental results
+ Contributes to the development of goals for the department and planning efforts (budgets, operational plans, etc.)
+ May manage large projects or processes that span outside of immediate job area; work is performed with minimal oversight
+ Conduct testing activities following the Test and Evaluation Master Plan (TEMP), including initial user acceptance testing
+ Develop detailed operations manuals and provide training to O&M staff and system administrators
+ Provide technical leadership for RFS implementation from Iteration 7, Discovery/Preparation, to Iteration 12, Handover to Operations
+ Demonstrates innovative influence for Business Groups or projects
+ Responsible for making moderate to significant improvements of systems or products to enhance performance of program/project
+ Responsible for researching, designing, developing, testing, and supporting new systems, applications, and solutions for enterprise-wide cyber systems and networks
+ Validate target environment against solution requirements, identifying Solution Dependencies (Del. 42)
+ Communicates with parties within and outside of own job function; typically has responsibility for communicating with parties external to the organization (e.g., customers, vendors, etc.)
+ Ensure operational and functional requirements are tested during initial rollout
+ Applies engineering disciplines to the design, development, integration, and support of innovative solutions or products that identify, exploit, protect against, or mitigate cyber security vulnerabilities
+ Oversee execution of service assurance activities, including execution of the Test and Evaluation Master Plan (TEMP) and Independent Verification & Validation (IV&V) support
+ Ensure training and document Handover to Operations (CACI and DHS/Component service provider) following ORR and deployment of CDM capabilities
+ Responsible for all Request for Service (RFS) Deployment services and Handover to Operations
+ Responsible for “Request for Service (RFS)” deployment project success and implementing processes and procedures
+ Accountable for successful knowledge transfer and training of all deployed CDM capabilities, for both CACI Tier II/III and DHS/Component service providers

You’ll Bring These Qualifications:

+ 12+ years' experience
+ Familiarity with the NIST Risk Management Framework and Security Controls (NIST SP 800-53 and DHS 4300A/B)
+ Experience or familiarity with troubleshooting networking issues at the hands-on technical level
+ Experience (or familiarity) with cloud and mobile technologies
+ Strong understanding of routing, switching, and Cisco IOS, JunOS, or equivalent; Linux experience helpful
+ Data center experience, including experience (or familiarity) with the concepts of Tech Refresh, Configuration Management, Change Management, Agile, ITIL, COBIT, or PMBoK methodologies
+ Experience (or familiarity) with Microsegmentation, Advanced Data Protection, and Digital Rights Management tools and technologies
+ Experience with the extensive use of virtualization using VMware vSphere, MS Hyper-V, Oracle VirtualBox, oVirt, or another commercial or open source virtualization technology
+ Ability to obtain and maintain an active security clearance
+ Ability to obtain DHS Entrance on Duty

These Qualifications Would Be Nice to Have:

+ Certified Information Systems Security Professional (CISSP) certification
+ Microsoft Certified Solutions Expert (MCSE) or Microsoft Certified IT Professional (MCITP) certification
+ VMware Certified Professional (VCP)
+ Splunk, cloud, or Agile-based certifications, e.g. Splunk User/Power User, Scrum Master
+ Python scripting experience preferred, or Perl, Bash, etc.
+ ITIL certification(s)

What We Can Offer You:

- We’ve been named a Best Place to Work by the Washington Post.
- Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives.
- We offer competitive benefits and learning and development opportunities.
- We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities.
- For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success.

CDMHP

Job Location: US-Chantilly-VA-VIRGINIA SUBURBAN

CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          Microsoft open-sources Infer.NET machine learning

Microsoft has open-sourced Infer.NET, its cross-platform framework for model-based machine learning.

Infer.NET will become part of the ML.NET machine learning framework for .NET developers, with Infer.NET extending ML.NET for statistical modeling and online learning. Several steps toward integration have already been taken, including the setting up of a repo under the .NET Foundation.

Microsoft cited the applicability of Infer.NET to three use cases:

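Infer.NET itself is a C#/.NET library, so the sketch below does not use its API. It is only a small Python illustration of what "model-based" machine learning means (write down a probabilistic model, then infer its hidden parameters from observed data), using the simplest conjugate example: a Beta-Bernoulli update for a coin's bias.

def infer_coin_bias(observations, prior_a=1.0, prior_b=1.0):
    # Model: flips are Bernoulli(theta), with a Beta(prior_a, prior_b)
    # prior on theta. Observing heads (1) and tails (0) yields a Beta
    # posterior whose parameters are simple counts.
    heads = sum(observations)
    tails = len(observations) - heads
    post_a = prior_a + heads
    post_b = prior_b + tails
    mean = post_a / (post_a + post_b)  # posterior mean of the bias
    return post_a, post_b, mean

a, b, mean = infer_coin_bias([1, 1, 0, 1, 1, 0, 1])
print(f"Posterior Beta({a:.0f}, {b:.0f}), mean bias ~ {mean:.2f}")

Infer.NET applies the same pattern to far richer models, compiling factor graphs and running message-passing algorithms such as expectation propagation; that model-first approach is what distinguishes it from black-box learners.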


          (USA-DC-Washington) Senior IT and Compliance Auditor
Position Description: Mathematica Policy Research is dedicated to improving public well-being by bringing the highest standards of quality, objectivity, and excellence to bear on information and analysis for our partners and clients. The company has been at the forefront of design and assessment of public policies and programs since 1968. Our analytic solutions have yielded actionable information to guide decisions in wide-ranging policy areas, from health, education, early childhood, and family support to nutrition, employment, disability, and international development. Join our vibrant and growing IT Services group, and make important contributions to Mathematica’s security capabilities. We are looking for a Senior IT and Compliance Auditor to join our Information Technology Services group in either our Princeton, NJ headquarters or our Washington, D.C. office. The Senior IT and Compliance Auditor will contribute to the success of Mathematica’s security strategy through continuous improvement of Mathematica’s corporate security program, leading IT audit activity across the enterprise, and effectively communicating corporate security policies and procedures to the enterprise.

Position Responsibilities:

* Lead IT audit and testing of security controls for design and effectiveness, and coordinate third-party-initiated security assessments, such as SOC 2 and client-specific assessments.
* Coordinate preparation and collection of test plans, work papers, artifacts, test results, and reports to management and the Board of Directors’ Audit Committee.
* Update corporate security standards and procedures and associated plans, such as incident reporting and response and continuity of operations plans.
* Coordinate documentation of standard security procedures and identify opportunities to improve efficiency of procedures.
* Periodically audit internal security procedures and technology implementations to confirm continued compliance with regulatory standards, agreements, and procedures.
* Manage security incident reporting and response and facilitate reporting to the Security Officer.
* Support responses to client requests for information about compliance with security requirements.
* Keep up to date with on-premises and cloud developments related to security and make appropriate recommendations for improving in-house and cloud-hosted computer and network systems.
* Provide oversight of annual corporate security awareness training and support role-based security training.

Position Requirements:

* Bachelor’s degree in computer programming, management information systems, or another computer-related field preferred. Will consider a combination of education and computer/IT skills developed through progressively responsible positions in technology or consulting roles.
* Minimum four years’ experience in IT audit or another role with significant security controls assessment experience.
* Minimum four years’ experience with security and privacy domains such as FISMA, HIPAA, and HITECH, and frameworks such as COBIT.
* Big 4 or equivalent IT audit experience a plus.
* Certified Information Systems Auditor (CISA), Certified Information Security Manager (CISM), Certified in Risk and Information Systems Control (CRISC), or Certified Information Systems Security Professional (CISSP) preferred.
* Knowledgeable about IT audit best practices.
* Knowledgeable about on-premises and cloud-based networking, .NET, and open source application development.
* Accuracy with work, strong organizational skills, and attention to detail.
* Excellent written and verbal communication skills.
* Ability to deal tactfully and diplomatically with others.
* Excellent project management skills to handle multiple priorities, sometimes simultaneously, under deadline pressure.
* Ability to work independently for long periods of time.
* Willingness to travel to other locations as necessary.

We offer our employees a stimulating, team-oriented work environment, competitive salaries, and a comprehensive benefits package, as well as the advantages of employee ownership. We provide generous paid time off and an on-site fitness center at several locations. As a federal government contractor, all staff working in our central ITS group with access to corporate computer systems are required to successfully undergo a background investigation or security clearance as a condition of employment. To apply, please submit a cover letter, resume, and salary requirements at time of application. Available locations: Princeton, NJ; Washington, DC. We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, basis of disability, or any other federal, state, or local protected class.
          (USA-TX-Austin) Data Scientist
Job Description

Ticom Geomatics, a CACI Company, delivers industry-leading Signals Intelligence and Electronic Warfare (SIGINT/EW) products that enable our nation’s tactical war fighters to effectively utilize networked sensors, assets, and platforms to perform a variety of critical national-security-driven missions. We are looking for talented, passionate engineers, scientists, and developers who are excited about using state-of-the-art technologies to build user-centric products with a profound impact on the US defense and intelligence community. We are seeking to grow our highly capable engineering teams to build the best products in the world. The successful candidate is an individual who is never satisfied with continuing the status quo just because “it’s the way things have always been done”.

What You'll Get to Do: The prime responsibility of the Data Scientist position is to provide support for the design, development, integration, test, and maintenance of CACI’s Artificial Intelligence and Machine Learning product portfolio. This position is based in our Austin, TX office. For those outside of the Austin area, relocation assistance is considered on a case-by-case basis.

Duties and Responsibilities:

- Work within a cross-disciplinary team to develop new machine learning-based software applications. The position is responsible for implementing machine learning algorithms by leveraging open source and custom machine learning tools and techniques.
- Use critical thinking to assess deficiencies in existing machine learning or expert system-based applications and provide recommendations for improvement.
- Generate technical documentation, including software description documents, interface control documents (ICDs), and performance analysis reports.
- Travel to other CONUS locations as required (up to 25%).

You’ll Bring These Qualifications:

- Degree in Computer Science, Statistics, Mathematics, or Electrical & Computer Engineering from an ABET-accredited university, with a B.S. degree and a minimum of 7 years of related experience, or an M.S. degree and 5 years of experience, or a PhD with a minimum of 2 years of academic or industry experience.
- In-depth knowledge and practical experience using a variety of machine learning techniques, including linear regression, logistic regression, neural networks, support vector machines, anomaly detection, natural language processing, and clustering techniques.
- Expert-level knowledge and practical experience with C++, Python, Keras, TensorFlow, PyTorch, Caffe, and Docker.
- Technical experience in the successful design, development, integration, test, and deployment of machine learning-based applications.
- Strong written and verbal communication skills.
- Self-starter who can work with minimal supervision and has good team interaction skills.
- US citizenship is required, along with the ability to obtain a TS/SCI security clearance.

Desired Qualifications:

- Basic understanding of and practical experience with digital signal processing techniques.
- Experience working with big data systems such as Hadoop, Spark, NoSQL, and graph databases.
- Experience working within research and development (R&D) environments.
- Experience working within Agile development teams leveraging DevOps methodology.
- Experience working within cross-functional teams following SCRUM/Sprint-based project execution.
- Experience implementing software within a continuous integration, continuous deployment environment.
- Experience delivering software systems for DoD customers.

What We Can Offer You:

- We’ve been named a Best Place to Work by the Washington Post.
- Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives.
- We offer competitive benefits and learning and development opportunities.
- We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities.
- For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success.

Ticom Geomatics (TGI) is a subsidiary of CACI International, Inc. in Austin, Texas with ~200 employees. We’ve recently been named by the Austin American-Statesman as one of the Top Places to Work in Austin. We are an industry leader in interoperable, mission-ready Time and Frequency Difference of Arrival (T/FDOA) precision geolocation systems and produce a diverse portfolio of Intelligence, Surveillance and Reconnaissance (ISR) products spanning small lightweight sensors, rack-mounted deployments, and cloud-based solutions which are deployed across the world. The commitment of our employees to "Engineering Results" is the catalyst that has propelled TGI to becoming a leader in software development, R&D, sensor development, and signal processing. Our engineering teams are highly adept at solving complex problems with the application of leading-edge technology solutions. Our work environment is highly focused yet casual, with flexible schedules that enable each of our team members to achieve the work-life balance that works for them. We provide a highly competitive benefits package, including a generous 401(k) contribution and Paid Time Off (PTO) policy. See additional positions at: http://careers.caci.com/page/show/TGIJobs

Job Location: US-Austin-TX-AUSTIN

CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          (USA-CO-Aurora) Web Developer (includes ~ 25% Profit Sharing)
Job Description

Who we are and what you get to do: The FADE program designs state-of-the-art analysis applications, providing game-changing, mission-critical capabilities to our customers. We achieve this through empowering our engineers, maintaining an operational focus, embracing open standards, utilizing rapid development strategies, and minimizing process overhead.

More about your role: In this position, you will research, design, and develop computer software systems used for geospatial and temporal data analysis. Software development in this position adheres to the Agile process and includes planning and estimation of task complexity, development of new capabilities and bug fixes, and demonstrations of completed capabilities. Development uses modern web technologies, including JavaScript, AngularJS, HTML5, CSS, Bootstrap, Cesium, and OpenLayers, among others, with the candidate producing high-quality code. Code quality is measured by automated processes and human review, and no code is accepted into the baseline without passing these checkpoints. Success for newly developed capabilities is measured in part using unit, functional, and regression testing strategies. Upon completion, all software changes are submitted for review by at least one other team member, and individuals are responsible for reviewing submitted changes from other members of the team. In addition to new capabilities, candidates for this position participate in the maintenance of deployed capabilities, aiding support personnel in diagnosing, proposing, and implementing fixes to reported problems. Maintaining software requires occasional contact with end users as part of a coordinated support strategy.

+ Perform in all phases of the application lifecycle, including systems engineering and requirements analysis, technical design, system integration, implementation, and deployment.
+ Use of industry-proven design patterns and open source tools is encouraged, along with a dedication to staying educated on current technology trends.
+ Provide software design and development expertise in support of both new application development tasks and maintenance.
+ You will be part of an Agile team where communication skills and the ability to execute within the established development process are paramount to your and the team’s success.

You’ll Bring These Qualifications:

+ Broad technical job knowledge, typically obtained through advanced education.
+ Typically a Bachelor’s degree or Associate’s/Vocational/Technical education and a minimum of 5 years of related full-time work experience.
+ ACTIVE CLEARANCE: TS/SCI.
+ Strong JavaScript skills, including use of Angular, Bootstrap, Cesium, OpenLayers, etc.
+ Strong understanding of Linux and/or Windows environments.
+ Familiarity with Agile software development processes and methodologies.
+ Experience designing user interfaces and user experiences (UI/UX).
+ Familiarity with Git source control.
+ Excellent technical acumen; highly self-directed and motivated.

These Qualifications Would Be Nice to Have:

+ OpenGL/WebGL development experience.
+ Experience with Geospatial Information Systems (GIS) and OGC WMS/WFS standards.
+ Continuous integration and automated testing tools such as Jenkins/Hudson, Selenium, JUnit, etc.
+ Experience with build automation tools, including NPM, Nexus, and/or Archiva.
+ Experience with ticketing and Agile process management tools (e.g., JIRA).
+ Experience with UI/UX design through Balsamiq or similar applications.

COMPANY DESCRIPTION: BIT Systems Inc. is a CACI company, which provides all the benefits of a small company with the stability of a multi-billion-dollar company. The commitment of our employees to "Engineering Results" is the catalyst that has propelled BITS to becoming a leader in software development, R&D, sensor development, and signal processing. Our engineering teams are highly adept at solving complex problems with the application of leading-edge technology solutions, empowering our employees to make better mission-critical decisions. We provide the tools you need to succeed. Our work environment is mission-focused but relaxed, professional but casual, and requires individual responsibility within team constructs. We offer an environment where business casual and shorts/flip-flops work side by side successfully; we let you be you.

Additional BITS Information: BITS benefits are quite unique; they equate to roughly 50% of salary on top of your base salary. The first part is a tax-qualified profit-sharing retirement plan to which BITS annually contributes up to 25% of your base salary (not more than applicable IRS limits) to your retirement account under the plan. The second part consists of BITS' Individual Benefit Account Plan (the IBA), which is used for premiums, medical reimbursements, dependent care, education, and BITS' Paid Time Off (PTO) Policy. Both components of the BITS benefit package are paid for by BITS in addition to your base salary and potential performance bonuses. DACOHP

Other CACI Highlights:

+ We’ve been named a Best Place to Work by the Washington Post.
+ Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives.
+ We offer competitive benefits and learning and development opportunities.
+ We are mission-oriented and ever vigilant in aligning our solutions with the nation’s highest priorities.
+ For over 55 years, the principles of CACI’s unique, character-based culture have been the driving force behind our success.

Job Location: US-Aurora-CO-DENVER

CACI employs a diverse range of talent to create an environment that fuels innovation and fosters continuous improvement and success. At CACI, you will have the opportunity to make an immediate impact by providing information solutions and services in support of national security missions and government transformation for Intelligence, Defense, and Federal Civilian customers. CACI is proud to provide dynamic careers for employees worldwide. CACI is an Equal Opportunity Employer - Females/Minorities/Protected Veterans/Individuals with Disabilities.
          World-Class Contributors Announced for Open Source Healthcare Journal...

GoInvo, a healthcare design studio located in Arlington, Massachusetts, announces the debut issue of Open Source Healthcare Journal, a magazine advocating open-source solutions in health. The first...

(PRWeb October 10, 2018)

Read the full story at https://www.prweb.com/releases/world_class_contributors_announced_for_open_source_healthcare_journal_debut/prweb15825971.htm


          Digital Experience Delivery Manager - West Bend Mutual Insurance - West Bend, WI
Understanding of UX; Must be familiar with evolving trends in digital UX, open source software and best practices. Summary of Responsibilities....
From West Bend Mutual Insurance - Mon, 10 Sep 2018 23:23:43 GMT - View all West Bend, WI jobs
          Browzwear and Vizoo Standardize Digital Materials with New Open...

Browzwear and Vizoo announce the release of a new open source format that will standardize digital materials and serve as a bridge between software applications in the apparel industry.

(PRWeb October 10, 2018)

Read the full story at https://www.prweb.com/releases/browzwear_and_vizoo_standardize_digital_materials_with_new_open_material_format/prweb15824758.htm


          Interesting Stuff - Week 40

Throughout the week, I read a lot of blog-posts, articles, and so forth that have to do with things that interest me:

- data science
- data in general
- distributed computing
- SQL Server
- transactions (both db as well as non db)
- and other “stuff”

This blog-post is the “roundup” of the things that have been most interesting to me, for the week just ending.

.NET

- Update on .NET Core 3.0 and .NET Framework 4.8. A blog post from the .NET engineering team, where they talk about the future of the .NET Framework and .NET Core. I wonder if this post was prompted by recent speculation about the future of the .NET Framework, where there were questions whether .NET Framework 4.8 would be the last version and all development would be concentrated on .NET Core.

Azure

- Enabling real-time data warehousing with Azure SQL Data Warehouse. This post is an announcement of how Striim now fully supports SQL Data Warehouse as a target for Striim for Azure. Striim is a system which enables continuous, non-intrusive, performant ingestion of enterprise data from a variety of sources in real time.

Streaming

- Is Event Streaming the New Big Thing for Finance? An excellent blog post by Ben Stopford where he discusses the use of event streaming in the financial sector.
- Troubleshooting KSQL Part 2: What’s Happening Under the Covers? The second post by Robin Moffatt about debugging KSQL. In this post Robin, as the title says, goes under the covers to figure out what happens with KSQL queries.
- 6 things to consider when defining your Apache Flink cluster size. This post discusses how to plan and calculate a Flink cluster size; in other words, how to define the number of resources you need to run a specific Flink job.

MS Ignite

- Syllabuck: Ignite 2018 Conference. A great list of MS Ignite sessions that Buck Woody found interesting! Now I know what to do in my spare time!

Data Science

- Customized regression model for Airbnb dynamic pricing. This post by Adrian is about a white-paper which details the methods that Airbnb use to suggest prices to listing hosts.
- Cleaning and Preparing Data in python. A post which lists Python methods and functions that help to clean and prepare data.
- The Microsoft Infer.NET machine learning framework goes open source. A blog post from Microsoft Research, in which they announce the open-sourcing of Infer.NET. Is anyone else but me somewhat confused about the various data science frameworks that Microsoft has?
- How to build a Simple Recommender System in Python. A blog post which discusses what a recommender system is and how you can use Python to build one; a minimal sketch of the idea follows after this list.
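To make the recommender item above concrete, here is a minimal item-based sketch. It is not code from the linked post: the toy ratings matrix and function names are invented for illustration, and a real system would add normalisation, implicit feedback, and matrix factorisation on top. It scores a user's unrated items by cosine similarity to the items they have already rated:

import numpy as np

# Toy ratings matrix (made-up data): rows = users, columns = items,
# 0 means "not rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors; 0 if either is all zeros.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, R, top_n=2):
    # Pairwise item-item similarities computed from the rating columns.
    n_items = R.shape[1]
    sims = np.array([[cosine_sim(R[:, i], R[:, j]) for j in range(n_items)]
                     for i in range(n_items)])
    # Score every item by similarity to what the user already rated,
    # then mask out rated items so they are never re-recommended.
    scores = sims @ R[user]
    scores[R[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user=1, R=R))  # item indices suggested for user 1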

What Is Niels Doing (WIND)

That is a good question! As you know, I wrote two blog posts about SQL Server 2019:

- What is New in SQL Server 2019 Public Preview
- SQL Server 2019 for linux in Docker on windows

My plan was to follow up those two posts relatively quickly with a third post on how to run SQL Server Machine Learning Services on SQL Server 2019 on Linux, and to do it inside a Docker container. After having spent some time trying to get it to work (with no luck), I gave up and contacted a couple of people at MS asking for help. The response was that, right now in SQL Server 2019 on Linux CTP 2.0, you cannot do it - bummer! The functionality will be in a future release.
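For anyone wanting to see what the end goal looks like: once ML Services is available on a given build, the call path is SQL Server's documented sp_execute_external_script procedure. The snippet below is only a hedged sketch of driving it from Python over pyodbc; the server address and credentials are placeholders, and it assumes an instance where ML Services is installed and 'external scripts enabled' has been switched on with sp_configure.

import pyodbc

# Placeholder connection details -- adjust server, user, and password.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost,1433;UID=sa;PWD=<YourStrong!Passw0rd>"
)

# Trivial pass-through script: ships a one-row result set into Python
# (InputDataSet) and returns it unchanged (OutputDataSet).
TSQL = """
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'OutputDataSet = InputDataSet',
    @input_data_1 = N'SELECT 1 AS col1'
WITH RESULT SETS ((col1 INT));
"""

for row in conn.cursor().execute(TSQL):
    print(row.col1)  # prints 1 if the external Python runtime executed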

I am now reworking the post I had started on to cover SQL Server Machine Learning Services in an Ubuntu based SQL Server 2019 on Linux . I should be able to publish something within a week or two.

I am also working on the third post in the Install R Packages in SQL Server ML Services series (still). Right now I have no idea when I can publish it - Sorry!

~ Finally

That’s all for this week. I hope you enjoy what I did put together. If you have ideas for what to cover, please comment on this post or ping me.

Blog Feed: To automatically receive more posts like this, please subscribe to my RSS/Atom feed in your feed reader!


          Open Source Runtime Developer - Internship (Markham, ON) - IBM - Markham, ON
Through several active collaborative academic research projects with professors and graduate students from a number of Canadian and foreign universities....
From IBM - Tue, 18 Sep 2018 10:49:47 GMT - View all Markham, ON jobs
          Shotcut 18.10.08 64-bit
Shotcut is an open source, cross-platform video editor with a wonderfully sleek, intuitive interface. With Shotcut you work with numerous panels that can be docked and undocked as you see fit. Viewable information includes details regarding media p...
          Shotcut 18.10.08 32-bit
Shotcut is an open source, cross-platform video editor with a wonderfully sleek, intuitive interface. With Shotcut you work with numerous panels that can be docked and undocked as you see fit. Viewable information includes details regarding media p...
          IBM (IBM) Announces Collaboration with NVIDIA (NVDA) to Expand Open Source Machine …
IBM (NYSE: IBM) today announced that it plans to incorporate the new RAPIDS™ open source software into its enterprise-grade data science platform ...
          IBM and NVIDIA Collaborate to Expand Open Source Machine Learning Tools for Data Scientists
With IBM's vast portfolio of deep learning and machine learning solutions, it is best positioned to bring this open-source technology to data scientists ...
          “It is for publishers to provide Plan S-compliant routes to publication in their journals.”
An interview with Robert-Jan Smits, with preface

Robert-Jan Smits
On 4th September, Robert-Jan Smits, the Open Access Envoy of the European Commission, and Marc Schiltz, the President of Science Europe, announced Plan S, a radical new initiative designed to ensure that by 2020 all research papers arising from funding provided by 11 (now 13) European funders are made open access immediately on publication.

The plan is based on 10 clear principles but short on detail as to how those principles will be implemented. If successful, however, Plan S could have a dramatic impact on both publishers and researchers. For instance, reported Nature, as written the 10 principles could see European researchers barred from publishing in 85% of journals, including influential titles such as Nature and Science.

The ‘S’ in Plan S can stand for ‘science, speed, solution, shock’, Smits told Nature. Shock would certainly seem to be an appropriate word, and shock was surely what publishers felt when Plan S was announced. After all, they have successfully managed to delay and subvert open access for some 25 years now. They perhaps assumed they could continue doing so. But if successful, Plan S could bring this dilly-dallying to a dramatic end. Alternatively, Plan S could fail to achieve its objectives.

That publishers do not like Plan S is, of course, no surprise. That was doubtless what the architects of the initiative anticipated. What they perhaps did not anticipate was that they would face pushback from researchers. Yet just a week after the announcement nine researchers published a critical article entitled, A Response to Plan-S from Academic Researchers: Unethical, Too Risky! This appears to have shocked the Plan S architects as thoroughly as their plan must have shocked publishers. Smits immediately summoned two of the authors to Brussels, and Schiltz took to Twitter to suggest that the article was “slanderous”.

Since then there has been a torrent of commentary and critique of Plan S, and the initiative is proving uncomfortably divisive. While publishers (and at least some researchers) are appalled by Plan S, open access advocates, as could be expected, have welcomed the initiative.

Disappointed


The problem right now, however, is that there is too little information on how Plan S would work in practice. This means it is nigh impossible for informed commentary to take place, and we are seeing frequent calls for clarification.

On 12th September, therefore, I invited Smits to do an interview with me, in the hope that he could provide that clarification. I suggested we do this either by telephone or email. Smits agreed and said he would prefer to do it by email. So, I emailed him a list of questions and waited for his replies. These arrived on Monday this week.  

I have to confess to being disappointed on reading them. From my perspective, they do not provide the clarification I was hoping they would. I accept that my questions are long and somewhat sceptical (some might say tendentious) but, as I see it, they offered Smits a chance to dispel some of the confusion around Plan S and to demonstrate that my scepticism is misplaced. I don’t feel he did either.

When I shared my disappointment with Smits he expressed surprise and replied, “I looked once more to the replies I sent to your questions and remain of the opinion that they address the issues you put forward.” He added that he is getting over 300 emails a day and tries to reply to each one – signalling that time is in short supply for him. 

I am, of course, entirely sympathetic to the challenge Smits faces with Plan S, and the huge demand it must be making on his time. Nevertheless, there is a great deal of confusion out there, and on reading his replies I could not help but feel he had missed an opportunity to clarify matters.

This seemed the more striking given his assurance to me that, “my door is open to anyone who wants to drop by to discuss Open Access and Plan S”. He added that he prefers a ‘face to face’ approach to using email. Perhaps it might have helped, I thought, if he had taken up my offer of a telephone interview – or even a face to face on Skype.

Meat in the sandwich


But maybe I was missing the point. What we learn from what Smits says below perhaps is something that had not occurred to me, and perhaps has not occurred to the many commentators on social media calling for clarification. In two of his answers, Smits indicates that the ball is now in the publishers’ court. As he puts it the first time he says it, “We expect publishers to come forward with offerings which comply with the principles outlined in Plan S”. Later he says, “It is for publishers to provide Plan S-compliant routes to publication in their journals so that researchers can choose where to publish when accepting funding from those who sign Plan S.”

This would seem to imply that Smits and the members of cOAlition S (the group of funders who have signed up to Plan S) are not currently especially concerned about the details of how the 10 principles are implemented, just that they will be implemented. In other words, cOAlition S is saying to publishers: “These are the conditions that we plan to insist on when funding research in Europe from 2020. So long as you can meet these conditions the details of how you do so are not of great interest to us right now. It is for you publishers to tell us how you plan to implement the principles.”

Presumably, it is only after publishers have done this that cOAlition S expects to publish the implementation plan that Smits talks about below. And while he says he is happy to discuss Plan S with “all interested groups”, Smits insists that cOAlition S will “stand by the principles set out in Plan S”. We might of course wonder what benefit there is in discussions if cOAlition S is immoveable on the 10 principles.

That said, at first blush, this might seem like a good approach. In fact, I have myself on a number of occasions argued that publishers should not be treated as stakeholders, but as service providers. As such, the research community should be telling them what services it would like them to provide, and then inviting them to quote for providing them. Only if the conditions and the price are acceptable should the research community then proceed to contract any particular publisher to provide the service tendered for.

The problem with the way in which Plan S seems intent on doing this, however, is two-fold. First, regardless of the talk of (unspecified) APC caps, it is not clear how these can be effectively applied – particularly if the rest of the world does not follow Europe’s example. As a result, the research community may find it has to shell out even more money for publishers’ services, regardless of any caps. And what happens if publishers decide not to engage with Plan S in any meaningful way?

Second (and more importantly), it could be argued that European funders are not part of the research community. They are its paymasters. And in the neoliberal environment that universities now have to operate in, researchers’ interests are not fully aligned with those of research funders. This means that in its battle with publishers, cOAlition S may end up punishing European researchers as brutally as, or more brutally than, it punishes the real target – publishers. Researchers will be the meat in the sandwich of Plan S, collateral damage in a war that the vast majority of them never signed up to, or wanted to see take place. There are also very real concerns that Plan S will wreak havoc on learned societies.

It is for these reasons, I believe, that in opposing Plan S researchers often do so in terms of a threat to academic freedom.

That OA advocates tend to ridicule and deride those who fret over academic freedom suggests to me that they have become so focused on the ends of OA that they are blind to the damage that the means might inflict on their peers, and on many societies.

Highly unlikely


Finally, I want to answer one of the questions that Smits puts back to me below. He asks, “Why do you keep on saying that Plan S is about Gold Open Access? Do read the 10 principles again and you will notice that the plan does not use Gold or Green terminology. The plan welcomes self-archiving and repositories.”

My answer is this: It doesn’t matter whether the terms Green or Gold are used. Principle 1 of Plan S states, “Authors [must] retain copyright of their publication with no restrictions. All publications must be published under an open license, preferably the Creative Commons Attribution Licence CC BY.” And cOAlition S insists that this applies to both gold and green OA, and in all cases OA must be immediate.

It seems highly unlikely to me that for-profit legacy publishers will offer green OA on those terms. Instead, they will focus on gold OA and seek to extract as much money as possible from the research community, caps or no caps, even as many non-profit learned societies face an existential financial threat.

Importantly, as Peter Suber has pointed out, there is no acknowledgement in Plan S that repositories can provide OA. This suggests they are seen as archival tools alone.

The good news is that when I expressed my disappointment with his answers Smits said he would be happy to meet with me when he is in London next month. If that meeting takes place perhaps I will be able to get a clearer view of how Smits sees Plan S achieving its objectives, and why he routinely pooh-poohs any talk of academic freedom in relation to his initiative.


The interview begins …


RP: As I understand it, the background to Plan S is that during the Dutch Presidency of the Council of the European Union in 2016 the EU issued the Amsterdam Call for Action on Open Science. (At the time you were Director-General for Research and Innovation at the EU). This called for “immediate” open access to all scientific papers by 2020. But by the end of 2017 it was clear that this goal was not going to be achieved unless more drastic action was taken, not least because the Amsterdam Call offered no specific strategy for achieving OA by 2020. Earlier this year, therefore, you were appointed Special Adviser on Open Access and Innovation and charged with making sure it happened. Plan S is your solution and it consists of 10 principles, the key one being that “After 1 January 2020 scientific publications on the results from research funded by public grants provided by national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.” In fact, Plan S is a list of principles, not a detailed action plan. That is, it is not a mandate but what OA advocate Peter Suber describes as “a plan for a mandate”. I realise that some transition arrangements are envisaged, but how realistic is it to expect that in 15 months’ time all European research will be made immediately available on an OA basis, particularly if legacy publishers prove reluctant to co-operate in a meaningful way?

R-J S: The 2016 Amsterdam Call set the 2020 target and, since little progress is being made, Plan S provides the specific strategy which you mention to achieve that target. However, Plan S cannot and will not override contracts which are in place before 1/1/20 and of course, we are willing to respect short-term transitional arrangements and on-going discussions on such arrangements.

RP: I wonder if it might be more accurate to say that Plan S is intended to frighten legacy publishers into moving more quickly towards OA, with the end game (presumably) of having them flip all their subscription journals to open access (as some have proposed), and reducing their prices in the process? Either way, do you expect legacy publishers to accept all the principles outlined in Plan S (and incorporated into cOAlition S under the aegis of Science Europe?). As things stand, it would seem that the International Association of STM Publishers does not accept the proposal that hybrid OA be outlawed. And I do not think it expects publishers to reduce their prices. As STM puts it, “in the absence of adequate funding a transition to Open Access as envisaged by cOAlition S is unlikely to happen in practice.”

R-J S: We expect publishers to come forward with offerings which comply with the principles outlined in Plan S. We make no statement about whether publishers are ‘legacy’ or whether they are new providers with new platforms.

RP: I assume you anticipated there would be pushback from publishers, but perhaps you did not expect pushback from researchers? Either way, we are seeing pushback. Last week, for instance, one of the leading open access advocates, Stevan Harnad, called for Plan S (and all OA policies) to drop any requirement for gold OA publishing and focus exclusively on mandating green OA (self-archiving). This would seem to envisage Plan S being reversed since it is currently almost exclusively focused on gold OA. Three days earlier, eight researchers published a highly critical article about Plan S, denouncing it as unethical and too risky. I assume such criticism is of concern, as I am told you called the lead author of the article, Lynn Kamerlin, and invited her to Brussels to discuss Plan S. How confident are you that you can address her and other researchers’ concerns? They clearly feel they are becoming the meat in the sandwich in the struggle between research funders and legacy publishers. And might we see resistance amongst researchers grow as the implications of Plan S are more widely publicised and become clearer? Combined with publisher resistance, might this necessitate significantly watering down or even abandoning key Plan S principles?

R-J S: You probably have seen the many positive reactions from researchers and representatives of the science community to Plan S. Of course, there are also critical voices. I have indeed invited Britt [J Britt Holbrook, one of the co-authors of the above paper] and Lynn for a meeting to see why it is we differ of opinion. In developing the implementation plans there will, of course, be discussion with all interested groups. We will, however, stand by the principles set out in Plan S.

The role of repositories


RP: One of the concerns being expressed is that Plan S portrays green OA and repositories as having little more than an archival role, not as providers of OA. This is one of the concerns expressed by de facto leader of the OA movement Peter Suber, who has written of Plan S, “There's no acknowledgement of their [repositories and green OA] importance for OA itself! This is the same mistake made by the Finch Group in 2012, which was inexcusable even at the time, and should never be repeated by informed, high-level policy-makers.” I assume, however, that the reality is that green OA inevitably conflicts with the principles of Plan S, which calls, amongst other things, for papers to be made OA “immediately” and with a CC BY licence attached. I cannot envisage many legacy publishers agreeing to this. So I guess the point is that if green OA cannot conform to the principles of Plan S then it cannot be viewed as providing open access, and that is presumably why Plan S does not view it as such. Would that be right? If not, how can this circle be squared?

R-J S: Plan S does not talk about Gold, Green, Diamond or Platinum Open Access. Plan S is entirely supportive of pre-prints and repositories and welcomes those journals where the final publication is published without paywalls and no embargo, being also published under a CC-BY or similar licence.

RP: Another concern that has been raised is that Plan S is contrary to long-standing principles of academic freedom. For instance, since Plan S says that hybrid OA is not compliant with its principles European researchers will be banned from publishing in a great many journals that they currently publish in and love. As Nature put it, “as written, Plan S would bar researchers from publishing in 85% of journals, including influential titles such as Nature and Science.” This concern about academic freedom might seem a genuine grievance in light of a 1997 UNESCO document that states, “higher-education teaching personnel should be free to publish the results of research and scholarship in books, journals and databases of their own choice”.

Concern about academic freedom is also being cited as one of the reasons why some countries (notably Germany) have not signed up to Plan S. Indeed, researchers at the University of Konstanz have taken their university to court for simply trying to mandate them to self-archive their papers in their institutional repository, which might seem far less of an imposition than telling them that they are henceforth barred from publishing in 85% of the journals they currently publish in.

Some also argue that requiring researchers to publish their work with a CC BY licence attached raises issues of academic freedom.

On the other hand, in the Plan S document signed by Science Europe President Marc Schiltz it says, “We recognise that researchers need to be given a maximum of freedom to choose the proper venue for publishing their results and that in some jurisdictions this freedom may be covered by a legal or constitutional protection.”

How do you respond to the claims that Plan S threatens to infringe researchers’ academic freedom? And how does Schiltz’s statement about freedom to choose fit with the principles of Plan S? Once again, how can this square be circled?

R-J S: Strong mandates have been in place from many funders in different countries for many years so the principle of funder mandates in the research system is well-established. See what Peter Suber writes about this. It is for publishers to provide Plan S-compliant routes to publication in their journals so that researchers can choose where to publish when accepting funding from those who sign Plan S.

RP: Plan S also argues that current researcher evaluation systems need to be changed so that publishing in prestigious legacy journals is no longer encouraged. Might it not have been better to change evaluation systems before banning publication in subscription journals? Would this not have been fairer than suddenly telling researchers to stop publishing in 85% of the journals that their universities are still incentivising them to publish in, as they have been doing for many decades?

R-J S: The San Francisco Declaration on Research Assessment and the Leiden Manifesto both pre-date the Amsterdam Call for Action. There is nothing ‘suddenly’ happening. The fact that for all those years not much action was taken is exactly the reason why Plan S was developed.

Costs


RP: Another reason why some European countries appear to be dragging their heels over signing up to Plan S is that they assume it will increase the costs of publishing rather than reduce them. The DFG, for instance, says that “it surmises that open access mandates can lead to increased article processing charges (APC), an effect that the DFG strives to minimise.” I understand Plan S envisages APCs being capped, but what in your view is a reasonable APC? And how would a cap work in practice? (Presumably, for instance, universities and researchers could decide themselves to pay more than the cap in order to have their papers published, and indeed to publish them in expensive hybrid journals produced by Springer, Wiley, Taylor & Francis, and Elsevier?).

R-J S: Caps on APCs will be considered as part of the implementation of Plan S. Publications arising from our funding must be Plan S-compliant.

RP: Plan S says that it will support the creation of open access journals or platforms and open access infrastructures where necessary. Another concern raised by Suber is that this does not include a commitment to creating and supporting open infrastructure. I.e., he says, “platforms running on open-source software, under open standards, with open APIs for interoperability, preferably owned or hosted by non-profit organizations.” As such, Suber says, Plan S will not prevent open infrastructure being appropriated by legacy publishers in the way that SSRN and bepress were acquired. As you will know, a number of funders (Wellcome, Gates etc.) have created their own publishing platforms but outsourced fulfilment to the for-profit company F1000. The F1000 platform, I believe, is proprietary, and details of what it charges funders are secret, which does not seem to fit with the ethos of open science. The EU also plans to create its own publishing platform. I wonder, therefore, if the reference to platforms and OA infrastructures in Plan S is essentially a reference to the planned EU Open Research Publishing Platform? As I understand it, this will not necessarily be open source, and some believe that the exacting requirements specified in the tender document mean that it could only be operated by a large legacy publisher or similar. Can you comment on these points?

R-J S: Plan S sets out the principles for an open access funding system. It says nothing about the ownership of journals and platforms. It does not mention any particular platform.

 

The same trap?


RP: Open access is a hugely complex topic. I believe you suggested to Kamerlin that in order to better understand the issues she should watch the movie Paywall. However, I wonder if the issues are more complicated than either Paywall or Plan S assumes? The movie, we could note, was made by an OA advocate, funded by an OA advocacy organisation, and consists of interviews primarily with other OA advocates. It includes interviews with just two legacy publishers. As such, as I pointed out in the review I did for Nature, Paywall is an advocacy film, not one intended to explore the complexities of open access. At no point, for instance, does the movie mention APCs or explain how OA can be funded. As such, it tells us what OA advocates want, but fails to explain how this can be achieved financially. The OA movement has a history of making declarations, issuing calls, and offering up what in the movie John Wilbanks calls “witness and testimony” but it has consistently failed to come up with financially feasible solutions. Is there a danger that Plan S has fallen into the same trap?

R-J S: Plan S does not mention the Paywall movie. All of the parties involved in Plan S will have their own views about the publishing industry but Plan S states what we have collectively signed up to.

RP: How likely do you think it is that all European countries will sign up to Plan S? Neither Germany nor Switzerland has yet done so, and researchers in Norway are asking whether the likely consequences of the proposed changes are proportionate to what can realistically be achieved in such a short period of time. Meanwhile, those European countries with limited research budgets will surely be unhappy to commit to paying for gold OA. I understand you also hope to get the US to buy into the Plan, which would seem to be an even greater challenge since the US has historically preferred green OA and it does not have the same centralised system as Europe. As Roger Schonfeld has put it, “[T]he higher education sector in most of North America is very different from Europe, in one key element: North America is as decentralized as Europe is, at a national level, centrally coordinated.” The challenge here surely is that Plan S can only achieve its objectives if the whole world signs up to it, or at least all those countries with large research budgets? Unless they do, for instance, Europe will find it is having to pay for gold OA plus continue to pay subscriptions in order to access the research produced in countries that do not sign up. Would you agree? How hopeful are you that you will manage to sign up a sufficient number of countries to make Plan S workable?

R-J S: Why do you keep on saying that Plan S is about Gold Open Access? Do read the 10 principles again and you will notice that the plan does not use Gold or Green terminology. The plan welcomes self-archiving and repositories. I am confident that Plan S is workable.

The global South


RP: On the other hand, if Plan S does succeed it will further marginalise and disadvantage those in the global South. If all the world’s subscription journals flipped to gold OA, for instance, where today researchers in the global South are not able to afford to access the world’s research, in future they would be unable to afford to publish their own research – which might seem a worse position to be in. Does Plan S have a solution to this problem? Will it provide money to enable those in the global South to publish their research? I am not aware that this issue is discussed in the various Plan S documents.

R-J S: Getting rid of paywalls will help researchers in the global South to access publicly funded research without charge. This huge advantage cannot be denied. Furthermore, there are many routes to publishing research available to all countries including no-embargo open access.

RP: It seems to me that one thing most people agree on today is that legacy publishers have become too powerful and have acquired indefensible monopoly powers. Is it not time to hand the matter over to the EU Commissioner for Competition Margrethe Vestager with a view to, say, breaking up these monsters?

R-J S: I still am optimistic that through Plan S, we can accelerate the transition to full and immediate Open Access in partnership, including in partnership with the publishers you are referring to.

RP: I understand that on 1st March you will be moving on, to become President of TU-Eindhoven. Would it not be better to stay with the project until it is clear that it has been a success? 

R-J S: Plan S is carried by a consortium of funders under the umbrella of Science Europe. It is not the work of one person. Furthermore, I am far from being indispensable.

          Supply Chain Risk Intelligence Analyst - Resilience360 - DHL Customer Solutions & Innovation Americas - Plantation, FL      Cache   Translate Page      
We are looking for a Supply Chain Risk Intelligence Analyst who will assess supply chain risk intelligence needs, identify sources for valid open source risk...
From DHL - Wed, 10 Oct 2018 05:12:33 GMT - View all Plantation, FL jobs
          Linux Day Milano 2018 in Milan, Italy (2018-10-27)      Cache   Translate Page      
The FSFE Local Group Milano will celebrate the Italian Linux Day event in Milan together with local associations involved in Free Software and open source culture. The topic of Linux Day this year is the world wide web, and the FSFE will be present with two talks. Stefano Costa will talk about "Libera il tuo router!" (Free your router) and Lorenzo Losa from Wikimedia Italia about "Discussion and implications of the new EU copyright directive". The event will be located at University Bicocca Milano, Building U7, with free entry and no registration needed. For more info and questions about logistics, please refer to the event website. Besides the talks, we will also be around at the event and look forward to interesting chats and questions about Free Software. Join us!
          JAVA AND OPEN SOURCE DEVELOPER KONTICH (Antwerp) - Xplore Group - Kontich      Cache   Translate Page      
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Fri, 28 Sep 2018 08:41:49 GMT - View all jobs in Kontich
          JAVA AND OPEN SOURCE DEVELOPER HASSELT - Xplore Group - Kontich      Cache   Translate Page      
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Fri, 28 Sep 2018 08:41:49 GMT - View all jobs in Kontich
          Deploying a Python App to Oracle ACCS by Blaine Carter      Cache   Translate Page      

Let's take a look at how to deploy a Python app to Oracle's Application Container Cloud Service by way of an example!

ACCS provides a pre-configured platform (Platform as a Service) where you can quickly deploy and host your applications. For many of today’s applications, the hosting server is just that, a place to host the application. Most of the time the only thing an application needs from the server is to have it support the application’s programming language and to provide in and out connections through ports. Using a PaaS such as ACCS frees you from all of the extra work of configuring and maintaining a server and allows you to focus on perfecting your application.

ACCS supports multiple languages but for this post, I’ll focus on Python.

DinoDate

For the examples, I will be deploying the DinoDate application. DinoDate was written as an open source learning application that can be used to demonstrate database concepts with multiple programming languages. It currently has both Python and NodeJS mid-tier applications and is backed by an Oracle Database.

The following instructions show how to deploy the Python version of DinoDate to an Oracle ACCS instance. Read the complete article here.
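
As a flavor of what a deployment involves before you read the full article: applications uploaded to ACCS are accompanied by a small manifest.json describing the runtime and the start command. The sketch below is an illustrative assumption (the runtime version, file name and notes text are mine, not taken from the article):

    {
      "runtime": {
        "majorVersion": "3.6"
      },
      "command": "python application.py",
      "notes": "Illustrative manifest for a Python app such as DinoDate; adjust the runtime version and start command to match your app"
    }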



          JAVA AND OPEN SOURCE DEVELOPER MERELBEKE (GHENT) - Xplore Group - Kontich      Cache   Translate Page      
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Fri, 28 Sep 2018 08:41:49 GMT - View all jobs in Kontich
          Start contributing to open source: Pro tips from Felix      Cache   Translate Page      

Two weeks ago, Joost shared his vision on open source. Today, we introduce yet another WordPress fanatic: Felix Arntz. As a freelancer, Felix works part-time for Yoast. Half of that time he spends working on, and consulting about, the SEO plugin; the other half Yoast sponsors him to contribute to WordPress core, mainly focusing on the multisite functionality. Learn what open source means to him and get pro tips on how to start contributing to an open source project yourself!

Q. Why is open source so important to you? “For me personally, there are tons of reasons other than that I simply believe in it. Open source has given me work, new friends, the chance to travel the world, the trust and resources to improve as a developer and as a person. It has given me a passion, and every day when I go to work (which means I get up out of my bed and turn on the computer, in whichever country that may be), I’m looking forward to it.”
Q. In what way do you contribute to open source projects?

“I have been active in contributing to open source through the WordPress ecosystem for over five years now. It really escalated when I started contributing to WordPress core, which was in mid-2015 when I went to my first WordCamp. I am a core committer and regularly involved in WordPress core development, with weekly meetings, discussing tickets, writing and reviewing patches. I also quite regularly publish open source plugins or libraries, and even small code snippets that have helped me, but might also help someone else. That's the beauty of open source: in some ways you're crowdsourcing your development. Occasionally I also contribute to open source projects that I'm interested in outside of the WordPress bubble, to get some knowledge about other projects and how they are organized.”

Q. Who is your open source hero?

“Phew, that’s a tough one. There surely are many folks I admire. For a long time, Joost de Valk was my biggest idol, no kidding! He achieved so much from initially just writing a simple, good plugin, which is amazing and Yoast is now able to influence the ecosystem in so many great ways. By now I’ve personally come to realize that running my own company is not something I strive for because I prefer to focus on development full-time. I guess we’re all different in our visions, and I am beyond grateful to be a part of the team that he, and the board, have created and shaped.

More recently, I’d say Alain Schlesser and Jeremy Felt are two people I want to highlight. I have learned a lot from them about development and open source, and they have enabled me to do great things around open source. I’m happy to call them friends, as much as the distance permits it, and to collaborate with them in the respective WordPress core areas, and I hope that through our discussions and with my contributions I am able to give them back something and support them as well.”

Q. Does open source say something about the quality of the product?

“I wouldn’t generalize that open source has better quality than closed source software. We all know how WordPress core is written, right? However, in my opinion, open source software has the better foundation to achieve high quality. Open source is powered by the developers, designers, accessibility experts, marketers, project managers, copywriters, translators, ambassadors, and contributors of any kind of the entire world, while closed source is usually powered by the folks from a single company.

Something else I want to highlight is security. Sometimes you hear arguments like “WordPress can so easily be hacked because its code is public”. While it is true that people with evil intentions can find a security hole more easily that way, the same goes for all the hackers who want to use their powers in a good way, and, believing in the good in the world, I think there are way more of the latter category. A large number of security issues in WordPress are uncovered by people who aren’t even typically active in the WordPress community, and this is thanks to open source. While companies that run a popular software usually have a solid security team, there is no chance that those few people are better than the entire pool of security experts who look at open source.”

Q. When and what was your first open source contribution?

“That would probably be the first WordPress plugin that I ever published, which was in early 2013. It made embeds in WordPress responsive; back then, responsive was more of a buzzword than it is today, and many WordPress themes didn’t do a great job at it themselves. I would think that the plugin has been redundant for several years now, but there are still more than 2,000 people using it at this point, and even though I cannot maintain it anymore, it still has a special place in my heart.”

Q. How do you learn from open source? And how can others learn from open source?

“There are so many talented people in open source from which you can learn. Like I said, talents from all over the world can participate. In the same way, other people who contribute to open source will learn from you. Especially for me as a freelancer, contributing to open source meant being part of a team, which I didn’t have in my day-to-day job otherwise. The open source community and its spirit has elevated me to become a much better developer, and maybe even a better person.”


Q. Why is open source important to everyone?

“WordPress’ goal is to “democratize publishing”. In that regard, I see the goal of open source to be democratizing software development. Anyone can get involved and influence a project in ways that would be impossible in a closed source project. If you see the project moving into a direction that contradicts your vision, you are free to create a fork, and either maintain it just for your own usage or gather fellow folks who share the same ideas. The GPL, for example, the license that WordPress is based on, allows you to do pretty much anything with open source software. The important restriction is that anything derived from it needs to follow the GPL itself, which in my opinion isn’t a restriction though. It just causes more people to learn about the benefits of open source.”

Q. I want to contribute to open source! Where do I start?

“I love to hear that! How you start, of course, somewhat depends on the project you want to contribute to. Most open source communities I have gotten in touch with were supportive and welcoming. They always hope to chat with a new contributor who will stick around and get more involved (no pressure though!). Due to my involvement with the WordPress community, I can only give more precise tips about that specific community, but I’m sure that a lot applies to other communities as well.”



          MapR Solutions Architect - Perficient - National, WV      Cache   Translate Page      
Design and develop open source platform components using Spark, Java, Oozie, Kafka, Python, and other components....
From Perficient - Wed, 03 Oct 2018 20:48:20 GMT - View all National, WV jobs
          to modify Openbravo Java-based ERP      Cache   Translate Page      
I am looking for a Java programmer to modify Openbravo, a Java-based open source ERP! Some of the work is for a mobile app, printing reports, or SQL database programming! (Budget: $30 - $250 USD, Jobs: Database Programming, Java, MySQL, Software Architecture)
          Offer - Make more profit using Tinder clone open source - RUSSIA      Cache   Translate Page      
See your ideas and dreams for an online dating business come to life with the best Tinder clone open source, Howzu by Appkodes. Howzu can resonate with your community of online users with features like unlimited swipes, location-based search and much more to help them find their perfect match. Dating apps are becoming a highly engaging social platform, so Howzu comes with easy social login and share features. Howzu can be easily customized and can generate revenue through premium account subscriptions and ad banners.
Contact: +917708068697
Mail ID: sales@hitasoft.com
Skype: live:appkodesales
Phone: +914524220611
Website: https://appkodes.com/tinder-clone-app/
          Cheap VPS Linux VPS Perfect for Online Business      Cache   Translate Page      
Cheap VPS Linux is one of our most popular hosting plans and is managed by our expert technical team. A Linux VPS server is perfect for every online business portal because it is open source and available at a very affordable price. For more information: contact us. Call us +9[...]
          Domoticz - open source home automation system - part 3      Cache   Translate Page      
Replies: 11338 Last poster: Sp33dFr34k at 10-10-2018 14:09 Topic is Open
wimmme wrote on Wednesday 10 October 2018 @ 13:18: [...] You can use Dashticz as an interface, isn't that easier? Or if you are an Android user, you can already do a lot with the Domoticz app. And if you want to customize it completely, you can work with Tasker (controlling Domoticz via JSON), with shortcuts to tasks, or build a limited interface ...
Thanks for the tips, but she doesn't like having to grab her phone every time; she would rather use a remote (which she also had for her KaKu devices)... I know Fibaro has a remote, but it is quite pricey and Z-Wave based. I would prefer something similar but cheaper and WiFi-based, if that even exists?
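
Not part of the forum post, but for context: "controlling Domoticz via JSON" refers to Domoticz's HTTP JSON API, which tools like Tasker call over the network. A minimal sketch, with the host address and the device idx as placeholder values:

    // Minimal sketch: switch a Domoticz device on through its HTTP JSON API.
    // The host, the port (8080 is the Domoticz default) and the device idx are placeholders.
    const base = "http://192.168.1.10:8080/json.htm";
    const params = new URLSearchParams({
      type: "command",
      param: "switchlight",
      idx: "5",          // the device index from Domoticz's Devices tab
      switchcmd: "On",   // "Off" and "Toggle" also work
    });

    fetch(`${base}?${params}`)
      .then((res) => res.json())
      .then((body) => console.log(body.status)) // Domoticz answers with { "status": "OK", ... }
      .catch(console.error);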
          Cloud Foundry expands its support for Kubernetes      Cache   Translate Page      

Not too long ago, the Cloud Foundry Foundation was all about Cloud Foundry, the open source platform as a service (PaaS) project that’s now in use by most of the Fortune 500 enterprises. This project is the Cloud Foundry Application Runtime. A year ago, the Foundation also announced the Cloud Foundry Container Runtime that helps businesses […]

The post Cloud Foundry expands its support for Kubernetes appeared first on RocketNews | Top News Stories From Around the Globe.


          Facebook releases Safety Check crisis response tool for Workplace      Cache   Translate Page      

In any emergency or crisis, sending and receiving real-time alerts efficiently and at scale is mission critical. One of the reasons Facebook’s Safety Check feature has been successful is because of its ability to immediately identify those likely to be in an affected area, collect their safety status, and send that information out to friends and family in real time.

Facebook Enterprise Engineering has developed a similar feature for companies via our Workplace business collaboration tool. Safety Check for Workplace incorporates important modifications to enable management by a company’s designated security team and to ensure accuracy even when multiple companies are affected simultaneously, as is likely in the case of a major crisis.

How Safety Check for Workplace functions

In Workplace, designated Safety Operators create and send out a Safety Check notification to their work community in a three-step process:

  • Locate: Identify whom the crisis may affect.
  • Notify: Alert affected employees via Workplace Chat, a post atop their News Feed, push notifications, and email. Ask them to confirm whether they are safe or need assistance.
  • Iterate: Continue attempting to make contact with any who are not yet confirmed safe, through different channels, until everyone is accounted for.

To ensure that the right people in an organization (i.e., security or human resources) manage Safety Check for Workplace, a company’s system administrators enable the tool, and can then designate a safety team.


Evolving an internal tool into an external product at scale

The idea for this Workplace feature began at a company hackathon in our London office. As we thought about how our external customers might use Safety Check for Workplace, we had to reevaluate our entire design with enhanced scale, speed, and accuracy for enterprises in mind. We realized this would require some new design considerations and a revision of the entire product for the Workplace platform.

At its core, our internal tool was built on a database that was not scalable for a product which might need to support millions of people across multiple companies. To address this, we moved our storage to TAO, Facebook’s distributed data store for the social graph, where Workplace information resides. With the move to an external product, we could no longer rely on our standard internal access management and data security controls. So we created our own module for access management and strong privacy checks. As we continued our journey to build a life safety system at scale, we focused on three major parts of the product workflow: locate, notify, and iterate.

Locate

Once a crisis response is initiated, security or human resources staff must first locate the people who are affected. Security and HR staff pull from a range of different and unique sources for their seated and traveling employees. We quickly realized that we wanted to augment that data with systematic ways of locating people.

Our next step was to pick the Workplace profile location, as populated by the respective company, as one of the main sources of information. Our primary problem is that this field is free-form text, and every company can have its own text. As a result, we could not guarantee valid locations. To solve that, we created a location parser, which can run for a given company and try to index all its locations. These indexed locations would be used, along with the crisis location, to locate employees.

Given that more locator information might be collected on the platform in the future, we implemented our locate classes as extendable. For our internal use case, we are able to extend the systematic locate function by connecting to other internal tools such as the travel system, calendar, etc. It is critical to reduce the process time to ensure that people are located and added to the crisis as quickly as possible. We create multiple asynchronous jobs that run simultaneously to locate and add people to the crisis based on the crisis location. Using TAO as a backend helps us make the parallel writes work faster across multiple shards.
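
The post doesn't include code, but the location parser idea can be sketched roughly as below; every name here is hypothetical and only illustrates the indexing approach, not Facebook's implementation:

    // Illustrative only: normalize free-form profile locations into canonical
    // keys once, then match people against a crisis location via that index.
    const canonicalize = (raw: string): string =>
      raw.toLowerCase().replace(/[^a-z\s]/g, "").trim(); // naive normalization

    function buildLocationIndex(profiles: Map<string, string>): Map<string, string[]> {
      const index = new Map<string, string[]>();
      for (const [personId, rawLocation] of profiles) {
        const key = canonicalize(rawLocation);
        const bucket = index.get(key) ?? [];
        bucket.push(personId);
        index.set(key, bucket);
      }
      return index;
    }

    // People indexed under the crisis location become candidates to notify.
    const index = buildLocationIndex(new Map([
      ["alice", "London"],
      ["bob", "london!"], // free-form text varies from company to company
    ]));
    console.log(index.get(canonicalize("London"))); // ["alice", "bob"]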

Notify

To increase the reach to affected people, we implement the notifications across many channels and surfaces: We span the notification jewel, post on top of the feed, chat, and email across platforms such as web, mobile, and mobile site. We use the standard notification framework, which processes millions of notifications for all of the Facebook and Workplace notifications.



This multichannel distribution for large numbers of people can result in large volumes of responses. Every time a security or HR administrator sends a notification, it goes to our generalized worker pool of machines, called the “async tier,” which handles creating and executing our notify jobs. Initially, we designed a single job to process some fixed number of notifications, but we quickly noticed that we needed special handling for all failures. As a result, we altered our design to one job per person, which makes it very easy to manage high volumes of jobs with straightforward retry logic in case of failures. We also fan out the async jobs across different batches while still ensuring all notifications are sent within seconds. For bot notifications, we use Workplace Chat, which is built off of our Messenger infrastructure, allowing us to send notifications reliably and scale well beyond our immediate needs.
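
To make the one-job-per-person design concrete, here is a rough sketch of the pattern; it illustrates independent retries per notification rather than Facebook's actual async-tier code:

    // Illustrative stub: a real implementation would call the chat, feed,
    // push and email delivery services.
    async function sendSafetyCheckNotification(personId: string): Promise<void> {
      console.log(`notifying ${personId}`);
    }

    // Retry wrapper: re-runs a job up to `attempts` times before giving up.
    async function withRetries(job: () => Promise<void>, attempts = 3): Promise<void> {
      for (let i = 1; i <= attempts; i++) {
        try {
          return await job();
        } catch (err) {
          if (i === attempts) throw err; // surface the error after the final attempt
        }
      }
    }

    // One job per person: a failure retries only that person's notification,
    // never the whole batch.
    async function fanOut(personIds: string[]): Promise<void> {
      await Promise.allSettled(
        personIds.map((id) => withRetries(() => sendSafetyCheckNotification(id)))
      );
    }

    fanOut(["alice", "bob", "carol"]);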

Iterate

As designated security operators use Safety Check to locate, notify, and iterate, we have an additional technical challenge to solve: how we handle responses. An important consideration is to ensure that the counts fully match the responses in real time. To do this, we store people’s statuses as edges on a graph to a Safety Check, for quick and accurate querying. Furthermore, when dealing with high request volume, we funnel all the requests via a queuing system to make sure we don’t lose any numbers, but we update the actual status immediately to provide current information for security and HR teams.

This system is built with our open source technologies such as React, Relay, GraphQL, and Hack. Our status updates leverage the GraphQL subscription, which, as part of page load, subscribes to all status change events so they update in real time.

Building on what we do with Safety Check for the Facebook community, we also perform extensive simulations to assess resiliency for extreme notification and response workloads. These simulations are run programmatically to ensure we continually test the underlying infrastructure and services on which we rely. We do this through scripts that run at various times throughout the day (as a crisis can occur at any time in any place) on a test instance that creates a crisis, locates large volumes of people, notifies them, and receives random spikes of responses.

With Safety Check for Workplace, we have a tool that started at a hackathon in London and has evolved into an enterprise employee safety feature. We’re excited to make it available to Workplace Premium users in the coming months. For more details, please visit facebook.com/workplace.


          (USA-CA-Sunnyvale) Technical Program Manager      Cache   Translate Page      
Technical Program Manager

LinkedIn was built to help professionals achieve more in their careers, and every day millions of people use our products to make connections, discover opportunities and gain insights. Our global reach means we get to make a direct impact on the world’s workforce in ways no other company can. We’re much more than a digital resume – we transform lives through innovative products and technology. Searching for your dream job? At LinkedIn, we strive to help our employees find passion and purpose. Join us in changing the way the world works.

The Streams team is looking for a TPM to help them reach their ambitious goals. They own top-tier products, including Kafka, Samza and Brooklin. Kafka was built at LinkedIn, open sourced, and is now used by hundreds of companies. Among the many projects on their roadmap are the next generation of Kafka, putting LinkedIn Kafka on Azure and scaling their existing technology to serve LinkedIn 3. . As the team continues to grow, the role of TPM is becoming more and more significant in helping the team remain organized and optimized.

Responsibilities:
  • Lead planning, execution, and delivery of large cross-functional initiatives
  • Create a collaborative work environment that fosters autonomy, transparency, mastery, innovation and learning
  • Promote value-driven delivery, innovation, and continuous improvement at multiple levels of the organization
  • Communicate shared understanding of direction and objectives into multiple domains
  • Identify, analyze, and mitigate risk exposure at all levels
  • Interact comfortably in one-to-one meetings with individual technical contributors and senior engineering leaders (technology and business)
  • Build strategic relationships with key technology and business leaders to ensure success
  • Drive quarterly roadmap and resource planning activities and deliverables
  • Manage and maintain team backlogs and ensure forward project momentum
  • Lead efforts to resolve key project conflicts and establish appropriate resolution paths
  • Mentor individuals in best practices for planning and execution
  • Prepare and present program and project status to key stakeholders at regular intervals

Basic Qualifications:
  • 2+ years of full-time program management experience within an engineering or technical department
  • 2+ years of experience managing large-scale web-based programs
  • B.S./B.A. in a technical field, or equivalent practical experience

Preferred Qualifications:
  • Strong practical understanding of managing the full life cycle of large-scale modern production software programs and projects
  • Excellent balance of people, organizational, technical, and communication skills
  • Ability to manage multiple concurrent major projects
  • High proficiency in planning, collaboration, communication, and reporting tools
  • Fluid ability to traverse both vertically and horizontally in a large organization
  • Excellent trade-off management in achieving tactical and strategic objectives
  • Ability to successfully influence others to best outcomes without having authority
  • Proven leadership skills in highly volatile cross-functional and cross-organizational environments
  • Exposure to Product Management best practices and requirements elicitation is a plus
  • Strong experience and understanding of software engineering practices
  • Adept at proactive change management
  • Familiarity with Agile methods and practices through multiple teams (Scrum and Kanban)
  • Advanced technical or business degree
          Open Source Cryptocurrency Exchange Platform | Open Source Bitcoin Exchange Software      Cache   Translate Page      
The Commercial Cryptocurrency Exchange Software will help professional traders get real-time execution on a cryptocurrency exchange for trading digital currencies, with support for a diverse set of crypto coins. This Open Source Cryptocurrency Exchange Platform makes it easy to manage the server with a decentralized exchange, with no third party or unauthorized person able to join as the...
          South African-based Tari Labs unveils free online university      Cache   Translate Page      
Tari Labs, a contributor to Tari, the South African-based blockchain protocol, has launched a free online university to help incubate open source projects and train blockchain developers, both locally and globally. “Tari Labs University aims to become a go-to destination for easily-accessible learning material for blockchain, digital currency and digital assets, from beginner to advanced [&hellip
          Creating Wait stats widget on Azure Data Studio for macOS      Cache   Translate Page      

A couple of weeks ago, Microsoft released a new multi-platform tool called Azure Data Studio; this tool is the final version of SQL Operations Studio. If you are familiar with SQLOps, you probably recall that this tool is 100% open source, and because of that you can customize the JSON code to make certain things work the way that is best for you.

In my personal opinion, one of the best features of Azure Data Studio is widgets. They give DBAs and database developers the option to create their own custom widgets to access SQL Server data using simple charts. I personally don’t like the very buggy reports from SSMS, which take time to load and are not fully customizable … they are like a black box to me.

I decided to give it a try and create a custom widget of wait stats, because I use them every day and they are also one of the most common troubleshooting methodologies among SQL Server data professionals. If you are not familiar with wait stats, you should be … they provide diagnostic information that can help to determine the root cause or symptom of a potential bottleneck in multiple areas like:

  • Locking \ blocking issues
  • CPU pressure
  • Memory pressure
  • Disk IO problems

In this post, I will walk you through the creation of a custom widget to display the top 5 wait stats from your system, but first you must already have Azure Data Studio installed on your computer. So in case you don’t, do yourself a favor and download the macOS version from this link.

From now on, I will assume you have Azure Data Studio installed and a working connection to a SQL Server instance. All the Azure Data Studio widgets use a query file as a source; in this case I created a short version of world-famous Paul Randal’s (b|t) script to determine the wait stats for a SQL Server instance. You can download the modified script here from my GitHub repository.

This short version will show only the top five wait types from your system. I also removed the links from Paul’s version, where he explains each ignored wait type, just to make the code easier to read, but I strongly recommend you check Paul Randal’s blog post to learn more.
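
If you want a feel for the query before downloading the file, here is a simplified sketch of its shape: it ranks waits from sys.dm_os_wait_stats and keeps the top five, filtering only a handful of benign wait types. This is an illustration, not the linked script; Paul Randal's version ignores a much longer list of waits, so use that for real troubleshooting.

    -- Simplified illustration: top 5 wait types by accumulated wait time.
    -- The real script filters many more benign wait types than shown here.
    SELECT TOP 5
        wait_type,
        wait_time_ms / 1000.0 AS wait_time_s,
        signal_wait_time_ms / 1000.0 AS signal_wait_time_s, -- time spent waiting for CPU
        waiting_tasks_count,
        100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct -- percentage within this filtered set
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'BROKER_TO_FLUSH', N'XE_TIMER_EVENT')
    ORDER BY wait_time_ms DESC;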

Download the script and move it to a known location rather than your Downloads folder; this will make things easier while creating the widget. A folder location like /Users/MyUser/Documents/wait_stats.sql would work.

Open the script with Azure Data Studio and execute it, then go to the results window, look at the top right corner of this section, and click on the Chart icon:

Once the chart is displayed in the screen, look for the chart options from the right panel and make the following changes:

  • Chart Type = horizontalBar
  • Data Direction = Vertical
  • Legend Position = none
  • X Axis Label = PERCENTAGE
  • Y Axis Label = WAIT TYPE

As you may have noticed, at this point a nice horizontal bar chart is displayed on your screen, but this is just the halfway point. Now let’s move ahead and get the widget created. Click on the Create Insight button at the top of the RESULTS \ CHART grid to generate the JSON code for this chart.

Azure Data Studio will open a new window with some JSON code; you probably noticed that the way it is shown on the screen is not that readable. Don’t worry, we can address the format problem: just select all the JSON code using the ⌘A (Command + A) shortcut, then run the Format Document function of Azure Data Studio using the ⇧⌥F (Shift + Alt + F) shortcut, and voilà, the code is now perfectly formatted:

Notice that I have highlighted the name key, because the auto-generated code puts My-Widget as its value by default; in our case we want to call this widget something like Wait stats. Also notice the queryFile key: its value points to the location where you saved the wait stats script at the beginning of this post.
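
For reference, the finished entry in the widgets array ends up looking roughly like the sketch below. The exact keys come from the auto-generated Create Insight output, so treat the structure as an approximation; the queryFile path is the example location suggested earlier.

    {
      "name": "Wait stats",
      "gridItemConfig": {
        "sizex": 2,
        "sizey": 1
      },
      "widget": {
        "insights-widget": {
          "type": {
            "horizontalBar": {
              "dataDirection": "vertical",
              "legendPosition": "none",
              "xAxisLabel": "PERCENTAGE",
              "yAxisLabel": "WAIT TYPE"
            }
          },
          "queryFile": "/Users/MyUser/Documents/wait_stats.sql"
        }
      }
    }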

Finally, it’s time to add our custom Wait stats widget to Azure Data Studio. Open the Azure Data Studio command palette using ⌘⇧P (Command + Shift + P), then type Preferences; from the list of options displayed, choose Preferences: Open User Settings.

In the Search Settings text box, write dashboard.server.widgets, then click on dashboard.server.widgets and select the Edit option:

The left panel will turn yellow in Edit mode. Now look for the closing bracket of the last widget in the panel (in my case, the all-database-size-server-insight). Carefully add a comma after that bracket, then copy the JSON code from the other window and paste it. It should look something like this:

Then simply save the changes to the user settings using the ⌘S (Command + S) shortcut and close all the open windows. Re-connect to one of your SQL Server instances, right-click it, then choose Manage; the server dashboard will be displayed, and there you can look for the widget we just created.

I hope you find this tip useful. In case you want to grab a copy of the SQL and JSON files used in this post, you can find them here.

Stay tuned for more DBA mastery tips!


          WebKit 236995 - Open-Source Web-browser engine. (Free)      Cache   Translate Page      
WebKit is an open source web browser engine. WebKit is also the name of the Mac OS X system framework version of the engine that's used by Safari, Dashboard, Mail, and many other OS X applications. WebKit's HTML and JavaScript code began as a branch of the KHTML and KJS libraries from KDE.

Version 236995:
  • Release notes were unavailable when this listing was updated.


  • OS X 10.6 or later



More information

Download Now
          Google Chrome 69.0.3497.100 - Modern and fast Web browser. (Free)      Cache   Translate Page      

Google Chrome is a Web browser by Google, created to be a modern platform for Web pages and applications. It utilizes very fast loading of Web pages and has a V8 engine, which is a custom-built JavaScript engine. Because Google used parts from Apple's Safari and Mozilla's Firefox browsers, they made the project open source.



Version 69.0.3497.100:
  • This release contains bug fixes and improvements


  • OS X 10.10 or later



More information

Download Now
          Link: ‘The gulf between apps and infrastructure is blurring’ says boss of DevOps darling Puppet      Cache   Translate Page      
Their portfolio: Puppet Bolt, the company’s simplified open source automation framework, hit version 1.0; Puppet Insights, a tool for measuring how fast and how well teams commit code, showed up as a private beta; Puppet Discovery, for corralling IT resources, moved on to version 1.6; Pipelines for Containers 3.3 got Helm support; and Puppet Enterprise … Continue reading Link: ‘The gulf between apps and infrastructure is blurring’ says boss of DevOps darling Puppet
          Microsoft joins Open Invention Network to help protect Linux and open source      Cache   Translate Page      

I’m pleased to announce that Microsoft is joining the Open Invention Network (“OIN”), a community dedicated to protecting Linux and other open source software programs from patent risk.

Since its founding in 2005, OIN has been at the forefront of helping companies manage patent risks.  In the years before the founding of OIN, open source licenses typically covered only copyright interests and were silent about patents. OIN was designed to address this gap by creating a voluntary system of patent cross-licenses between member companies covering Linux System technologies. OIN has also been active in acquiring patents at times to help defend the community and to provide education and advice about the intersection of open source and intellectual property. Today, through the stewardship of its CEO Keith Bergelt and its Board of Directors, the organization provides a license platform for roughly 2,400 companies globally. The licensees range from individual developers and startups to some of the biggest technology companies and patent holders on the planet.

We know Microsoft’s decision to join OIN may be viewed as surprising to some, as it is no secret that there has been friction in the past between Microsoft and the open source community over the issue of patents. For others who have followed our evolution as a company, we hope this will be viewed as the next logical step for a company that is listening to its customers and is firmly committed to Linux and other open source programs.   

At Microsoft, we take it as a given that developers do not want a binary choice between Windows vs Linux, or .NET vs Java – they want cloud platforms to support all technologies. They want to deploy technologies at the edge that fit within the constraints of the device and meet customer needs. We also learned that collaborative development through the open source process can accelerate innovation. Following over a decade of work to make the company more open (did you know we open sourced parts of ASP.NET back in 2008?), Microsoft has become one of the largest contributors to open source in the world. Our employees contribute to over 2,000 projects, we provide first-class support for all major Linux distributions on Azure, and we have open sourced major projects such as .NET Core, TypeScript, VS Code, and Powershell.

Joining OIN reflects Microsoft’s patent practice evolving in lock-step with the company’s views on Linux and open source more generally. We began this journey over two years ago through programs like Azure IP Advantage, which extended Microsoft’s indemnification pledge to open source software powering Azure services. We doubled down on this new approach when we stood with Red Hat and others to apply GPL v. 3 “cure” principles to GPL v. 2 code, and when we recently joined the LOT Network, an organization dedicated to addressing patent abuse by companies in the business of assertion.

Now, as we join OIN, we believe Microsoft will be able to do more than ever to help protect Linux and other important open source workloads from patent assertions. We bring a valuable and deep portfolio of over 60,000 issued patents to OIN for the benefit of Linux and other open source technologies. We also hope that our decision to join will attract many other companies to OIN, making the license network even stronger for the benefit of the open source community. 

We look forward to making our contributions to OIN and its members, and to working with the community to help open source developers and users protect the Linux ecosystem and encourage innovation with open source software.


          NVIDIA unveils the RAPIDS open source platform: Dell EMC, HPE, IBM and Oracle as partners      Cache   Translate Page      

In recent years, NVIDIA has shown that the GPU can be put to work on much more than video games. That bet took shape above all with CUDA, which is at the origin of the company's expansion into numerous markets, as well as its success. RAPIDS should allow it to go even further.


          How should you use funding for your open source project?      Cache   Translate Page      

I think the consensus agrees that sustaining open source software takes more than just money. And yet money often remains a crucial part of the larger effort needed for open source to sustain itself AND thrive. So, if that's the case... how should you use funding for your open source project? Brenna Heaps writes on the Tidelift blog:

We’ve been speaking with a lot of open source maintainers about how to get paid and what that might mean for their project, and the same question keeps popping up: What do I do with the money?

The tldr?

Fund the project, community engagement, and pay it forward...

But, it's a short read and worth it — so go read this and then share it with your fellow maintainers.


          The $100M+ revenue commercial open source software company index      Cache   Translate Page      

Have you seen this spreadsheet of open source software companies from Joseph Jacks? The criteria to be added to the sheet are: the company generates $100M+ in revenue (recurring or not), OR generates the equivalent of $25M of revenue per quarter.

These companies have found a way to build a very large business around one or many open source software projects. Does anyone on this index surprise you?


          Microsoft Joins the Open Invention Network, NVIDIA Announces RAPIDS, Asterisk 16.0.0 Now Available, BlockScout Released and Security Advisory for Debian GNU/Linux 9 "Stretch"      Cache   Translate Page      

News briefs for October 10, 2018.

Microsoft has joined the Open Invention Network (OIN), an open-source patent consortium. According to ZDNet, this means "Microsoft has essentially agreed to grant a royalty-free and unrestricted license to its entire patent portfolio to all other OIN members." OIN's CEO Keith Bergelt says "This is everything Microsoft has, and it covers everything related to older open-source technologies such as Android, the Linux kernel, and OpenStack; newer technologies such as LF Energy and HyperLedger, and their predecessor and successor versions."

NVIDIA has just announced RAPIDS, its open-source data analytics/machine learning platform, Phoronix reports. The project is "intended as an end-to-end solution for data science training pipelines on graphics processors", and NVIDIA "claims that RAPIDS can allow for machine learning training at up to 50x and is built atop CUDA for GPU acceleration".

The Asterisk Development Team announces that Asterisk 16.0.0 is now available. This version includes many security fixes, new features and tons of bug fixes. You can download it from here.

BlockScout, the first full-featured open-source Ethereum block explorer tool, was released yesterday by POA Network. The secure and easy-to-use tool "lets users search and explore transactions, addresses, and balances on the Ethereum, Ethereum Classic, and POA Network blockchains". And, because it's open source, anyone can "contribute to its development and customize the tool to suit their own needs".

Debian has published another security advisory for Debian GNU/Linux 9 "Stretch". According to Softpedia News, CVE-2018-15471 was "discovered by Google Project Zero's Felix Wilhelm in the hash handling of Linux kernel's xen-netback module, which could result in information leaks, privilege escalation, as well as denial of service". The patch also addresses CVE-2018-18021, a privilege escalation flaw. The Debian Project recommends that all users of GNU/Linux 9 "Stretch" update kernel packages to version 4.9.110-3+deb9u6.


          Cloud Foundry 2018 European Summit Begins in Switzerland, Launches Certified Systems Integrator Program      Cache   Translate Page      
Cloud Foundry Foundation, home of a family of open source projects including Cloud Foundry Application Runtime, Cloud Foundry Container Runtime and Cloud Foundry BOSH, opened its European Cloud Foundr...
       

          Huawei Cloud Becomes a Cloud Foundry Infrastructure Provider      Cache   Translate Page      
Cloud Foundry Foundation, home of a family of open source projects including Cloud Foundry Application Runtime, Cloud Foundry Container Runtime and Cloud Foundry BOSH, today announced at the 2018 Euro...
       

          Cloud Foundry Focus on Interoperability Continues with Two New Projects Integrating Kubernetes      Cache   Translate Page      
Cloud Foundry Foundation, home of a family of open source projects including Cloud Foundry Application Runtime, Cloud Foundry Container Runtime, and Cloud Foundry BOSH, announced two new projects have...
       

          Python Application and Platform Developer - Plotly - Montréal, QC      Cache   Translate Page      
Engage our Open Source communities and help take stewardship of our OSS projects. Plotly is hiring Pythonistas!...
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
          6 tips for receiving feedback on your open source contributions      Cache   Translate Page      
In the free and open source software world, there are few moments as exciting or scary as submitting your first contribution to a project. You've put your work out there and now it's subject to review ... - Source: opensource.com
          4 best practices for giving open source code feedback      Cache   Translate Page      
In the previous article I gave you tips for how to receive feedback , especially in the context of your first free and open source project contribution. Now it's time to talk about the other side of t ... - Source: opensource.com
          What was your first open source pull request or contribution?      Cache   Translate Page      
Contributing to an open source project can be... Nervewracking! Magical. Boring? Regardless of how you felt that first time you contributed, the realization that the project is open and you really can ... - Source: opensource.com
          Grant Evaluation Analyst and Coordinator - Lawrence University of Wisconsin - Appleton, WI      Cache   Translate Page      
Proficiency with web-based survey software (Qualtrics), open source web content management systems (Drupal), and web enabled enterprise reporting systems...
From Lawrence University of Wisconsin - Sat, 06 Oct 2018 06:49:40 GMT - View all Appleton, WI jobs
          Assistant Director of Research Administration - Lawrence University of Wisconsin - Appleton, WI      Cache   Translate Page      
Proficiency with web-based survey software (Qualtrics), open source web content management systems (Drupal), and web enabled enterprise reporting systems...
From Lawrence University of Wisconsin - Fri, 14 Sep 2018 06:52:02 GMT - View all Appleton, WI jobs
          Microsoft joins Open Invention Network and open sources its patent portfolio      Cache   Translate Page      
Microsoft has joined the "largest patent non-aggression community in history", the Open Invention Network (OIN), effectively open-sourcing almost its entire patent portfolio. The company has shown increasing warmth to the open source community in recent years, and this latest move means that other OIN members will have access to its patents -- with the exception of those relating to Windows and desktop applications. The OIN embraces -- as Microsoft has done of late -- Linux "as a key element of open source software". [Continue Reading]

          KeeWeb - An Open Source, Cross Platform Password Manager      Cache   Translate Page      

ostechnix: KeeWeb is an open source, cross platform password manager


          Support Engineer - Microsoft - Las Colinas, TX      Cache   Translate Page      
Open Source – Linux, Red Hat, etc. Business Division Specific:. HDInsight/Hadoop, Machine Learning, Azure Stream Analytics....
From Microsoft - Thu, 19 Jul 2018 07:31:47 GMT - View all Las Colinas, TX jobs
          Firefox to support WebP, plus Custom Elements coming to Edge      Cache   Translate Page      

#361 — October 10, 2018

Read on the Web

Frontend Focus

Start Performance Budgeting — A review of performance budgeting, the metrics to track, trade-offs to consider, plus budget examples. “For success, embrace performance budgets and learn to live within them.”

Addy Osmani

Custom Elements Now 'In Development' on Microsoft Edge — Not a lot to see here, but Edge is the last major browser to get on board with custom elements. Shadow DOM is being worked on, too.

Microsoft

⚛️ New Course: Complete Intro to React, v4 — Learn to build real-world applications in React. Much more than an intro, you’ll start from the ground up all the way to using the latest features in React 16+ like Context and Portals. We also launched a follow up course, Intermediate React.

Frontend Masters sponsor

Use Cases for Flexbox — A look at some of the common uses for Flexbox. What should we use Flexbox for, and what is it not so good at, especially now that we have CSS Grid too?

Rachel Andrew

How I Remember CSS Grid Properties — A method to remember the most common CSS Grid properties. “This will help you use CSS Grid without googling like a maniac.”

Zell Liew

Firefox to Support Google's WebP Image Format — Now Apple’s Safari is the only major holdout, since Edge now supports it too. (Caution: CNet has annoying autoplaying ads.)

CNet

Understanding the Difference Between grid-template and grid-auto — It pays to understand the difference between implicit and explicit grids. grid-template properties define the explicit grid, whereas grid-auto properties control the tracks of the implicit grid.

Ire Aderinokun

💻 Jobs

Sr. Fullstack Engineer (Remote) — Sticker Mule is looking for passionate developers to join our remote team. Come help us become the Internet’s best place to shop and work.

Sticker Mule

Work on Uber's Open Source Design Language — We're developing Base UI, a new React component library for web applications at Uber and beyond. Join our team.

Uber

Join Our Career Marketplace & Get Matched With A Job You Love — Through Hired, software engineers have transparency into salary offers, competing opportunities, and job details.

Hired

📘 Articles & Tutorials

How to Use the Animation Inspector in Chrome Developer Tools — A rundown of which animation dev tools are available in Chrome, how to access them, and what they can do for you.

Kezz Bracey

How One Invalid Pseudo Selector Can Equal an Entire Ignored Selector — Did you know that “if any part of a selector is invalid, it invalidates the whole selector”? Thankfully things are beginning to change.

Chris Coyier

Create a Serverless Powered API in 10 Minutes

Cloudflare Workers sponsor

Moving Backgrounds Around According to Mouse Position

Chris Coyier

Adaptive Serving using JavaScript and the Network Information API — Serve content based on the user’s effective network connection type.

Addy Osmani

Writing a JavaScript Tweening Engine with Between.js — This developer decided to try their hand at writing their own tweening engine.

Alexander Buzin

The Ultimate Guide to Proper Use of Animation in UX

Taras Skytskyi

Let a MongoDB Master Explain Users and Roles

Studio 3T sponsor

Getting Started with WordPress's New Gutenberg Editor By Creating Your Own 'Block' — Gutenberg is a new content editor coming to WordPress 5.0.

Muhammad Muhsin

▶  Chrome 70: What’s New in DevTools

Kayce Basques (Google)

CSS Floated Labels with :placeholder-shown Pseudo Class

Nick Salloum

Bad Practices on Birthdate Form Fields

Anthony Tseng

🔧 Code and Tools

CSS Stats: A Web Tool to Visualize and Show Stats on the CSS Sites Use

Morse, Jackson and Otander

Automated Visual UI Testing — Replace time-consuming manual QA and catch visual UI bugs before your users do. Get started with our free 14-day trial.

Percy sponsor

Hover.css: A Collection of CSS3 Powered Hover Effects — For use on all sorts of page elements like links, logos, buttons, and more. Demos here.

Ian Lunn

a11y-dialog: A Very Lightweight and Flexible Accessible Modal Dialog

Edenspiekermann

Baffle: A Library for Obfuscating then Revealing Text

Cam Wiegert

WorkerDOM: The DOM API, But For Inside Web Workers — Still a work-in-progress.

AMP Project


          Senior Back End Engineer, Axon Now - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in software engineering and open source technologies, and can intuit the fine line between promising new practice and overhyped fad....
From Axon - Fri, 05 Oct 2018 23:18:37 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Now - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in software engineering and open source technologies, and can intuit the fine line between promising new practice and overhyped fad....
From Axon - Fri, 05 Oct 2018 23:18:02 GMT - View all Seattle, WA jobs
          Software Engineering Manager, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Sat, 29 Sep 2018 05:17:37 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Sat, 22 Sep 2018 05:17:32 GMT - View all Seattle, WA jobs
          Full Stack Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 06 Aug 2018 23:18:47 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 16 Jul 2018 05:18:50 GMT - View all Seattle, WA jobs
          Back End Engineer, Axon Records - Axon - Seattle, WA      Cache   Translate Page      
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Tue, 12 Jun 2018 23:18:07 GMT - View all Seattle, WA jobs
          Updated Joomla to 3.8.13      Cache   Translate Page      
Joomla (ID : 413) package has been updated to version 3.8.13. Joomla is an award-winning content management system (CMS), which enables you to build Web sites and powerful online applications. Many aspects, including its ease-of-use and extensibility, have made Joomla the most popular Web site software available. Best of all, Joomla is an open source … Continue reading "Updated Joomla to 3.8.13"
          Updated LimeSurvey to 3.15.0+      Cache   Translate Page      
LimeSurvey (ID : 60) package has been updated to version 3.15.0+. LimeSurvey (formerly PHPSurveyor) is an open source online survey application written in PHP based on a MySQL, PostgreSQL or MSSQL database. It enables users without coding knowledge to develop, publish and collect responses to surveys. Surveys can include branching, custom preferred layout and design … Continue reading "Updated LimeSurvey to 3.15.0+"
          Updated Bludit to 3.1      Cache   Translate Page      
Bludit (ID : 549) package has been updated to version 3.1. Bludit is a simple web application to make your own blog in seconds; it’s completely free and open source. Bludit uses flat files (text files in JSON format) to store the posts and pages, so you don’t need to install or configure a database. Review, Rate … Continue reading "Updated Bludit to 3.1"
          Updated OpenSupports to 4.3.0      Cache   Translate Page      
OpenSupports (ID : 612) package has been updated to version 4.3.0. OpenSupports is an open source ticket system for giving support to your clients. It provides better management of your users' inquiries. The software has tools to manage tickets, such as departments, staff members, custom responses, and multi-language support. Review, Rate and View … Continue reading "Updated OpenSupports to 4.3.0"
          Web Developer      Cache   Translate Page      
MA-Cambridge, Web Developer A Web Developer is needed in Cambridge, MA for a 3-month contract position. You'll be responsible for the design, development and implementation of web applications. Design and publish tool backend systems and front-end user facing websites. Requirements Advanced level of HTML, CSS and JavaScript. Knowledge of open source frameworks and libraries including jQuery, XML, and XSL. Exper
          Elastic IPO: Free Software Really Is Lucrative      Cache   Translate Page      

InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Last week, Elastic launched one of this year’s most successful IPOs. The company, which is a provider of open source software for search, has been growing at a rapid clip. And the IPO should provide even more momentum. So here’s a look at the deal.

The post Elastic IPO: Free Software Really Is Lucrative appeared first on InvestorPlace.


          How To Install Jenkins on Debian 9      Cache   Translate Page      
Jenkins is an open source automation server that offers an easy way to set up a continuous integration and continuous delivery (CI/CD) pipeline.
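
Once Jenkins is installed and running, it can also be driven programmatically. A minimal sketch using the third-party python-jenkins client (the server URL, credentials and job name below are hypothetical):

import jenkins  # from the third-party "python-jenkins" package

server = jenkins.Jenkins("http://localhost:8080",
                         username="admin", password="api-token")
print(server.get_version())    # confirm the connection to the server works
server.build_job("build-app")  # queue a build of an existing job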
          Bug introduced in the of_get_named_gpiod_flags function.      Cache   Translate Page      
Wojciech_Zabołotny writes: (Summary) Hi,

The function of_get_named_gpiod_flags in older versions of the kernel (up to 4.7.10 - https://elixir.bootlin.com/linux/v4.7.10/source/drivers/gpio/gpiolib-of.c#L75 ) contained an important workaround: "/* .of_xlate might decide to not fill in the flags, so clear it. */". In newer kernels, when a driver's .of_xlate does not fill in the flags (e.g., the Xilinx AXI GPIO driver: https://github.com/Xilinx/linux-xlnx/blob/c2ba891326bb472da59b6a2da29aca218d337687/drivers/gpio/gpio-xilinx.c#L262 ), the random, uninitialized value from the stack in of_find_gpio ( https://elixir.bootlin.com/linux/v4.18.13/source/drivers/gpio/gpiolib-of.c#L228 ) is used, which results in random settings of, e.g., OPEN DRAIN or OPEN SOURCE mode.
          Supply Chain Risk Intelligence Analyst - Resilience360 - DHL Customer Solutions & Innovation Americas - Plantation, FL      Cache   Translate Page      
We are looking for a Supply Chain Risk Intelligence Analyst who will assess supply chain risk intelligence needs, identify sources for valid open source risk...
From DHL - Wed, 10 Oct 2018 05:12:33 GMT - View all Plantation, FL jobs
          Can Postgres Recognize and “Cache” Repeated Computations? A Case Study      Cache   Translate Page      

Recently an interesting question appeared on the Postgres mailing list concerning the possibility of caching repeated computations. I suspect this is a problem some of you have already run into, which is why I decided to take a closer look at it, all the more so because an unquestioned authority replied to the thread: Tom Lane. The problem was illustrated with the query shown below: SELECT (a + b) […]
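
To probe the question yourself, you can inspect the execution plan of a query that repeats an expression. A minimal sketch using psycopg2 (the table name t is hypothetical; the repeated (a + b) expression is the one from the thread):

import psycopg2

conn = psycopg2.connect("dbname=test")
with conn.cursor() as cur:
    # (a + b) appears twice; the plan shows whether the planner reuses
    # the expression or recomputes it for each output column.
    cur.execute(
        "EXPLAIN VERBOSE SELECT (a + b) AS s1, (a + b) * 2 AS s2 FROM t")
    for (line,) in cur.fetchall():
        print(line)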

The article Can Postgres Recognize and “Cache” Repeated Computations? A Case Study originally appeared on Linux Polska - Open Source Company.


          Senior DevOps Engineer - Long Term Contract - Ignite Technical Resources - Burnaby, BC      Cache   Translate Page      
Integrate, manage and support a diverse range of Open Source and commercial middleware, tools, platforms and frameworks to enable continuous product delivery....
From Ignite Technical Resources - Thu, 20 Sep 2018 08:14:37 GMT - View all Burnaby, BC jobs
          Digital Library Federation: DuraCloud at the DLF Forum: Working for an Engaged and Global Open Source Project      Cache   Translate Page      

This featured sponsor post is from Heather Greer Klein, DuraSpace Services Coordinator.

The DuraCloud team from DuraSpace is excited to be a part of the Digital Library Federation Forum 2018 and NDSA’s Digital Preservation 2018. Our participation wraps up a year of focus on expanding the community participation model of the DuraCloud open source project. We hope to share what we have learned about broadening participation in an open community project, and to learn from others about what digital preservation needs remain unmet by current community initiatives and software projects.

I will present at the DLF Forum 2018 to share what we learned this year when looking to expand who participates in open source initiatives, and how to incorporate non-technical staff into focused work to move the project forward.

We began this work in earnest when the ‘Open Sourcing DuraCloud: Beyond the License’ project was chosen for the Mozilla Open Leaders program. As a result of the 12-week mentorship & training program, DuraSpace staff developed an easy introduction to what DuraCloud is and why it matters; contribution guidelines; a roadmap for future development; and detailed opportunities for contribution. The DuraCloud team believes this community contribution model will accelerate DuraCloud’s growth into a truly open source project.

We have also expanded the reach of the DuraCloud service globally by bringing on our first ever Certified DuraSpace Partner offering DuraCloud services, 4Science, to offer DuraCloud Europe. This partnership has enhanced the software to allow for storage providers in multiple regions, and meets a critical need for preservation storage located outside of the United States.

Please visit our websites to learn more about the DuraCloud hosted service, find out how to contribute to the DuraCloud open source project, or learn about our DuraCloud Europe partnership. You can always email us at info@duracloud.org or DuraCloud Europe at duracloud@4science.it, with thoughts, questions, or suggestions. We look forward to seeing you in Las Vegas!

The post DuraCloud at the DLF Forum: Working for an Engaged and Global Open Source Project appeared first on DLF.


          Nvidia RAPIDS accelerates analytics and machine learning      Cache   Translate Page      
New open source libraries from Nvidia provide GPU acceleration of data analytics and machine learning. The company claims 50x speed-ups over CPU-only implementations.
          Comment on Open Source CMS: 12 Great Website Creation Tools by jmj      Cache   Translate Page      
I can't speak for PyroCMS in general here, but I spent a day trying to get it set up on IIS using Plesk Onyx for Windows and I couldn't get it to work properly. The furthest I got was the site displaying without errors, but the backend kept throwing error after error. Reading here that its focus on simplicity is a pro, I cannot agree. It might be easy on Linux or on web servers other than IIS, but on Windows with IIS it's a pain.
          Comment on LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI by house      Cache   Translate Page      
How can I use a MacBook as the display and keyboard for a PC? I'm eager to know. Would you send me an email? Thank you very much.
          Comment on LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI by Blake      Cache   Translate Page      
if you have a ThinkPad x230 you can try flashing HEADS: http://osresearch.net/
          Comment on LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI by navaho      Cache   Translate Page      
While big companies can put effort into evolving open source projects, end users can check the code they're pushing and raise criticisms if needed. That's exactly where FOSS wins versus proprietary software. Personally, I trust Google's open source software more than Apple's closed-source software.
          Comment on LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI by Abhishek Prakash      Cache   Translate Page      
Let's see if that would be the case in the next few years.
          Comment on LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI by Abhishek Prakash      Cache   Translate Page      
No. It is only focused on servers.
          Comment on LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI by alaminh      Cache   Translate Page      
Is it also available for desktop?
          How to find the best open source Node.js projects to study for leveling up your ...      Cache   Translate Page      

To senior developer: “How did you get so good at programming?”

“I don’t know, I guess I just wrote a lot of code, and read a lot of it too…”

Have you ever tried finding an open source Node.js project that you could study to level up your skills, only to end up not finding one at all because you didn’t know what actually makes a project a “good” one to study?

And with the hundreds of thousands of open source repositories on GitHub alone, how could you even narrow it down? How do you decide if studying the project will be worth your very valuable after-work time (because this is usually when the studying happens)?

What if you get a couple hours into reading it only to realize it’s in fact unreadable and you’re more confused than before?

Maybe you start with projects you use at work, or ones that are popular/widely used. And this is a great starting place, but it won’t get you all the way there. For example, just because it’s popular/widely used doesn’t necessarily mean it will be useful to study (although this is usually a good sign).

Instead of wasting all that precious time combing through repos upon repos on GitHub, what if you could quickly figure out which are good projects to study and which aren’t? Projects that will help you level up your skills to reach that next level in your career, rather than leave you with a lot of time spent and not a lot learned…

A list of criteria to guide you

The best way I’ve found to identify great study projects is to use a set of criteria to narrow down the search and quickly know within a few minutes of researching a repo whether it will be good to study or not.

Especially earlier on in my career, I read a TON of source code from various projects to get better not only at reading and understanding code but at writing it as well, and at understanding design patterns. Of all the things I did to improve my skillset, this was one of the things that helped me progress the fastest.

This post presents the criteria I used (and still use) when identifying good projects to study. I've listed them in rough priority order (although the priority below should not be considered a hard and fast rule, as there are always exceptions).

Side note: this is not necessarily a guideline on specifically what to study, although many of the criteria are applicable to that.

Nor is it necessarily a guide for selecting the right libraries/frameworks for use in your projects. But again, this could be a starting point for that. And if you’re overwhelmed by choosing from the 635,000(!) npm modules out there, check out this post I wrote on that!

On to the criteria…

Documentation

Documentation is probably one of the most important things to look for when you’re assessing a project. This is true whether you’re using the repo to study or to just consume/use in a project.

It’s so important because it serves as the “entry point” into a codebase. The documentation (and I’m including project examples as part of this, often in their own folder in the repo) is often what developers first encounter, before they even jump into the code.

Seeing as open source projects are often written in someone's free time, documentation can often fall by the wayside, but it's important that there be at least some level of docs, and I always prioritize projects with more documentation over those with less.

Good documentation will generally include:

- A README.md file in the root of the project. Some projects have documentation spread throughout the sub-folders as well, and while this is better than no documents, I find this style more difficult to read and consolidate with the information found in other directories. The README should list the public API/functions and what they do, how to use them, any "gotchas", etc.
- Visual diagrams, if applicable.
- Examples in the documentation or a separate folder containing multiple examples. The nice thing about having folders with examples is that you can clone the repo and run them there, without having to copy/paste from a README.md or other Markdown file. These examples should show you how to get set up, use the API, etc.

As an example, the functional programming library Ramda has great docs for its API, including a REPL that allows you to run the examples and play around with the library right in the browser!


[Screenshot: Ramda's API documentation, with its in-browser REPL]

Studying a repository is not just a way to get better at reading and writing code, but also at writing documentation. Good projects will have good examples of documentation you can use as models for documenting your own projects.

Tests

In my book, tests are just as important as documentation, so in terms of priority I’d put them on equal footing. While documentation will give you a good overview of the project, its API, etc., tests will really help you when you get stuck during your studies.

Hopefully the code itself will be well-written, but having the tests to fall back on when you can’t figure out the code is very important. Even if you don’t get stuck, I find it extremely helpful to have the tests to follow along with and will often have the test file and source file open side-by-side in my IDE.

Tests are similar to documentation in that if you can’t read ’em, you can’t understand ’em. Good tests will have understandable assertions, things like:

it('should filter out non-strings', () => { ... })

vs. vague assertions, like:

it('should filter the object', () => { ... })

Another possible way of quickly assessing unit tests is looking for a code coverage badge in the README.md. Popular projects often have this, like Express, below:


[Screenshot: code coverage badge in the Express README]
However, just because a project has high test coverage does not mean the tests are good or written in a meaningful way. I combine this check with the other methods of assessing tests talked about above.

Structure/Code organization

Due to the lack of a “canonical” structure or code organization for Node projects, it’s not uncommon for developers to look to existing open source projects for guidance. So if you’re looking at projects for structure examples, this criterion might be harder to suss out.

Still, there are a couple easy things you can quickly check:

First, does the project follow any structure at all? Or is everything in randomly named folders and files? For smaller projects, having all the code in an index.js file in the root of the project is usually fine as long as it makes sense compared to the size/features of that project. If that file is 3000 lines of code long and doing lots of different things, then it might get confusing to read.

Second, even if the structure is unfamiliar to you, can you quickly get a sense of the organization? Part of this is having appropriately named directories and subdirectories, but I’ve found a “gut check” usually works here.

For example, if you find there are utility functions spread across 5 different directories, or if you find there are directory structures that are something like 4+ levels deep, that’s usually a sign the code organization is not good and you will struggle with figuring out where things are while studying the project.

Code quality

Code quality is a highly contested topic, and depending on who you ask, kind of subjective.

Even so, there are some quick ways of assessing
          Python Application and Platform Developer - Plotly - Montréal, QC      Cache   Translate Page      
Engage our Open Source communities and help take stewardship of our OSS projects. Plotly is hiring Pythonistas!...
From Plotly - Thu, 06 Sep 2018 15:03:02 GMT - View all Montréal, QC jobs
          MongoDB to acquire cloud database provider MLab for $68 million      Cache   Translate Page      

MongoDB, the New York-based company behind the popular eponymous open source NoSQL database, has acquired San Francisco cloud database provider MLab (styled “mLab”). The purchase price was $68 million, according to an 8-K filing. Founded in 2007, MongoDB employs most of the developers behind the MongoDB database, and monetizes through providing commercial support and a range of related […]


          Gains for Open Document Format (ODF) and Nextcloud      Cache   Translate Page      
  • Renewed push for adoption of ODF document standard

    The Document Foundation, the organisation supporting the development of LibreOffice, is calling for supporters to promote the use of Open Document Format (ODF). Standardisation organisation OASIS would welcome and assist renewed marketing efforts, as would the Open Source Initiative, says OSI director Italo Vignoli.

  • Microsoft and Telekom no longer offer cloud storage under German jurisdiction

    Nextcloud is an open source, self-hosted file share and communication platform. Access & sync your files, contacts, calendars & communicate and collaborate across your devices. You decide what happens with your data, where it is and who can access it!


          ShadowPlay: Using our hands to have some fun with AI      Cache   Translate Page      

Editor’s note: TensorFlow, our open source machine learning platform, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways and at Google, we're always looking to do the same. Here's one of those stories.

Chinese shadow puppetry—which uses silhouette figures and music to tell a story—is an ancient Chinese art form that’s been used by generations to charm communities and pass along cultural history. At Google, we’re always experimenting with how we can connect culture with AI and make it fun, which got us thinking: can AI help put on a shadow puppet show?

So we created ShadowPlay, an interactive installation that celebrates the shadow puppetry art form. The installation, built using TensorFlow and TPUs, uses AI to recognize a person’s hand gestures and then magically transform the shadow figure into digital animations representing the 12 animals of the Chinese zodiac, all in an interactive show.


Attendees use their hands to make shadow figures, which transform into animated characters and creatures.

We debuted ShadowPlay at the World AI Conference and Google Developers Day in Shanghai in September. To build the experience, we developed a custom machine learning model that was trained on a dataset made up of lots of examples of people’s hand shadows, which could eventually recognize the shadow and match it to the corresponding animal. “In order to bring this project to life, we asked Googlers to help us train the model by making a lot of fun hand gestures. Once we saw the reaction of users seeing their hand shadows morph into characters, it was impossible not to smile!”, says Miguel de Andres-Clavera, Project Lead at Google. To make sure the experience could guess what animal people were making with high accuracy, we trained the model using TPUs, our custom machine learning hardware accelerators.
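
None of ShadowPlay's code or data is public, but as a rough illustration of the kind of classifier described above, here is a minimal TensorFlow sketch (the hand_shadows/ dataset directory, image size and architecture are all hypothetical):

import tensorflow as tf

# Load a directory of hand-shadow photos, one sub-folder per zodiac animal.
train = tf.keras.preprocessing.image_dataset_from_directory(
    "hand_shadows/", image_size=(96, 96), batch_size=32)
train = train.map(lambda x, y: (x / 255.0, y))  # normalize pixel values

# A small convolutional classifier over the 12 zodiac classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(96, 96, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=5)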

We had so much fun building ShadowPlay (almost as much fun as practicing our shadow puppets … ), that we’ll be bringing it to more events around the world soon!


          Cloud Foundry launches certified systems Integrator programme      Cache   Translate Page      

Open source outfit Cloud Foundry has announced the Cloud Foundry Certified Systems Integrator programme.

The programme is designed to help Systems Integrators (SIs), consultancies and professional services organisations highlight their expertise working with the Cloud Foundry family of technologies.

These organisations have demonstrated contributions to the Cloud Foundry community through contributing code, hosting meetups, public marketing of the platform, Foundation membership and more.


          Microsoft opens up more than 60,000 patents: its Open Source philosophy takes hold      Cache   Translate Page      


Had someone told us a few years ago that Microsoft would open source its patent portfolio, we would not have believed it. Yet the company has just joined the Open Invention Network (OIN) consortium, and more than 60,000 patents in its portfolio can now be used freely by the other members. It is one more step demonstrating Microsoft's love for Linux and Open Source.

Microsoft owns countless patents related to Linux, patents for which it has charged considerable sums to Android makers such as Samsung. By joining OIN, those patents are now licensed royalty-free and without restrictions to the rest of the consortium's members.


An (almost) Open Source company

Does this definitively make Microsoft an Open Source company? Technically no: although 60,000 licenses is a lot, they are not all the ones in the Redmond company's portfolio. The patents that relate specifically to Windows code remain private.


As Javier Pastor predicted at the time, an Open Source Windows is not so simple. The main problem is that Microsoft does not depend only on itself for this. There is a huge amount of hardware whose proper functioning Microsoft looks after, and if Windows became open source, technically the drivers for those products, which belong to other companies, would have to follow. The complexity and scale of the change make the idea of an open source Windows difficult, but you never know which paths the company may take in the medium term.

But among those 60,000 there are a great many patents currently used in Android and in smartphone manufacturing in general. Google and IBM are also among OIN's main members, so Microsoft will, for example, automatically stop collecting a fee on smartphones sold with Android installed.


With this integration into the OIN consortium Microsoft may still not be an Open Source company, but it has become one of the community's greatest defenders. Let's not forget that GitHub, the collaborative development portal, is now part of Microsoft.

A change of CEO, a change of philosophy

With Steve Ballmer at the head of the tech giant nobody would have imagined a move like this, but with the arrival of Satya Nadella things began to change. "Cloud first" has been the current CEO's idea since he took office, and that cloud is Azure, a server platform that runs on Linux and draws on the open source community. Azure is Microsoft's main source of revenue, while sales of Windows licenses rank only third, behind sales of Office. Seen that way, it makes sense for Microsoft to open its doors wide to the Open Source community.


Satya Nadella has already said that he did not want to fight his predecessor's old battles. Why bother suing companies when you can embrace Open Source and build a business around Azure with third parties? Microsoft's current philosophy is clearer than ever: a company open to open source, as examples like today's demonstrate.

More information | Open Invention Network


-
The story Microsoft opens up more than 60,000 patents: its Open Source philosophy takes hold was originally published on Xataka by Cristian Rus.


          FileZilla 3.37.4      Cache   Translate Page      

FileZilla is a powerful Open Source FTP/SFTP client with many features. It includes a site manager to store all your connection details and logins as well as an Explorer style interface that shows the local and remote folders and can be customized independently. The program offers support for firewalls and proxy connections as well as SSL and Kerberos GSS security. Additional features include keep alive, auto ASCII/binary transfer, download queue, manual transfers, raw FTP commands and more. A nice program for beginners and advanced users alike.

Thanks to ARMOUR and ADN for the update.

Download


          Microsoft further commits to open source, releases two thirds of its patents to the Open Invention Network      Cache   Translate Page      
Other members of the organization include Google, IBM, and RedHat.
          Complete Internet Repair 5.2.3.3990      Cache   Translate Page      
Description: Complete Internet Repair repairs every Internet-related problem. Complete Internet Repair is a free Open Source power tool to repair internet connections and get you up and running in no time. Please note that it is unable to repair hardware faults or get your ISP up and running (not yet, anyway). In short; Complete Internet […]
          6 Best Free Remote Access Software Tools in 2018      Cache   Translate Page      

The purpose of remote access software, sometimes also called remote desktop software or remote control software, is to let you control a computer remotely from another computer. This can be useful when a friend or relative who isn’t as computer-savvy as you are asks you for help, or when you let an application run on your computer and want to monitor its progress even when you’re away.

Most remote access software tools rely on a client-server architecture, with both the client and the server using a piece of software to facilitate the connection. In practice, this means that you need to install a remote access software host application on the computer you would like to access remotely, and then run a remote access software client application on each device from which you would like to connect to the computer.

Some remote access software tools make this easier than others, so it’s important to pick one that fits your needs and skill level. Typically, the more complicated a remote access software tool is, the greater control over the remote connection it gives you. To get you started, we’ve selected the 6 best free remote access software tools available and described the main characteristics of each.

1. TeamViewer

TeamViewer is by far the most popular remote access software tool available. Connecting over 1.7 billion devices every day, TeamViewer has convinced millions of home and business customers that it’s the best remote connectivity solution on the market with its incredibly fast and secure global network, wide range of features, and excellent ease of use.

TeamViewer is free for personal use, and it’s available for Windows, macOS, Linux, Chrome OS, iOS, Android, Windows RT, Windows Phone, and BlackBerry. Besides remote support, you can also use TeamViewer for file transfers, remote printing, or to access unattended computers, servers, Android devices, point-of-sale devices, or public displays.

2. Chrome Remote Desktop

If you use the Chrome web browser or own a Chromebook, Chrome Remote Desktop is arguably the most straightforward remote access software tool you can use to access your devices remotely. Developed by Google and available as a Chrome app, this remote access software tool uses a proprietary protocol developed by Google to transmit the keyboard and mouse events, video, and audio from one computer to another.

Once you’ve installed Chrome Remote Desktop on your computer, you can begin sharing your desktop simply by giving access to anyone you want. All connections are fully secured, so there’s no reason to worry about someone intercepting your remote desktop session and stealing sensitive information from you. Chrome Remote Desktop is free and works on Windows, macOS, Linux, iOS, and Android.

3. Remote Utilities

Remote Utilities is an advanced remote access software tool with support for Active Directory, which is a directory service that Microsoft developed for the Windows domain networks. The purpose of Active Directory is to, among other things, authenticate and authorize all users and computers in a Windows domain type network. Because Remote Utilities easily integrates into any Active Directory environment, you can use it to administer your entire network with unprecedented comfort.

Remote Utilities can operate as a 100-percent autonomous remote support solution to comply with the strictest security requirements, and it comes with a useful MSI Configurator utility that allows you to create a custom Host installer for further deployment across your network. You can try Remote Utilities for free for 30 days and use the online License Calculator to find out how much Remote Utilities would cost you after the trial period ends.

4. UltraVNC

UltraVNC is an open source remote access software tool aimed at people who desire the greatest amount of control over their remote connections. It uses the VNC protocol, which was originally developed at the Olivetti & Oracle Research Lab in Cambridge and is now available in a number of variants, including the one implemented in UltraVNC.

UltraVNC works only on Windows and supports various features, such as encryption, file transfers, chat, and multiple authentication methods. To remotely administer one computer from another using UltraVNC, the two computers must be able to directly communicate across a network. This often leads to NAT/firewall issues, making UltraVNC considerably harder to set up than the above-described remote access software tools.

5. Microsoft Remote Desktop

Microsoft Remote Desktop is a simple yet powerful application from Microsoft that allows you to connect to a remote PC or virtual apps and desktops. It’s available for all Windows-based devices and works in conjunction with the Remote Desktop assistant, which was added in the Windows 10 Fall Creators Update (1709) and is also available as a separate download.

To enable remote access on Windows, simply select Start and click the Settings icon on the left. Then choose Remote Desktop under the System group and use the slider to enable Remote Desktop. That’s how easy it is to use Microsoft Remote Desktop.

6. CloudBerry Remote Assistant

CloudBerry Remote Assistant is an easy-to-use Windows tool for remote control and desktop sharing. After setting up the links between two computers you choose whether you want to give full access or only viewing rights. The SSL-encryption that is used for all communications ensures that all your connections are fully secure.

The solution brings lots of neat features, such as unattended access, text and voice chat, a multi-regional authentication server and file transfer. For instant support, you can establish the connection without installing anything on the target workstation. CloudBerry Remote Assistant is absolutely free of charge for personal and professional purposes.


          Microsoft Azure DevOps for ASP .NET Core Web apps      Cache   Translate Page      

Short introduction

Before we start with the Microsoft Azure DevOps service, let's explain what DevOps is. “DevOps is the union of people, process, and products to enable continuous delivery of value to your end users.” As you can see, this is not one specific thing. Azure DevOps is a solution created to support this “union”. It provides tools to manage team work collected in the backlog, it provides GIT repositories to store the code, and it provides automatic builds and releases once a new feature is committed. In this article I would like to present how to use Azure DevOps to provide continuous integration and delivery for ASP .NET Core Web Apps. If you want to read more about Azure DevOps, visit the official website.

Project structure and setup

Creating an account in Azure DevOps is free. You can create one using this link. Once you sign in you should see the main panel, where you can manage your organization settings and projects. Let's start by creating a new project: click the “Create project” button, provide a name for the project and a short description, select GIT version control and the Agile work items process below, and click the “Create” button.



Once the project is created you should see the navigation panel.


Repository setup

In this section we will commit the project’s code to the GIT repository in Azure DevOps. For this article I used an existing open source project called “eShopOnWeb”. Download its code from GitHub here.



Open the “Repos” tab and select “Branches”. In the top right corner click the “New branch” button. Create two more branches so that in total there will be three of them: master, stage and dev.



Once you have the branches ready there is one more thing to do: set the “dev” branch as the default one. Click “Project settings” at the bottom and go to the “Repositories” tab. Click on the “dev” branch, select the three dots on the right, and choose “Set as default branch”. With the branches ready it is time to commit the code to the “dev” branch. Download the code from GitHub first; then map our GIT repository: open the “Repos” tab, click the “Clone” button, and generate GIT credentials here. I am not sure which tool you prefer to use, so let's omit the part with the commits; the only thing that matters is that the source code from GitHub is committed to the Azure DevOps GIT repository.



Once you commit the code it should appear in the Azure DevOps portal.



There is a great feature for branches called “Branch policy”. We can set a few different policies for a branch; for instance, you cannot complete a pull request if you have not set the tasks related to it, or you cannot complete a pull request if the code from your branch cannot be built. Below I present how to set up a policy for the “dev” branch. In the “Branches” section click the three dots next to the “dev” branch and select “Branch policies”.



Select “Require a minimum number of reviewers”. If you save this policy you will not be able to merge changes before the two specified reviewers have done a code review of your proposed changes.



This will also enforce the use of pull requests when updating the branch.

Board setup

With Azure DevOps boards you can plan the team's work and create a backlog for your product. You can create tasks, issues, bugs, user stories and features.



If you want to read more about backlog configuration with Azure DevOps please refer to this documentation.

For now we will create one user story and one task inside it. The user story and task will be related to updating the ReadMe file.



Of course you can add as many stories as you need for the project.

Build pipeline setup

In this section we will set up the build pipeline for our web app project. We want to build the code each time there is an update to the source code on the “dev” branch. To configure the automatic build, open the “Pipelines” tab and then the “Builds” section.



Click the “New pipeline” button and then select “Use the visual designer to create a pipeline without YAML.”

There are a few steps:

1. Location - we have to indicate where the code is located; in our case this will be “Azure Repos GIT”.



2. In this step we can select a template for our build. This is very convenient because we do not have to define everything from scratch. Select the “ASP .NET Core” template.



3. The pre-configured steps will be displayed. We want to build our code each time a new merge is done. To do this, select “Enable continuous integration” in the “Triggers” section.



4. Let's change the build definition name to “eShopOnAzureDevOps-ASP.NET Core-CI-DEV” - just click on the text at the top.

5. Click “Save & queue” button. Select “Hosted VS2017” as build agent. New build should be scheduled:



6. Once the build is finished we can check the result.



We can move forward to the next section.

Release pipeline setup

We will host our application using the Microsoft Azure Web App service. In this section I would like to present how to use Azure Web App deployment slots, so you can deploy two (or more) versions of the application and have them available under different URL addresses. You can read more about deployment slots here. As a result of the setup below we will configure two different release pipelines: one for demo and one for production.

Create a web application in the Azure portal: log in to the Azure portal and create a new Web App service inside a new resourc
          Microsoft joins Open Invention Network to help protect Linux and open source      Cache   Translate Page      
We know Microsoft’s decision to join OIN may be viewed as surprising to some; it is no secret that there has been friction in the past between Microsoft and the open source community over the issue of patents.
It's as if millions of Linux fanboi voices cried out in shock
          Hadoop Needs To Be A Business, Not Just A Platform      Cache   Translate Page      

It is safe to say that a little more than a decade ago, when Google’s MapReduce and Google File System distributed storage and computing platform was cloned at Yahoo and offered up to the world as a way to transform the nature of data analytics at scale, we all had much higher hopes for the emergence of platforms centered around Hadoop that would change enterprise, not just webscale, computing.

It has been a lot tougher to build the road to enterprise customers and therefore profits, and the reason is simple. Databases are extremely sticky and very hard to change, even when the promise of extremely cheap storage (at least by the standards of the mid-2000s) is dangled like a juicy carrot. Hadoop, which became the name of a collection of mostly open source programs dealing with data storage and analytics at scale, has been brilliantly and carefully evolved into a number of different platforms by the likes of Cloudera, Hortonworks, MapR Technologies, and even IBM for a while.

But Hadoop has remained a complex if sophisticated platform aimed at the upper echelons of computing, suitable for the Global 5000 customers that were once on the bleeding edge, four or five decades ago, with IBM mainframes for transaction processing. The pace of technological change has accelerated much faster than Moore’s Law, and there are so many ways to skin the analytics cat that it is, frankly, as ridiculous as it is exciting and interesting. Enthusiasm tends to run ahead of practicalities, which is why old technologies persist. The question we have as we contemplate the merger between Cloudera and Hortonworks, arguably the largest commercial distributors and the only two who have made it to public offerings to investors on Wall Street, is whether or not their momentum is enough that Hadoop will be able to evolve and become a profitable business.

That is the central question. One might argue that merging two customer bases, two code bases with different licensing philosophies and some radically different approaches to storing and querying data, and two distinct companies (so they stop fighting each other) will make for a better and stronger Hadoop platform. But it is certainly not a foregone conclusion that Hadoop as a business is as good as Hadoop as a platform, and the whole premise of a commercial open source distribution is that it has to be a good business so the platform can be reinvested in to keep it improving and evolving.

There are very few platforms that have succeeded in this regard, and Red Hat, with its Linux server, JBoss application middleware, OpenStack cloud controller, and OpenShift Kubernetes container orchestrator, is really the only good example worth bringing up from the open source realm. Nothing else even comes close. If Red Hat had created a Hadoop distribution, as many of us thought it should have, or bought one, as it certainly could have, it is probable that Cloudera and Hortonworks would have never become public companies, which allowed their investors, who collectively plowed $1.04 billion into the former and $248 million into the latter, to cash out. (Intel’s $740 million infusion into Cloudera in 2014 was just an example of the hubris and folly that the chip giant can indulge in thanks to its virtual monopoly in PC and server chips. It happens to all big tech companies that create large profit pools. This list of such acquisitions and partnerships by IBM is long, just as an example.)

As a venture capital harvesting machine, Hadoop has been brilliant. Don’t get us wrong. And from the humble beginnings of the MapReduce data chunking and chewing algorithm and the Hadoop Distributed File System, the Hadoop platform has grown into a vast ecosystem of tools that mirrors all of the wonderful things that have come to surround the Linux kernel and turned it into a proper operating system that can span everything from a smartphone to a supercomputer. Hadoop is ornate, and sometimes baroque, and has so many variations on the themes for everything from data storage to SQL and other kinds of database and data warehouse overlays to different distributed computation models to a layer for in-memory processing and machine learning. It is a very large Swiss Army knife. That is Hadoop’s best feature, and it has also been its curse. Perhaps now, with the two largest Hadoop players merging, the Hadoop stack can be pruned a bit and better optimized for the workloads of the 2020s.

That, we presume, is the idea behind the merger between Cloudera and Hortonworks. The two companies also want to remove costs and probably remove some of the intense competition on pricing to get the combination to profitability, as is expected from every public company. (Will it be called CloudWorks? HortonEra? Something different? Or just Cloudera? Probably not Hortonworks.) And it looks like the combined Hadoop distributions have a path to profitability, if the trend lines hold.

That said, these two companies have burned through a tremendous amount of money to get here, and in the past six and a half years where we have visibility into the numbers, the businesses have indeed grown, but at tremendous cost. Both Cloudera and Hortonworks had models that showed they would grow a lot faster than they actually did, and the slower growth has pushed out the point of profitability ahead of them every year. Adding more and more blades to the Hadoop Swiss Army knife has been costly, and to their credit, they have done innovative things that have kept Hadoop relevant as conditions in the market have changed dramatically in the past decade. What they are trying to do is extremely difficult, and we have nothing but tremendous respect for the effort that some of the smartest people we know in infrastructure and business have put in. But the numbers are not pretty, even if they are getting rosier here in 2018 and looking out into 2019 and 2020.

In fact, it might have been better for these two companies to have merged a few years back, cleaned everything up, got the synergies reckoned, and be going public right now.

Cloudera got the early jump as the dominant revenue generator of the Hadoop stack, but in recent years, Hortonworks has been catching up fast. Take a look:


[Chart: Cloudera and Hortonworks quarterly revenues]

The bars at the far right show figures for the first half of calendar 2018, so don’t mistakenly think revenues have dropped.

Here is our analysis in tabular form, so you can see the numbers yourself:


[Table: Cloudera and Hortonworks financial summary]

Clearly sales growth has cooled for Cloudera: it only grew 26 percent in the past two quarters compared to the same period 12 months earlier, down from the 42 percent growth rate in 2017, while Hortonworks is still humming along at 40 percent growth thus far in 2018. Looking at the combined companies is probably the best way to reckon the overall growth rate, and for the first half that is 32 percent growth, for $378 million in sales. There is no reason to believe that the combination cannot break through $800 million in sales in calendar 2019 and push up through $1 billion in 2020.

If you do the math, Cloudera has raked in $1.28 billion in revenues in the past six and a half years, while Hortonworks only brought in $808 million. Add in the $1.31 billion in venture capital, plus the $225 million that Cloudera raised in early 2017 for its IPO and the $100 million that Hortonworks raised in late 2014 from its IPO, and the total pile of cash that has come to the pair is $3.69 billion. Hortonworks still has $86 million of cash and Cloudera still has $440.1 million. But over that same time period, Cloudera has booked cumulative losses of $1.19 billion and Hortonworks has cumulative losses of $979 million, for a total of $2.16 billion. Both separately and together, these companies are burning the wood a lot faster than they can cut it. This chart shows it visually:


[Chart: cumulative cash raised versus cumulative losses]

But the financial situation is getting better, as the data shows. The Cloudera and Hortonworks presentations accompanying the merger announcement use trailing twelve month data, which is suitable, but we used the quarterly data above because it has a longer trend line. Here is what the top-level profit financials look like:


[Chart: trailing twelve month financials]

The combined companies, on a trailing twelve month basis, have $720 million in revenues, and gross margins are pretty good at 74 percent. A software company with a legacy installed base can pull in 85 percent to 95 percent gross margins, and aside from the redundant costs and other synergies that Cloudera and Hortonworks are talking about eliminating, the reduction in competition is going to help, too. No one is talking about that, of course, but it is no doubt part of the thinking behind the merger. There is some cross-selling possible to boost revenues, but we think the reduction in competition is a bigger deal. Rationalizing the very different licensing models (Hortonworks is pure open source with subscription support, while Cloudera is open core with support plus enterprise add-ons with subscription licenses for key features) is not going to be easy, and many products and projects will have to be merged or one picked over the other. Still, even with the $125 million in synergies removed, the combined company will move closer towards profitability, and with a reasonable 30 percent revenue growth rate, the new entity should break through $1 billion and be profitable in 2020. And that is precisely the plan:



To be precise, the combined Cloudera-Hortonworks is telling Wall Street that it can get above $1 billion in sales, with gross margins of about 75 percent and operating margins above 10 percent, for calendar 2020. Which implies that it will have actual profits, if all goes well.

The total addressable market is expanding, too, and that will help. Here is how Cloudera and Hortonworks see the opportunity out in front of them:



The core market that Hadoop is chasing comprises three different segments, according to Cloudera-Hortonworks, and will grow at a compound annual growth rate of 21 percent between 2017 and 2022, from $12.7 billion to $32.3 billion. Within that, cognitive and artificial intelligence workloads represent a $14.3 billion opportunity in 2022, advanced and predictive analytics software a $4.9 billion opportunity, and dynamic data management systems (what we would call modern storage) a $13.2 billion opportunity. In addition to that, the Hadoop platform is also chasing relational and non-relational database management systems and data warehouses, which is another $51 billion opportunity in 2022, for a total TAM of $83 billion. Even a small slice of this, which is what Hadoop gets today, could be billions of dollars by then. (We shall see.)

The deal for the merger of the two companies is surprisingly simple. Shareholders in Hortonworks will get 1.305 shares in Cloudera, and Cloudera will be the remaining company, in fact if not necessarily in name. This means that Cloudera shareholders will own 60 percent of the combined company and Hortonworks shareholders will own the remaining 40 percent. The combined companies had a fully diluted equity value of $5.2 billion before the merger was announced. At the time the deal was announced, the combined firms had more than $500 million in cash, no debt, and 2,500 customers who largely do not overlap. There are more than 120 customers who spend $1 million a year and another 800 customers who spend more than $100,000 a year for subscriptions and such.


          Build a Mobile App with React Native and Spring Boot      Cache   Translate Page      

React Native is a framework for building mobile applications with React. React allows you to use a declarative style of programming to describe how your UI should look. It uses embedded HTML (called JSX) to render buttons, lists, scrollable views, and many other components.

I’m a seasoned Java and JavaScript developer who loves Spring and TypeScript. Some might call me a Java hipster because I like JavaScript. In this post, I’m going to show you how to build a Spring Boot API that talks to a PostgreSQL database. You’ll use Elasticsearch to make your data searchable. You’ll also learn how to deploy it to Cloud Foundry and to Google Cloud Platform using Kubernetes.

The really cool part is you’ll see how to build a mobile app with React Native. React Native allows you to build mobile apps with the web technologies you know and love: React and JavaScript! I’ll show you how to test it on device emulators and deploy it to your phone. Giddyup!

Create a Spring Boot App

In my recent developer life, I built an app to help me track and monitor my health. I came up with the idea while writing the JHipster Mini-Book. I was inspired by Spring Boot’s Actuator, which helps you monitor the health of your Spring Boot app. The app is called 21-Points Health and you can find its source code on GitHub.

21-Points Health uses a 21-point system to see how healthy you are being each week. Its rules are simple: you can earn up to three points per day for the following reasons (a short sketch after the list shows how the tally works):

  1. If you eat healthy, you get a point. Otherwise, zero.

  2. If you exercise, you get a point.

  3. If you don’t drink alcohol, you get a point.
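
To make the scoring concrete, here is a minimal sketch in plain JavaScript of how the tally could be computed. It assumes the same field names (exercise, meals, alcohol) as the Points entity defined later, each holding 1 or 0; it is an illustration, not code from the generated app.

// Tally one day's score (0-3) from a record whose exercise, meals and
// alcohol fields each hold 1 for a "good" day and 0 otherwise.
const dailyScore = (points) => points.exercise + points.meals + points.alcohol

// A week's total sums up to seven days, so it is capped at 21 by construction.
const weeklyScore = (days) => days.reduce((sum, day) => sum + dailyScore(day), 0)

console.log(weeklyScore([{exercise: 1, meals: 1, alcohol: 0}])) // prints 2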

I’m going to cheat a bit in this tutorial. Rather than writing every component line-by-line, I’m going to generate the API and the app using JHipster and Ignite JHipster.

What is JHipster?

I’m glad you asked! It’s an Apache-licensed open source project that allows you to generate Spring Boot APIs, as well as Angular or React UIs. It includes support for generating CRUD screens and adding all the necessary plumbing. It even generates microservice architectures!

Ignite JHipster is a complementary feature of JHipster. It’s a blueprint template for the Ignite CLI project. Ignite CLI is open source and MIT licensed, produced by the good folks at Infinite Red. Ignite CLI allows you to generate React Native apps in seconds with a number of components pre-integrated. I was blown away the first time I saw a demo of it from Gant Laborde.

To get things moving quickly, I ran jhipster export-jdl to export an entity definition from 21-Points Health. After exporting the entity definitions, I used JDL-Studio to create an application definition for my project. Then I clicked the download icon to save the file to my hard drive.

JDL-Studio

The code you see below is called JDL, or JHipster Domain Language. It was initially designed for JHipster to allow defining multiple entities and specifying all their attributes, relationships, and pagination features. It’s recently been enhanced to allow generating whole apps from a single file! 💥

application {
  config {
    applicationType monolith,
    baseName HealthPoints
    packageName com.okta.developer,
    authenticationType oauth2,
    prodDatabaseType postgresql,
    buildTool gradle,
    searchEngine elasticsearch,
    testFrameworks [protractor],
    clientFramework react,
    useSass true,
    enableTranslation true,
    nativeLanguage en,
    languages [en, es]
  }
  entities Points, BloodPressure, Weight, Preferences
}

// JDL definition for application 'TwentyOnePoints' generated with command 'jhipster export-jdl'

entity BloodPressure {
  timestamp ZonedDateTime required,
  systolic Integer required,
  diastolic Integer required
}
entity Weight {
  timestamp ZonedDateTime required,
  weight Double required
}
entity Points {
  date LocalDate required,
  exercise Integer,
  meals Integer,
  alcohol Integer,
  notes String maxlength(140)
}
entity Preferences {
  weeklyGoal Integer required min(10) max(21),
  weightUnits Units required
}

enum Units {
  KG,
  LB
}

relationship OneToOne {
  Preferences{user(login)} to User
}
relationship ManyToOne {
  BloodPressure{user(login)} to User,
  Weight{user(login)} to User,
  Points{user(login)} to User
}

paginate BloodPressure, Weight with infinite-scroll
paginate Points with pagination

Create a new directory, with a jhipster-api directory inside it.

mkdir -p react-native-spring-boot/jhipster-api

Copy the JDL above into an app.jh file inside the react-native-spring-boot directory. Install JHipster using npm.

npm i -g generator-jhipster@5.4.2

Navigate to the jhipster-api directory in a terminal window. Run the command below to generate an app with a plethora of useful features out-of-the-box.

jhipster import-jdl ../app.jh

Run Your Spring Boot App

This app has a number of technologies and features specified as part of its application configuration, including OIDC auth, PostgreSQL, Gradle, Elasticsearch, Protractor tests, React, and Sass. Not only that, it even has test coverage for most of its code!

To make sure your app is functional, start a few Docker containers for Elasticsearch, Keycloak, PostgreSQL, and Sonar. The commands below should be run from the jhipster-api directory.

docker-compose -f src/main/docker/elasticsearch.yml up -d
docker-compose -f src/main/docker/keycloak.yml up -d
docker-compose -f src/main/docker/postgresql.yml up -d
docker-compose -f src/main/docker/sonar.yml up -d

The containers might take a bit to download, so you might want to grab a coffee, or a glass of water.

While you’re waiting, you can also commit your project to Git. If you have Git installed, JHipster will run git init in your jhipster-api directory. Since you’re putting your Spring Boot app and React Native app in the same repository, remove .git from jhipster-api and initialize Git in the parent directory.

rm -rf jhipster-api/.git
git init
git add .
git commit -m "Generate Spring Boot API"

Ensure Test Coverage with Sonar

JHipster generates apps with high code quality. Code quality is analyzed using SonarCloud, which is automatically configured by JHipster. The "code quality" metric is determined by the percentage of code that is covered by tests.

Once all the Docker containers have finished starting, run the following command to prove code quality is 👍 (from the jhipster-api directory).

./gradlew -Pprod clean test sonarqube
If you don’t commit your project to Git, the sonarqube task might fail.

Once this process completes, an analysis of your project will be available on the Sonar dashboard at http://127.0.0.1:9001. Check it - you have a triple-A-rated app! Not bad, eh?

Sonar AAA

Create a React Native App for Your Spring Boot API

You can build a React Native app for your Spring Boot API using Ignite JHipster, created by Jon Ruddell. Jon is one of the most prolific JHipster contributors. ❤️

Ignite JHipster

Install Ignite CLI:

npm i -g ignite-cli@2.1.2 ignite-jhipster@1.12.1

Make sure you’re in the react-native-spring-boot directory, then generate a React Native app.

ignite new HealthPoints -b ignite-jhipster

When prompted for the path to your JHipster project, enter jhipster-api.

When the project is finished generating, rename HealthPoints to react-native-app, then commit it to Git.

mv HealthPoints react-native-app
rm -rf react-native-app/.git
git add .
git commit -m "Add React Native app"

You might notice that two new files were added to your API project.

create mode 100644 jhipster-api/src/main/java/com/okta/developer/config/ResourceServerConfiguration.java
create mode 100644 jhipster-api/src/main/java/com/okta/developer/web/rest/AuthInfoResource.java

These classes configure a resource server for your project (so you can pass in an Authorization header with an access token) and expose the OIDC issuer and client ID via a REST endpoint.
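
For reference, the auth info endpoint simply returns the OIDC issuer and client ID as JSON so the mobile app can bootstrap its login flow. With the Keycloak settings used in this tutorial, a response would look roughly like this (field names may differ slightly between generator versions):

{
  "issuer": "http://localhost:9080/auth/realms/jhipster",
  "clientId": "web_app"
}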

Modify React Native App for OAuth 2.0 / OIDC Login

You will need to make some changes to your React Native app so OIDC login works. I’ve summarized them below.

Update Files for iOS

If you’d like to run your app on iOS, you’ll need to modify react-native-app/ios/HealthPoints/AppDelegate.m to add an openURL() method and an import at the top.

#import <React/RCTLinkingManager.h>

Then add the method before the @end at the bottom of the file.

- (BOOL)application:(UIApplication *)application
           openURL:(NSURL *)url
           options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options
{
 return [RCTLinkingManager application:application openURL:url options:options];
}

You’ll also need to configure your iOS URL scheme. Run open ios/HealthPoints.xcodeproj to open the project in Xcode. Navigate to Project > Info > URL Types and specify healthpoints like in the screenshot below.

Xcode URL Scheme

You can also modify ios/HealthPoints/Info.plist if you’d rather not use Xcode.

        <key>CFBundleSignature</key>
        <string>????</string>
+       <key>CFBundleURLTypes</key>
+       <array>
+               <dict>
+                       <key>CFBundleTypeRole</key>
+                       <string>Editor</string>
+                       <key>CFBundleURLName</key>
+                       <string>healthpoints</string>
+                       <key>CFBundleURLSchemes</key>
+                       <array>
+                               <string>healthpoints</string>
+                       </array>
+               </dict>
+       </array>
        <key>CFBundleVersion</key>

Update Files for Android

To make the Android side of things aware of your URL scheme, add it to android/app/src/main/AndroidManifest.xml. The following XML should go just after the existing <intent-filter>.

<intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LAUNCHER" />
    <data android:scheme="healthpoints" />
</intent-filter>

Update Keycloak’s Redirect URI

You will also need to update Keycloak to know your app’s URL scheme because it’s used as a redirect URI. Open http://localhost:9080/auth/admin in your browser and log in with admin/admin. Navigate to Clients > web_app and add healthpoints://authorize as a valid redirect URI.

Valid Redirect URIs

Run Your React Native App on iOS

To run your React Native app, you’ll need to start your Spring Boot app first. Navigate to the jhipster-api directory and run ./gradlew. In another terminal window, navigate to react-native-app and run react-native run-ios.

If you get an error Print: Entry, ":CFBundleIdentifier", Does Not Exist, run rm -rf ~/.rncache and try again.

Verify you can log in by clicking the hamburger menu in the top left corner, then Login. Use "admin" for the username and password.

Ignite JHipster with Keycloak
To enable live-reloading of your code in iOS Simulator, first click on the emulator, then press ⌘ + R.

Run Your React Native App on Android

To run your app on an Android emulator, run react-native run-android. If you don’t have a phone plugged in or an Android Virtual Device (AVD) running, you’ll see an error:

Could not install the app on the device, read the error above for details.

To fix this, open Android Studio, choose open existing project, and select the android directory in your project. If you’re prompted to "Install Build Tools and sync project," do it.

To create a new AVD, navigate to Tools > Android > AVD Manager. Create a new Virtual Device and click Play. I chose a Pixel 2 as you can see from my settings below.

AVD Pixel 2

To make Keycloak and your API work with Android in an emulator, you’ll have to change all localhost links to 10.0.2.2. See Android Emulator networking for more information.

This means you’ll need to update src/main/resources/config/application.yml in the JHipster app to the following.

security:
    oauth2:
        client:
            access-token-uri: http://10.0.2.2:9080/auth/realms/jhipster/protocol/openid-connect/token
            user-authorization-uri: http://10.0.2.2:9080/auth/realms/jhipster/protocol/openid-connect/auth
            client-id: web_app
            client-secret: web_app
            scope: openid profile email
        resource:
            user-info-uri: http://10.0.2.2:9080/auth/realms/jhipster/protocol/openid-connect/userinfo

You’ll also need to update apiUrl in your React Native app’s App/Config/AppConfig.js.

export default {
  apiUrl: 'http://10.0.2.2:8080/',
  appUrlScheme: 'healthpoints'
}

Run react-native run-android again. You should be able to log in just like you did on iOS. Unfortunately, I wasn’t able to make it work. Even if I had been able to make it work, it’d make it impossible to log in to the React app in the JHipster app, because your local server wouldn’t know where 10.0.2.2 is. This was a bad developer experience for me. The good news is everything works with Okta (which I’ll get to in a minute).

To enable live-reloading of code on Android, first click on the emulator, then press Ctrl+M (⌘+M on macOS), or shake the Android device that is running the app. Then select the Enable Live Reload option from the popup.

For the rest of this tutorial, I’m going to show all the examples on iOS, but you should be able to use Android if you prefer.

Generate CRUD Pages in React Native App

To generate pages for managing entities in your Spring Boot API, run the following command in the react-native-app directory.

ignite generate import-jdl ../app.jh

Run react-native run-ios, log in, and click the Entities menu item. You should see a screen like the one below.

Ignite JHipster Entities Screen

Click on Points and you should be able to add points.

Create Points Screen

Tweak React Native Points Edit Screen to use Toggles

The goal of my 21-Points Health app is to count the total number of health points you get in a week, with the max being 21. For this reason, I think it’s a good idea to change the integer inputs on exercise, meals, and alcohol to be toggles instead of raw integers. If the user toggles it on, the app should store the value as "1". If they toggle it off, it should record "0".

To make this change to the React Native app, open App/Containers/PointEntityEditScreen.js in your favorite editor. Change the formModel to use t.Boolean for exercise, meals, and alcohol.

formModel: t.struct({
  id: t.maybe(t.Number),
  date: t.Date,
  exercise: t.maybe(t.Boolean),
  meals: t.maybe(t.Boolean),
  alcohol: t.maybe(t.Boolean),
  notes: t.maybe(t.String),
  userId: this.getUsers()
}),

Then change the entityToFormValue() and formValueToEntity() methods to save 1 or 0, depending on the user’s selection.

entityToFormValue = (value) => {
  if (!value) {
    return {}
  }
  return {
    id: value.id || null,
    date: value.date || null,
    exercise: value.exercise === 1 ? true : false,
    meals: value.meals === 1 ? true : false,
    alcohol: value.alcohol === 1 ? true : false,
    notes: value.notes || null,
    userId: (value.user && value.user.id) ? value.user.id : null
  }
}
formValueToEntity = (value) => {
  return {
    id: value.id || null,
    date: value.date || null,
    exercise: (value.exercise) ? 1 : 0,
    meals: (value.meals) ? 1 : 0,
    alcohol: (value.alcohol) ? 1 : 0,
    notes: value.notes || null,
    user: value.userId ? { id: value.userId } : null
  }
}

While you’re at it, you can change the default Points entity to have today’s date and true for every point by default. You can make this happen by modifying componentWillMount() and changing the formValue.

componentWillMount () {
  if (this.props.entityId) {
    this.props.getPoint(this.props.entityId)
  } else {
    this.setState({formValue: {date: new Date(), exercise: true, meals: true, alcohol: true}})
  }
  this.props.getAllUsers()
}

Refresh your app in Simulator using ⌘ + R. When you create new points, you should see your new defaults.

Create Points with defaults

Tweak React App’s Points to use Checkboxes

Since your JHipster app has a React UI as well, it makes sense to change the points input/edit screen to use a similar mechanism: checkboxes. Open jhipster-api/src/main/webapp/…​/points-update.tsx and replace the TSX (the T is for TypeScript) for the three fields with the following. You might notice the trueValue and falseValue attributes handle converting checked to true and vice versa.

jhipster-api/src/main/webapp/app/entities/points/points-update.tsx
<AvGroup check>
  <AvInput id="points-exercise" type="checkbox" className="form-control"
    name="exercise" trueValue={1} falseValue={0} />
  <Label check id="exerciseLabel" for="exercise">
    <Translate contentKey="healthPointsApp.points.exercise">Exercise</Translate>
  </Label>
</AvGroup>
<AvGroup check>
  <AvInput id="points-meals" type="checkbox" className="form-control"
    name="meals" trueValue={1} falseValue={0} />
  <Label check id="mealsLabel" for="meals">
    <Translate contentKey="healthPointsApp.points.meals">Meals</Translate>
  </Label>
</AvGroup>
<AvGroup check>
  <AvInput id="points-alcohol" type="checkbox" className="form-control"
    name="alcohol" trueValue={1} falseValue={0} />
  <Label check id="alcoholLabel" for="alcohol">
    <Translate contentKey="healthPointsApp.points.alcohol">Alcohol</Translate>
  </Label>
</AvGroup>

In the jhipster-api directory, run npm start (or yarn start) and verify your changes exist. The screenshot below shows what it looks like when editing a record entered by the React Native app.

checkboxes in React app

Use Okta’s API for Identity

Switching from Keycloak to Okta for identity in a JHipster app is suuuper easy thanks to Spring Boot and Spring Security. First, you’ll need an Okta developer account. If you don’t have one already, you can sign up at developer.okta.com/signup. Okta is an OIDC provider like Keycloak, but it’s always on, so you don’t have to manage it.

Okta Developer Signup

Log in to your Okta Developer account and navigate to Applications > Add Application. Click Web and click Next. Give the app a name you’ll remember, and specify http://localhost:8080/login and healthpoints://authorize as Login redirect URIs. Click Done, then edit it again to select "Implicit (Hybrid)" + allow ID and access tokens. Note the client ID and secret, you’ll need to copy/paste them into a file in a minute.

Create a ROLE_ADMIN and ROLE_USER group (Users > Groups > Add Group) and add users to them. I recommend adding the account you signed up with to ROLE_ADMIN and creating a new user (Users > Add Person) to add to ROLE_USER.

Navigate to API > Authorization Servers and click the one named default to edit it. Click the Claims tab and Add Claim. Name it "roles", and include it in the ID Token. Set the value type to "Groups" and set the filter to be a Regex of .*. Click Create to complete the process.

Create a file on your hard drive called ~/.okta.env and specify the settings for your app in it.

#!/bin/bash
export SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI="https://{yourOktaDomain}/oauth2/default/v1/token"
export SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI="https://{yourOktaDomain}/oauth2/default/v1/authorize"
export SECURITY_OAUTH2_RESOURCE_USER_INFO_URI="https://{yourOktaDomain}/oauth2/default/v1/userinfo"
export SECURITY_OAUTH2_CLIENT_CLIENT_ID="{yourClientId}"
export SECURITY_OAUTH2_CLIENT_CLIENT_SECRET="{yourClientSecret}"
Make sure your *URI variables do not have -admin in them. This is a common mistake.

In the terminal where your Spring Boot app is running, kill the process, run source ~/.okta.env and run ./gradlew again. You should be able to log in at http://localhost:8080 in your React Native app (after you refresh or restart it).

Okta Login in React Native

Debugging React Native Apps

If you have issues, or just want to see what API calls are being made, you can use Reactotron. Reactotron is a desktop app for inspecting your React and React Native applications. It should work with iOS without any changes. For Android, you’ll need to run adb reverse tcp:9090 tcp:9090 after your AVD is running.
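
Ignite-generated projects ship with Reactotron already wired up, so no configuration should be needed. For reference, a minimal standalone setup with the reactotron-react-native package looks roughly like this; the app name is illustrative and the generated config is richer:

// App/Config/ReactotronConfig.js (sketch; Ignite generates a fuller version)
import Reactotron from 'reactotron-react-native'

Reactotron
  .configure({ name: 'HealthPoints' }) // host defaults to localhost
  .useReactNative() // track network requests, errors, AsyncStorage, etc.
  .connect()

// Expose it so console.tron.log() works anywhere in the app
console.tron = Reactotron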

Once it’s running, you can see API calls being made, as well as log messages.

Reactotron

If you’d like to log your own messages to Reactotron, you can use console.tron.log('debug message').

Packaging Your React Native App for Production

The last thing I’d like to show you is how to deploy your app to production. Since there are many steps to getting your React Native app onto a physical device, I’ll defer to React Native’s Running on Device documentation. It should be as simple as plugging in your device via USB, configuring code signing, and building/running your app. You’ll also need to configure the URL of where your API is located.

You know what’s awesome about Spring Boot? There’s a bunch of cloud providers that support it! If a platform supports Spring Boot, you should be able to run a JHipster app on it!

Follow the instructions below to deploy your API to Pivotal’s Cloud Foundry and Google Cloud Platform using Kubernetes. Both Cloud Foundry and Kubernetes have multiple providers, so these instructions should work even if you’re not using Pivotal or Google.

Deploy Your Spring Boot API to Cloud Foundry

JHipster has a Cloud Foundry sub-generator that makes it simple to deploy to Cloud Foundry. It only requires you to run one command. However, you have Elasticsearch configured in your API, and the sub-generator doesn’t support automatically provisioning an Elasticsearch instance for you. To work around this limitation, modify jhipster-api/src/main/resources/config/application-prod.yml and find the following configuration for Spring Data Jest:

data:
    jest:
        uri: http://localhost:9200

Replace it with the following, which will cause Elasticsearch to run in embedded mode.

data:
    elasticsearch:
        properties:
            path:
                home: /tmp/elasticsearch

You’ll also need to remove a couple of properties, due to an issue I discovered in JHipster.

@@ -30,15 +30,12 @@ spring:
         url: jdbc:postgresql://localhost:5432/HealthPoints
         username: HealthPoints
         password:
-        hikari:
-            auto-commit: false
     jpa:
         database-platform: io.github.jhipster.domain.util.FixedPostgreSQL82Dialect
         database: POSTGRESQL
         show-sql: false
         properties:
             hibernate.id.new_generator_mappings: true
-            hibernate.connection.provider_disables_autocommit: true
             hibernate.cache.use_second_level_cache: true
             hibernate.cache.use_query_cache: false
             hibernate.generate_statistics: false

To deploy everything on Cloud Foundry with Pivotal Web Services, you’ll need to create an account, download/install the Cloud Foundry CLI, and sign-in (using cf login -a api.run.pivotal.io).

You may receive a warning after logging in: No space targeted, use 'cf target -s SPACE'. If you do, log in to https://run.pivotal.io in your browser, create a space, then run the command as recommended.

Then run jhipster cloudfoundry in the jhipster-api directory. You can see the values I chose when prompted below.

CloudFoundry configuration is starting
? Name to deploy as? HealthPoints
? Which profile would you like to use? prod
? What is the name of your database service? elephantsql
? What is the name of your database plan? turtle

When prompted to overwrite build.gradle, type a.

The first time I ran jhipster cloudfoundry, it didn’t work. Running it a second time succeeded. Once the app is deployed, override the default OIDC settings with your Okta values and restage the app:
source ~/.okta.env
export CF_APP_NAME=healthpoints
cf set-env $CF_APP_NAME FORCE_HTTPS true
cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI "$SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI"
cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI "$SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI"
cf set-env $CF_APP_NAME SECURITY_OAUTH2_RESOURCE_USER_INFO_URI "$SECURITY_OAUTH2_RESOURCE_USER_INFO_URI"
cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_CLIENT_ID "$SECURITY_OAUTH2_CLIENT_CLIENT_ID"
cf set-env $CF_APP_NAME SECURITY_OAUTH2_CLIENT_CLIENT_SECRET "$SECURITY_OAUTH2_CLIENT_CLIENT_SECRET"
cf restage healthpoints

After overriding the default OIDC settings for Spring Security, you’ll need to add https://healthpoints.cfapps.io/login as a redirect URI in your Okta OIDC application.

Then… you’ll be able to authenticate. Voila! 😃

JHipster API on Cloud Foundry

Modify your React Native application’s apiUrl (in App/Config/AppConfig.js) to be https://healthpoints.cfapps.io/ and deploy it to your phone. Hint: use the "running on device" docs I mentioned earlier.

export default {
  apiUrl: 'https://healthpoints.cfapps.io/',
  appUrlScheme: 'healthpoints'
}

I used Xcode on my Mac (open react-native-app/ios/HealthPoints.xcodeproj) and deployed it to an iPhone X.

When I encountered build issues in Xcode, I ran rm -rf ~/.rncache and it fixed them. I also used a bit of rm -rf node_modules && yarn.

Below are screenshots that show it worked!

Login and Entities on iPhone X

Deploy Your Spring Boot API to Google Cloud Platform using Kubernetes

JHipster also supports deploying your app to the 🔥 hottest thing in production: Kubernetes!

To try it out, create a k8s directory alongside your jhipster-api directory. Then run jhipster kubernetes in it. When prompted, specify the following answers:

  • Type of application: Monolithic application

  • Root directory: ../

  • Which applications: jhipster-api

  • Setup monitoring: No

  • Kubernetes namespace: default

          Hortonworks Cloudera merger proposal stirs market pot      Cache   Translate Page      

Cloudera was first to market in 2008, and Hortonworks followed in 2011. In a recent interview with Computer Weekly, Rob Bearden, CEO and co-founder of Hortonworks, said the company’s software had always been about the business value to be derived from bringing unstructured, big data under management, and less about Hadoop as such.

“Back in 2011, our intuition was that all the ‘new paradigm’ data sets (the mobile, the click stream, the sensor data) was all coming at enterprises very quickly, and in large volumes. Architecturally, that data would not go into relational environments.

“Also, it was data about [companies’] customers, products and suppliers that was pre-transaction or pre-event. Our hypothesis was if we could bring that data under management, and learn how to get value from it, we could transform business models to be less reactionary post event, post transaction and more proactive pre-event, pre-transaction. And we thought that Hadoop had the best shot of being the platform that would do that.”

In another recent interview with Computer Weekly, Amy O’Connor, chief data and information officer at Cloudera, said that when she was a customer at Nokia, she had been impressed that the company’s founders, Amr Awadallah and Mike Olson, said that all companies should be able to transform their businesses with new ways of treating data, not just the likes of Yahoo or Google.

In yesterday’s merger statement, Tom Reilly, chief executive officer at Cloudera, said, presenting the two suppliers as complementary: “By bringing together Hortonworks’ investments in end-to-end data management with Cloudera’s investments in data warehousing and machine learning, we will deliver the industry’s first enterprise data cloud from the edge to AI.”

Matt Aslett, analyst at 451 Research, said of the proposed merger in a comment provided to Computer Weekly: “There shouldn’t be significant overlap in terms of customers. While many companies might have both Cloudera and Hortonworks distributions running tactical deployments, in terms of strategic adoption, most organisations have chosen one or the other, and there is a commitment from Cloudera that customers will be supported on current offerings for at least three years.

“While there is a common foundation of Apache Hadoop and associated open source projects, the two companies do have some differentiating functionality, and Cloudera clearly sees opportunities to sell Hortonworks DataFlow (HDF) to Cloudera customers for streaming analytics and Cloudera Data Science Workbench to Hortonworks clients for machine learning and AI,” he said.

“There is also significant overlap in some areas, particularly data management, data governance and data security. In relation to overlapping products, Cloudera has said that the combined engineering teams will identify the best and merge them where appropriate. This is likely to be a lot easier said than done, and could be a major hurdle to the company realising its potential R&D cost savings if not managed effectively.

“If the leadership and engineering teams of the combined company are able to put aside their historically sometimes acrimonious differences to successfully rationalise the merged product portfolio, the result should be positive for customers overall. It will potentially involve a reduction in choice, it’s true, but given the proliferation of competing projects from Cloudera and Hortonworks in recent years, that may not be a bad thing.”

Doug Henschen, an analyst at Constellation Research, said in a comment provided to our sister site SearchDataManagement.com: “The move to the cloud by enterprises is sapping growth and revenue potential for Cloudera and Hortonworks such that both players can’t sustain strong and profitable growth. Amazon EMR and Spark services, and similar Azure and Google services, are seeing faster growth, and, together, are capturing the lion’s share of the big data platforms market.”


          Redis Labs and Commons Clause attacked where it hurts: With open-source code      Cache   Translate Page      

After Redis Labs added a new license clause, Commons Clause, on top of popular open-source, in-memory data structure store Redis, open-source developers were mad as hell. Now, instead of just ranting about it, some have counterattacked by starting a project, GoodFORM, to fork the code in question.

Also: Why Redis Labs made a huge mistake when it changed its open source strategy TechRepublic

The two developers behind this, Chris Lamb, the Debian Project Leader, and Nathan Scott, a Fedora developer, explained:

"With the recent licensing changes to several Redis Labs modules making them no longer free and open source, GNU/Linux distributions such as Debian and Fedora are no longer able to ship Redis Labs' versions of the affected modules to their users.

As a result, we have begun working together to create a set of module repositories forked from prior to the license change. We will maintain changes to these modules under their original open-source licenses, applying only free and open fixes and updates."

They're looking for help with this project.

The Commons Clause sub-license forbids you from selling software it covers. It also states you may not host or offer consulting or support services as "a product or service whose value derives, entirely or substantially, from the functionality of the software." This is expressly designed to prevent cloud companies from profiting by using the licensed programs.

As Redis Labs' co-founder and CTO Yiftach Shoolman said in an email, the company did this "for two reasons -- to limit the monetization of these advanced capabilities by cloud service providers like AWS and to help enterprise developers whose companies do not work with AGPL licenses."

Be that as it may, Bruce Perens, co-founder of the Open Source Initiative (OSI), thinks Redis Labs could have handled it better. Perens wrote, "Once the Commons Clause is added, it's no longer the Apache license, and calling it so confuses people about what is Open Source and what isn't. ... Stop it."

Also: Why novelty open source licenses hurt businesses more than they help TechRepublic

Lamb added in an e-mail that Redis Labs' use of the Commons Clause made it impossible to use their programs. Thus, their forked replacements under the old license. "We are committed to making these available under an open-source license permanently, and welcome community involvement."

Victor Ruiz, a developer who'd worked on Redis, tweeted, "After using open source contributions to make the projects good enough, now they want to cash out. Let's keep free and open Redis modules."

In an e-mail Ruiz expanded:

"Their behaviour seems unethical also to me. They are now selling licenses of their software which includes open-source contributions. They have used the open-source contributions to make these modules good enough and now they will cash out. And I bet they knew, many people wouldn't have contributed if they had this Commons Clause from the beginning." He understands that they're trying to stop SaaS (Software-as-a-Service) companies from selling software which uses their modules, but wonders if there aren't other types of licenses which might fit better in this scenario.

Also: Mozilla's open-source move 20 years ago helped rewrite rules of tech CNET

Ruiz also added that, prior to announcing it was moving some of its code under the Commons Clause, Redis Labs had asked him to sign a Contributor License Agreement (CLA), which granted the copyrights and patent rights of his contributions to Redis Labs.

Ruiz commented, "It seems such a great coincidence that they introduced the clause right after ensuring, via the CLA, that they have all the rights on the contributions made to the projects, ensuring that nobody else can claim rights on that software."

The struggle between open-source developers and Commons Clause adopters is on.


          Chasing fads or solving problems?      Cache   Translate Page      
Sample chat between two engineers

Engineer 1: We should use Hadoop/Kubernetes/Cassandra/insert-random-cool-tech.
Engineer 2: Why?
Engineer 1: Well, all the cool guys rave about it: latest-fad-tech is the next big thing on the tech horizon. All the hot startups are using it too!
Engineer 2: So what will we use it for?
Engineer 1: Well, we could use latest-fad-tech to revamp our entire data processing stack. It's so much easier than the stable-reliable-well-known-and-proven platform we currently use.
Engineer 2: So what are the benefits of moving to latest-fad?
Engineer 1: We'll look modern like the hot startups, use the latest stuff and not have to worry about data loss any more.
Engineer 2: Do we have a data loss problem now?
Engineer 1: Errm..., we don't have a data-loss problem. Do you know that latest-fad also promises automatic resource management? Our infrastructure staffing needs will go down.
Engineer 2: Our current resource management costs come to about 5 - 10% of our staffing needs. Are you telling me latest-fad has no adoption and management cost at all?
Engineer 1: Probably not - nothing is free! But it is a one-time cost and the investment will eventually pay off.
Engineer 2: OK, fair point. How does latest-fad behave under extreme load? Are failure modes well-known?
Engineer 1: Well, it is the in-thing! There are loads of people running it so I think you'll be able to Google search for problems...
Engineer 2: What happens if you run into an edge case that no one has ever run into before? Who are you gonna call?
Engineer 1: I don't know...
Engineer 2: So you're saying we should take a plunge from a stable less-cool platform to an unproven cool one?
Engineer 1: Errm, well I don't like the less-cool platform because it is not as cool as latest-fad. And latest-fad is open source too!
Engineer 2: Have you thought about the cost of moving to latest-fad? The risks of rewriting code, redesigning our stack and the productivity dip during the learning phase have to be weighed against potential benefits.
Engineer 1: That sounds like a lot of work but we really should use latest-fad.
Engineer 2: What benefit will accrue to our business and customers by switching? What needle does it move?
Engineer 1: But it'll make our developers more productive.
Engineer 2: Well, there are other tasks that will significantly increase our productivity e.g. better documentation, release automation and investing in tools. I admit they aren't as 'cool' as latest-fad though.
Engineer 1: ...
Engineer 2: Why not do a spike and come back with a comprehensive plan covering what it'll take to adopt latest-fad?

Shiny new kid on the block

If you’ve been a software engineer for some time, you have probably been engineer 1, been engineer 2, or listened in on a similar conversation. It’s the classic conversation along the lines of React vs Angular vs Vue, micro-services vs monoliths, AWS vs Azure, and so on.

Most times we name-drop new technologies to impress. At other times, we want to satisfy our itch and try out new stuff: when you have a new hammer, you go about finding nails to drive in!

This attitude is risky because the eagerness to wield the new hammer can divert engineers from truly focusing on the problem at hand. Problem identification and isolation should be the starting point: technology is a means, not the goal.

By all means, use Hadoop, neural networks or Cassandra if they are the best tools for the problem space. However, if they aren’t, then you’ve worsened the situation for the following reasons:

  1. You still haven't solved the original problem
  2. You have a new problem: maintaining a complicated technology platform
  3. Supporting the ill-fitting abstraction that the platform in 2 provides for 1

Congrats! You just played yourself!! You bought two more problems for the price of one!!!

Cost vs benefit vs risk

There are three factors to be considered every time you make a technology decision:

  1. Cost (C): What will it take to adopt or implement the technology?
  2. Benefits (B): What benefits will accrue?
  3. Risk (R): What is the risk to existing business value?

My heuristic is to green-light full adoption only if the long-term benefits outweigh the costs and risks, i.e. B > C + R. Thus a short-term dip in productivity (i.e. high cost) is totally acceptable if, in the long run, the increased developer output makes up for it.

I know that shiny is exciting but the goal of software engineers is to solve business problems and bring value to the customer/business. Technology choices are expensive and you have to think of the second-degree and third-degree impacts of your choices as technology leaders.

But I want to try out new things!

Every now and then, some new technology comes up that solves your business problem perfectly; you might outgrow your current design or even have outdated technology. Here are a few suggestions for handling this:

  • Do hackathons: they are great for synergistic and symbiotic discovery of great solutions!
  • Encourage engineers to explore new approaches to problem solving.
  • Allow independent evaluations, proposals and implementations.

Think again: “What problem am I solving?”

          Cloudera 2.0: Cloudera and Hortonworks Merge to form a Big Data Super Power      Cache   Translate Page      

We’ve all dreamed of going to bed one day and waking up the next with superpowers: stronger, faster and with maybe the ability to fly. Yesterday that is exactly what happened to Tom Reilly and the people at Cloudera and Hortonworks. On October 2nd they went to bed as two rivals vying for leadership in the big data space. In the morning they woke up as Cloudera 2.0, a $700M firm, with a clear leadership position. “From the edge to AI”… to infinity and beyond! The acquisition has made them bigger, stronger and faster.



Like any good movie, however, the drama is just getting started, innovation in the cloud, big data, IoT and machine learning is simply exploding, transforming our world over and over, faster and faster. And of course, there are strong villains, new emerging threats and a host of frenemies to navigate.

What’s in Store for Cloudera 2.0

Overall, this is great news for customers, the Hadoop ecosystem, and the future of the market. Both companies’ customers can now sleep at night knowing that the pace of innovation from Cloudera 2.0 will continue and accelerate. Combining the Cloudera and Hortonworks technologies means that instead of having to pick one stack or the other, customers can now have the best of both worlds. The statement from their press release, “From the Edge to AI,” really sums up how the investments that Hortonworks made in IoT complement Cloudera’s investments in machine learning. From an ecosystem and innovation perspective, we’ll see fewer competing Apache projects with much stronger investments. This can only mean better experiences for any user of big data open source technologies.

At the same time, it’s no secret how much our world is changing with innovation coming in so many shapes and sizes. This is the world that Cloudera 2.0 must navigate. Today, winning in the cloud is quite simply a matter of survival. That is just as true for the new Cloudera as it is for every single company in every industry in the world. The difference is that Cloudera will be competing with a wide range of cloud-native companies both big and small that are experiencing explosive growth. Carving out their place in this emerging world will be critical.

The company has so many of the right pieces, including connectivity, computing, and machine learning. Their challenge will be making all of it simple to adopt in the cloud while continuing to generate business outcomes. Today we are seeing strong growth from cloud data warehouses like Amazon Redshift, Snowflake, Azure SQL Data Warehouse and Google BigQuery. Apache Spark and service players like Databricks and Qubole are also seeing strong growth. Cloudera now has decisions to make on how they approach this ecosystem, who they choose to compete with, and who they choose to complement.

What’s In Store for the Cloud Players

For the cloud platforms like AWS, Azure, and Google, this recent merger is also a win. The better the cloud services are that run on their platforms, the more benefits joint customers will get and the more they will grow their usage of these cloud platforms. There is obviously a question of who will win, for example, EMR, Databricks or Cloudera 2.0, but at the end of the day the major cloud players will win either way as more and more data, and more and more insight runs through the cloud.

Talend’s Take

From a Talend perspective, this recent move is great news. At Talend, we are helping our customers modernize their data stacks. Talend helps stitch together data, computing platforms, databases, and machine learning services to shorten the time to insight.

Ultimately, we are excited to partner with Cloudera to help customers around the world leverage this new union. For our customers, this partnership means a greater level of alignment for product roadmaps and more tightly integrated products. Also, as the rate of innovation accelerates from Cloudera, our support for what we call “dynamic distributions” means that customers will be able to instantly adopt that innovation even without upgrading Talend. For Talend, this type of acquisition also reinforces the value of having portable data integration pipelines that can be built for one technology stack and can then quickly move to other stacks. For Talend and Cloudera 2.0 customers, this means that as they move to the future, unified Cloudera platform, it will be seamless for them to adopt the latest technology regardless of whether they were originally Cloudera or Hortonworks customers.

You have to hand it to Tom Reilly and the teams at both Hortonworks and Cloudera. They’ve given themselves a much stronger position to compete in the market at a time when people saw their positions in the market eroding. It’s going to be really interesting to see what they do with the projected $125 million in annualized cost savings. They will have a lot of dry powder to invest in or acquire innovation. They are going to have a breadth in offerings, expertise and customer base that will allow them to do things that no one else in the market can do.


          318: A call for kindness in open source      Cache   Translate Page      

Adam and Jerod talk to Brett Cannon, core contributor to Python and a fantastic representative of the Python community. They talked through various details surrounding a talk and blog post he wrote titled "Setting expectations for open source participation" and covered questions like: What is the purpose of open source? How do you sustain open source? And what's the goal?

They even talked through typical scenarios in open source and how kindness and recognizing that there's a human on the other end of every action can really go a long way.

Sponsors

  • Vettery –  Vettery helps you scale your teams by connecting you with highly qualified tech, sales & finance candidates. Download their tech salary report for 2018 with insights from tech hiring activity in New York City, San Francisco, Los Angeles, and Washington D.C. Download at vettery.com/changelog.
  • DigitalOcean –  DigitalOcean is simplicity at scale. Whether your business is running one virtual machine or ten thousand, DigitalOcean gets out of your way so your team can build, deploy, and scale faster and more efficiently. New accounts get $100 in credit to use in your first 60 days.
  • Raygun –  Unblock your biggest app performance bottlenecks with Raygun APM. Smarter application performance monitoring (APM) that lets you understand and take action on software issues affecting your customers.
  • Algolia –  Our search partner. Algolia's full suite search APIs enable teams to develop unique search and discovery experiences across all platforms and devices. We're using Algolia to power our site search here at Changelog.com. Get started for free and learn more at algolia.com.


          TuxMachines: You Can Now Run Ubuntu 18.04 on Raspberry Pi 3 with BunsenLabs' Helium Desktop      Cache   Translate Page      

RaspEX Build 181010 is now available for Raspberry Pi users, made specifically for the latest Raspberry Pi model, the Raspberry Pi 3 Model B+, and featuring the super fast and lightweight Helium Desktop from the Debian-based BunsenLabs Linux distribution, a continuation of the acclaimed CrunchBang Linux.

The new RaspEX BunsenLabs build remains based on the latest Ubuntu 18.04 LTS (Bionic Beaver) operating system series, using packages from the Debian GNU/Linux 9 "Stretch" and Linaro open source software for ARM SoCs. RaspEX is compatible with Raspberry Pi 2, Raspberry Pi 3, and Raspberry Pi 3 Model B+.



          Krita (v 3.0+) | A guide to the free drawing, digital painting and photo retouching program!      Cache   Translate Page      

Hi! Here we are again to talk about Krita. We'll do so with an introduction covering the background, and then with a guide that is as complete as possible, drawn from my now year-long experience with the program. Why are we talking about Krita? Essentially for two reasons: first, May 31, 2016 was a historic date for this fantastic open source program:…



          Microsoft embraces open source community by joining Open Invention Network      Cache   Translate Page      
The move brings Microsoft in line with other industry giants in opening their patents to protect Linux and other open source software. Microsoft today announced it is joining the Open Invention Network (OIN), a community of companies whose aim is to shield Linux and other open source software from patent aggressors. ...
          Continuum Analytics Blog: Bringing Dataframe Acceleration to the GPU with RAPIDS Open-Source Software from NVIDIA      Cache   Translate Page      

Today we are excited to talk about the RAPIDS GPU dataframe release along with our partners in this effort: NVIDIA, BlazingDB, and Quansight. RAPIDS is the culmination of 18 months of open source development to address a common need in data science: fast, scalable processing of tabular data for extract-transform-load (ETL) operations. ETL tasks typically …



          Open Source RAPIDS GPU Platform to Accelerate Predictive Data Analytics      Cache   Translate Page      

Today NVIDIA announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed. "It integrates seamlessly into the world’s most popular data science libraries and workflows to speed up machine learning. We are turbocharging machine learning like we have done with deep learning," the company said.



          Water exploitation index plus (WEI+) for river basin districts (1990-2015)      Cache   Translate Page      

This interactive map gives a European overview of water stress conditions. The information presented may deviate from that available in the EEA member countries and cooperating countries, particularly for those countries where data availability is insufficient in the WISE SoE – Water quantity database (WISE 3).

Data on hydro-climatic variables was aggregated from a daily to a monthly scale. Water abstraction data was taken from WISE 3 (annual resolution at the national scale), although there are large gaps in the time series. Therefore, intensive gap filling was performed on water abstraction data, and proxies were used to disaggregate the data from the national to the sub-basin scale. Information on water use was mainly modelled on the UWWTP capacities, the E-PRTR database and the Eurostat Population change dataset (online data code [demo_gind]), among others. See the methodology chapter for further explanation of gap filling and of spatial and temporal disaggregation, and the data uncertainties chapter for current data availability.

This interactive map allows users to explore changes over time in water abstraction by source, water use by sector and water stress level at sub-basin or river basin scale. The WEI+ has been estimated as the quarterly average per river basin district, for the years 1990-2015, as defined in the European catchments and rivers network system (ECRINS). The ECRINS delineation of river basin districts differs slightly from that defined by Member States under the Water Framework Directive (WFD). The ECRINS delineation is used instead of the WFD one because it contains geo-spatial information on Europe’s hydrographical systems with full topological information, enabling flow estimation between upstream and downstream basins as well as integration of economic data collected at NUTS or country level.

In addition to the WISE SoE – Water quantity database, a comprehensive manual data collection was performed by accessing all open sources (Eurostat, OECD, FAO), including national statistical offices of the countries. This was done because of the temporal and spatial gaps in the data on water abstraction. Moreover, a large part of the stream flow data from LISFLOOD has been substantially updated by the Directorate-General Joint Research Centre. Similarly, a comprehensive update of climatic parameters has been performed by the EEA based on the E-OBS dataset. Therefore, the time series of the WEI+ presented in the current version might be slightly different for some basins compared with the previous version.


          VIDEO: Ruffcoin – Oh Bebe (Dir By Paul Gambit)      Cache   Translate Page      
          As Everyone Knows, In The Age Of The Internet, Privacy Is Dead -- Which Is Awkward If You Are A Russian Spy      Cache   Translate Page      

Judging by the headlines, there are Russian spies everywhere these days. Of course, Russia routinely denies everything, but its attempts at deflection are growing a little feeble. For example, the UK government identified two men it claimed were responsible for the novichok attack on the Skripals in Salisbury. It said they were agents from GRU, Russia's largest military intelligence agency, and one of several groups authorized to spy for the Russian government. The two men appeared later on Russian television, where they denied they were spies, and insisted they were just lovers of English medieval architecture who were in Salisbury to admire the cathedral's 123-meter spire.

More recently, Dutch military intelligence claimed that four officers from GRU had flown into the Netherlands in order to carry out an online attack on the headquarters of the international chemical weapons watchdog that was investigating the Salisbury poisoning. In this case, the Russian government didn't even bother insisting that the men were actually in town to look at Amsterdam's canals. That was probably wise, since a variety of information available online seems to confirm their links to GRU, as the Guardian explained:

One of the suspected agents, tipped as a "human intelligence source" by Dutch investigators, had registered five vehicles at a north-western Moscow address better known as the Aquarium, the GRU finishing school for military attaches and elite spies. According to online listings, which are not official but are publicly available to anyone on Google, he drove a Honda Civic, then moved on to an Alfa Romeo. In case the address did not tip investigators off, he also listed the base number of the Military-Diplomatic Academy.

One of the men, Aleksei Morenets, an alleged hacker, appeared to have set up a dating profile.

Another played for an amateur Moscow football team "known as the security services team," a current player told the Moscow Times. "Almost everyone works for an intelligence agency." The team rosters are publicly available.

The "open source intelligence" group Bellingcat came up with even more astonishing details when they started digging online. Bellingcat found one of the four Russians named by the Dutch authorities in Russia's vehicle ownership database. The car was registered to Komsomolsky Prospekt 20, which happens to be the address of military unit 26165, described by Dutch and US law enforcement agencies as GRU's digital warfare department. By searching the database for other vehicles registered at the same address, Bellingcat came up with a list of 305 individuals linked with the GRU division. The database entries included their full names and passport numbers, as well as mobile phone numbers in most cases. Bellingcat points out that if these are indeed GRU operatives, this discovery would be one of the largest breaches of personal data of an intelligence agency in recent years.

An interesting thread on Twitter by Alexander Gabuev, Senior Fellow and Chair of the Russia in the Asia-Pacific Program at the Carnegie Moscow Center, explains why Bellingcat was able to find such sensitive information online. He says:

the Russian Traffic Authority is notoriously corrupt even by Russian standards, it's inexhaustible source of dark Russian humor. No surprise its database is very easy to buy in the black market since 1990s

In the 1990s, black market information was mostly of interest to specialists, hard to find, and had limited circulation. Today, even sensitive data almost inevitably ends up posted online somewhere, because everything digital has a tendency to end up online once it's available. It's then only a matter of time before groups like Bellingcat find it as they follow up their leads. Combine that with a wealth of information contained in social media posts or on Web sites, and spies have a problem keeping in the shadows. Techdirt has written many stories about how the privacy of ordinary people has been compromised by leaks of personal information that is later made available online. There's no doubt that can be embarrassing and inconvenient for those affected. But if it's any consolation, it's even worse when you are a Russian spy.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+




          Avinash Kumar: PostgreSQL Extensions for an Enterprise-Grade System      Cache   Translate Page      
PostgreSQL extensions for logging

In this series of blog posts we have been discussing various relevant aspects of building an enterprise-grade PostgreSQL setup, such as security, backup strategy, high availability, and different methods to scale PostgreSQL. In this blog post, we review some of the most popular open source extensions for PostgreSQL, used to expand its capabilities and address specific needs. We'll cover some of them during a demo in our upcoming webinar on October 10.

Expanding with PostgreSQL Extensions

PostgreSQL is one of the world's most feature-rich and advanced open source RDBMSs. Its features are not limited to those released by the community through major/minor releases. There are hundreds of additional features developed using the extension capabilities of PostgreSQL, which can cater to the needs of specific users. We previously blogged about a couple of FDW extensions (mysql_fdw and postgres_fdw) which allow PostgreSQL databases to talk to remote homogeneous/heterogeneous databases such as PostgreSQL, MySQL, MongoDB, etc. We will now cover a few other extensions that can expand your PostgreSQL server's capabilities.

pg_stat_statements

The pg_stat_statements module provides a means for tracking execution statistics of all SQL statements executed by a server. The statistics gathered by the module are made available via a view named pg_stat_statements. This extension must be installed in each of the databases you want to track, and like many of the extensions in this list, it is available in the contrib package from the PostgreSQL PGDG repository.
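As a minimal sketch of how this might look in practice (assuming superuser access; the column names used below are those of PostgreSQL 10 and may differ in other versions):

postgres=# ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
ALTER SYSTEM
-- restart the server for the new setting to take effect, then:
postgres=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
postgres=# SELECT query, calls, total_time, rows
postgres-# FROM pg_stat_statements
postgres-# ORDER BY total_time DESC LIMIT 5;

The top statements by total execution time give you an immediate starting point for query optimization.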

pg_repack

Tables in PostgreSQL may end up with fragmentation and bloat due to the specific MVCC implementation in PostgreSQL, or simply due to a high number of rows being removed over time. This can lead not only to unused space being held inside the table but also to sub-optimal execution of SQL statements. pg_repack is the most popular way to address this problem, by reorganizing and repacking the table. It can reorganize the table's content without placing an exclusive lock on it during the process: DMLs and queries can continue while repacking is happening. Version 1.2 of pg_repack introduces new features such as parallel index builds and the ability to rebuild just the indexes. Please refer to the official documentation for more details.
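For illustration only, a hedged sketch of a typical invocation follows; mydb and mytable are hypothetical names, and the extension must first be created in the target database:

$ psql -d mydb -c "CREATE EXTENSION pg_repack;"
$ pg_repack -d mydb -t public.mytable          # full online repack of the table
$ pg_repack -d mydb -t public.mytable -x -j 4  # rebuild only its indexes, using 4 parallel jobs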

pgaudit

PostgreSQL has a basic statement logging feature that can be implemented using the standard logging facility with log_statement = all. But this is not sufficient for many audit requirements. One of the essential features for enterprise deployments is the capability for fine-grained auditing of the user interactions/statements issued to the database. This is a major compliance requirement for many security standards. The pgaudit extension caters to these requirements.

The PostgreSQL Audit Extension (pgaudit) provides detailed session and/or object audit logging via the standard PostgreSQL logging facility. Please refer to the settings section of its official documentation for more details.
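As a hedged sketch, enabling the extension and auditing all DDL and write statements could look like the following; the package name assumes the PGDG build of pgaudit 1.2 for PostgreSQL 10:

$ sudo yum install pgaudit12_10
postgres=# ALTER SYSTEM SET shared_preload_libraries = 'pgaudit';
-- list every preloaded library in one setting, e.g. 'pg_stat_statements, pgaudit'
-- restart the server, then:
postgres=# CREATE EXTENSION pgaudit;
CREATE EXTENSION
postgres=# ALTER SYSTEM SET pgaudit.log = 'ddl, write';
postgres=# SELECT pg_reload_conf();

Audit records are then emitted through the regular PostgreSQL log, prefixed with AUDIT:.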

pldebugger

This is a must-have extension for developers who work on stored functions written in PL/pgSQL. This extension is well integrated with GUI tools like pgAdmin, allowing developers to step through their code and debug it. Packages for pldebugger are also available in the PGDG repository and installation is straightforward. Once it is set up, we can step through and debug the code remotely.
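A minimal setup sketch, using the names shipped by the project (plugin_debugger is its shared library, pldbgapi the extension it provides):

postgres=# ALTER SYSTEM SET shared_preload_libraries = 'plugin_debugger';
-- restart the server, then:
postgres=# CREATE EXTENSION pldbgapi;
CREATE EXTENSION

From there, the debugging session itself is driven from the GUI: in pgAdmin you can set breakpoints in a PL/pgSQL function and step through it from the function's context menu.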

The official git repository is available here.

plprofiler

This is a wonderful extension for finding out where your code is slowing down. It is particularly helpful during complex migrations from proprietary databases, such as from Oracle to PostgreSQL, which affect application performance. This extension can prepare a report on the overall execution time and a tabular representation, including flamegraphs, with clear information about each line of code. It is not, however, available from the PGDG repo: you will need to build it from source. Details on building and installing plprofiler will be covered in a future blog post. Meanwhile, the official repository and documentation are available here.

PostGIS

PostGIS is arguably the most versatile implementation of the specifications of the Open Geospatial Consortium. It offers a long list of features that are rarely available in any other RDBMS.

Many users have opted for PostgreSQL primarily because of the features provided by PostGIS. In fact, these features are not implemented as a single extension, but are instead delivered by a collection of extensions. This makes PostGIS one of the most complex extensions to build from source. Luckily, everything is available from the PGDG repository:

$ sudo yum install postgis24_10.x86_64

Once the postgis package is installed, we are able to create the extensions on our target database:

postgres=# CREATE EXTENSION postgis;
CREATE EXTENSION
postgres=# CREATE EXTENSION postgis_topology;
CREATE EXTENSION
postgres=# CREATE EXTENSION postgis_sfcgal;
CREATE EXTENSION
postgres=# CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION
postgres=# CREATE EXTENSION postgis_tiger_geocoder;
CREATE EXTENSION
postgres=# CREATE EXTENSION address_standardizer;
CREATE EXTENSION
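As a quick, hedged sanity check that the installation works, the geography type can be used to compute a geodesic distance in meters between two arbitrary points:

postgres=# SELECT ST_Distance(
postgres(#     'SRID=4326;POINT(-73.99 40.73)'::geography,
postgres(#     'SRID=4326;POINT(-0.12 51.50)'::geography);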

Language Extensions: PL/Python, PL/Perl, PL/V8, PL/R, etc.

Another powerful feature of PostgreSQL is its support for multiple programming languages. You can write database functions/procedures in pretty much every popular language.

Thanks to its enormous number of available libraries, including machine learning ones, and its vibrant community, Python has claimed the third spot amongst the most popular languages according to the TIOBE Programming index. Your team's skills and libraries remain valid for PostgreSQL server coding too! Teams that regularly code in JavaScript for Node.js or Angular can easily write PostgreSQL server code in PL/V8. All of the required packages are readily available from the PGDG repository.
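As an illustrative sketch, mirroring the classic example from the PostgreSQL documentation (plpythonu is the untrusted Python 2 language; a plpython3u variant exists for Python 3):

postgres=# CREATE EXTENSION plpythonu;
CREATE EXTENSION
postgres=# CREATE FUNCTION pymax (a integer, b integer)
postgres-# RETURNS integer
postgres-# AS $$
postgres$# # plain Python, executed inside the server process
postgres$# return max(a, b)
postgres$# $$ LANGUAGE plpythonu;
CREATE FUNCTION
postgres=# SELECT pymax(10, 20);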

cstore_fdw

cstore_fdw is an open source columnar store extension for PostgreSQL. Columnar stores provide notable benefits for analytics use cases where data is loaded in batches. cstore_fdw's columnar nature delivers performance by reading only the relevant data from disk. It may compress data by 6 to 10 times, reducing the space required for data archival. The official repository and documentation are available here.
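A minimal sketch following the pattern documented by the project (the table and its columns are hypothetical; the library must also be preloaded):

postgres=# ALTER SYSTEM SET shared_preload_libraries = 'cstore_fdw';
-- restart the server, then:
postgres=# CREATE EXTENSION cstore_fdw;
CREATE EXTENSION
postgres=# CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;
CREATE SERVER
postgres=# CREATE FOREIGN TABLE sales_archive (
postgres(#     sold_at     timestamptz,
postgres(#     customer_id int,
postgres(#     amount      numeric
postgres(# ) SERVER cstore_server OPTIONS (compression 'pglz');
CREATE FOREIGN TABLE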

HypoPG

HypoPG is