
          

FLOSS Weekly 554: Hotrock

 Cache   

FLOSS Weekly (Audio)

Hotrock is a leading-edge event intelligence platform that helps IT leaders navigate data overload, cut through silos, and potentially reduce costs. With digital transformation and the digital customer experience a high priority on corporate agendas, next-generation event management solutions such as Hotrock can deliver meaningful business results by ensuring applications and systems are continuously available and optimized for best performance.

Hosts: Randal Schwartz and Jonathan Bennett

Guests: Troy McSimov and Josh Mahar

Download or subscribe to this show at https://twit.tv/shows/floss-weekly

Here's what's coming up for FLOSS in the future.

Think your open source project should be on FLOSS Weekly? Email Randal at merlyn@stonehenge.com

Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music.

Sponsors:


          

Blockchain Company Expands Aergo Mainnet and Stablecoin Consulting Business

 Cache   
Blockchain Company (CEO Choi Jeong-rok), which operates the cryptocurrency exchange Bitrade, will provide technical consulting on the adoption of the Aergo mainnet being advanced by Blocko and of the stablecoin Aergo Gem. Blockchain Company and Blocko jointly developed Aergo Gem, an Aergo-based reward-type stablecoin. According to the company, Aergo Gem was designed with an open source, open architecture so that reward value can also be passed on to external services, setting it apart from most blockchain reward systems, which are built as closed structures usable only within their own platforms.
          

OSINT guru: stop using Google [VIDEO]

 Cache   
Anyone who takes open source intelligence (OSINT) seriously would do better to stop using Google right away. So argues former MIVD officer Arno Reuser in his presentation at Infosecurity.nl in the Jaarbeurs in Utrecht.
          

Professions: Data Analyst - Las Vegas, Nevada

 Cache   
Job Snapshot — Employee Type: Full-Time; Location: Las Vegas, NV; Job Type: Other, Transportation; Experience: Not Specified; Date Posted: 10/22/2019; Job ID: DATAA16029

Job Description: MV Transportation is seeking a Data Analyst who will be responsible for the ongoing management of all information technology and communications equipment; this includes management of the dispatch and reservations system, MDTs, GPS/AVL technology, DriveCam, as well as communications and networking interfaces.

Job Responsibilities:
  • Make suggestions on business decisions and business tools for internal course corrections and improvements.
  • Reconcile and manage the SQL database systems and gather NTD data.
  • Assist with and streamline data gathered from dispatch, including missed trips, breakdowns, and all reportable incidents for the monthly reports.
  • Have experience with complex client-facing reports and the creation of solution-based action plans.
  • All other duties as assigned.

Talent Requirements:
  • High school diploma.
  • Three (3) to five (5) years' experience in a transit, or similar, environment.
  • Database and SQL expertise using open source methods, plus Power BI/Tableau visualization experience.
  • Ideal candidates will be experienced in Trapeze, Zonar, and other transit software.
  • Strong verbal communication and writing skills.
  • Ability to work in a fast-paced environment with multiple deadlines.
  • Expert level in Excel.

MV Transportation is committed to a policy of Equal Employment Opportunity and will not discriminate against an applicant or employee on the basis of race, color, religion, creed, national origin or ancestry, sex, physical or mental disability, veteran or military status, genetic information, or any other legally recognized protected basis under federal, state, or local laws, regulations, or ordinances. The information collected by this application is used solely to determine suitability for employment, verify identity, and maintain employment statistics on applicants.
Where permissible under applicable state and local law, applicants may be subject to a pre-employment drug test and background check after receiving a conditional offer of employment.
          

New comment by toomuchtodo in "Re-Licensing Sentry"

 Cache   

> a REAL open source community

No true Scotsman. If it works for the stakeholders, it works, regardless of what you want to call the model, whether that's GNU/Linux, Redis, Elastic, or Sentry.


          

When Should You Replace Your Free SIEM Tools?

 Cache   

Free Security Information and Event Management (SIEM) solutions have significant benefits, providing visibility into security environments and enabling proactive vulnerability management for many small and mid-sized organizations. However, these tools often come with limitations that will lead security teams to consider commercial options. How do you know when it’s time to upgrade?

When your organization expands 

Growth is one of the first indicators that you need to migrate to a commercial SIEM tool. Freeware may have limited functionality that worked when you were first starting up, but you may find the benefits offered in an enterprise version are better suited for your organization as it grows. Alternatively, freeware may offer full functionality for a limited number of assets. As an organization grows, the number of devices and applications naturally increases. Since a SIEM is strongest when it’s centralizing everything in the environment, outgrowing the freeware is a good indicator that you’re ready for the full commercial version.

When you're ready for support  

While free SIEM tools have their benefits, they usually offer only documentation for support. It may take a bit longer to get up to speed, but once you've gotten comfortable with the SIEM solution, this will typically not be a problem. But any more complex questions or issues will go unanswered or take much longer to solve without the assistance of support personnel who are skilled specialists on the product. Good support resources provide stability, vital expertise, and peace of mind that can be as valuable as the product itself.

Open source tools may not even have official support people or documentation, so support options have to be found elsewhere—through forums or from other open source users. Additionally, while open source SIEM solutions allow you to develop them further, customizing a SIEM tool so extensively is quite the undertaking. If you have someone maintaining and continuing to develop custom coding, this is a large investment in terms of time and skills, so open source can’t really be considered free.

Finding the right commercial SIEM software

If your organization is facing any of these issues, it might be time to migrate to a paid SIEM solution. Commercial tools can easily scale, streamline troubleshooting, and get the support you need when you need it. 

A majority of SIEM tools are intended for huge organizations, with many more features than a small to mid-sized organization wants, and a price point that is far out of range. Thankfully, there are mid-range SIEM solutions that are intuitive to use and provide better value than some of the heavy-weight options—while still providing all the critical functionality you need as a growing business.

When you're looking for a tool, make sure you find one that offers: 

  • Real-time monitoring: The sooner you can see a threat, the sooner you can eliminate it. Real-time monitoring allows you to investigate and begin remediation quickly.
  • Tailored prioritization and escalation: Threat prioritization saves security teams from having to sort out critical threats from the mundane. The ability to fine-tune what constitutes a real threat for each asset creates an even more effective filter.
  • The ability to monitor every type of device: For maximum effectiveness, your SIEM should be able to easily monitor any type of data, be it a standard operating system like Windows or a customized feed like a legacy application or homegrown database.
  • Data normalization: With so many types of applications and devices whose data is streamed through a SIEM, the language and formatting of the log information can vary broadly. Normalizing this data into a common format and giving it meaning streamlines the process considerably.
  • Integrations: Every organization requires multiple security solutions, so the ability to integrate data from other enterprise applications, like antivirus software, saves time and provides a holistic picture of your environment.
  • Long-term event storage: Compliance and analysis may require long-term storage of data. An effective SIEM allows you to specify exactly what types of data you want to store, excluding data that you know is harmless.
  • Reporting capabilities: Logging all event and incident response activity not only provides valuable performance data, it also proves adherence to multiple industry standards and regulations to inquiring auditors.
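To make the normalization point above concrete, here is a minimal sketch of what a SIEM does when it maps differently formatted log lines into one common schema. The two input formats and all field names are hypothetical, invented for illustration; real SIEMs handle far more source types and edge cases.

```python
import re

def normalize(line):
    """Map a raw log line from one of two made-up example sources into a common schema."""
    # Hypothetical Windows-style event, e.g. "2019-11-06T10:15:00Z LOGIN_FAILED user=alice"
    m = re.match(r"(\S+) (\w+) user=(\S+)$", line)
    if m:
        ts, event, user = m.groups()
        return {"timestamp": ts, "event": event.lower(), "user": user, "source": "windows"}
    # Hypothetical syslog-style entry, e.g. "Nov  6 10:15:00 web1 sshd: Failed password for bob"
    m = re.match(r"(\w+\s+\d+ [\d:]+) (\S+) sshd: Failed password for (\S+)$", line)
    if m:
        ts, host, user = m.groups()
        return {"timestamp": ts, "event": "login_failed", "user": user, "source": host}
    return None  # unrecognized format
```

Once both sources emit the same `event` and `user` fields, downstream rules ("alert on five failed logins for one user") can be written once instead of once per source, which is exactly the streamlining the article describes.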

 

In addition to finding the right features and doing a SIEM pricing comparison, other factors should be taken into account, like licensing models or deployment methods. It’s helpful to develop a requirements checklist to evaluate the various offerings on the market and how they line up with what you need. The right SIEM solution will centralize your security, and as your organization continues to grow, this will provide stability for your security team, keeping your infrastructure safe through every transition.

Ready to upgrade to a commercial SIEM?

Use our SIEM Buyer’s Guide to help find the right solution for you.


          

Kobo Nickel default font extraction

 Cache   
I wanted to transfer some fonts, mainly the Kobo Nickel, from the main OS to KOreader. Is there a way to extract it or find it in the main directory? I can't find it in the hidden folders or anywhere from a google search. Seems it is not open source. Is there no way to get it outside of the Nickel OS?
          

Mediacurrent Wins Two 2019 Acquia Engage Awards

 Cache   

Award Program Showcases Outstanding Examples of Digital Experience Delivery, Featuring CAS.org for Best ROI and Mediacurrent’s Rain Install Profile for Open Source Giants

(PRWeb November 06, 2019)

Read the full story at https://www.prweb.com/releases/mediacurrent_wins_two_2019_acquia_engage_awards/prweb16702315.htm


          

Red Hat’s Quarkus Java stack moves toward production release

 Cache   

The fast, lightweight, open source Quarkus Java stack will graduate from its current beta designation and become available as a production release at the end of November. Sponsored by Red Hat, the microservices-oriented Java stack supports both reactive and imperative programming models. 

Quarkus is a Kubernetes-native Java stack for cloud-native and serverless application development. Quarkus promises faster startup times and lower memory consumption than traditional Java-based microservices frameworks. It features a reactive core based on Vert.x, a toolkit for building reactive applications based on the JVM, and the ability to automatically reflect code changes in running applications. 

To read this article in full, please click here


          

3 models for open source governance

 Cache   

In part 3 of this article, I detailed several economic theories on common goods management. In this part, we’ll explore how to apply these theories to the issue of open source sustainability.

Studying the work of Garrett Hardin (Tragedy of the Commons), the prisoner’s dilemma, Mancur Olson (Collective Action), and Elinor Ostrom’s core design principles for self-governance, a number of shared patterns emerge. When applied to open source, I’d summarize them as follows:

  1. Common goods fail because of a failure to coordinate collective action. To scale and sustain an open source project, open source communities need to transition from individual, uncoordinated action to cooperative, coordinated action.
  2. Cooperative, coordinated action can be accomplished through privatization, centralization, or self-governance. All three work—and can even be mixed.
  3. Successful privatization, centralization, and self-governance all require clear rules around membership, appropriation rights, and contribution duties. In turn, this requires monitoring and enforcement, either by an external agent (centralization and privatization), a private agent (self-governance), or members of the group itself (self-governance).

Next, let’s see how these three concepts—centralization, privatization, and self-governance—could apply to open source.

To read this article in full, please click here


          

3 suggestions for stronger open source projects

 Cache   

In part 4 of this article, I covered how economic theories of collective action can be applied to open source from a conceptual perspective. In this final part, I’ll discuss how these theories can be made actionable for scaling and sustaining open source communities.

Suggestion 1: Don’t just appeal to organizations’ self-interest, but also to their fairness principles

If, like most economic theorists, you believe that organizations act in their own self-interest, we should appeal to that self-interest and better explain the benefits of contributing to open source.

To read this article in full, please click here


          

How takers hurt makers in open source

 Cache   

In part 1 of this article, I introduced the concept of open source Makers and Takers, and explained why it is important to find new ways to scale and sustain open source communities. Here in part 2, I’ll dive into why Takers hurt Makers, as well as how the “prisoner’s dilemma” affects the behavior of takers.

To be financially successful, many Makers mix open source contributions with commercial offerings. Their commercial offerings usually take the form of proprietary or closed source IP, which may include a combination of premium features and hosted services that offer performance, scalability, availability, productivity, and security assurances. This is known as the open core business model. Some Makers offer professional services, including maintenance and support assurances.

To read this article in full, please click here


          

Open source and the free-rider problem

 Cache   

In part 2 of this article, I focused on how Takers hurt Makers in open source, as well as how individual actions—no matter how rational they may seem—can have adverse outcomes for open source communities. Now I’ll show how these problems have been solved elsewhere by looking at popular economic theories.

In economics, the concepts of public goods and common goods are decades old, and have similarities to open source.

To read this article in full, please click here


          

A cure for unfair competition in open source

 Cache   

In many ways, open source has won. Most people know that open source provides better quality software, at a lower cost, without vendor lock-in. But despite open source being widely adopted and more than 30 years old, scaling and sustaining open source projects remain challenging.

Not a week goes by that I don’t get asked a question about open source sustainability. How do you get others to contribute? How do you get funding for open source work? But also, how do you protect against others monetizing your open source work without contributing back? And what do you think of MongoDB, Cockroach Labs, or Elastic changing their license away from open source?

To read this article in full, please click here


          

Why the Rust language is on the rise

 Cache   

You’ve probably never written anything in Rust, the open source, systems-level programming language created by Mozilla, but you likely will at some point. Developers crowned Rust their “most loved” language in Stack Overflow’s 2019 developer survey, while Redmonk’s semi-annual language rankings saw Rust get within spitting distance of the top 20 (ranking #21).

This, despite Rust users “find[ing] difficulty and frustration with the language’s highly touted features for memory safety and correctness.”

To read this article in full, please click here


          

Should you go all-in on cloud native?

 Cache   

We’ve all heard about “cloud native” databases, security, governance, storage, AI, and pretty much anything else that a cloud provider could offer. Here’s my definition of cloud native applications: Applications that leverage systems native to the public cloud they are hosted on.

The general advice is, “Cloud native: good. Non-native lift-and-shift: bad.”

This makes sense. By using native services, we can take advantage of core systems that include native security using native directory services, as well as native provisioning systems and native management and monitoring. Using non-native applications on public clouds is analogous to driving a super car on a gravel road.

To read this article in full, please click here


          

Microsoft to participate in open source Java

 Cache   

Microsoft has climbed aboard the OpenJDK project to help with the development of open source Java.

In a message posted on an OpenJDK mailing list, Microsoft’s Bruno Borges, principal product manager for Java at the company, said Microsoft’s team initially will be working on smaller bug fixes and back ports so it can learn how to be “good citizens” within OpenJDK. Microsoft and subsidiaries are “heavily dependent” on Java in many aspects, Borges said. For one, Java runtimes are offered in Microsoft’s Azure cloud.

To read this article in full, please click here


          

Google Cloud launches TensorFlow Enterprise

 Cache   

Google Cloud has introduced TensorFlow Enterprise, a cloud-based TensorFlow machine learning service that includes enterprise-grade support and managed services.

Based on Google’s popular, open source TensorFlow machine learning library, TensorFlow Enterprise is positioned to help machine learning researchers accelerate the creation of machine learning and deep learning models and ensure the reliability of AI applications. Workloads in Google Cloud can be scaled and compatibility-tested.

To read this article in full, please click here


          

i-Verve Inc

 Cache   

i-Verve is a Web Design & Development company on a mission to provide next-generation IT solutions and services to startups, businesses, and enterprises worldwide. We combine our technical and domain expertise with proven methodologies to deliver app and software development and related IT services.


Category: Software Developers
Address: 2507 Westminster Blvd, Parlin, New Jersey, United States
Website: http://i-verve.com
Services: PHP Web Development, Web Design Company, Web Designing Company, php development, hire PHP developer, web site design, Open Source Development, web page design, internet web site design, custom web site design, mobile web site design, responsive web design and development, ecommerce web site design, web design company, flash web design
Keywords: php web development, iphone apps, ecommerce solutions, HTML 5 web site design, web design, CMS Design and development, web page design, web design company, custom web site design, ecommerce web site design, web design web development, flash web design, web design and development, free templates, small business web design, web design template, web design, ecommerce web design
          

SteemFlagRewards: Moderation Decentralized Ad

 Cache   

https://www.youtube.com/watch?v=h64YVCsxZBA

### Snake? Snake!? Snaaaake!

I was fortunate enough to win a contest for a week of free advertisement on an internet radio show (Minnow Support Project - MSP Waves), so I had to get to work.

[Link, as I do not want it to be the thumbnail](https://cdn.steemitimages.com/DQmPr75E6BJ9mPbX1WbgdH7Tb7GEexE2HoDpyJ71JCmRvEt/sgt_downvote_mgs_green.png)

I also created this pretty sweet image of myself with my kevlar using Photoshop. It is now my new profile pic. ;)

*Disclaimer: I do not specialize in audio or visual media, but I am learning. It certainly doesn't hurt that I tend to be a creative type of person. Also, you will notice if you follow me that I often make references to the great video games of old. It's one of my "things". Anyway, I think this came out okay given the circumstances.*

---

I was also asked to include a blurb for this video with a call to action. I'm going to include one suggestion that honestly is pretty thoughtful:

>Fight abuse, get rewarded! Join the Steem Flag Rewards Discord to support a better Steem for all. https://discord.gg/aXmdXRs

**That's not bad. Not bad at all. Short and succinct.**

My only apprehension about the video is that people may be unsure of the many ways they can support this open source project and the moderators who give their valuable time to approve flags. Moving forward, I want to ensure I work to at least try to make it worth that time. This is why I am considering leveraging the Steem Proposal System to make that happen, if folks see the value in what we do. Not trying to put myself up on a pedestal, but suffice it to say that the Steem blockchain has had the benefit of a Sr. System Engineer dedicating a lot of time to help get this project this far. It wasn't easy and, at times, things got a little rough, as our admins / mods will attest. I think it is in all of our best interests not to let SFR fall by the wayside, and, to be frank, that is what will likely happen if the mods bail. I simply do not have enough time to develop, troubleshoot, do PR, and be a dad to a moderately autistic child as well as a new infant. I wish I did. **Where's a good clone when you need 'em?**

---

Ok, back on track, my blurb is as follows:

>Join Steemflagrewards and use your stake to minimize the profitability of abusive behaviors... and let's keep it that way!

If you would like to help the cause, consider delegating to @steemflagrewards and upvoting @steemflagrewards / @sfr-mod-fund beneficiary reports to further support abuse fighters and mods. *Together, we can accomplish much more than we could individually.*

http://mspwaves.com/
▶️ DTube
▶️ YouTube

*Btw, big shout out to @r0ndon for his help / guidance on the ad! Thanks broseph!*
          

Top KDnuggets tweets, Oct 30 – Nov 05: Everything a Data Scientist Should Know About Data Management

 Cache   
Which Data Science Skills are core and which are hot/emerging ones?; The 4 Quadrants of Data Science Skills and 7 Principles for Creating a Viral DataViz; Microsoft open sources #SandDance, a visual data exploration tool.
          

Facebook Has Been Quietly Open Sourcing Some Amazing Deep Learning Capabilities for PyTorch

 Cache   
The new release of PyTorch includes some impressive open source projects for deep learning researchers and developers.
          

Here’s why 8.5 million users love Visual Studio Code, the free software that’s helping Microsoft win over programmers in the cloud wars with Amazon (MSFT)

 Cache   
Microsoft Visual Studio Code is the top open source project on GitHub. Here's how it's helping Microsoft attract developers and take on AWS and Google.

Read more: http://feedproxy.google.com/~r/businessinsider/~3/5afY2zba8hs/microsoft-visual-studio-code-programmers-cloud-wars-amazon-2019-11


          

"The technical data sheets for building each tool are available as open source on the Atelier Paysan site and can be downloaded free of charge. They evolve with feedback from the market gardeners." https://reporterre.net/VIDEO-A-l-Atelier-paysan-les-maraichers-fabriquent-leurs-propres-outils … #Communs

 Cache   

"The technical data sheets for building each tool are available as open source on the Atelier Paysan site and can be downloaded free of charge. They evolve with feedback from the market gardeners." https://reporterre.net/VIDEO-A-l-Atelier-paysan-les-maraichers-fabriquent-leurs-propres-outils …


          

NethServer 7.7 Cockpit Edition Linux OS Arrives with Nextcloud 17, UI Changes

 Cache   

The Cockpit Edition of NethServer 7.7, which is based on CentOS 7.7, is now complete and available by default on new installations, making server administration easier with a modern, redesigned and user-friendly web UI, as well as improved usability and new features.

"We're confident that it will be as always a great release and it will achieve our mission: making sysadmin’s life easier. This is thanks to the most vibrant, supportive and friendly community in the Open Source space (and not only Open Source)," said Alessio Fattorini in the release announcement.

Read more


          

Akeneo PIM challenges inRiver with an open source Enterprise variant

 Cache   
Right on schedule, as they tend to do, the open source variants usually follow ...
          

Microsoft launches Visual Studio Online

 Cache   

#241 — November 6, 2019

Read on the Web

StatusCode
Covering the week's news in software development, infrastructure, ops, platforms, and performance.

Recursive Sans and Mono: A Free Variable Type Family — This is a new ‘highly-flexible’ type family that takes advantage of variable font tech to let you pick the right style along five different axes. It’s pretty clever, well demonstrated, and very suitable for presenting data, code, or to be used in documentation and UIs.

Arrow Type

Microsoft Launches Visual Studio Online — It’s basically a collaborative version of VS Code that runs in the browser letting you develop from anywhere in a cloud-based environment. This isn’t a new idea but it’s great to see Microsoft’s might behind such an effort.

Visual Studio

Top CI Pipeline Best Practices — At the center of a good CI/CD setup is a well-designed CI pipeline. If your team is adopting CI, or your work involves building or improving CI pipeline, this best practices guide is for you.

Datree.io sponsor

You Can't Submit an Electron 6 (or 7) App to the Mac App Store? — Electron is a popular cross-platform app development toolkit maintained by GitHub. The bad news? It uses Chromium which uses several ‘private’ Apple APIs and Apple aren’t keen on accepting apps that use them for a variety of reasons.

David Costa

Dart 2.6: Now with Native Executable Compilation — Dart began life as a Google built, typed language that compiled to JavaScript but is now a somewhat broader project. The latest version includes a new dart2native tool for compiling Dart apps to self-contained, native executables for Windows, macOS, and Linux.

Michael Thomsen

GitHub Sponsors Is Now Out of Beta in 30 Countries — GitHub launched its Sponsors program in beta several months ago as a way for open source developers to accept contributions for their work and projects more easily. It’s now generally available in 30 countries with hopefully more to follow.

Devon Zuegel (GitHub)

💻 Jobs

DevOps Engineer at X-Team (Remote) — Work with the world's leading brands, from anywhere. Travel the world while being part of the most energizing community of developers.

X-Team

Find a Job Through Vettery — Vettery specializes in tech roles and is completely free for job seekers. Create a profile to get started.

Vettery

📕 Tutorials and Stories

How Monzo Built Network Isolation for 1,500 Services — 1,500 services power Monzo, a British bank, and they want to keep them all as separate as possible so that no single bad actor can bring down their platform. Here’s the tale of how they’ve been working towards that goal.

Monzo

A Comparison of Static Form Providers — A high level comparison of several providers who essentially provide the backend for your HTML forms.

Silvestar Bistrović

▶  An Illustrated Guide to OAuth and OpenID Connect — A 16 minute video rich with illustrations and diagrams.

Okta

Intelligent CI/CD with CircleCI: Test Splitting — Did you know that CircleCI can intelligently split tests to get you your test results faster?

CircleCI sponsor

▶  Writing Maintainable Code Documentation with Automated Tools and Transclusion — A 37 minute podcast conversation between Robby Russell and Ana Nelson, the creator of Dexy, a documentation writing tool.

Maintainable Podcast podcast

▶  Git is Hard but Time Traveling in Git Isn't — A lightning talk from React Conf 2019 that flies through some interesting Git features in a mere 6 minutes.

Monica Powell

Highlights from Git 2.24 — Take a look at some of the new features in the latest Git release including feature macros and a new way to ‘rewrite history’.

GitHub

Create a Bookmarking Application with FaunaDB, Netlify and 11ty — Brings together FaunaDB’s serverless cloud database, the Netlify platform (which uses Lambda under the hood), and 11ty (a static site generator) to create a bookmark management site.

Bryan Robinson

File Systems Unfit As Distributed Storage Backends: Lessons From Ten Years of Ceph Evolution — You can’t help but be won over by a comment like “Ten years of hard-won lessons packed into just 17 pages makes this paper extremely good value for your time.”

the morning paper

An SQL Injection Tutorial for Beginners — This is not a tutorial for you to follow but more a look at what hackers will attempt to do to your systems, if you let them. The techniques used are sneaky and interesting.

Marezzi

🛠 Code and Tools

Stripe CLI: A Command Line Development Environment for Stripe Users — Stripe has become somewhat ubiquitous in the payment processing space and their focus on developers is pretty neat, not least in this new tool for building and testing integrations.

Tomer Elmalem

Mark Text: A Simple, Free Markdown Editor — Works on macOS, Windows, and Linux. Built in Node with Electron.

Luo Ran

Sell Your Managed Services and APIs to Millions of Developers

Manifold sponsor

Yumda: Yum Packages, but for AWS Lambda — Essentially a collection of AWS Lambda-ready binary packages that you can easily install. You can request new packages, build your own, or use the existing ones that include things like GraphicsMagick, OpenEXR, GCC, libpng, Ruby, TeX, and more.

LambCI

K-Rail: A Workload Policy Enforcement Tool for Kubernetes — A webhook-based policy enforcement tool built in Go that lets you define policies in Go code too.

Cruise

Gitql: A Git Query Language and Tool — Lets you query a git repository using a SQL-like syntax, e.g. select date, message from commits where date < '2014-04-10'

Claudson Oliveira


          

Rene Dudfield: Draft 2 of "Let's write a unit test!"

 Cache   
So, I started writing this for people who want to 'contribute' to community projects, and also Free, Libre, or Open Source projects. Maybe you'd like to get involved but are unsure of where to begin? Follow along with this tutorial, and peek at the "what is a git for?" section at the end for explanations of what some of the words mean.
Draft 1, 2018/07/18 - initial draft.
Draft 2, 2019/11/04 - two full unit test examples, assertions, making a pull request, use python 3 unittest substring search, "good first issue" is a thing now. Started "What is a git for? Jargon" section.


What's first? A test is first.

A unit test is a piece of code which tests one thing works well in isolation from other parts of software. In this guide, I'm going to explain how to write one using the standard python unittest module, for the pygame game library. You can apply this advice to most python projects, or free/libre open source projects in general.

A minimal test.

What pygame.draw.ellipse should do: http://www.pygame.org/docs/ref/draw.html#pygame.draw.ellipse
Where to put the test: https://github.com/pygame/pygame/blob/master/test/draw_test.py

def test_ellipse(self):
    import pygame.draw
    surf = pygame.Surface((320, 200))
    pygame.draw.ellipse(surf, (255, 0, 0), (10, 10, 25, 20))

All the test does is call the draw function on the surface with a color and a rectangle. That's it. A minimal, useful test. If you have a github account, you can even edit the test file in the browser to submit your PR. If you have email or internet access, you can email me or someone else on the internet and ask them to add it to pygame.

An easy test to write... but it provides really good value.
  • Shows an example of using the code.
  • Makes sure the function arguments are correct.
  • Makes sure the code runs on 20+ different platforms and python versions.
  • No "regressions" (Code that starts failing because of a change) can be introduced in the future. The code for draw ellipse with these arguments should not crash in the future.

But why write a unit test anyway?

Unit tests help pygame make sure things don't break on multiple platforms. When your code is running on dozens of CPUs and just as many operating systems things get a little tricky to test manually. So we write a unit test and let all the build robots do that work for us.

A great way to contribute to libre/free and open source projects is to contribute a test. Less bugs in the library means less bugs in your own code. Additionally, you get some public credit for your contribution.

The best part about it, is that it's a great way to learn python, and about the thing you are testing. Want to know how graphics algorithms should work, in lots of detail? Start writing tests for them.
The simplest test is to just call the function. Just calling it is a great first test. Easy, and useful.

At the time of writing there are 39 functions that aren't even called when running the pygame tests. Why not join me on this adventure?


Let's write a unit test!

In this guide I'm going to write a test for pygame.draw.ellipse to make sure a thick circle has the correct colors in it, and not lots of black spots. There are a bunch of tips and tricks to help you along your way. Whilst you can just edit a test in your web browser and submit a PR, it might be more comfortable to do it in your normal development environment.

Grab a fork, and let's dig in.

Set up git for github if you haven't already. Then you'll want to 'fork' pygame on https://github.com/pygame/pygame so you have your own local copy.
Note, we also accept patches by email, or on github issues. So you can skip all this github business if you want to. https://www.pygame.org/wiki/patchesandbugs
  • Fork the repository (see top right of the pygame repo page)
  • Make the change locally. Push to your copy of the fork.
  • Submit a pull request
So you've forked the repo, and now you can clone your own copy of the git repo locally.

$ git clone https://github.com/YOUR-USERNAME/pygame
$ cd pygame/
$ python test/draw_test.py
...
----------------------------------------------------------------------
Ran 3 tests in 0.007s

OK

You'll see all of the tests in the test/ folder.

Browse the test folder online: https://github.com/pygame/pygame/tree/master/test


If you have an older version of pygame, you can use this little program to see the issue.


There is some more extensive documentation in the test/README file. Including on how to write a test that requires manual interaction.


Standard unittest module.

pygame uses the standard python unittest module, with a few enhancements to make it nicer for developing C code.
Fun fact: pygame included the unit testing module before python did.
We will go over the basics in this guide, but for more detailed information please see:
https://docs.python.org/3/library/unittest.html
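To give a flavour of what the module offers, here is a small sketch (not tied to pygame) of a few assertion methods you'll see a lot. The class and method names here are just for illustration:

```python
import unittest

class TestAssertions(unittest.TestCase):
    def test_common_assertions(self):
        # assertEqual: the workhorse -- are these two values equal?
        self.assertEqual(2 + 2, 4)
        # assertIn: membership / substring checks.
        self.assertIn("ellipse", "pygame.draw.ellipse")
        # assertRaises: did the code raise the exception we expected?
        with self.assertRaises(ZeroDivisionError):
            1 / 0

if __name__ == '__main__':
    unittest.main(exit=False)
```

Save it in a file and run it with python, and unittest will find and run every method whose name starts with "test_".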



How to run a single test?

Running all the tests at once can take a while. What if you just want to run a single test?

If we look inside draw_test.py, each test is a method on a test class. There is a "DrawModuleTest" class, and there should be a "def test_ellipse" method.

So, let's run the test...

~/pygame/ $ python test/draw_test.py DrawModuleTest.test_ellipse
Traceback (most recent call last):
...
AttributeError: type object 'DrawModuleTest' has no attribute 'test_ellipse'


Starting with failure. Our test isn't there yet.

Good. This fails. It's because we don't have a test called "def test_ellipse" in there yet. What there is, is a method called 'todo_test_ellipse'. This is an extension the pygame testing framework has so we can easily see which functionality we still need to write tests for.

~/pygame/ $ python -m pygame.tests --incomplete
...
FAILED (errors=39)

Looks like there are currently 39 functions or methods without a test. Easy pickings.

Python 3 to the rescue.

Tip: Python 3.7 makes it easier to run tests with the magic "-k" argument. With this you can run tests that match a substring. So to run all the tests with "ellipse" in their name you can do this:

~/pygame/ $ python3 test/draw_test.py -k ellipse



Digression: Good first issue, low hanging fruit, and help wanted. 

Something that's easy to do.

A little digression for a moment... what is a good first issue?

Low hanging fruit is easy to get off the tree. You don't need a ladder, or robot arms with a claw on the end. So I guess that's what people are talking about in the programming world when they say "low hanging fruit".

pygame low hanging fruit


Many projects keep a list of "good first issue", "low hanging fruit", or "help wanted" labeled issues. Like the pygame "good first issue" list. These are ones other people don't think will be all that hard to do. If you can't find any on there labeled like this, then ask them. Perhaps they'll know of something easy to do, but haven't had the time to mark one yet.

One little trick is that writing a simple test is quite easy for most projects. So if they don't have any marked "low hanging fruit", or "good first issue" go take a look in their test folder and see if you can add something in there.

Don't be afraid to ask questions. If you look at an issue, and you can't figure it out, or get stuck on something, ask a nice question in there for help.

Digression: Contribution guide.

There's usually also a contribution guide.  Like the pygame Contribute wiki page. Or it may be called developer docs, or there may be a CONTRIBUTING.md file in the source code repository. Often there is a separate place the developers talk on. For pygame it is the pygame mailing list, but there is also a chat server which is a bit more informal.

A full example of a test.

The unittest module arranges tests inside methods that start with "test_", which live in a class.

Here is a full example:

import unittest


class TestEllipse(unittest.TestCase):

    def test_ellipse(self):
        import pygame.draw
        surf = pygame.Surface((320, 200))
        pygame.draw.ellipse(surf, (255, 0, 0), (10, 10, 25, 20))


if __name__ == '__main__':
    unittest.main()

You can save that in a file yourself (test_draw1.py, for example) and run it to see if it passes.

Committing your test, and making a Pull Request.

Here you need to make sure you have "git" setup. Also you should have "forked" the repo you want to make changes on, and done a 'git clone' of it.

# create a "branch"
git checkout -b my-draw-test-branch

# save your changes locally.
git commit test/draw_test.py -m "test for the draw.ellipse function"

# push your changes
git push origin my-draw-test-branch


Here we see a screenshot of a terminal running the commands to commit something and push it up to a repo.

When you push your changes, it will print out some progress, and then give you a URL at which you can create a "pull request".

When you git push it prints out these instructions:
remote: Create a pull request for 'my-draw-test-branch' on GitHub by visiting:
remote: https://github.com/YOURUSERNAME/pygame/pull/new/my-draw-test-branch


You can also go to your online fork to create a pull request there.

Writing your pull request text.

When you create a pull request, you are saying "hey, I made these changes. Do you want them? What do you think? Do you want me to change anything? Is this ok?"

It's usually good to link your pull request to an "issue". Maybe you're starting to fix an existing problem with the code.


Different "checks" are run by robots to try and catch problems before the code is merged in.



Testing the result with assertEqual.


What if we want to test that the draw function actually draws something?
Put this code into test_draw2.py:


import unittest


class TestEllipse(unittest.TestCase):

    def test_ellipse(self):
        import pygame.draw
        black = pygame.Color('black')
        red = pygame.Color('red')

        surf = pygame.Surface((320, 200))
        surf.fill(black)

        # The area the ellipse is contained in, is held by rect.
        #
        # 10 pixels from the left,
        # 11 pixels from the top.
        # 225 pixels wide.
        # 95 pixels high.
        rect = (10, 11, 225, 95)
        pygame.draw.ellipse(surf, red, rect)

        # To see what is drawn you can save the image.
        # pygame.image.save(surf, "test_draw2_image.png")

        # The ellipse should not draw over the black in the top left spot.
        self.assertEqual(surf.get_at((0, 0)), black)

        # It should be red in the middle of the ellipse.
        middle_of_ellipse = (125, 55)
        self.assertEqual(surf.get_at(middle_of_ellipse), red)


if __name__ == '__main__':
    unittest.main()


Red ellipse drawn at (10, 11, 225, 95)



What is a git for? Jargon.

jargon - internet slang used by programmers. Rather than use a paragraph to explain something, people made up all sorts of strange words and phrases.
git - for sharing versions of source code. It lets people work together, and provides tools for people to collaborate.
pull request (PR) - "Dear everyone, I request that you git pull my commits." A pull request is a conversation starter: "Hey, I made a PR. Can you have a look?" You make one after you "git push" your commits (upload your changes).
unit test - does this thing(unit) even work(test)?!!? A program to test if another program works (how you think it should). Rather than test manually over and over again, a unit test can be written and then automatically test your code. A unit test is a nice example of how to use what you've made too. So when you do a pull request the people looking at it know what the code is supposed to do, and that the machine has already checked the code works for them.
assert - "assert 1 == 1". An assert is saying something is true. "I assert that one equals one!". You can also assert expressions that use variables.
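For example, a couple of asserts in plain Python:

```python
x = 1
assert x == 1  # true, so nothing happens

# A failing assert raises AssertionError, optionally with a message.
try:
    assert x == 2, "x should have been 2"
except AssertionError as error:
    print(error)  # prints: x should have been 2
```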


This is a draft remember? So what is there left to finish in this doc?


Any feedback? Leave an internet comment. Or send me an electronic mail to: rene@pygame.org







pygame book

This article will be part of a book called "pygame 4000". A book dedicated to the joy of making software for making. Teaching collaboration, low level programming in C, high level programming in Python, GPU graphics programming with a shader language, design, music, tools, quality, and shipping.

It's a bit of a weird book. There's a little bit of swearing in it (consider yourself fucking warned), and all profits go towards pygame development (the library, the community, and the website).

          

Mike Driscoll: PyDev of the Week: Joannah Nanjekye


This week we welcome Joannah Nanjekye (@Captain_Joannah) as our PyDev of the Week! Joannah is a core developer of the Python programming language. She is also the author of Python 2 and 3 Compatibility. You can find out more about Joannah on her website. Let’s take a few moments to get to know her better!

Can you tell us a little about yourself (hobbies, education, etc):

I am Joannah Nanjekye, I live in Canada, Fredericton but I am originally from Uganda in East Africa. I am a CS grad and doing research related to Python in one of the Python IBM labs at UNB. I went to University in Uganda and Kenya where I studied Software Engineering at Makerere University and Aeronautical Engineering at Kenya Aeronautical College respectively. I am also the Author of Python 2 and 3 compatibility, a book published by Apress. I do not have any serious hobbies but I love flying aircraft. Very expensive hobby heh!!

Why did you start using Python?

I started to use Python because I had to in my first programming class in 2009. Like in any CS class, Python is simple, but some professor decided to make the class so hard. After failing a few assignments in the course, I managed to read my first programming book cover to cover, which was a Python book – How to Think Like a Computer Scientist – and managed to pass my final exams. Nevertheless, my real significant use of Python was in 2012, when I worked on a Django project. I continue to use Python because of its simplicity that allows me to focus on solving the problem at hand.

What other programming languages do you know and which is your favorite?

I have good command and proficiency in Golang, Ruby and C. I would say my favourite would be C because I write more C code in general.

What projects are you working on now?

I work full time on a project related to Python, the language itself, and maybe one of its alternate implementations, which I cannot go into detail about because of some NDA restrictions. I am currently working on aspects related to garbage collection. I also give my time to CPython and other open source projects.

Which Python libraries are your favorite (core or 3rd party)?

I think currently I am very interested in, and curious about, how subinterpreters in CPython will evolve and solve some current shortcomings we have in the language.

What portion of code do you take care of in Python as a core developer?

I would not say "take care of", because I am not assigned to these areas as an expert. I plan to look more at subinterpreters and garbage collection as far as CPython is concerned. During the recent core developer sprints, I was able to get some good mileage on the high level subinterpreters module implementation, which is PEP 554, with Eric Snow’s guidance. In the same sprint, I talked to Pablo Salgado about GC and what areas of improvement we can look at. I just pray for bandwidth and good health to be able to help.

Do you have any advice for other aspiring core developers?

CPython needs help from everyone, individuals and companies; otherwise, we will be building on top of a crumbling infrastructure. The process of becoming a core developer is a very transparent one for CPython. For anyone interested, join the discussion on the aspects of the project that interest you, and contribute in any way. There are many areas where your skills can benefit Python.

Thanks for doing the interview, Joannah!

The post PyDev of the Week: Joannah Nanjekye appeared first on The Mouse Vs. The Python.


          

XBOT All Terrain Tracked Mobile Robot


 Special order

* This is a special order item. We will order it for you as soon as you place your order.
Our normal availability for most products is between 2 and 6 weeks.
Free shipping on this item
  • Improve your research work without wasting time and money
  • Direct analog, digital, and RC pulse inputs
  • Designed for open source development
  • One robot, multiple applications
  • Easy to use and control with a radio transmitter and sample code
  • The 4800W battery pack lets users power all devices and sensors without limits

The article XBOT All Terrain Tracked Mobile Robot comes from Robot Store.


          

Director of Cyberinfrastructure and Research Technologies

Director of Cyberinfrastructure and Research Technologies

Context

The University of California Merced seeks an experienced, collaborative leader to join the Office of Information Technology (OIT) leadership team as Director of Cyberinfrastructure and Research Technologies (DCRT). The Director will collaborate with faculty, postdoctoral researchers and graduate students and OIT leadership to plan and provide the infrastructure, applications and services required to support the University's diverse and growing research portfolio. The successful candidate will assist with the development of research computing strategies, and deploy technologies to enable data intensive research programs. The position reports to the Chief Information Officer and has two direct reports.

Research at the University of California, Merced

As it is at all University of California campuses, research is the cornerstone of UC Merced. Innovative faculty members conduct interdisciplinary, groundbreaking research that will solve complex problems affecting the San Joaquin Valley, California and the world. Students – as early as their first years – have opportunities to work right alongside them, sometimes even publishing in journals and presenting at conferences. The list of UC Merced's research strengths includes climate change and ecology; solar and renewable energy; water quality and resources; artificial intelligence; cognitive science; stem-cell, diabetes and cancer research; air quality; big-data analysis; computer science; mechanical, environmental and materials engineering; and political science. The campus also has interdisciplinary research institutes with which faculty members affiliate themselves to conduct even more in-depth investigations into a variety of scientific topics.
Examples of special research institutes established at the university include the Center for Information Technology Research in the Interest of Society (CITRIS) and the UC Water Security and Sustainability Research Initiative (UC Water). A complete list of research institutes, centers and partnerships can be found on .

IT Support for Research

The research IT team strives to bring research computing, high-speed networking, and advanced visualization to campus researchers across all disciplines including mathematics, science and engineering, genomics, meteorology, remote sensing, molecular modeling, and artificial intelligence. The team works closely with the Library for data curation, data management and storage, and provides support for technical grant writing. In many cases staff are included in the grant proposals, especially for grants involving cyberinfrastructure, major research instrumentation or cyber training. Successful NSF awards include:

  • MRI Acquisition: Multi-Environment Research Computer for Exploration and Discovery (MERCED) Cluster.
  • CC Networking Infrastructure: Building a Science DMZ Network for University of California Merced.
  • Reducing Attrition of Underrepresented Minority and First-Generation Graduate Students in Interdisciplinary Computational Sciences.

UCM has access to the San Diego Supercomputing Center (SDSC), the UCSD Nautilus Cluster, and the national-level computing resources known as XSEDE. UC Merced has four Fast Input/Output Network Appliances (FIONAs) that allow UC Merced researchers to stage data for external collaboration and quickly access data hosted remotely. UCM also has close partnerships with the Calit2 institute and collaborations with UC Santa Cruz, Lawrence Livermore National Laboratory and UC Berkeley. Please refer to the OIT website for more information on the available research computing resources at UC Merced.
The Leadership Opportunity

Through participatory and collaborative decision-making with the Faculty Committee on Research Technologies, the DCRT is responsible for managing the design, development, and delivery of a cost-effective mix of services that support research computing, including shared high-performance computing resources, data-analysis platforms, storage systems, and visualization tools and platforms across the UCM campus. The DCRT will be expected to work closely with the Academic Deans, the Vice Chancellor for Research and the OIT leadership team to find opportunities across UCM to increase services to support research that will span virtually every academic discipline, with important applications in fields such as mathematics, science and engineering, genomics, meteorology, remote sensing, molecular modeling, and artificial intelligence. In general, this work will include:

  • Providing expertise, developing tools and techniques in scientific visualization, efficient parallelization of applications, data formats and I/O methods, grid computing, programming frameworks, optimization, and algorithms.
  • Providing oversight, direction, mentoring, coaching, and professional development to a cross-functional team of research computing systems professionals.
  • Managing the administration and operations of the Research Cyberinfrastructure team.
  • Assisting with networking, data storage/management systems administration tasks as necessary and appropriate.
  • Supporting grant proposals and funded grants.
  • Providing educational workshops and training for the UCM research community.

The individual will be a member of the UC Research IT Committee (RITC), a workgroup of the University of California CIO IT Leadership Council, and an active member of the Pacific Research Platform, and will participate in the National Research Platform.
Specific responsibilities of the position include:

Service Delivery

  • Work closely with researchers, research units and schools across UCM to identify research computing needs and ensure that they are being met in the most cost-effective manner; and lead a collaborative process to build out services and operational capacity in the design of a CyberInfrastructure strategic roadmap for the campus.
  • Work with supercomputer users to develop their own research computing software or help them deploy and use third party software (commercial and/or open source).
  • Manage and mentor technical personnel responsible for providing quality service and support for the campus' research computing activities.
  • Develop and report metrics that measure workload and performance of systems and services.
  • Engage in strategic planning, tactical design, and operational management of infrastructure and applications.
  • Maintain a robust infrastructure for research computing and data stewardship.
  • Identify opportunities and assist in testing the design, configuration, procurement, and use of cloud-based resources for research needs.
  • Lead the delivery of support for containers, middleware, workflows, data management, data movement, compliance and security, and user training.

Education

  • Teach research computing topics to individuals, small and/or large groups.
  • Support instruction using advanced research computing and data, on-boarding into new technologies, and deep engagement ("facilitation") to guide researchers.
  • Develop and/or collaborate on research projects and/or grant proposals that further the UCM CyberInfrastructure vision and strategy.
Strategy and Thought Leadership

  • Provide campus-wide leadership in developing, advocating for, and advancing research technologies into scalable and sustainable services in support of faculty research.
  • Work with academic leadership to help identify business strategy, requirements, trends, and desired business outcomes for Research Computing.
  • Stay abreast of trends and new advances in the high performance computing industry by reading, researching, and participating in forums or communities of HPC professionals.
  • Establish, maintain, and/or participate in research computing consortia locally, regionally, and nationally.
  • Establish effective relationships with relevant external research computing organizations and ensure that UCM is effectively utilizing national research infrastructure.

Qualifications

The following qualifications are required:

  • Master's degree in any STEM discipline, including but not limited to physical sciences, biosciences, geosciences, mathematics, computer science, engineering and/or social sciences.
  • Minimum of 2 years' experience leading or working in high performance computing, including software development, system administration, software applications, storage systems and networking. A Ph.D. in a related field will substitute for 3 years of experience.
  • Knowledge or experience in one or more of the following (and readiness and ability to develop skills in other areas): parallel programming; grid computing, programming frameworks; numerical methods and algorithms; software debugging, profiling and optimization in an HPC environment; scientific data visualization; experience with a variety of HPC applications; networking and/or cyber-engineering; storage and big data management; Data Carpentry.
  • Articulate, consensus-building leader who can serve as an effective member of OIT's leadership team.
The most competitive candidates will have many of the following additional qualifications:

  • In-depth understanding of the research cyberinfrastructure landscape and demonstrated experience with processes for procurement, deployment and operation.
  • Functional understanding of advanced computational environments needed for data-intensive research in physical and life sciences, engineering, information sciences, social sciences and humanities.
  • Strong program planning skills in order to develop services and solutions to meet needs in research computing.
  • Demonstrated expertise in specifying, designing, and implementing computing infrastructure, clustered and parallel file systems, large scale storage, backup and archiving, high bandwidth networking, software defined networks and Science DMZ.
  • Demonstrated experience supporting computational science and engineering, or experience as a user of computational science and engineering applications.
  • Excellent customer service skills and ability to work directly with constituents or vendors to understand needs, resolve problems, and address systemic issues in service delivery processes.
  • Highly developed written, oral, listening, and presenting skills and ability to lead complex discussions and achieve outcomes through patient, transparent consultation.
  • Successful experience developing budgets, tracking expenditures, and negotiating with vendors and contractors.

To Apply: Please submit your cover letter and resume to . You may direct your questions or nominations in confidence to either Mary Beth Baker ( ) or Phil Goldstein (), Managing Partners of Next Generation Executive Search.

The University of California is an Equal Employment Opportunity/Affirmative Action employer and invites applications from all qualified applicants, including women, minorities, veterans, and individuals with disabilities, who will enrich the teaching, research and public service missions of the university.
All qualified applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy, see: UC Nondiscrimination & Affirmative Action Policy ( The Chronicle of Higher Education. Keywords: Director of Cyberinfrastructure and Research Technologies, Location: Merced, CA - 95343
          

Latest in Planet Python: Open Source, SaaS and Monetization; Some Python Guides

  • Open Source, SaaS and Monetization

    When you're reading this blog post, Sentry, which I have been working on for the last few years, has undergone a license change. Making money with Open Source has always been a complex topic, and over the years my own ideas of how this should be done have become less and less clear. The following text is an attempt to summarize my thoughts on it, and to put some more clarification on how we ended up picking the BSL license for Sentry.

    [...]

    Open Source is pretty clear cut: it does not discriminate. If you get the source, you can do with it what you want (within the terms of the license) and no matter who you are (within the terms of the license). However as Open Source is defined — and also how I see it — Open Source comes with no strings attached. The moment we restrict what you can do with it — like not compete — it becomes something else.

    The license of choice is the BSL. We looked at many things, and the one we came to is the idea of putting a form of natural delay into our releases, and the BSL does that. We make sure that as time passes, all we have becomes Open Source again, but until that point it's almost Open Source, with strings attached. This means for as long as we innovate there is some natural disadvantage for someone competing with the core product, while still ensuring that our product stays around and healthy in the Open Source space.

    If enough time passes everything becomes available again under the Apache 2 license.

    This ensures that no matter what happens to Sentry the company or product, it will always be there for the Open Source community. Worst case, it just requires some time.

    I'm personally really happy with the BSL. I cannot guarantee that after years no better ideas will come around, but this is the closest I have seen that I feel very satisfied with, where I can say that I stand behind it.

  • How to Handle Coroutines with asyncio in Python

    When a program becomes very long and complex, it is convenient to divide it into subroutines, each of which implements a specific task. However, subroutines cannot be executed independently, but only at the request of the main program, which is responsible for coordinating the use of subroutines.
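    As a taste of the topic, here is a minimal sketch of two coroutines scheduled concurrently with asyncio (the names greet and main are just for illustration):

```python
import asyncio

async def greet(name):
    # await hands control back to the event loop, so other
    # coroutines can run while this one is waiting.
    await asyncio.sleep(0)
    return "Hello, " + name

async def main():
    # gather schedules both coroutines concurrently and
    # collects their results in order.
    results = await asyncio.gather(greet("Ada"), greet("Alan"))
    print(results)  # prints: ['Hello, Ada', 'Hello, Alan']

asyncio.run(main())
```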

  • When to Use a List Comprehension in Python

    Python is famous for allowing you to write code that’s elegant, easy to write, and almost as easy to read as plain English. One of the language’s most distinctive features is the list comprehension, which you can use to create powerful functionality within a single line of code. However, many developers struggle to fully leverage the more advanced features of a list comprehension in Python. Some programmers even use them too much, which can lead to code that’s less efficient and harder to read.

    By the end of this tutorial, you’ll understand the full power of Python list comprehensions and how to use their features comfortably. You’ll also gain an understanding of the trade-offs that come with using them so that you can determine when other approaches are more preferable.
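    For instance, the same filtering-and-transforming loop can be written both ways (a small illustrative sketch):

```python
# Squares of the even numbers below ten, written as a plain loop...
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# ...and the same thing as a single list comprehension.
squares_comprehension = [n * n for n in range(10) if n % 2 == 0]

print(squares_comprehension)  # prints: [0, 4, 16, 36, 64]
assert squares == squares_comprehension
```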


          

Magento will withhold security fixes from the open source code for two weeks

Open source code makes it easier for both well-meaning and malicious programmers to find potential security flaws, says e-commerce giant Magento.
          

DevOps Engineer - Spaceflight Industries - Seattle, WA

Citizen or Green Card holder only). Experience with integrating open source software with internal development is highly desired. Experience with Python or Go.
From Spaceflight Industries - Fri, 25 Oct 2019 15:25:51 GMT - View all Seattle, WA jobs
          

Intel Analyst

About the Job

Secure our Nation, Ignite your Future

Job Summary

Each day, U.S. Customs and Border Protection (CBP) oversees the massive flow of people, capital, and products that enter and depart the United States via air, land, sea, and cyberspace. The volume and complexity of both physical and virtual border crossings require the application of business intelligence support services to promote efficient trade and travel. Further, effective intelligence support and big data solutions help CBP ensure the movement of people, capital, and products is legal, safe, and secure. In response to this challenge, ManTech, as a trusted mission partner of CBP, seeks capable, qualified, and versatile SME Intelligence Analysts to facilitate mission critical decisions in response to national security threats.

As an Intelligence Analyst on our team, you'll employ a variety of tactics and techniques to provide all-source intelligence analysis on topics related to homeland security, border security, counter terrorism, and critical infrastructure protection. You'll provide OSINT collection research, analysis and recommendations regarding liaison efforts with operational and support elements, and external intelligence agencies. You'll prepare finished analytical products, assessments, briefings, and other written or oral products on current threats and trends, based on the sophisticated collection, research, and analytic tradecraft of all sources of classified and unclassified information, including intelligence collection INTs, open source, law enforcement, and other government data. You'll employ Intelligence Community analytic standards and appropriate tradecraft, methodologies and techniques, and apply specialized subject matter expertise to prepare assessments of current threats and trends for senior US officials, state and local customers, and private sector customers.
Required Qualifications - 3+ years of experience as an all-source analyst in an Intelligence Community agency producing finished, all-source analytical intelligence products in accordance with IC analytical standards, tradecraft, sourcing, and classification requirements. - Experience collaborating/coordinating all source analytic assessments with agencies across the Intelligence Community - Experience using OSINT analytic tools such as Babel Street, GOST, SILO, Analyst Notebook/i2 and MS Office. - Clearance: Active TS/SCI Desired Qualifications - 5+ years of experience in all-source intelligence analysis role at the national level. - Prior NTC experience, preferably with proficiency in operational data analysis. - Current or prior work in a liaison capacity to another component of CBP or another agency. - Specific experience in/with intelligence production related to counter-terrorism, counter narcotics, illicit trade, and/or alien smuggling and agree that CBP operations is an unnecessary limiting factor. - Familiarity with unique DHS systems and databases. Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; TS clearance is required as well as CBP suitability. Must be a US Citizen and able to obtain and maintain a U.S. Customs and Border Protection (CBP) Background Investigation. ManTech International Corporation, as well as its subsidiaries proactively fulfills its role as an equal opportunity employer. We do not discriminate against any employee or applicant for employment because of race, color, sex, religion, age, sexual orientation, gender identity and expression, national origin, marital status, physical or mental disability, status as a Disabled Veteran, Recently Separated Veteran, Active Duty Wartime or Campaign Badge Veteran, Armed Forces Services Medal, or any other characteristic protected by law. 
If you require a reasonable accommodation to apply for a position with ManTech through its online applicant system, please contact ManTech's Corporate EEO Department at **************. ManTech is an affirmative action/equal opportunity employer - minorities, females, disabled and protected veterans are urged to apply. ManTech's utilization of any external recruitment or job placement agency is predicated upon its full compliance with our equal opportunity/affirmative action policies. ManTech does not accept resumes from unsolicited recruiting firms. We pay no fees for unsolicited services. If you are a qualified individual with a disability or a disabled veteran, you have the right to request an accommodation if you are unable or limited in your ability to use or access ************************************************* as a result of your disability. To request an accommodation please click ******************* and provide your name and contact information.
          

Software Engineer Linux

 Cache   
Location: Cedar Rapids
Job type: Permanent
Sector: Manufacturing
Category: Software Engineer Jobs
Date Posted: :00
Country: United States of America
Location: HIA32: Cedar Rapids, IA 400 Collins Rd NE, Cedar Rapids, IA, USA

At Collins Aerospace, we're dedicated to relentlessly tackling the toughest challenges in our industry - all to redefine aerospace. Created in 2018 through the combination of two leading companies - Rockwell Collins and United Technologies Aerospace Systems - we're driving the industry forward through technologically advanced and intelligent solutions for global aerospace and defense. Every day we imagine ways to make the skies and the spaces we touch smarter, safer and more amazing than ever. Together we chart new journeys, reunite families, protect nations and save lives. And we do it all with some of the greatest talent this industry has to offer. We are Collins Aerospace and we hope you join us as we REDEFINE AEROSPACE.

We are currently searching for a Software Engineer - Linux to join our team in Cedar Rapids, Iowa. A comprehensive relocation package is available for qualified candidates.

Our Avionics team advances aviation electronics and information management solutions for commercial and military customers across the world. That means we're helping passengers reach their destination safely. We're connecting aircraft operators, airports, rail and critical infrastructure with intelligent data service solutions that keep passengers, flight crews and militaries connected and informed. And we're providing industry-leading fire protection and safety systems that our customers can count on when it matters most. Are you ready to learn from the most knowledgeable experts in the industry, develop the technologies of tomorrow and reach new heights in your career? Join our Avionics team today.

Job Summary

Applies a systematic, disciplined, quantifiable approach to the construction, analysis, or management of software. Uses independent judgment to make decisions in day-to-day job responsibilities the majority of the time under general supervision.

Job Responsibilities

* Expand and apply knowledge: product domain; requirements, design, development, test and release software processes; and tools, methods and coding best practices - with primary emphasis on taking technical ownership in a software component of the product domain.
* Develop and document component and moderate changes to software requirements documentation, applying knowledge of processes, tools and methods in the management and tracking of the software requirements baseline.
* Design, code, test, integrate and document software of moderate complexity within software services, software components, software test tools and software test scripts. Prepare software builds for execution in a simulation environment, reference platforms and on the target hardware. Understands and utilizes the appropriate RC processes and tools during product development, resulting in increased product quality and improved customer satisfaction.
* Participate in cross-functional team efforts in integration, verification and validation for products and sub-systems of moderate complexity.
* Contribute to the engineering estimates for tasks such as change requests or problem reports.
* Create unit testing ability (along with continued regression testing ability) such that software components may be developed and comprehensively tested in a simulation environment - if such an environment does not exist, consider various alternatives to create one.
* Able to use test equipment (e.g. Logic Analyzer) and software debugging tools (e.g. Wireshark) to aid in the integration process, applying the techniques and skills required to identify the root cause of a given software integration issue.
* Escalates encountered technical software issues to project leadership in a timely fashion.
* Contribute to software engineering requirements capture, analysis and creation for moderate-complexity software designs.
* Individual job duties may vary.

Basic Qualifications

* Bachelor's degree in a Science, Technology, Engineering or Math (STEM) discipline is required.
* Basic knowledge as a Linux user is required.
* Experience programming in C and the use of Bash & Python scripting are required.
* Experience with the open source version control system Git is highly preferred.

At Collins, the paths we pave together lead to limitless possibility. And the bonds we form - with our customers and with each other - propel us all higher, again and again.

Some of our competitive benefits package includes:
- Medical, dental, and vision insurance
- Three weeks of vacation for newly hired employees
- Generous 401(k) plan that includes employer matching funds and separate employer retirement contribution
- Tuition reimbursement
- Life insurance and disability coverage
- And more

Apply now and be part of the team that's redefining aerospace, every day.

United Technologies Corporation is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.
          

Network Architect

 Cache   
Req Ref No: DKGAE-15
Location: Alpharetta, GA
Duration: 5.0 months

Description

Job Description: This dynamic individual will partner with business analysts and engineers to define the technical requirements, principles, and models that guide all network-related decisions for a service delivery ecosystem. The role is responsible for analyzing and translating business, information, and technical requirements into the architectural blueprints used to achieve overall business goals. The Network Architect will be responsible for identifying the network, systems, applications, and infrastructure components necessary to strategically outline how the end-to-end network will operate.

Experience/Skills Needed:
- 7-10 years of experience in multiple technology areas required.
- 10 years of experience in telecommunications network design or deployment required.
- Advanced to expert-level knowledge and understanding of architecture, application design, system engineering and integration required.
- 2-4 years of relevant domain experience (data, network, application, systems, etc.) preferred.
- Proven ability to run simultaneous projects and tasks.
- Contributor in open source, user groups, and conferences preferred.

Computer Skills:
- Demonstrable experience working with Tier 1 and 2 carriers, NEMs and MSOs.
- Knowledge of protocols and standards including Ethernet, SDN technologies and principles, OSPF, BGP, RTP, RTCP, Netconf/YANG.
- Networking experience with WAN, LAN, MAN and data center required.
- Experience with Linux required.
- Working understanding of the Agile development process desired.
- Solid experience working with Tier 1 and 2 carriers, NEMs and MSOs preferred.
- Solution architecture experience in evolving Ethernet services a plus.
- Knowledge of new technologies including SDN/NFV preferred.

Responsibilities:
- Works on multiple projects as a project leader or internal consultant.
- Works on highly complex projects using specialized architecture areas such as network, security, applications, data, systems, Internet and business segments.
- Participates in domain technical and business discussions relative to future architecture direction.
- Assists in the analysis, design, and development of a roadmap and implementation plan based upon a current vs. future state in a cohesive architecture viewpoint.
- Designs standard configurations and patterns.
- Participates in the Enterprise Architecture ecosystem-wide and domain-level architecture governance process.
- Reviews exceptions and makes recommendations to architectural standards at a domain/program level.
- Captures and analyzes data and develops architectural requirements at the project/program level.
- Aligns architectural requirements with technology strategy.
- Assesses near-term needs to establish business priorities.
- Consults with project teams to ensure compatibility with existing solutions, infrastructure and services.
- Supports the development of software and data delivery platforms with reusable components that can be orchestrated together into different methods for different businesses.
- Coordinates architecture implementation and modification activities.
- Assists in post-implementation continuous-improvement efforts to enhance performance and provide increased functionality.
- Ensures the conceptual completeness of the technical solution.
- Works closely with project management to ensure alignment of plans with what is being delivered.
- Analyzes the current architecture to identify weaknesses and develop opportunities for improvements.
- Identifies and, when necessary, proposes variances to the architecture to accommodate project needs.
- Performs ongoing architecture quality review activities relative to the specific projects/programs they are responsible for.
- Provides strategic consultation to business partners.
- Advises on options, risks, costs versus benefits, system impacts, and technology priorities.
- Consults on projects and maintains knowledge of their progress.
- Keeps technology and service managers aware of key customer issues, identifying and resolving potential problems and conflicts.
- Sells the architecture process, its outcome and ongoing results.
- Researches and evaluates emerging technology, industry and market trends to assist in project development and/or operational support activities.
- Provides recommendations based on business relevance, appropriate timing and deployment.
- Identifies the tools and components used for a project from the approved enterprise toolset.
- Advises on expenditures based on the size, scope, and cost of hardware and software components.
- Assists in developing business cases.
- Recommends changes that impact the strategic direction.
- Meets with project leaders and technology leaders to ensure progress towards architectural alignment with project goals and requirements.

Education: Bachelor's degree in Computer Science, Information Systems, Computer Engineering, System Analysis or a related field, or equivalent work experience.

VIVA is an equal opportunity employer. All qualified applicants have an equal opportunity for placement, and all employees have an equal opportunity to develop on the job. This means that VIVA will not discriminate against any employee or qualified applicant on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
          

API Integration Engineer

 Cache   
OVERVIEW

Are you a problem solver, explorer, and knowledge seeker, always asking, "What if?"



If so, you may be the new team member we're looking for. Because at SAS, your curiosity matters whether you're developing algorithms, creating customer experiences, or answering critical questions. Curiosity is our code, and the opportunities here are endless.



What we do

We're the leader in analytics. Through our software and services, we inspire customers around the world to transform data into intelligence. Our curiosity fuels innovation, pushing boundaries, challenging the status quo and changing the way we live.



What you'll do

As an API Integration Engineer at SAS, you will collaborate with product managers, technical leads, developers, documentation team members, developer advocates, architects, and other stakeholders to determine tool and automation needs related to the design, development and publishing of APIs across all of SAS.



You will:

* Deliver, support and maintain, and continuously improve test and deployment automation that leverages OpenAPI 2/3 specifications across the entire API lifecycle and within our DevOps pipeline, including but not limited to:

* API Design tooling (openapi-gui, apicurio, stoplight studio).

* API Standards linting (spectral).

* Contract testing (dredd).

* Forward/backward compatibility testing.

* Governance adherence verification.

* Documentation rendering (slate/widdershins, swagger-ui, Redoc).

* Documentation deployment automation.

* Client and server SDK generation tools (openapi-generator, swagger-codegen).

* API conversion tools (REST to GraphQL like openapi-to-graphql or gRPC/protobufs like openapi2proto, OpenAPI 2 to 3 like swagger2openapi).

* Assess and make recommendations about the use of existing open source tooling that supports the API lifecycle.

* Solicit feedback and requirements from stakeholders and prioritize new functionality aligned with overall business needs.

* Regularly communicate work progress with management, identifying issues early and resolving them quickly to avoid or minimize impacts to projects.

* Anticipate time needed to complete projects and assist in project estimates/scheduling.

* Update job knowledge by independent and structured research.
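As a rough, hypothetical illustration of the OpenAPI-driven automation described above, a standards-linting step might look like the sketch below. The `lint_spec` helper and the tiny spec are invented for this example; they are not SAS tooling and are far simpler than a real linter such as spectral.

```python
# Hypothetical sketch of an OpenAPI 2.0 standards-linting step.
# Invented for illustration; not SAS or spectral tooling.

REQUIRED_TOP_LEVEL = ("swagger", "info", "paths")

def lint_spec(spec: dict) -> list:
    """Return a list of human-readable problems found in a spec dict."""
    problems = []
    for field in REQUIRED_TOP_LEVEL:
        if field not in spec:
            problems.append(f"missing required top-level field: {field}")
    if spec.get("swagger") != "2.0":
        problems.append("'swagger' must be the string '2.0'")
    for path in spec.get("paths", {}):
        if not path.startswith("/"):
            problems.append(f"path does not start with '/': {path}")
    return problems

# A tiny spec such as an API design tool might emit:
spec = {
    "swagger": "2.0",
    "info": {"title": "Demo API", "version": "1.0.0"},
    "paths": {"/status": {"get": {"responses": {"200": {"description": "OK"}}}}},
}

print(lint_spec(spec))  # an empty list means the spec passed these checks
```

In a DevOps pipeline of the kind described above, a check like this would typically run in CI against every changed spec and fail the build on a non-empty problem list.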



What we're looking for

* You're curious, passionate, authentic, and accountable. These are our values and influence everything we do.

* You have a bachelor's degree in Computer Science, Engineering, or a related quantitative field.

* Experience with:

* Linux operating system, commands, and shell programming tools.

* Scripting languages and automation techniques for testing and deployment.

* Writing or leveraging OpenAPI (Swagger) 2.0 documentation.

* Familiarity with OpenAPI 3.x.

* Understanding of the API lifecycle and awareness of common existing tooling across that ecosystem (visit tools/ in a web browser).

* Understanding of DevOps principles and commonly used tooling in DevOps pipelines.



The nice to haves

* Master's degree or higher in Computer Science, Statistics, or related field.

* Experience integrating / interacting with SAS programmatically through one or more types of APIs found on https://developer.sas.com/home.html.

* Familiarity with usage and programming for cloud platforms such as AWS, Google Cloud, and Azure.

* Understanding of RESTful API principles.

* Experience with:

* Docker containers and/or Kubernetes.

* API Management Solutions (Apigee, API Connect).

* User and developer experience design (UX and DX).

* Developing and/or consuming HTTP APIs, especially REST.

* Contributing to open source projects.

* HTML, CSS, JavaScript to build, support, and maintain internal dashboards.



Other knowledge, skills, and abilities

* Professional software development experience.

* Familiarity with Agile methodologies.



Why SAS

* We love living the #SASlife and believe that happy, healthy people have a passion for life, and bring that energy to work. No matter what your specialty or where you are in the world, your unique contributions will make a difference.

* Our multi-dimensional culture blends our different backgrounds, experiences, and perspectives. Here, it isn't about fitting into our culture, it's about adding to it - and we can't wait to see what you'll bring.

#LI-TP1



SAS looks not only for the right skills, but also a fit to our core values. We seek colleagues who will contribute to the unique values that make SAS such a great place to work. We look for the total candidate: technical skills, values fit, relationship skills, problem solvers, good communicators and, of course, innovators. Candidates must be ready to make an impact.



Additional Information:

To qualify, applicants must be legally authorized to work in the United States, and should not require, now or in the future, sponsorship for employment visa status. SAS is an equal opportunity employer. All qualified applicants are considered for employment without regard to race, color, religion, gender, sexual orientation, gender identity, age, national origin, disability status, protected veteran status or any other characteristic protected by law. Read more: Equal Employment Opportunity is the Law. Also view the supplement EEO is the Law, and the notice Pay Transparency



Equivalent combination of education, training and experience may be considered in place of the above qualifications. The level of this position will be determined based on the applicant's education, skills and experience. Resumes may be considered in the order they are received. SAS employees performing certain job functions may require access to technology or software subject to export or import regulations. To comply with these regulations, SAS may obtain nationality or citizenship information from applicants for employment. SAS collects this information solely for trade law compliance purposes and does not use it to discriminate unfairly in the hiring process.



Want to stay up to date with life at SAS, products and jobs? Follow us on LinkedIn
          

Senior Software Engineer

 Cache   
DESCRIPTION

At Workiva we create best-in-class, next-generation collaborative solutions for enterprise productivity. We pride ourselves on bringing the consumer level user experience to business users. We love our customers, and they love us back. We hire smart, talented people with a wide range of skills who are hungry to tackle some of today's most challenging problems. Workiva's core product, Wdesk, is being used by thousands of companies globally, including 70% of the 500 largest U.S. corporations by total revenue. We boast a 96% customer satisfaction rating.

We are a full-stack team with deep expertise in web, mobile, and cloud-based distributed systems. Our technology stack primarily consists of Python, Java, and Go on the backend and Javascript, HTML, Dart, and CSS on the front-end. We encourage all our engineers to explore new skills, experiences, and tools and provide opportunities to apply these things in our overall strategy.

We believe great systems are the result of elegant design, simple solutions, and superb collaboration with some of the best teams in the industry. We believe in small, empowered teams. We promote openness through open source contributions (github.com/workiva). We are committed to consistently pushing boundaries to create powerful, innovative solutions to real-world problems.

As a Senior Software Engineer you can enjoy the perks of a fast-paced, high-tech organization in Bozeman, MT. Our agile approach allows for a flexible environment with an integrated work life. Engineers love solving complex problems with autonomy and authority and are encouraged to stay on top of new technologies. Innovation is the key to our success; we do not get stuck in a rut! As a customer-driven organization, we have seamless daily and weekly releases. You can see your code in production in short order.

WHAT YOU'LL DO:

* Write cutting edge code

* Work on any part of the stack - from very rich, highly complex HTML5 applications to highly scalable distributed systems

* Deploy quickly to production

* Work on an agile development team

* Work with other engineers, designers, and test engineers to bring prototypes to life

* Mentor, coach, and help develop junior engineers

WHAT YOU'LL NEED:

* BS CS/EE/CE, or equivalent job experience

* A passion for coding and building complex web applications

* A passion and excitement for mentoring and developing junior engineers

* Proficient in numerous front end and back end languages; expertise as a full stack engineer

* Fluent with the latest web technologies (Javascript/React/HTML5/Java)

* Experience with AWS or Google AppEngine technologies

* Experience with XML, JSON, or other serialization formats

* Experience with OO design patterns

* Excellent problem solving skills, great attention to details

* Strong communication skills, both verbal and written

* Ability to learn new technologies quickly and understand a wide variety of technical challenges to be solved

* Ability to scale solutions



BONUS POINTS:

* Experience in Dart, Go, or Python

* Familiarity writing code that works across all popular platforms and browsers

* Experience with Docker or other container systems

* Experience integrating with Lucene or other search engines

* Experience working with financial data or XBRL

Individuals seeking employment are considered without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, status as a protected veteran, or disability.

Workiva is an equal opportunity employer. It is our strong belief that equal opportunity for all employees is central to the continuing success of our organization. We will not discriminate against an employee or applicant for employment because of race, religion, sex, national origin, ethnicity, age, physical disabilities, political affiliation, sexual orientation, color, marital status, veteran status, medical condition or other protected status in hiring, promoting, demoting, training, benefits, transfers, terminations, recommendations, rates of pay or other forms of compensation. Opportunity is provided to all employees based on qualifications meeting job requirements.
          

Senior Principal CDN Software Engineer, VIPER

 Cache   
Comcast's Technology, Product, Xperience organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards. We are looking for engineers to join our Content Delivery team. Do you love to build massive, distributed, and amazing systems? Are you passionate about open source software, and building systems using it? Do you like to add immediate, tangible business value? As an engineer on the CDN team, you will help build the infrastructure and develop software to support the systems that deliver IP content for a wide range of mobile and first-screen television devices. As part of the larger Comcast engineering teams, you will help shape the next generation of IP content delivery and transform the customer experience. Using the tenets of DevOps, you will have the opportunity to own the entire stack, from architecture to production. Who does the CDN engineer work with? Apache Traffic Control is the open source CDN control plane whose development we lead, and this project represents our primary focus. Want to learn more? Visit ***********************************. There you will find the documentation, source code, and every open bug. We're a small but growing team, delivering state-of-the-art software solutions at the leading edge of CDN technology. What are some interesting problems you'll be working on?
We deliver petabytes of traffic and tens of billions of transactions every day. Our software and infrastructure must reliably deliver an excellent customer experience, automatically and seamlessly converge around network and system events, and provide the necessary telemetry and instrumentation for operational, planning and engineering use.

Where can you make an impact? Thinking out of the box and considering the customer experience are key to our success. We never want to impact service unless it is in a positive manner. We need additional team members to follow their passion for engineering thought leadership, coding and contributing to the delivery code at the heart of many organizations!

Responsibilities:
- Provide technical leadership in a fast-paced environment
- Participate in and contribute to our architectural advancement
- Interact with the Open Source community, with a focus on Apache Traffic Control
- Create design and engineering documentation
- Keep current with emerging technologies in the CDN and surrounding knowledge spaces
- Help ensure the system can scale in any dimension quickly and safely
- Develop and improve automated validation environments
- Improve system reliability and stability
- Drive to ensure all changes made are positive to the customer experience
- Collaborate with project stakeholders to identify product and technical requirements
- Conduct analysis to determine integration needs
- Diagnose performance issues and propose and implement code improvements
- Work with the Quality Assurance team to determine if applications fit specification and technical requirements
- Other duties and responsibilities as assigned
Here are some of the specific technologies we use on the CDN team: Linux (CentOS), Git, HTTP(S) including HTTP caching, SQL, DNS, TCP/UDP, BGP, IPv6, adaptive bitrate video protocols, MPEG; database technologies: PostgreSQL, InfluxDB, ClickHouse.

Preferred skills:

  • Experienced technical leader; 4+ years of technical leadership (leadership, not management)
  • Good communicator; able to analyze and clearly articulate complex issues and technologies understandably and engagingly
  • Great design and problem-solving skills, with a strong bias for architecting at scale
  • Strong troubleshooting skills; adaptable, proactive and willing to take ownership
  • Able to work in a fast-paced environment
  • Strong familiarity with industry standards, specifications, standards bodies and working groups
  • Advanced networking knowledge (protocols, routing, switching, hardware, optics, etc.)
  • Advanced knowledge of current, state-of-the-art hardware systems (storage, CPU, memory, network)
  • Advanced knowledge of software development, including the software development lifecycle
  • Working-to-advanced knowledge of database technologies such as RDBMS, time-series and column-oriented databases
  • Deep knowledge of GNU/Linux, including kernel tuning and customization

About the CDN team: CDN is a passionate, high-paced team within Comcast's Technology and Product division, based in Denver's LoDo district. Our technology is open source based, and our products deliver video and other content over IP infrastructure to an array of connected devices in and out of the home.

About VIPER: VIPER (Video IP Engineering & Research) is a division within Comcast's Core Platform Technologies team, spun out from IP video and online projects that originated within Comcast Interactive Media, and is based in downtown Denver, CO. It is a cloud-based IP video infrastructure built to deliver a broad mix of on-demand video, live TV streams and an assortment of other digital media to an array of connected devices in the home.
Job specifications:

  • Bachelor's degree or equivalent in Engineering or Computer Science
  • Generally requires 15+ years of related experience

Comcast is an EOE/Veterans/Disabled/LGBT employer.

April press review for week 44 of 2019


This Internet press review is part of the monitoring work carried out by April as part of its action to defend and promote free software. The positions set out in the articles are those of their authors and do not necessarily reflect those of April.

[Le Courrier] Des chatons pour un internet éthique à échelle humaine

✍ Susana Jourdan, Jacques Mirenowicz.

[L'Informaticien] L’Open Source entre dans la normalité




Libre à vous ! Radio Cause Commune - Transcript of the programme of 29 October 2019



Title: Libre à vous ! programme broadcast Tuesday 29 October 2019 on radio Cause Commune
Speakers: Marie-Odile Morandi - Jean-Baptiste Kempf - Jean-Christophe Becquet - Frédéric Couchet - Étienne Gonnu on sound production
Location: Radio Cause Commune
Date: 29 October 2019
Duration: 1 h 30 min
Listen to or download the podcast
Page of useful references for this programme
Transcript licence: Verbatim
Illustration: Libre à vous radio banner - Antoine Bardelli; licence CC BY-SA 2.0 FR or later; Licence Art Libre 1.3 or later and GNU Free Documentation License 1.3 or later. Radio Cause Commune logo, with the agreement of Olivier Grieco.
NB: transcript produced by us, faithful to the speakers' words while keeping the discourse fluid.
The positions expressed are those of the speakers and do not necessarily reflect those of April, which can in no way be held responsible for them.

Transcript

Voice-over: Libre à vous !, the programme to understand and act with April, the association for the promotion and defence of free software.

Frédéric Couchet: Hello everyone. You are listening to radio Cause Commune, 93.1 FM in Île-de-France and everywhere in the world on causecommune.fm. The station also offers a Cause Commune app for mobile phones.
Thank you for being with us today.
The station also has a web chat: open your web browser, go to the station's website, causecommune.fm, click on "chat" and join us in the room dedicated to the programme.
Today is Tuesday 29 October 2019. We are broadcasting live, but you may be listening to a rerun or a podcast.

Welcome to this new edition of Libre à vous !, the programme to understand and act with April, the association for the promotion and defence of free software. I am Frédéric Couchet, April's executive director.

Today is a special episode, because it is the 42nd edition of Libre à vous !, and 42 is a totemic number in geek, computing and speculative-fiction culture. It comes from the science-fiction work of Douglas Adams, originally a completely madcap radio serial on the BBC and then a series of books, The Hitchhiker's Guide to the Galaxy. Imagine an intelligent alien people who build the most powerful computer of all time to find the answer to the question of life, the universe and everything. After 7.5 million years of computation and reflection, the computer delivers its answer: 42. The problem is that nobody ever really knew what the precise question was.
To find out the rest of the story, I invite you to read Douglas Adams's work, and we will make a few nods to it during the programme.
And for those of you heading off on holiday, don't forget to take some Cause Commune podcasts along for the journey, and above all don't forget your towel: it is the indispensable item that every galactic hitchhiker must carry at all times.

April's website is april.org, and there you will already find a page devoted to this programme with all the useful references, details of the musical breaks and ways to contact us.
If you would like to react or ask a question during the live broadcast, feel free to connect to the station's web chat at causecommune.fm, and you can also call us on 09 50 39 67 59; I repeat, 09 50 39 67 59.

We wish you an excellent listen.

Here is today's running order.
In a few seconds we will begin with the column by Marie-Odile Morandi, who leads the Transcriptions group and will talk to us about digital commons.
In ten to fifteen minutes we will turn to our main topic, the famous free media player VLC, with our guest Jean-Baptiste Kempf.
At the end of the programme we will have the column by Jean-Christophe Becquet, April's president, on Wikidata, linking all the world's knowledge.
Handling production today, Étienne Gonnu. Hello Étienne.

Étienne Gonnu: Hi Fred.

Frédéric Couchet: As in every episode, we have a little quiz for you. You can send your answers on the web chat or on social media.
First question: in the programme of 15 October 2019 we talked about Google and connected personal assistants. What first name did we suggest for renaming connected personal assistants, and why?
Second question: we will be talking about the free media player VLC during this programme. Do you know why the VLC icon is a traffic cone?

Straight on to our first topic.

[Musical transition]

Marie-Odile Morandi's column « The transcriptions that rekindle the love of reading », on digital commons

Frédéric Couchet: The picks, indeed the favourites, of Marie-Odile Morandi, who highlights two or three transcriptions she recommends reading: this is the column « The transcriptions that rekindle the love of reading » by Marie-Odile Morandi, who leads the Transcriptions group. Hello Marie-Odile.

Marie-Odile Morandi: Hello.

Frédéric Couchet: The topic you want to talk to us about today: digital commons. We're listening.

Marie-Odile Morandi: Indeed, for this month's column I wanted to look back over the transcriptions of Lionel Maurel's talks and contributions published by our group, from « La dictature du copyright » to bringing digital commons back down to earth.
The transcriptions I will refer to are listed under the references tab of the page for today's programme on april.org, but there are others too, which you can find in the transcriptions section, also on April's website.

To learn who Lionel Maurel is and what his favourite subjects are, I will draw on the transcript of episode 13 of Le Vinvinteur, dating from 2013 and about forty minutes long; note that this show no longer exists. Lionel Maurel was interviewed there by Jean-Marc Manach. He explains that the pseudonym he chose, Calimaq, refers to a certain Callimachus of Cyrene, one of the first librarians of the Library of Alexandria in antiquity. Lionel is in fact both a librarian and a jurist, hence also the name of his blog: lex, the law, and SI, information science, giving S.I.Lex. Wearing both hats, Lionel takes an interest in the legal issues around copyright and free licences which, he says, "turn copyright upside down while leaving the author at the centre of the system".
In that interview he explains what the commons are, with the necessary realisation that there is a digital ecosystem in which net neutrality must be defended and certain essential freedoms preserved, with references to free software and collective intelligence.

I will let you read that transcript, with, as a bonus, Lionel Maurel's explanations of the weekly round-up he compiles, the Copyright Madness, that is, the excesses of intellectual property, trademark law and patent law, which are generally not short of spice.

On copyright, we had transcribed a talk Lionel Maurel gave at the university of Compiègne in 2016 entitled « Contenus numériques : droit d'auteur et licences libres », which lasts an hour and forty minutes. This talk is a complete course, and I invite everyone interested in the subject, whether personally or professionally, to listen to it and reread its transcript. The topics covered range from the basics of copyright, how it works, how it is managed and its exceptions, to how this law applies on the Internet, ending with the Creative Commons licences, to which a large part is devoted. It is a very complete whole that truly deserves to be reread regularly.

Still on free licences, Lionel Maurel gave a talk of about ten minutes at the 2017 Paris Open Source Summit entitled « Creative Commons. Où en est-on en 2017 ? ».
He recalls the origin of these licences, that is, how Lawrence Lessig in the United States, after his defeat as a lawyer trying to prevent the extension of copyright from 50 to 70 years after the author's death, wanted to "give creators themselves the power to change things and open up their works directly at the grassroots, using their copyright not to impose restrictions but to grant permissions". He points out that some of these licences are not free in the classic sense of the terms of the free software licences that inspired them. Let me mention that the music played during the Libre à vous ! programmes is genuinely free, that is, Attribution and Share-Alike when published under a Creative Commons licence.
I will let you read the conclusions of this talk, optimistic on the whole, which bodes well, with its presentation of fine successes of works placed under these Creative Commons licences.

Lionel Maurel is also interested in hardware, and he gave a talk of about an hour at the 2016 Pas Sage En Seine festival entitled « Que manque-t-il pour avoir des licences Open Hardware qui fonctionnent ».
Having truly free hardware is a major challenge, but a difficult one, because we enter the field of industrial property, which involves other rights: designs and models, trademarks, patents. Copyright and industrial property do not work the same way at all; the rules are different: to obtain an industrial property right you notably have to make a filing.
This movement is currently growing. A foundation has been set up, gives instructions on what to do to follow an open source hardware approach, and proposes a definition: "designs made publicly available in such a way that anyone can study, modify, distribute, make and sell a design or a product based on that design", which closely resembles the definition of free software.

Lionel Maurel reckons there are three possible strategies for freeing hardware:

  • the first would be to publish the documentation of what you have produced and put the invention directly into the public domain. Except that in the United States there are patent trolls who could take advantage of it. These are the companies that manufacture nothing, file as many patents as possible and live off the threat of the lawsuits they can bring;
  • second practice: document the project: explain the approach and the manufacturing process, publish the plans and the design files, prepare as much documentation as possible and publish it all under a free licence. Except that the only thing copyright can protect is the text of the documentation, and absolutely not the object made from that documentation;
  • the last strategy is to say: since a patent is needed, well then, let's file patents and open them up afterwards. Except that you would have to go through the filing procedure and bear the costs, and for a small manufacturer or a small inventor that is certainly not feasible.

Lionel Maurel proposes solutions that I will let the listeners who read the transcript discover. For him, there is a field here for research. He calls on interested people to take part because, he says, it is somewhat underestimated by the free software world, which should be much more present on the subject.

The most recent talk to have been transcribed is his contribution to the colloquium « Territoires solidaires en commun : controverses à l'horizon du translocalisme » of June 2019, which lasts about an hour.

Here again he offers something very complete and very well documented, with references to many authors, which will allow those who wish to do so to deepen their knowledge.

Usually, in our minds, we distinguish material, tangible commons from knowledge commons, informational commons, which are immaterial, intangible commons.

Charlotte Hess, who worked with Elinor Ostrom, asks the question: "What is the Internet? It is the machine in front of me. There is a wire. The wire goes to a server. The server goes to other wires. Other computers are connected to that server, which is connected to an information system. That network is connected by cables to the network of networks that is the Internet." She thus gives a description that has absolutely nothing immaterial about it; the Internet is inseparable from a certain number of objects: computers, cables, servers. So, she tells us, "we can think of the Internet as a local and global common", showing that knowledge commons have a material dimension.
On the Internet, everything you exchange leaves a trace somewhere, and that trace is not virtual at all: it is material, because it is inscribed in a physical infrastructure. Our data are not stored in a cloud at all; they are stored in data centres, those immense and extremely material warehouses; hence the famous phrase "the cloud is always someone else's computer."
So presenting the Internet to us as something immaterial is profoundly false. The idea that digital technology would allow us to produce things with less matter is false as well. We regularly read that the electricity consumption of the Internet is worrying, to which must be added the production costs of the machines, not forgetting the waste at the end of the line, which is hard to recycle. Hence the problems for the environment. We are back to topical issues.

According to one of the authors cited, real emancipation "will mean reappropriating this entire digital supply chain, today wholly privatised and alienated". We need to build data storage centres that are self-managed and controlled by ourselves.

Lionel Maurel then reminds us of the existence of non-profit Internet service providers, that is, the associations that say: "Access to the Internet is a fundamental right, so we will lay cables and manage the physical layer of the network ourselves."
Lionel Maurel reminds us of the projects of the Framasoft association and the excellent idea of the CHATONS collective of hosting providers. Our data will sit at a local level, on the servers of a company or an association close to home, one that has signed a charter including, notably, the clause of not using our personal data, hence respect for privacy.
The Internet thus becomes "translocal" again, the theme of that conference.

Many thinkers are currently questioning the materiality of the Internet and its ecological cost, which we have no doubt neglected.
This last talk particularly interested me, with, it seems to me, an evolution in his thinking, and I wanted to share it.
Transcribing the talks of Lionel Maurel, a long-standing defender of free software, is always a pleasure. Don't hesitate to join our Transcriptions group, you won't regret it!

Frédéric Couchet: Thank you, Marie-Odile. You have made us want to read those talks by Lionel Maurel.
Let me add that the CHATONS collective is the Collectif des Hébergeurs Alternatifs Transparents Ouverts Neutres et Solidaires, the collective of alternative, transparent, open, neutral and solidarity-based hosting providers, which we already discussed in the Libre à vous ! programmes of 18 June and 16 April 2019. You will find the podcasts on april.org and causecommune.fm. I also note that you mentioned patent trolls; we will come back to them briefly at the end of the programme, because they are in the news.
Marie-Odile, thank you, and I wish you a lovely day.

Marie-Odile Morandi: The same to you. Good evening.

[Musical transition]

Frédéric Couchet: Time for a musical break. We are going to listen to La fin de Saint Valéry by Ehma. See you right afterwards. Have a lovely day listening to Cause Commune.

Musical break: La fin de Saint Valéry by Ehma.

Frédéric Couchet: We have just listened to La fin de Saint Valéry by Ehma, available under the Licence Art Libre. You will find the references on April's website, april.org, and on the Cause Commune website, causecommune.fm.

Don't panic: you are still with April for the Libre à vous ! programme on radio Cause Commune, 93.1 FM in Île-de-France and everywhere else on causecommune.fm.

We now move on to our main topic.

[Musical transition]

The free media player VLC, with Jean-Baptiste Kempf, president of VideoLAN and founder of the company Videolabs

Frédéric Couchet: Let us continue with our main topic, which today is the famous free media player VLC, whose icon is a traffic cone; we will soon learn the reasons for that choice. Our guest is Jean-Baptiste Kempf, president of VideoLAN, the association that runs VLC, and founder of the company Videolabs, which builds services around VLC and, more generally, innovations around video. Hello Jean-Baptiste.

Jean-Baptiste Kempf: Hello.

Frédéric Couchet: We already had Jean-Baptiste on the programme in October 2018, to talk to us about DRM, those famous digital handcuffs, which we will come back to very briefly during the programme; you can of course listen to the podcast. First, a small opening question by way of personal introduction. Jean-Baptiste, where do you come from? Who are you? What is your background?

Jean-Baptiste Kempf: My name is Jean-Baptiste. I am a geek, I am 36, I am a Parisian, I have lived most of my life in Paris. I have been working on VLC for quite a while now, about 13 or 14 years, and it has taken up more and more of my life, to the point of becoming my main job.

Frédéric Couchet: What is a geek? You used that word at the start.

Jean-Baptiste Kempf: Yes. Someone who loves coding and being on their computer. I have been into free software ever since I took up computing, while I was at school.

Frédéric Couchet: So someone passionate, notably about computing.

Jean-Baptiste Kempf: Mainly.

Frédéric Couchet: Mainly.

Jean-Baptiste Kempf: But also about good fantasy books, such as The Hitchhiker's Guide to the Galaxy.

[Laughter]

Frédéric Couchet: On top of that, it is a great honour to have you here, because on 15 November 2018 you were made a chevalier of the Ordre national du Mérite, one of France's most important decorations. That seems to make you sigh, but at the same time it rewards a decade of contributions, both in your company and in the free software community.

Jean-Baptiste Kempf: This will make you laugh, because I am hopeless: I still haven't collected the decoration, because you have to organise a ceremony and have someone present it to you, and I must admit that this was not really among my priorities, particularly my personal ones, this year. I absolutely must see to it, because otherwise I will never be allowed to wear it. It's great; clearly it's great, because it shows in particular that there are people in the State who are beginning to understand what free software is and why it matters for the State and for France. That really is very cool. It was Mounir, at the time, who nominated me.

Frédéric Couchet: Mahjoubi, who was the minister for digital affairs [Secretary of State for Digital Affairs].

Jean-Baptiste Kempf: Now it is Cédric O, I believe, who has replaced him. So it's very cool. On the other hand, what I don't like is that it is a personal decoration for a project that is a collective one. Sure, I am the person who has spent the most time on VLC and on other projects around VideoLAN, but I am still a little uncomfortable with it.

Frédéric Couchet: It's the celebrity-making side that you don't like.

Jean-Baptiste Kempf: Yeah. There is far too much star-making in everything tech, everything startup. When it comes to startups, we talk about, and see, photos of the founders more often than their products. It bothers me a little; it is not very serious, but it bothers me a little.

Frédéric Couchet: OK. Before I forget, let me point out that if listeners want to call in, in particular to ask Jean-Baptiste a question, you can call 09 50 39 67 59, and Étienne Gonnu, in the control room, is waiting for your calls.
A first small question. A great many people use VLC, often without knowing it is free software, to play videos. But you, when you present what you do, say at a party, how do you present VLC, in one or two sentences? A short summary.

Jean-Baptiste Kempf: It depends who is in front of me, on the audience, and on whether I feel like trolling or not. In general, what I say is that it is a media player capable of playing every audio and video file format, and that it works everywhere. That is the hook, and then, above all, I say that it is free software, developed by a community, for the common good.

Frédéric Couchet: I see. That is interesting, because one of VLC's strengths is indeed that it plays just about every file format, and we will come back to that in the more technical part, the presentation of its features. You say a community develops it; on that very point, let us talk about the history of the project. How was it born? Because it is a very old project, going back many years. Can you tell us how the project was born at the École centrale de Paris, if I have followed correctly?

Jean-Baptiste Kempf: Actually, the funny thing about VLC is that there was no creator of VLC, and above all nobody set out to make VLC. Often, when I tell people that, it disappoints them a little: nobody said to themselves "I am going to make a new video player, it will be better than the rest". In fact, it is a succession of projects that begins a very long time ago, and one part of the project became VLC. Let me explain a little, because otherwise it is a bit fuzzy.

Frédéric Couchet: With dates.

Jean-Baptiste Kempf: The original project goes back to the fact that in the 1960s the École centrale Paris moved from the Gare de Lyon to Châtenay-Malabry, south of Paris, for somewhat odd reasons, notably because the Ministry of Education did not have the money for it. We ended up with a major French engineering school on a campus managed by former students, so a private one. And everything in the organisation of the campus was run by students: the telephone, the television, the radio, the cafeteria and the computer network. In the 1980s they installed a computer network, based on Token Ring, so a very slow network. Around the mid-1990s they wanted a faster network, and they went to see the school to say "we need a new, faster network", in particular for gaming; there is no point lying about it.

Frédéric Couchet: For network gaming, at first.

Jean-Baptiste Kempf: For network gaming, and the school told them: "Listen, be nice, you are going to use it for gaming and not at all for work", and above all the school's main argument was "you understand, the campus is private, there is nothing we can do about it"; they said: "Go and see the partners". That is when the project, which at that point was called Network 2000 (we are in 1995; at the time, of course, every project was called 2000, otherwise it was not a real project), went to see partners. They went to see TF1 in particular, which said: "The future of video is satellite (today it is easy to laugh, but in 1995 it mattered), and for 1,500 students, if you have to install 1,500 decoders and 1,500 dishes it will cost a fortune. What we propose is to install just one very fast digital network (these are the early days of digital video), we put up one big dish and we broadcast the video over the whole ultra-fast network." Of course, in 1994-95 the most powerful computers were 486DX-33s, DX-66s or Pentium 60s; without big machines, it was absolutely impossible to decode video at what was then DVD size without dedicated hardware. But they did it anyway, and that is how they justified the purchase of a new network for this project within the student association that managed the computer network. At that point there is no VLC at all.
In the end there is a demo that works but crashes after 50 seconds; so you do a 42-second demo and everything is fine. It was cross-platform: it worked, roughly, on BeOS and Linux and nothing else, but it showed that it was possible. For a year nothing more happens. In 1998 some students say: "This project of broadcasting video over a network is fun; maybe other campuses or corporate networks would be interested." So they set up a new project which, at that point, is called VideoLAN, LAN meaning local area network in English. So they launch the VideoLAN project. It is 1998, and their goals are to go open source and to be cross-platform. But within VideoLAN there was a server part, a network part, another somewhat complicated piece, and there was a client part. The client part was not necessarily the focus, because it was not necessarily the most complicated place. The client part is called the VideoLAN client.

Frédéric Couchet: Hence VLC.

Jean-Baptiste Kempf: At that point everyone calls it the VideoLAN client. It will only be called VLC three or four years later, at the moment when, in 2001, after a long, drawn-out battle, the school authorises the change of licence, so that it moves from a proprietary licence to an open source, free licence.

Frédéric Couchet: A free licence, namely the GNU GPL, the General Public License.

Jean-Baptiste Kempf: Exactly. They do not specify the version; they say GNU General Public License, and they do not specify VLC, they specify "for all the software of the VideoLAN project". So VLC is a small part of the VideoLAN project, a project whose goal was to be free but which at the beginning was not, based on a project whose original purpose was to build a new network because the campus network of the time was slow. It is when it goes free that significant outside contributions arrive, which quickly get it ported to Windows and Mac OS, not at the students' initiative, and it starts to take off outside.
In fact, nobody said to themselves "wow, I am going to make a new player and port it everywhere". It really was students, several generations of students, because we are talking about 1994 to 2002 for the beginning of the explosion, and nobody ever said "I am going to create VLC!"

Frédéric Couchet: I see. The École centrale de Paris is an engineering school. In what year did you join it?

Jean-Baptiste Kempf: In 2003.

Frédéric Couchet: In 2003. I suppose, as you said in the introduction, you are a geek and you were there to learn, so the project appealed to you straight away. Did you contribute immediately?

Jean-Baptiste Kempf: It is worse than that. I chose the École centrale Paris because I knew it was a school with a computing association that did free software.

Frédéric Couchet: You chose the school for that! I see!

Jean-Baptiste Kempf: I had met someone on holiday; I had the choice between several grandes écoles and I went to Centrale because I knew that, one, there were not many classes and, two, there was an association that ran the network, under Linux. I knew nothing about it, that was clear at the time. So that was my choice; that is why I went to Centrale.

Frédéric Couchet: Funny, because that reminds me of my own story at Paris 8, but years earlier, as I am a bit older than you. So you arrive at Centrale in 2003. At the time there is no association carrying the project and, if I have followed correctly, it is you who will launch the idea of creating an association, which will be called VideoLAN.

Jean-Baptiste Kempf: In fact that comes much later, because at the time the people of the VIA network and the VideoLAN people were very interconnected. I become vice-president of the network association, and it is me, together with another developer called Rémi, who carry that association for a year, so we do things on VideoLAN. The first thing I do on VideoLAN is manage the internal television broadcast for the Centrale campus. That must be late 2003 or early 2004, when I start getting involved in the VideoLAN project, though not at all through the code, really through the infrastructure. Later, in 2005-2006, I do an internship, and I am bored stiff during that internship.

Frédéric Couchet: Was that internship in the United States?

Jean-Baptiste Kempf: Not at all. I was at the CEA [Commissariat à l'énergie atomique, the French atomic energy commission], in the military applications division. The internship was great, but I had far too much time. I really enjoyed myself on that internship; it just did not move fast enough for me. So I did two things: I wrote quite a lot of documentation and I started helping out on VLC.
In fact, we ended up with the problem that the project was too big for the school, too big for students, with too many users, and it was very difficult to do anything at all, especially because in 2006, 2007 and 2008 the new generation of students was really not interested in the project. It was then, at the end of 2007 and the beginning of 2008, that I floated the idea of separating from the school. I created the association at the VideoLAN Dev Days in December 2008, hosted at Free, and that is where we held a vote and decided to create an association. At the beginning of 2007 there were only two and a half active people left on the project. When I was on my internship, as you said, in the United States, I spent a lot of time tracking people down, old-timers and newcomers, to get everyone motivated again around the project, and it would take a few years for us to reach version 1.0 of VLC.
En fait, on s’est retrouvé un peu avec le problème que le projet était trop gros pour l’école, trop gros pour des étudiants, trop d’utilisateurs, et c’était très difficile de faire quoi que ce soit, surtout parce qu’en 2006/2007/2008 la nouvelle génération d’étudiants n’est vraiment pas intéressée par le projet. C’est à ce moment-là, fin 2007 et début 2008, que je lance l’idée de se séparer de l’école. Je crée l’association au VideoLAN Dev Days en décembre 2008, hébergée chez, Free et c’est là où on fait un vote, où on décide de créer une association. Début 2007 il n’y avait plus que deux personnes et demie actives sur le projet. Quand j’étais dans mon stage, comme tu l’as dit aux États-Unis, j’ai passé beaucoup de temps à retrouver des mondes, des anciens et des nouveaux, pour se remotiver autour de projet et ça va prendre quelques années pour qu’on arrive à la version 1.0 de VLC.

Frédéric Couchet: Right, we'll get there. A quick question: did the choice of the traffic cone as the icon date from that era?

Jean-Baptiste Kempf: When I arrived, the traffic cone was already there.

Frédéric Couchet: Do you know why the traffic cone was chosen?

Jean-Baptiste Kempf: Yes, of course I know!

Frédéric Couchet: Go ahead.

Jean-Baptiste Kempf: You have to know, and I apologize to the listeners, that the old-timers still argue over the real reason for the cone, but when I arrived at Centrale, that's for sure, we had halls of 24 students, and on the 2H floor, the network floor, there were about a hundred cones; there was a cone cabinet.

Frédéric Couchet: The cult of the cone!

Jean-Baptiste Kempf: A cult, with physical games like acrobatic cone, "coneball", battles, staged battles half lasers, half cones. There really was a cult around the cone, which was very funny, not at all unhealthy, don't worry, very funny and taken entirely tongue in cheek. Originally they needed to talk to a student who wouldn't open his door for them. After a probably somewhat boozy evening, they used a cone as a megaphone to call him and hail him from below his window. Rather than a little mandolin for singing a serenade, they grabbed a cone that was lying around. Those were people around the network, and then, in fact, came the first release under Linux with X11.

Frédéric Couchet: X11 being the graphical windowing environment, let's say.

Jean-Baptiste Kempf: Before that, the first version used the framebuffer, which is an even lower level. Then comes the first X11 version. Originally everyone was racing each other a bit on VLC, which is normal because there were always loads of things to do, it's great fun, so the one who gets the first X11 version working commits at four in the morning, even though it isn't finished, just because he's done the bulk of the work anyway; he pushes his version and, to show it isn't finished, he sets the little traffic cone as the icon to say it's under construction.
Then Sam Hocevar, one of the geniuses the project has had around it, draws the first icon, and it sticks. It wasn't planned; it's completely silly to use a traffic cone for a media player, but it's an absolutely brilliant marketing move because it's instantly recognizable. These days I travel all over the world, and when I talk about VLC people already know it far better than École centrale Paris or anything like that; above all, plenty of people go "I'm not sure", and you say: "Sure you do, the cone that plays videos", and then it's universal.

Frédéric Couchet: The traffic cone. It was an excellent idea, and let's salute Samuel Hocevar, who was also a Debian Project Leader and is a great film lover, notably of La Classe américaine, which we may talk about some day. In any case, look up Samuel Hocevar in a search engine; he's a genius.

Jean-Baptiste Kempf: And one of the first people to bring Wikipédia to France.

Frédéric Couchet: Exactly. He's also one of the founders of Wikimédia France.
A quick question from the station's web chat, which I remind you is at causecommune.fm, with a quick answer please. Marie-Odile asks: "Is this school still as nice, so we can recommend it to young people about to sit the entrance exams?" The ECP? Would you recommend going to ECP today?

Jean-Baptiste Kempf: Sorry, I have no idea. It's now called CentraleSupélec, it was merged with Supélec. I go there from time to time because I'm still on the board of the network association; I find the people are still just as cool, though I do find their campus less fun than ours was.

Frédéric Couchet: Right. So that answers that. We've understood that at the start quite a few students, men and women, contributed. We'll come back later to what contributing to VLC concretely looks like today, because people must imagine there are hundreds of people contributing every day. We'll also talk about funding, but in the second part. Now that we've been through the history, let's note that it's a free software project that has existed for a very long time and keeps developing. Today it's at version 3.0, right?

Jean-Baptiste Kempf: That's right.

Frédéric Couchet: 3.0. As you said earlier, one of VLC's great strengths in terms of features is that it bundles the codecs needed to play most audio and video formats, and that VLC can also play pretty much any network stream. So for many people, choosing VLC is also about quality and the ability to access nearly any content. Another characteristic is the ability to play slightly damaged streams and repair them on the fly, which is rather magical! Yet another advantage, and here I'd like you to explain how you do it, is the cross-platform side: free software is often available on Windows, Mac and GNU/Linux, but you go even further, with Android, iPhone, OS/2. It's built into some set-top boxes, and it would be interesting to come back to that later. How do you manage to be that cross-platform?

Jean-Baptiste Kempf: There are several reasons. The first is that VLC is extremely modular, unlike, say, another media player on Linux called MPlayer, which was there before us. The core of VLC is tiny, it must be a tenth or a twentieth of the code, and then we have lots of modules. The reason VLC moved to modules wasn't some grand vision of "we absolutely have to do this"; it was, sorry for the technical term, to shorten compile times back then. When we made a change, we modified just one module and compiled that; it was much faster than compiling everything.

Frédéric Couchet: Compiling means going from the source code to the version the computer can understand.

Jean-Baptiste Kempf: That's it. To put it simply, it was just easier to develop that way, but the goal wasn't to be more cross-platform; it was really Sam wanting to code faster, so to speed up his development he moved to modules. And that move to modules was actually a stroke of genius, perhaps not obvious at the time, because it's exactly what later made it possible to be on so many platforms: when you target a new platform you just write a new audio output, a new video output, a new interface and that's all; you don't have to touch anything else. And a second nice effect, which is great, is that it lets people joining the project start contributing without having to understand what goes on in the core. For almost two years after the first time I wrote code for VLC, I never did anything in VLC's core, because it's complicated; but that's fine, since it's all modules you just add a feature: you want a new format, you just add a module! And the same when you want to reach other platforms, the ones you mentioned, but we're also on Apple TV, on Android TV, and we have a version that works on the PS4 (it isn't public because, for software freedom reasons, we can't publish it).
In fact, what I'm saying is that VLC is one of the most widely ported pieces of software, at least in terms of interfaces. We're on more platforms than Chrome, more than Firefox, more than LibreOffice, and I'm not even talking about proprietary software like Office or Apple's.
You have to understand that, obviously, this takes a lot of time, but the core of VLC is maintained by five people. That's important. They're very good people, and I'm being polite: apart from me they're truly exceptional at code, genuinely world-class, extremely good, they know what they're doing, and that's what makes it possible to support so many platforms. On top of that, we're very conservative in our approach to the code. We write everything in C, with a little C++.

Frédéric Couchet: C is a programming language.

Jean-Baptiste Kempf: The C programming language, so really low-level, because it's a very limited but relatively simple language whose limits we know very well, and that's what lets VLC keep its quality. Another important thing about VLC, about its brand, is that normal people, meaning not the people who spend their days recompiling their VLC on Linux, trust the code. And that's essential. The second reason is that within VLC there are people like me who have been extremely annoying about product quality. I've pestered the other developers hundreds of times saying "no, that's not acceptable, that breaks this for the user". I've spent hours and hours on the forums, on Twitter and so on, listening to what our users wanted; it's extremely important, and it's not the most fun part. For me, having a product that works matters.

Frédéric Couchet: On the support side, precisely, a question: overall, does the team receive more encouragement and thanks than complaints, or, traditionally…?

Jean-Baptiste Kempf: No! We hear nothing but complaints, even insults or death threats.

Frédéric Couchet: That bad!

Jean-Baptiste Kempf: Yes. People have sent anonymous letters that reached me at my parents' place. There are nutcases everywhere! Compared to hundreds of millions of users, though, the complaints are negligible. Obviously, when you're on the receiving end, you only see the negative part, and it's true that from time to time someone tells you: "This is so good!" One guy sent me beer: on a reddit thread I must have mentioned that one of the beers I love is Kasteel Rouge, and someone sent a case of Kasteel Rouge to my parents' place, which I drank.

Frédéric Couchet: Did anyone invite you to the Restaurant at the End of the Universe?

Jean-Baptiste Kempf: No, nobody has invited me to the Restaurant at the End of the Universe yet, but I've been invited quite a few times to the Dernier Bar avant la fin du monde, either the one in Paris or in other places.

Frédéric Couchet: There are some in other places?

Jean-Baptiste Kempf: Yes. There are some in other places.

Frédéric Couchet: I see. OK. You were just talking about quality, notably the user experience; there's another subject that must surely stress you, which is security. Incidentally, I don't know how many machines VLC is installed on, if that's even estimable, but a security problem, whether through a bug or an injection of malicious code, must terrify you!

Jean-Baptiste Kempf: Clearly it's a real subject, and a very complicated one. First let me answer your first question, how many installed VLCs there are. We do no telemetry (I call it spying, some call it telemetry; it's spying even when Mozilla does it, and we don't spy), but it's true that we can know some things. We can know the number of downloads from our website, bearing in mind that there are obviously plenty of other download sites, like Download.com, Telecharger.fr and all the Linux distributions that redistribute without going through us, so we don't have that information. But already we can see we're at roughly 25 to 30 million downloads per month. Two thirds of those are actually updates, but the rest are not. The fact that there's a fair share of updates already tells us something.

Frédéric Couchet: Updates being new versions installed over the old ones.

Jean-Baptiste Kempf: Updates, yes. Beyond that, we get information from Microsoft on user numbers, notably through the crash reports.
In fact we have no reliable figures, but we have an estimate. In active users, defined as someone who uses VLC at least once in a month, on Windows we have 300 million active users.

Frédéric Couchet: Wow!

Jean-Baptiste Kempf: So you can reckon that in number of installations we must be at least double that. In installations!

Frédéric Couchet: On the free environments, GNU/Linux, FreeBSD and the others, there are no estimates.

Jean-Baptiste Kempf: There are. At one point I worked out estimates: roughly, you take the Windows figure and divide by ten, and that's the share we have on Mac OS; take exactly the same for Linux, so that makes 30 million. On desktop machines we think we have 350 million active users, so in installed copies that's maybe 600 or 700 million. Then there are the mobiles. On Android, for example, we've had 250 million downloads, accounts that downloaded it, and 60 million active users, and something similar on iOS. That gives you an order of magnitude.

Frédéric Couchet: That's a huge base.

Jean-Baptiste Kempf: A huge base.

Frédéric Couchet: So the security side must be stressful!

Jean-Baptiste Kempf: Especially because we write C; we're really low-level, we don't have a language that helps us, because in multimedia there's no choice, you have to be extremely fast. We go as close to the hardware as possible, so we have low-level access, really access to everything. For those who follow these things, inside VLC you're practically in kernel mode almost everywhere.

Frédéric Couchet: Meaning you're as close as possible to the hardware, so you can do almost anything.

Jean-Baptiste Kempf: And above all I have access to everything; I have access to all your files if you crash VLC, in principle. It's the same problem Chrome has, except Chrome has an approach to it, they have millions to improve on it. We saw, for example, the CIA use a fake version of VLC: while you were watching your film, a little plugin they had added, a little VLC module they had added, encrypted all the documents in your "My Documents" folder on Windows and sent them somewhere. It wasn't our build of VLC; it was a version picked up somewhere that they redistributed, and you don't notice a thing: you watch a film, it lasts two or three hours when it's Avengers: Endgame, so your PC is working, there's a bit of noise, nothing surprising.
That is a real problem, and then there are security flaws, as in all software, but people update a bit less than they do with Chrome; your browser updates itself all the time. For about three years now we've had a very proactive approach, notably analysing the code and things like that precisely to find bugs upstream. We had a bug bounty from the European Commission that paid hackers to try to find problems in VLC, and then we would go and fix them.

Frédéric Couchet: That's the FOSSA project [Free and Open Source Software Audit]?

Jean-Baptiste Kempf: Under the FOSSA project, yes.

Frédéric Couchet: The European Commission's FOSSA project.

Jean-Baptiste Kempf: And obviously it happened thanks to Julia Reda.

Frédéric Couchet: The former Pirate Party MEP.

Jean-Baptiste Kempf: Obviously she's the only one interested in that kind of thing. It was really great and it surfaces problems, but it doesn't fix the fundamental problem. To fix the fundamental problem we have an idea involving a sandboxing system; it's very complicated, and above all these are things that have never been done before.

Frédéric Couchet: Can you explain in one sentence what sandboxing is? Or after the music break if you prefer.

Jean-Baptiste Kempf: The idea of sandboxing, and I won't be able to put it more technically than this…

Frédéric Couchet: Less technically than this.

Jean-Baptiste Kempf: Yeah, sorry. It's that when VLC has a problem, it's inside its own little environment, so it has access to nothing on your machine, and therefore it doesn't matter.

Frédéric Couchet: A sandbox just for VLC.

Jean-Baptiste Kempf: That's it. And that's the theory. In practice we're going to have to put about ten sandboxes inside VLC, and that's very complicated.

Frédéric Couchet: We'll let people think that over while listening to a music break. We're going to hear Jack's Playing Ball by Jono Bacon. See you right after. Have a lovely day listening to Cause Commune.

Voice-over: Cause Commune 93.1.

Music break: Jack's Playing Ball by Jono Bacon.

Frédéric Couchet: We've just listened to Jack's Playing Ball by Jono Bacon, available under the free licence Creative Commons BY SA, that is, Share Alike. You'll find the references on the April website, april.org, and on the station's website, causecommune.fm.

You're listening to Libre à vous! on radio Cause Commune, 93.1 FM in Île-de-France and everywhere else on causecommune.fm. A reminder that you can call us live with a question on 09 50 39 67 59.

We're going to continue our discussion about VLC, the free media player, with Jean-Baptiste Kempf of the VideoLAN project and of the company Videolabs, which we'll discuss later.
Just before the break we were talking technology, sandboxing in particular, and during the music break Jean-Baptiste was telling me a little about the plans for what should be version 5; it sounds like quite a technical challenge.
Now let's talk a little about the legal issues. We'll go through them quickly, because each of them is complex in its own right. A first question, because you're known for having received offers, reportedly of several tens of millions of euros, in exchange for inserting advertising and malicious software into VLC, and you refused. Why?

Jean-Baptiste Kempf: That's entirely true. It has happened to me at least three times: people who wanted the VLC installer to also install an Avast or Avira antivirus, change your start page or install spyware. That's hostile to the user, so for me it's no way, whatever the amount.
Some people offered to buy the videolan.org domain name; they were already a bit cleverer, because it's a bit cleverer than trying to shove junk into VLC, but likewise it doesn't match what's good for my users or the philosophy I have around the project. I'm not against money as such, but money has to be made in a moral way.

Frédéric Couchet: Right. On the legal side, we'll quickly discuss two specific issues: DRM, the digital handcuffs, and then patents.
DRM, the digital handcuffs: we already talked about them with you and Marie Duponchelle in the October 2018 programme, and you'll find the podcast online of course. These digital handcuffs prevent a number of uses. A few years ago VLC petitioned HADOPI [the French High Authority for the Distribution of Works and the Protection of Rights on the Internet], because many people don't realise that HADOPI, beyond its well-known activity, is normally in charge of regulating what are called technical protection measures, what we call digital handcuffs. In particular you asked them about Blu-ray, the Blu-ray format: did VLC have the legal ability (not the technical ability, because technically you obviously knew how to do it) to read those Blu-rays. First question: why did you have to petition HADOPI, what was HADOPI's answer, and what is the situation today regarding Blu-ray playback in particular?

Jean-Baptiste Kempf: VLC has been able to read DVDs since 2001, which was before the LCEN and EUCD laws…

Frédéric Couchet: LCEN, the French law for confidence in the digital economy, and EUCD, the French transposition of the copyright directive.

Jean-Baptiste Kempf: Those laws came after. We got in before them. When we want to put Blu-ray playback into VLC, we're after them, and in particular a regulatory authority for technical protection measures had been created which had never done a thing. They hadn't even delivered the annual report they were supposed to deliver, so it was merged into HADOPI at the time of the HADOPI law. In theory it was up to them to help us, because there's a fundamental problem, which is interoperability versus technical protection measures. Roughly speaking, these are two concepts that are impossible to reconcile, and on top of that the law was extremely unclear, so we went and asked questions, since in theory they were the regulator. We understood nothing of the answer, in particular because they understood nothing of the question. They handled it with quite staggering bad faith. They never managed to understand; it took two years before we managed to get a question to the government through an MP, and that's when they started to move. Roughly speaking, they understood nothing of the question, they didn't even ask us about it. In fact they were in a completely political mode with Franck Riester.

Frédéric Couchet: Currently Minister of Culture, and previously rapporteur of the HADOPI bill.

Jean-Baptiste Kempf: In the end he realised that there actually was something to be done and that we weren't there just to annoy them, that we were asking a real question! And then there was HADOPI's secretary general, whose name I've forgotten.

Frédéric Couchet: Éric Walter.

Jean-Baptiste Kempf: Éric Walter, who tried to get things moving, but it was too late. I said publicly that they were utterly useless. I'll say it publicly again.

Frédéric Couchet: You're saying it publicly right now.

Jean-Baptiste Kempf: I can say it once more, it doesn't bother me. Jacques Toubon, who obviously doesn't remember me because he was my mayor when I lived in the 13th arrondissement of Paris, wrote in the press that I was, roughly speaking, a bad guy.

Frédéric Couchet: Jacques Toubon, who is also a former Minister of Culture and who, at the time, must have been a Member of the European Parliament, I think.

Jean-Baptiste Kempf: Maybe. Today he does rather good work as the Republic's civic ombudsman [the Defender of Rights]; I think what he does is pretty decent. He had annoyed me. Once I ran into him and told him he had understood nothing about the subject, and I believe he replied: "Possibly. I understood nothing!"

Frédéric Couchet: So HADOPI answered off the point, or didn't understand the subject. Today, legally, where do things stand?

Jean-Baptiste Kempf: I don't know. You…

          

Thoughts on Biodiversity Next

It’s been a while since I’ve posted on iPhylo. Since returning from a fun and productive time in Australia there have been a bunch of professional and personal things that have needed attending to. In amongst all this I attended Biodiversity Next in Leiden, a large (by biodiversity informatics standards) conference with the tag line "Building a global infrastructure for biodiversity data. Together." In this post I try to bring together a few random thoughts on the conference (the Twitter hashtag #biodiversitynext gives a much broader sense of the conference).

Spectacle


The venue for the keynotes was delightful, and guest speakers were ushered on stage with a rock-star soundtrack (which, frankly, grated a bit). Some of the keynotes were essentially TED talks, such as Theo Jansen on his wonderful Strandbeest and Jalila Essaidi on bulletproof skin and other biotechnology. Interesting, polished, hopeful.



Some keynotes were pitches, such as Paul Hebert’s BIOSCAN, where we divide the planet into a grid (of squares, really?) and sequence barcodes for everything within each grid. The theme was moving from “artisanal to industrial” scale. BIOSCAN has a rival, the Earth BioGenome Project (EBP) (see https://doi.org/10.1073/pnas.1720115115), which aims to sequence the whole genome of every eukaryote in 10 years (at a cost of $US 4.7 billion). BIOSCAN is rather cheaper, although Hebert sees it as the precursor to a larger initiative. But what makes BIOSCAN more appealing to me is that it includes an explicit geographical and ecological context. BIOSCAN is interested in what species occur where, and who they are interacting with (the “symbiome”). But not everyone is convinced that mega-genomics projects are a good idea, see for example Proposals to “sequence the DNA of all life on Earth” suffer from the same issues as “naming all the species” by @JeffOllerton.

Other keynotes that resonated with me were Maxwell Gomera’s, where he points out that for many people biodiversity is a risk (including an anecdote about people in Namibia seeing biodiversity as attracting the unwanted attention of outside interests, and hence something to be actively minimised), and Jorge Soberon’s, on just how much of biodiversity informatics is data-driven and theory-free. The presentation by Ana María Hernández Salgar on IPBES was perhaps the least exciting keynote, arguably because she’s tackling a probably intractable problem. We have some spectacular technology for documenting and understanding biodiversity, but no obvious way to change or significantly influence human impacts on that biodiversity.

Optics


The conference managed to score a pretty spectacular own goal by having an all-white, all-male panel (“manel”) for one session (moderated by Ely Wallis @elyw).



There was a pointed response to this later in the conference (again moderated by Ely).



Personally I felt that neither panel contributed much beyond platitudes. I don’t think panel discussions like these do much to explore ideas; they are much more about appearances and positions (which makes the manel even more unfortunate).

There were other comments that were tone deaf. One senior figure argued that “money wasn’t a problem”, the implication being that there’s lots of it around, we just have to figure out how to access it. Yet, one of the sessions I attended featured a young researcher from Brazil who had to crowdfund his attendance at the conference. Money (or rather its uneven distribution) is very much a problem.

Infrastructure


The conference had its own app, and it worked well. It certainly made it easier to plan the day, which sadly was mostly realising that the two topics you were most interested in hearing about were on at the same time. Big conferences have this fundamental problem that there are too many people and too many talks for anyone to see everything. This makes the event more a statement about the community being large enough to stage such an event than actually a place to learn what is going on. But I guess the combination of breaks between sessions, social events, and the pre-conference workshops means there are times and spaces where things can actually get done.

Substance


There was a lot going on at the conference, I am going to pick out just a few highlights for me. These are obviously very biased, and I missed a lot of the talks.

Cordra

The thing I was most interested to learn about was the technology underpinning DISSCO’s approach to putting specimen records online. Alex Hardisty (@AlexHardisty) gave a nice demo of DISSCO’s approach, which uses Cordra. From Cordra’s website:
Cordra is a highly configurable open source software offered to software developers for managing digital objects with resolvable identifiers at scale.
Cordra is from the Corporation for National Research Initiatives (CNRI), the people behind the Handle system which underpins DOIs. It's a NoSQL data store that can generate and manage persistent identifiers (e.g., Handles). I’ve not been following DISSCO closely, but this approach makes a lot of sense, and it will be interesting to see how it develops. Alex demoed a “digital specimen repository”, for example the record for specimen BMNH:2006.12.6.40-41 is here: http://nsidr.org/#objects/20.5000.1025/486a7e883f14f88bba37. Early days, but digital identifiers for specimens are going to be crucial to efforts to interlink biodiversity data.
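What appeals to me about this setup is that a digital specimen becomes just a resolvable object over HTTP. A minimal sketch of fetching one, assuming nsidr.org follows Cordra's usual `/objects/<id>` REST layout (that path, and the shape of the JSON returned, are my assumptions, not something I checked against Alex's demo):

```python
# Sketch: resolving a digital specimen from a Cordra-backed repository.
# Cordra instances typically expose stored digital objects via a REST
# endpoint of the form /objects/<handle-style identifier>.

import json
from urllib.request import urlopen

BASE = "https://nsidr.org"

def object_url(identifier: str) -> str:
    """Build the REST URL for a digital object from its Handle-style id."""
    return f"{BASE}/objects/{identifier}"

def fetch_specimen(identifier: str) -> dict:
    """Fetch and parse the JSON record for a digital specimen (needs network)."""
    with urlopen(object_url(identifier)) as response:
        return json.load(response)

# The BMNH specimen mentioned above would, under these assumptions,
# resolve via fetch_specimen("20.5000.1025/486a7e883f14f88bba37")
print(object_url("20.5000.1025/486a7e883f14f88bba37"))
```

The nice design property is that the identifier itself carries no server name; the Handle prefix (20.5000.1025) can be re-pointed if the repository moves, which is the whole argument for persistent identifiers over plain URLs.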

Knowledge graphs

I did my best to spread the knowledge graph meme, and Wikidata is attracting growing interest. Unfortunately I couldn’t see Franck Michel’s (@franck_michel2) talk on Bioschemas, but the idea of having light-weight markup for life science data is very attractive. It seems that long-standing dreams of linking things together are starting to slowly take shape.
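To give a flavour of what that light-weight markup might look like, here is a hypothetical Bioschemas-style record built in Python. The property names (`taxonRank`, `parentTaxon`) follow the schema.org Taxon type as I understand it, and the example taxon and values are made up, not taken from Franck's talk:

```python
import json

# Hypothetical JSON-LD snippet of the kind Bioschemas encourages sites
# to embed in web pages, so that crawlers can harvest structured data.
taxon = {
    "@context": "https://schema.org/",
    "@type": "Taxon",
    "name": "Begonia",
    "taxonRank": "genus",
    "parentTaxon": {"@type": "Taxon", "name": "Begoniaceae"},
}

# Serialised, this is what would sit inside a <script type="application/ld+json"> tag.
markup = json.dumps(taxon, indent=2)
print(markup)
```

The attraction over full-blown ontologies is exactly this: a handful of agreed properties, embedded where the data already lives, is enough for a crawler to start building a knowledge graph.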


Traits

This is an area that I have not thought much about. The Encyclopaedia of Life tried to carve out a niche in this area (TraitBank) but their latest iteration abandons the JSON-LD they developed in version 2.0, which seems a strategic blunder given the growth of interest in knowledge graphs, Bioschemas, and Wikidata. It seems that people working on traits are in a sort of pre-GBIF phase looking for ways to integrate diverse data into one or more places where people can play with it. There’s a lot of excitement here, but lots of data wrangling issues to deal with.

Credit and identity

The hashtag #citetheDOI became something of a rallying cry for those interested in GBIF data. Citing data downloaded from GBIF enables GBIF to pass information on usage along to data providers. Yet another example of the most compelling use case for identifiers not being scientific but cultural.

“Get yourself an ORCID” was another rallying cry. The challenge here is that the most obvious beneficiary of you getting an ORCID is not (yet) you, which makes the sales pitch a bit tricky.


People

It may be partly an age thing, but an increasingly important aspect of conferences like this is the chance to catch up with people you know, as well as develop new contacts and (hopefully) have your preconceptions challenged by people smarter than yourself. I spent quite a bit of time with the BHL crowd, which meant teasing them about their obsession with old books, which did not end well.

It was also fun to see Roger Hyam (@RogerHyam) in action again. Roger has a knack for cutting through the noise to make tools that are useful. He gave a nice demo of using the International Image Interoperability Framework (IIIF) to display herbarium images. Under the hood IIIF is JSON-LD and models everything as an annotation, so I think this framework is going to see a lot more use across a range of biodiversity projects. It certainly inspired me to add IIIF to my newly relaunched BioStor.
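Part of what makes IIIF so reusable is that every image request is just a templated URL, so any client can ask for crops and thumbnails on the fly. A sketch, using a hypothetical server base URL (the path template itself comes from the IIIF Image API):

```python
# Sketch: composing IIIF Image API request URLs. The template is
# {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format};
# "example.org" and "sheet-001" below are made-up placeholders.

def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Compose a IIIF Image API request URL from its path segments."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Full image, and a 512px-wide thumbnail of the same herbarium sheet:
full = iiif_image_url("https://example.org/iiif", "sheet-001")
thumb = iiif_image_url("https://example.org/iiif", "sheet-001", size="512,")
print(full)   # https://example.org/iiif/sheet-001/full/max/0/default.jpg
print(thumb)  # https://example.org/iiif/sheet-001/full/512,/0/default.jpg
```

The same trick of addressing regions by URL is what lets annotations point at a specific part of an image, which is presumably why IIIF models everything as an annotation.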


Agency


It's more a kind of pragmatic Archimedean sense that you might be able to move some subset of the world connected to any system on which you have root access or any project for which you're building a key component—from the leverage point of the command line. (The Emergence of Digital Humanities by Steven E. Jones, emphasis added)
One final thought which struck me is the notion of "agency", in the sense of a person being able to do things. For me one of the joys of biodiversity informatics is that I can make stuff that seems useful (if only to me). If, say, BHL ignores articles, well, you can grab their data and build something that finds those articles. If the data is available, and you can code, then there are few limits to what you can do. Even if you can't code, limits to what people can do are being removed. You have citizen scientists like @SiobhanLeachman (who presented at Biodiversity Next) revelling in the wealth of tools such as Wikipedia, Wikidata, etc. that enable her to add to biodiversity knowledge.

Do that at scale, as demonstrated by Carrie Seltzer's keynote on iNaturalist, and you can get millions of data points added by a passionate, empowered community.

Yet, I would find myself talking to biodiversity professionals working at some of the world's leading museums and herbaria, and they had far less agency than someone like Siobhan. They have no influence over the databases and software they use, even trivial changes aren't made because... reasons. Seemingly obvious suggestions of things that could be done, or offers of additional data are met with responses along the lines of "even if you gave us that data, we couldn't do anything with it because there's not a field in our database."

As a somewhat cranky, independent-minded academic, I greatly value the freedom to create things, and I'm extremely lucky that I can do that. But it is interesting to see that people fascinated by science but who are not employed as scientists often have more agency than the professional scientists. And maybe that's why I'm resistant to large conferences such as Biodiversity Next (and predecessors such as e-Biosphere 09). They represent the increasing professionalisation of the field, and with that often comes decreasing agency. When I grow up, I want to be a citizen scientist.
          

Senior Data Engineer

Are you a Senior Data Engineer with experience of building data processing platforms? Do you have relational database experience? Want to design, architect and implement real-time data platforms for one of the hottest healthcare start-ups?
This exciting healthcare start-up is currently embarking on an analytics, business intelligence and data science transformation programme. They are seeking to recruit a Senior Data Engineer to help develop the data platforms to enable this to happen.
Responsibilities:
  • Architecting highly scalable distributed systems, using different open source tools
  • Research & discover new methods to acquire data, and new applications for existing data
  • Create reliable ETL pipelines to support data science and BI projects
  • Manage all Big Data infrastructure and projects
    You:
    • BS or MS in Computer Science, Computer Engineering, Software Engineering, etc
    • 3-5 years of experience as a data engineer, developing scalable data processing platforms
    • Some experience with relational databases
    • Experience with Big Data technologies (Hadoop, Spark, Kafka, etc)
      Ideally:
      • Experience developing cloud-based micro-services (AWS, Azure, Google Cloud)
      • Experience with Data Science projects
          

DevOps Engineer

-------------------------

DEVOPS ENGINEER

Location:
CARY, NC

Employment Duration:
FULL TIME

-------------------------

DESCRIPTION

Global Knowledge is the world's leading IT and business skills training provider. Offering the most relevant and timely content delivered by the best instructors, we provide customers around the world with their choice of convenient class times, delivery methods and formats to accelerate their success. Our business skills solutions teach essential communications skills, business analysis, project management, ITIL service management, process improvement and leadership development. With thousands of courses spanning from foundational training to specialized certifications, our core IT training is focused on technology partners such as Amazon Web Services, Cisco, Citrix, IBM, Juniper, Microsoft, Red Hat and VMware. We offer comprehensive professional development for technologies like big data, cloud, cybersecurity and networking.

The DevOps Engineer will be joining the Software Engineering team that develops learning solutions that bring together instructor-led training, virtual classrooms, and digital online learning in a singular user experience. The team designs, develops, tests, deploys, and manages these learning solutions both on-premise and in the cloud following Agile Scrum best practices. This position requires good communication skills, attention to detail, and the ability to work independently or as part of a team.

Global Knowledge is looking for a DevOps Engineer who is passionate about automating development and production environments and loves the challenge of working in a fast-paced and dynamic work environment. In this role, you will be central to helping design, operate, and enhance environments that enable rapid development and deployments while achieving high availability. Along with a systemic discipline, we are also looking for candidates who can successfully work within the existing environment, and who are open and passionate about exploring new technologies and processes to improve our overall environment.

ESSENTIAL DUTIES AND RESPONSIBILITIES

* You are responsible for enhancing DevOps practices inside of Global Knowledge's Software Engineering team.

* You are central to helping design and operate highly available software in large distributed and virtual environments.

* Metric driven and focused on continuous improvement.

* Strong expertise in leveraging a wide variety of open source technologies.

* Automation of build environments and IT operations is in your DNA.

* Setup, monitor, and manage continuous integration and continuous deployment environment.

* Setup, monitor, and manage development, test, staging, and production environments on-premise and in the cloud.

* Troubleshoot, diagnose and identify failing systems through the use of instrumentation and software.

* Ensure compliance to corporate IT policies and procedures.

* Other duties as assigned.

SUPERVISORY RESPONSIBILITIES

This role has no direct reports.

QUALIFICATIONS

* Proficiency with Linux at a systems administration level.

* Experience working within highly available and secure systems and network topologies.

* Experience working with Python/Django, MySQL, MongoDB, Elasticsearch, and RabbitMQ.

* Experience working in an Azure cloud environment.

* Experience working with GitHub and Jenkins.

* Experience with .Net Framework, NetDynamics, and Windows at the sysadmin level is a plus.

EDUCATION and/or EXPERIENCE REQUIREMENTS

Bachelor's Degree in Computer Science, Information Systems (or equivalent) and 4 years of related experience to include administration, security, network design and management, programming, and troubleshooting.

COMPETENCIES

* Drive customer focus personally and through teams (Customer Focus)

* Excellent written and oral communication skills to include report writing (Communication)

* Excellent troubleshooting skills (Technical / Professional Knowledge)

* Work successfully in a fast-paced changing environment (Stress Management)

* Work successfully in a team oriented environment (Contributing to Team Success)

* Work unsupervised to complete daily tasks and long-term goals (Managing Work)

* Ability to set own priorities and adjust as needed (Initiative)

* Ensures all GK standards have been met and asks users if satisfied (Follow Up)

* Reacts positively to change and modifies behavior to deal effectively with changes (Adaptability)

* Takes advantage of learning opportunities and anticipates future skill needs (Continuous Learning)

OTHER REQUIREMENTS AND RESPONSIBILITIES

You may be required to work extended hours on short notice due to production issues when other Global Knowledge employees are not required to work.

Global Knowledge is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status.
          

Associate Services Engineer

Indiana University - Bloomington, IN

The GlobalNOC Software and Systems Engineering group is seeking talented engineers to design, develop, and operate innovative network management systems. Using established industry and internal best practices in version control, automated testing suites, and software life cycle planning, this position provides systems analysis and programming to support the development of software and systems, provides first-line technical support and solutions for problems related to developed software and systems, and uses and develops operational workflows and procedures with impact across teams. With guidance from senior engineers and management, assists in the preparation of development roadmaps, effort estimates, requirements, designs and other project management components for projects with national, regional, metro, and campus level impact.

Required Qualifications

Bachelor's degree in computer science or a related field and some relevant experience in systems analysis and programming; combinations of education and related experience may be considered. Familiarity with software programming including data structures, algorithms, relational databases, and software development practices. Familiarity with TCP/IP, Linux system administration, and security fundamentals. Excellent interpersonal skills with a customer service orientation. Ability to effectively communicate and exchange information with a diverse variety of individuals, including individuals with varying degrees of technical knowledge. Willingness to work as part of a team in a dynamic and complex environment. Ability to perform with high levels of accuracy, problem-solving, dependability, and responsibility.
Preferred Qualifications

One year of experience developing and operating software services using Linux and other open source technologies. Experience reading and writing code and scripts in Perl, Python, JavaScript, Java, or C. Experience working with SQL and NoSQL databases. Experience with IT best practices for IT operations. Experience with automation and config management using tools like Ansible, Puppet, and Chef.

Working Conditions / Demands

Requires day-to-day technical decisions and personal initiative. Incumbent will need to understand system design and functional requirements, know when to ask for help, and communicate effectively with other technical team members and functional clients. Work will be reviewed periodically by a senior technical team member and verified by customers. Work with internal and external customers to assist with request prioritization. Employee's efforts will impact network measurement / management applications used and deployed across the United States at the national, regional, metro, and campus levels by GlobalNOC customers. Stability, accuracy, and performance of the service is imperative.

Work Location

Bloomington, Indiana. Ability to work in Indianapolis, Indiana.

Job Classification

Salary Plan: PAE. Salary Grade: 2IT. FLSA: Exempt. Job Function: Information Technology.

Posting Disclaimer

This posting is scheduled to close at 12:01am EST on the advertised Close Date. This posting may be closed at any time at the discretion of the University, but it will remain open for a minimum of 5 business days. To guarantee full consideration, please submit your application within 5 business days of the Posted Date.

Equal Employment Opportunity

Indiana University is an equal employment and affirmative action employer and a provider of ADA services.
All qualified applicants will receive consideration for employment without regard to age, ethnicity, color, race, religion, sex, sexual orientation, gender identity or expression, genetic information, marital status, national origin, disability status or protected veteran status. Indiana University does not discriminate on the basis of sex in its educational programs and activities, including employment and admission, as required by Title IX. Questions or complaints regarding Title IX may be referred to the U.S. Department of Education Office for Civil Rights or the university Title IX Coordinator. See Indiana University's Notice of Non-Discrimination, which includes contact information.

Campus Safety and Security

The Annual Security and Fire Safety Report, containing policy statements, crime and fire statistics for all Indiana University campuses, is available online. You may also request a physical copy by emailing IU Public Safety at *********** or by visiting IUPD.

Contact Us

Request Support. Telephone: ************
          

Solutions Consultant - Cyber Security





Emtec is a Global consulting company that provides technology-empowered business solutions for world class organizations. Our Global Workforce of over 800 consultants provide best in class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Oracle, Salesforce, Microsoft, Java and Open Source technologies with a focus on Mobility, Cloud, Security, Analytics, Data Engineering and Intelligent Automation. Emtec's singular mission is to create "Clients for Life" - long-term relationships that deliver rapid, meaningful, and lasting business value.

At Emtec, we have a unique blend of Corporate and Entrepreneurial cultures. This is where you would have an opportunity to drive business value for clients while you innovate, continue to grow and have fun while doing it. You would work with team members who are vibrant, smart and passionate and they bring their passion to all that they do - whether it's learning, giving back to our communities or always going the extra mile for our client.





Position Description



As a Solution Consultant - Cyber Security you will support the sales team in new client acquisition and revenue growth and report to the Chief Technology Officer within Emtec's Infrastructure Services practice (EIS). You will have a deep understanding of the Emtec Security Solutions strategy, roadmap, and Security as a Service Technology Partner. Your role will include both pre- and post-sales client support, solutions presentations, client needs analysis and proposal development, and active engagement with our service delivery process. You will work closely with and become the Emtec SME for assigned Technology Partner solutions. You will become a key cybersecurity Trusted Advisor and Advocate for your clients. You will collaborate with the CTO and Marketing organization to craft effective presentations and marketing messaging based upon your cybersecurity industry experience. You will work closely with internal and external customers, as well as various levels of the organization.

Responsibilities:

  • Develop a comprehensive understanding of the Emtec Cyber Security solution portfolio strategy, market position, and competitive advantages
  • Collaborate with and proactively participate with the Emtec sales team in pursuit of new Cyber Security Services clients; providing pre-sales support and client-facing engagement in the prospecting and proposal development process.
  • Engage in advanced security architecture and risk profile discussions with existing and prospective clients, analyze client needs and design/scope solutions accordingly
  • Work closely with assigned Emtec Cyber Security Technology Partners to fully understand their capabilities, solution functionality, and market fit for prospective clients, in short become the Emtec SME for those assigned Partner Solutions
  • Create detailed, professional documentation to be delivered to existing and prospective clients in both written and verbal formats
  • Maintain a high level of knowledge of the general Cyber Security market with an emphasis on new or emerging tools, methods and techniques for both exploitation and defense
  • Provide bi-weekly "state of cybersecurity" updates to other Cyber team members as well as the Emtec practice leadership
  • Maintain an Emtec blog on emerging threats and vulnerabilities associated with digital enterprise adoption
  • Maintain a high level of knowledge about Emtec's services

    Must Have Skills:

    • Minimum 5 years sales experience in delivering cybersecurity consulting services including client-facing communication, security assessments, documentation review, and advisory consultation
    • Bachelor's degree or higher
    • Minimum 3 years of experience with one or more of the following frameworks: ISO 27001/2, NIST Cyber Security Framework, CIS Critical Security, PCI DSS
    • Minimum of 1 or more of the following certifications: CISSP, CISA, CISM
    • Experience dealing with security applications such as SIEM, GRC, Identity Access Management, IDS/IPS, Advanced Persistent Threat, Vulnerability Management Systems
    • Strong technical aptitude with the ability to quickly learn concepts related to IT and Cyber Security services management and solutions
    • Established C-Level (CEO, CIO, CFO, COO, CRM, CISO) customer relationships and management of those relationships

      Professional Skills:

      • Knowledge and experience with security technologies and methodologies such as Risk Assessments, Risk Management, Incident Response, Cyber Forensics, and Risk Policies
      • Excellent communication and presentation skills, both written and verbal
      • Experience briefing senior-level leadership and conveying technical subject matter to audiences of varying backgrounds and skill levels
      • Creative, independent thinker with strong business ethics and integrity
      • Positive leadership and team-oriented skill set
      • Self-starter with ability to build relationships, communicate product knowledge, and earn client trust
      • Ability to solve problems, with critical thinking, judgment, and strong decision-making skills
      • Strong collaboration skills and ability to work closely and effectively with members across departments and at all levels of the organization

        Emtec is an Equal Opportunity Employer

        US citizens and those authorized to work in the US are encouraged to apply


          

Azure DevOps Engineer

Are you someone who likes to work with teams across the organization to help create high-performing delivery teams? Do you enjoy implementing DevOps and Application Lifecycle Management (ALM) solutions to break down traditional silos between development, testing, project management, and operations to establish cohesive processes from requirements to deployments?

Required experience and skills:
  • Understand how to manage Agile projects using Azure Boards
  • Experience customizing process templates to fit needs from portfolio to teams
  • Know how to surface and visualize Azure DevOps data to an organization through tools in Azure DevOps and Power BI
  • Experience extending Azure DevOps by creating custom extensions
  • Leverage Git version control for providing a workflow for helping to ensure quality through branch policies, CI, and branching strategies
  • Be able to explain various branching strategies and working with clients to fit their needs
  • Create CI process across multiple platforms to compile, execute unit tests, and perform code quality checks.
  • Create Software Delivery Pipelines (SDP) to deploy applications and infrastructure as code across multiple environments, including SDLC controls.
  • Experience including security controls into the SDP process including SAST, DAST, and 3rd party Open Source Software (OSS) scanning
  • Experience using Azure Artifacts to manage 1st party built libraries and upstream 3rd party libraries
  • Leverage Azure DevOps for Test Case Management
  • Experience with automated testing across all levels - unit, service level, and UI testing and incorporating these tests in the pipeline.
  • Experience scripting in PowerShell and developing in C#
  • Experience working with web applications, services, and containerized applications
  • Experience in building Infrastructure as Code for provisioning public cloud infrastructure
  • Building and architecting public cloud applications
  • Customer-oriented, diligent, proactive, focused on achieving customer's business objectives as a top priority
  • Able to work successfully both individually and as a team
  • Easy-going, friendly, communicative, strives to see opportunities rather than problems

    Nice to Haves:
    • Experience with Kubernetes
    • Deep Azure experience


    Do you want to join the most talented team in the industry? We are looking for a creative, collaborative, entrepreneurial spirit who thrives on innovation and a fast-paced environment. The salary for this position is highly dependent on experience and negotiable for the right candidate. Green House Data offers outstanding competitive benefits! Want to know more? Here are just some of the things that we can offer you:
    • Flexible Paid Time Off (YOU take the time you need)
    • 8 Day Holiday Pay
    • Paid Volunteer Day
    • Employer Contributed 3 Tier Medical Plan Options
    • Employer Contributed Dental Plan
    • 100% Employer-Paid Vision Plan
    • 100% Employer-Paid Short Term Disability Plan
    • 100% Employer-Paid $50,000 Life Insurance Plan including AD&D
    • Voluntary Long Term Disability Plan
    • Voluntary Benefits including; Accident, Critical Illness, and Medical Bridge Options
    • Additional Supplemental Life Insurance Plan including Spouse and Children
    • 3% Employer Match Simple Retirement IRA Plan
    • Life Assistance and Wellness Programs
    • Green Initiatives
    • Training and Development Programs
    • Employee Events
    • 100% Employer Paid Gym Membership
    • AND MORE...
          

Senior Software Engineer

Hey. Are you a super talented software engineer who is getting tired of working on the same boring stuff all the time? Maybe you've spent a bunch of time keeping up on the latest tech but don't see many opportunities to use it in your day job? Perhaps you're looking for the opportunity to play a key role in shaping the direction and culture of an exciting new startup, working alongside other talented and driven individuals such as yourself?

If any of that sounds familiar and you're ready to finally do something about it, read on...

Who are we?

We are Prismatic, a well-capitalized startup in Sioux Falls, SD founded by three individuals with a proven track record of building and scaling exceptional software companies. Our mission is to build an embedded iPaaS (Integration Platform as a Service) that changes the way software vendors provide integrations to their customers. We believe there is massive untapped potential in providing software integrations to end-users, and that many otherwise useful integrations either never get built or never get completely finished, because building software integrations is quite difficult and expensive, for a multitude of reasons. Prismatic aims to solve this.

Our team has deep experience in this area, having previously spent fifteen years together building a software company and scaling it into an industry disruptor and a national leader, which involved selling, building, implementing, and supporting hundreds of unique software integrations. We believe we are well positioned to make a serious dent in the iPaaS market, which by all estimates is poised for a high degree of growth over the next several years as the proliferation of software continues.

We're looking for experienced, talented individuals to join us as early members of our engineering team and help build and shape the first version of our platform.

Who are you?

You're a highly talented software engineer with the following attributes:
• You have several years of experience in a senior development role
• Your teammates regard you as one of the best on the team and frequently seek you out for advice on technical problems
• You are extremely proficient with JavaScript or Python, ideally both
• You're able to design efficient database schemas and SQL queries, ideally in PostgreSQL or a similar RDBMS
• You're comfortable working in any layer of the tech stack, be it front-end, back-end, database, CI/CD tooling, etc.
• You're a very adept learner and develop proficiency with new tools quickly
• The thought of joining a small startup and being responsible for helping to shape the company culture and the product while it's still in its early days is exciting to you
• You live in the Sioux Falls, SD area

And serious bonus points if you have any of the following attributes:
• You have professional experience with Vue.js, React, or Django
• You have experience building GraphQL APIs
• You have experience implementing OAuth in a production application
• You have experience working with various services offered by AWS, Azure, or GCP, ideally with things like AWS Lambda, RDS, etc.
• You're passionate about and involved in the Open Source community

And now, the good part

You'd be joining our team of smart, driven, experienced individuals to build a multi-tenant cloud-hosted iPaaS solution using some of the latest technology while adhering to modern best practices such as:
• 12-factor application design
• Dockerized development environment
• Microservices and FaaS-based architecture
• Unit and E2E testing, code formatting, and code linting integrated into the workflow
• Continuous integration and deployment (CI/CD)
• GitHub Flow git workflow

Along with that, we offer:
• Competitive salary plus stock options
• Health, dental, and vision insurance
• Company-paid life and disability insurance
• 401(k) with company match
• Unlimited PTO policy
• The best tools, including a MacBook Pro docked to big monitors
• Free snacks and drinks
• Weekly company cocktail hour on Friday afternoon

How to Apply

Send your resume to . Tell us which position you're applying for and include a cover letter explaining why you'd be a good fit.

Keywords: Software engineer, software architect, software developer, software development, computer programmer, computer programming
          

Data Scientist I (Mid Level)

PURPOSE OF JOB

Uses advanced techniques that integrate traditional and non-traditional datasets and methods to enable analytical solutions; applies predictive analytics, machine learning, simulation, and optimization techniques to generate management insights and enable customer-facing applications; participates in building analytical solutions leveraging internal and external applications to deliver value and create competitive advantage; translates complex analytical and technical concepts to non-technical employees.

JOB REQUIREMENTS

* Partners with other analysts across the organization to fully define business problems and research questions; Supports SME's on cross functional matrixed teams to solve highly complex work critical to the organization.

* Integrates and extracts relevant information from large amounts of both structured and unstructured data (internal and external) to enable analytical solutions.

* Conducts advanced analytics leveraging predictive modeling, machine learning, simulation, optimization and other techniques to deliver insights or develop analytical solutions to achieve business objectives.

* Supports Subject Matter Experts (SME's) on efforts to develop scalable, efficient, automated solutions for large scale data analyses, model development, model validation and model implementation.

* Works with IT to research architecture for new products, services, and features.

* Develops algorithms and supporting code such that research efforts are based on the highest quality data.

* Translates complex analytical and technical concepts to non-technical employees to enable understanding and drive informed business decisions.

MINIMUM REQUIREMENTS

* Master's degree in Computer Science, Applied Mathematics, Quantitative Economics, Statistics, or related field. 6 additional years of related experience beyond the minimum required may be substituted in lieu of a degree.

* 4 or more years of related experience and accountability for complex tasks and/or projects required.

* Proficient knowledge of the function/discipline and demonstrated application of knowledge, skills and abilities towards work products required.

* Proficient level of business acumen in the areas of the business operations, industry practices and emerging trends required.

Must complete 12 months in current position (from date of hire or date of placement), or must have manager's approval prior to posting.

*Qualifications may warrant placement in a different job level*

PREFERRED

* Expertise in experimental design, advanced statistical analysis, and modeling to discover key relationships in data and applying that information to predict likely future outcomes; fluent in regression, classification, tree-based models, clustering methods, text mining, and neural networks.

* Proven ability to enrich (add new information to) data, advise on appropriate course(s) of action to take based on results, summarize complex technical analysis for non-technical executive audiences, succinctly present visualizations of high dimensional data, and explain & justify the results of the analysis conducted.

* Highly competent at data wrangling and data engineering in SQL and SAS as well as advanced machine learning (ML) techniques using Python; comfortable in cloud computing environments (Azure, GCP, AWS).

* Hands-on experience developing products that utilize advanced machine learning techniques like deep learning in areas such as computer vision, Natural Language Processing (NLP), sensor data from the Internet of Things (IoT), and recommender systems; along with transitioning those solutions from the development environment into the production environment for full-time use.

* PhD in Computer Science, Applied Mathematics, Quantitative Economics, Operations Research, Statistics, or related field with coursework in advanced Machine Learning techniques (Natural Language Processing, Deep Neural Networks, etc).

* Fluent in deep learning frameworks and libraries (TensorFlow, Keras, PyTorch, etc).

* Highly skilled in handling Big Data (Hadoop, Hive, Spark, Kafka, etc).

* Experience in reinforcement learning, knowledge graphs and graph databases, Generative Adversarial Networks (GANs), semi-supervised learning, multi-task learning is a plus.

* Experience in publishing at top ML, computer vision, NLP, or AI conferences and/or contributing to ML/AI-related open source projects and/or converting ML/AI papers into code is a plus.

* Background in Property insurance operations with an understanding of claims, underwriting, and insurance pricing a plus.

* Additional Skills: Ability to translate business problems and requirements into technical solutions by building quick prototypes or proofs of concept with business and technical stakeholders.

* Ability to convert proofs of concept into scalable production solutions.

* Ability to lead teams by following best practices in development, automation, and continuous integration / continuous deployment (CI/CD) methods in an agile work environment.

* Ability to work in and with technical, multidisciplinary teams.

* Willingness to continuously learn and apply new analytical techniques.

RELOCATION assistance is AVAILABLE for this position.

The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.

LAST DAY TO APPLY TO THE OPENING IS 11/06/19 BY 11:59 PM CST.

USAA is an equal opportunity and affirmative action employer and gives consideration for employment to qualified applicants without regard to race, color, religion, sex, national origin, age, disability, genetic information, sexual orientation, gender identity or expression, pregnancy, veteran status or any other legally protected characteristic. If you'd like more information about your EEO rights as an applicant under the law, please click here. For USAA's Affirmative Action and EEO statement, please click here. Furthermore, USAA makes hiring decisions compliant with the Fair Chance Initiative for Hiring Ordinance (LAMC 189.00).

USAA provides equal opportunity to qualified individuals with disabilities and disabled veterans. If you need a reasonable accommodation, please email HumanResources@usaa.com or call 1-800-210-USAA and select option 3 for assistance.
          

Technical Architect (Integrations)

 Cache   
The Technical Architect (Integration) is responsible for establishing the current and long-range direction of enterprise application integration technologies, keeping the organization at the forefront of change in its transformation to an agile, product-focused culture. This position will focus on integration architecture activities for our digital development teams in support of enterprise strategies and objectives.

Key responsibilities include:

* Plan, develop, refine, optimize, and support the enterprise integration architecture as required to meet the business requirements of the organization.

* Develop and maintain the enterprise architecture blueprint for the organization, overseeing the use of best practices for modern application integration.

* Design and develop the application integration architecture foundation to provide agile business solutions in alignment with enterprise architecture direction and design standards.

* Assess the compatibility and integration of proposed products and services to ensure a robust integration architecture across the interdependent technologies.

* Support highly available and fault-tolerant enterprise and web-scale software deployments.

* Partner with platform and cybersecurity architecture teams to integrate data security controls into integration layers that support continuous development, integration, deployment, and delivery processes.

* Provide technical review reports with recommendations to the leadership team. Review technical project requests for large business line technology projects (> $250K) in accordance with Ameren guidelines. Promote the interfacing and control of the organization's present technology and the dissemination of technological information throughout the organization.

* Perform system-wide, cross-functional team management on projects to evaluate, design, and implement new technology systems. Establish effective project plans and timelines, set milestones, and manage resources.

* Think strategically about business, product, and technical challenges.

* Provide technical direction to team members as a key contributor; solve highly complex technical problems and consult on projects. Viewed as a subject matter expert within the organization.

* Lead team members and provide work guidance to meet project objectives and assure timeliness, quality, and cost effectiveness.

* Coach less experienced co-workers and provide feedback to enhance skills and knowledge.

Qualifications

* Bachelor's degree required, preferably in engineering, mathematics, computer science, or business.

* Master's Degree preferred.

* Seven or more years of relevant IT experience required, with a preference for three or more years of architecture experience.

In addition to the above qualifications, the successful candidate will demonstrate:

* Strong understanding of enterprise integration patterns and past experience integrating cloud applications with on-premises and legacy platforms required.

* Past experience and strong understanding of MuleSoft required.

* In-depth experience translating technical concepts for non-technical audiences; experience with integration systems, data transfer approaches, analytics, scrum/agile development methodologies, project management, technical implementation, and the TOGAF methodology.

Preferred Skills

Preferred technical skills include experience with:

* Agile methodologies and Jira/Confluence (or similar tools)

* DevSecOps continuous integration/continuous delivery toolchains

* Cloud services with Amazon Web Services (AWS)

* Testing frameworks, GitHub, and Jenkins

* Development in Java, Microsoft .NET, or Angular web design and development, plus scripting languages

* Security vulnerability assessment using Veracode and SecureAssist

* SOAP and REST web services, MuleSoft ESB, and AnyPoint API management

* Enterprise messaging using JMS

* Containerization technologies such as Docker

* Identity and access management (IAM) and role-based access control (RBAC)

* IT compliance and risk management requirements (e.g., security, privacy, PCI, SOX, HIPAA)

* Relational databases such as Amazon RDS, Oracle, or SQL Server

* Architecture modeling/diagramming tools

* A variety of open source technologies

Additional Information

Ameren's selection process includes a series of interviews and may include candidate testing and/or an individual aptitude or skill-based assessment. Specific details will be provided to qualified candidates.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran
          

Google open sources Cardboard as it retreats from phone-based VR

 Cache   
Google's decision to back away from phone-based VR may have an upside for creators. The internet giant is releasing a Cardboard open source project that will let developers create VR experiences and add Cardboard support to their apps. It covers ba...
          

Open Source Experts, Optim8 Solutions Moves to Top Tier of SUSE Partner Program

 Cache   

By Anil Ramiah, SUSE Partner Executive, Middle East Since establishing themselves as a key open source consultancy for the Middle East, Optim8 Solutions, based out of the UAE, have dedicated themselves to delivering end-to-end open source solutions – from planning all the way to continuous deployment. When it comes to application delivery, the Internet of […]

The post Open Source Experts, Optim8 Solutions Moves to Top Tier of SUSE Partner Program appeared first on SUSE Communities.


          

First Release of the Open Mutation Miner (OMM) System

 Cache   

We are happy to announce the first major public release of our protein mutation impact analysis system, Open Mutation Miner (OMM), together with a new open access publication: Naderi, N., and R. Witte, "Automated extraction and semantic analysis of mutation impacts from the biomedical literature", BMC Genomics, vol. 13, no. Suppl 4, pp. S10, 06/2012.

OMM is the first comprehensive, fully open source system for extracting and analysing mutation-related information from full-text research papers. Novel features not available in other systems include the detection of various forms of mutation mentions (in particular mutation series) and full mutation impact analysis, including linking impacts with the causative mutation and the affected protein properties, such as molecular functions, kinetic constants, kinetic values, units of measurement, and physical quantities. OMM provides output options in various formats, including populating an OWL ontology, Web service access, structured queries, and interactive use embedded in desktop clients. OMM is robust and scalable: we processed the entire PubMed Open Access Subset (nearly half a million full-text papers) on a standard desktop PC, and larger document sets can be easily processed and indexed on appropriate hardware.
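
As a rough illustration of what detecting the simplest kind of mutation mention involves, here is a minimal sketch using the common wNm notation (wild-type residue, sequence position, mutant residue). This is a hypothetical example, not OMM's actual extraction grammar, which additionally handles mutation series, impact phrases, and grounding:

```python
import re

# One-letter and three-letter amino-acid codes.
AA1 = "ACDEFGHIKLMNPQRSTVWY"
AA3 = ("Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|"
       "Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val")

# Match e.g. "A123T" (one-letter) or "Gly45Asp" (three-letter) mentions.
MUTATION = re.compile(rf"\b(?:[{AA1}]\d+[{AA1}]|(?:{AA3})\d+(?:{AA3}))\b")

def find_mutations(text):
    """Return all point-mutation mentions found in the text."""
    return MUTATION.findall(text)

print(find_mutations("The A123T and Gly45Asp variants reduced activity."))
# → ['A123T', 'Gly45Asp']
```

A real system must additionally reject false positives (e.g. gene names or cell lines that happen to fit the pattern), which is where rule-based grammars and grounding against sequence databases come in.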



          

The OrganismTagger System

 Cache   


Our open source OrganismTagger is a hybrid rule-based/machine-learning system that extracts organism mentions from the biomedical literature, normalizes them to their scientific name, and provides grounding to the NCBI Taxonomy database. Our pipeline provides the flexibility of annotating the species of particular interest to bio-engineers on different corpora, by optionally including detection of common names, acronyms, and strains. The OrganismTagger's performance has been evaluated on two manually annotated corpora, OT and Linnaeus. On the OT corpus, the OrganismTagger achieves a precision and recall of 95% and 94% and a grounding accuracy of 97.5%. On the manually annotated Linnaeus-100 corpus, the results show a precision and recall of 99% and 97% and a grounding accuracy of 97.4%. It is described in detail in our publication, Naderi, N., T. Kappler, C. J. O. Baker, and R. Witte, "OrganismTagger: Detection, normalization, and grounding of organism entities in biomedical documents", Bioinformatics, vol. 27, no. 19, Oxford University Press, pp. 2721--2729, August 9, 2011.



          

(USA-VA-Chantilly) Jr. Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Jr. Software Developer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for people who are passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3D visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing, which assists in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team leveraging developer-focused, agile approaches with a dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + Requires 4 years of relevant experience + Capable of working independently and as a member of a dedicated team to solve complex problems in a clear and repeatable manner. **Preferred Qualifications:** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus.
Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-FL-Melbourne) Jr. Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Jr. Software Developer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for people who are passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3D visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing, which assists in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team leveraging developer-focused, agile approaches with a dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + Requires 4 years of relevant experience + Capable of working independently and as a member of a dedicated team to solve complex problems in a clear and repeatable manner. **Preferred Qualifications:** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus.
Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-FL-Melbourne) Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Software Engineer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for someone who is passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3D visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing, which assists in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team leveraging developer-focused, agile approaches with a dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + 8 years of relevant experience. **Preferred Qualifications** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus.
Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Chantilly) Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Software Engineer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for someone who is passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3D visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing, which assists in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team leveraging developer-focused, agile approaches with a dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + 8 years of relevant experience. **Preferred Qualifications** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus.
Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Herndon) Jr. Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Jr. Software Developer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for people who are passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3D visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing, which assists in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team leveraging developer-focused, agile approaches with a dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + Requires 4 years of relevant experience + Capable of working independently and as a member of a dedicated team to solve complex problems in a clear and repeatable manner. **Preferred Qualifications:** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus.
Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

(USA-VA-Herndon) Software Engineer

 Cache   
**Please review the job details below.** Maxar is looking for a Software Engineer who is interested in leveraging and learning cutting-edge technologies and languages in support of space-based resources and national security. We are looking for someone who is passionate about and experienced with one or more of the following: + Software development projects that focus on integrating and refactoring existing commercial or open source tools, artificial intelligence, machine learning, big data, geospatial imagery, 3D visualization and remote sensing - writing in various languages and scripts from C++ to Go, Python, Java/Groovy, and everything in between. + “Ruthless automation” of system and software "DevSecOps" deployment and test activities. This includes regression, system, and integration testing, which assists in software development quality assurance efforts. **Responsibilities:** + Work on a fully cross-functional team leveraging developer-focused, agile approaches with a dedicated Scrum Master and Product Owner + Stay fresh and curious by engaging in marketing demonstrations, customer training and providing meeting/briefing support as needed. **Minimum Requirements:** + Must have a current/active Top Secret and be willing and able to obtain a TS/SCI with CI polygraph. + 8 years of relevant experience. **Preferred Qualifications** + TS/SCI with polygraph + Degree preferred + Experience with DevSecOps automation, Infrastructure-as-a-Service, Platform-as-a-Service, Containerization, RESTful APIs and services. + Experience with classified government networks, satellite phenomenology, tool integration and workflows such as NiFi a plus.
Experience as part of a scrum/agile team, scrum master, or product owner \#cjpost **MAXAR Technologies offers a generous compensation package including a competitive salary; choice of medical plan; dental, life, and disability insurance; a 401(K) plan with competitive company match; paid holidays and paid time off.** We are a vertically integrated, new space economy story, including segments across the value continuum for every moment leading up to and following launch. We lead in satellite communications (building and operating), ground infrastructure, Earth observation, advanced analytics, insights from machine learning, next-generation propulsion, space robotics, on-orbit servicing, on-orbit assembly, and protection of space assets through cybersecurity and monitoring of space systems. By integrating our leading-edge capabilities, we provide innovative, cost-effective solutions, value for customers, and thus unlock the multiplier effect of our combined businesses. **Maxar Technologies values diversity in the workplace and is an equal** **opportunity/affirmative** **action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.**
          

Wired.com: What Does Crowdsourcing Really Mean?

 Cache   

LCM note:  Karl Rove recently took to the WSJ to discuss how campaigns were changing.  One of his suggestions was that the traditional campaign structure would lose meaning and that individuals outside the structure would take a greater role.  Somewhat related, this is an old interview with Douglas Rushkoff about the original meaning and the very different popular understanding of the term "crowdsourcing."

From religion, novels and back again. The strength of community and the dangers of crowdsourcing

Sarah Cove Interviews Douglas Rushkoff via telephone on May 18, 2007

Douglas Rushkoff is an author, professor, media theorist, and journalist, as well as a keyboardist for the industrial band Psychic TV. His books include Media Virus, Coercion, Nothing Sacred: The Truth About Judaism (a book which opened up the question of Open Source Judaism), Exit Strategy (an online collaborative novel), and a monthly comic book, Testament. He founded the Narrative Lab at New York University's Interactive Telecommunications Program, a space which seeks to explore the relationship of narrative to media in an age of interactive technology.

We spoke about the notion of crowdsourcing, Open Source Religion, and collaborative narratives.

Sarah Cove: What is crowdsourcing for you?

Douglas Rushkoff: Well, I haven't used the term crowdsourcing in my own conversations before. Every time I look at it, it rubs me the wrong way.

To read the rest of this interview, click here.

 

 


          

Updated Joomla to 3.9.13

 Cache   

Joomla (ID : 413) package has been updated to version 3.9.13. Joomla is an award-winning content management system (CMS), which enables you to build Web sites and powerful online applications. Many aspects, including its ease-of-use and extensibility, have made Joomla the most popular Web site software available. Best of all, Joomla is an open source solution that is freely available to everyone.

The post Updated Joomla to 3.9.13 first appeared on Rad Web Hosting Blog.


          

Updated OrangeHRM to 4.3.4

 Cache   

OrangeHRM (ID : 192) package has been updated to version 4.3.4. OrangeHRM aims to be the world's leading open source HRM solution for small and medium sized enterprises (SMEs) by providing a flexible and easy to use HRM system affordable for any company worldwide. View Demo and review of OrangeHRM here: http://www.softaculous.com/apps/erp/OrangeHRM Get started with premium

The post Updated OrangeHRM to 4.3.4 first appeared on Rad Web Hosting Blog.


          

Updated Matomo to 3.12.0

 Cache   

Matomo (ID : 171) package has been updated to version 3.12.0. Matomo (formerly Piwik) is the leading open-source analytics platform that gives you more than just powerful analytics: Matomo aims to be an open source alternative to Google Analytics. Matomo is a PHP MySQL software program that you download and install on your own webserver.

The post Updated Matomo to 3.12.0 first appeared on Rad Web Hosting Blog.


          

The Open Source Smart Home

 Cache   

[Tijmen Schep] sends in his project, Candle Smart Home, which is an exhibit of 12 smart home devices which are designed around the concepts of ownership, open source, and privacy.

The central controller runs on a Raspberry Pi which is running Mozilla’s new smart home operating system. Each individual device …read more
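As a rough illustration (not taken from the project itself): devices on a WebThings-style hub each advertise themselves with a small JSON "thing description" listing their properties. The sketch below only follows the general shape of Mozilla's Web Thing API; the helper function and the lamp device are invented for illustration, and field names should be checked against the actual specification.

```python
import json

def make_thing_description(title, properties):
    """Build a minimal, WebThings-style thing description for a device."""
    return {
        "@context": "https://iot.mozilla.org/schemas",  # schema context used by Mozilla's gateway
        "title": title,
        "properties": properties,
    }

# A hypothetical privacy-friendly lamp with two properties.
lamp = make_thing_description(
    "Privacy-friendly lamp",
    {
        "on": {"type": "boolean", "description": "Whether the lamp is lit"},
        "brightness": {"type": "integer", "minimum": 0, "maximum": 100},
    },
)

print(json.dumps(lamp, indent=2))
```

A hub can then poll or subscribe to these properties without any data leaving the local network, which is the privacy angle the project emphasizes.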


          

GitHub Sponsors Arrives in Italy

 Cache   

Microsoft brings GitHub Sponsors to Italy, a platform that lets users support open source projects by taking out monthly subscriptions

Read more: GitHub Sponsors anche in Italia


          

Executive: Program Manager, DevOPS - Houston, Texas

 Cache   
Program Manager, DevOPS

Reporting to the Senior Director, Product Development, the Program Manager, DevOps is responsible for running and supporting PROS enterprise applications including problem management, incident management, change management, configuration management, capacity planning, monitoring, and alert management. This position will be a key role in the product organization that will interact with Professional Services, Cloud, Support, Engineering and Product Management teams at PROS. As an influencer of an engineering team, you are responsible and accountable for maintaining and improving service uptime, cost of ownership, resiliency, and service stability. To be successful, you must have the ability to dive deep into current systems, utilizing tools and data to measure performance as well as envision, scope, and design future systems. The right person will be highly technical, analytical, and have experience interacting with and influencing technical teams. You must be able to work with internal teams to ensure that provided services are suitable to meet the needs of the service. PROS is powering modern commerce with dynamic pricing science!

The Company - PROS: PROS Holdings, Inc. (NYSE: PROS) is leading the shift to modern commerce, helping competitive enterprises create a personalized and frictionless experience for their customers. Powered with Dynamic Pricing Science, PROS solutions make it possible for companies to price, configure and sell their products and services with speed, precision and consistency across all sales channels. Our customers lead their markets across more than 10 sectors, and benefit from 30 years of accumulated knowledge and data science infused into our purpose-built industry solutions. PROS drives more than 200 million prices and 1.7 billion forecasts every day for enterprises in more than 30 industries around the globe. Our mission is to help companies and the people who work for them outperform. To learn more, visit pros.com.
A Day in the Life of the ______________ - About the role:
  • Program manage active projects in the DevOps pipeline for the respective product family, from contract signing to support handoff
  • Manage product family "Factory Operations" using the Programs module in ITI
  • Improve "Factory Operations" within and across Product and Project core teams
  • Communicate to Product Core Teams and Project Core Teams on any needed improvements, such as Standard Packages, DevOps improvements, etc.
  • Ensure that the QE Lead for the product family oversees the proper creation of test cases throughout the project
  • Enforce strong compliance with our agreed Change, Incident, Request, Knowledge, and Problem Management processes
  • Work with other DevOps Program Managers and DevOps leadership to continue to improve our best practices over time
  • Work with Cloud, Engineering and PS to ensure clean delivery through the process
  • Communicate with Product Core Teams to ensure the right coverage from Engineering/Architecture
  • Ensure that SOWs are reviewed by PS, PM, and Dev to ensure that customizations are minimized and key features are added to our product
  • Oversee that proper ITSM and QA processes are followed by all teams
About you:
  • You are a seasoned engineering operations manager who can grow and inspire a team of talented engineers, working with a wide variety of open source technologies to support a web-based, multi-tier, cloud-based application environment
  • You have a deep understanding of modern web technologies -- in a customer-facing production environment -- with strong networking experience and advanced application troubleshooting skills
  • You have deep experience with configuration management, maintenance and the CLI, and a strong understanding of ITIL principles and Agile practices
  • You have a strong understanding of and experience with virtualization technologies: you have used a variety of hypervisors and have experience building on the cloud
  • You have experience tuning the performance of Java applications in virtual environments
  • You are familiar with service management techniques and tools, with experience in issue tracking/ticket/project/workflow software like Jira
  • You understand (and value) using best practice and process, governance and oversight -- collaborating with many diverse teams -- to support a secure environment and adhere to compliance regulations
  • You are a natural team leader who can motivate and encourage technical excellence, and have been successful working across organizational boundaries, bringing together people with diverse perspectives and experience to find solutions
  • You are capable of leading a discussion with executive management, display a mature mindset and operate independently
  • You are comfortable working within a distributed team located in multiple time zones and can work cross-functionally with many different teams
  • You are able to participate in various company-wide projects, with occasional travel to other PROS offices, and to be on a rotating on-call schedule participating in incident management and resolution

Skills & Personal Characteristics:
  • BS in Information Technology, Computer Science or equivalent experience
  • A minimum of 4-5 years' experience managing an operations organization in a 24x7 global infrastructure, as well as a record of individual technical achievement
  • A strong background in internet service deployments and understanding of web technologies: TCP/IP, HTTP, load balancers, web servers, relational & NoSQL databases
  • Familiarity with Linux system administration, preferably RHEL or CentOS
  • Multi-year experience supporting production instances of large-scale SaaS, cloud-based or mobile applications
  • Familiarity with Jira or other issue and ticket tracking/project/workflow software
  • Understanding of and experience with public cloud technologies; experience using Microsoft Azure is preferred
  • Understanding of logging, infrastructure management and monitoring tools
  • Experience with modern application technologies and design patterns including cloud infrastructure, distributed computing, horizontal scaling, and database (SQL and NoSQL) technologies
  • Experience with Agile and sprint/scrum-based working methodologies
  • Strong technical leadership including solid communication and analytical skills, with a thorough understanding of product development and successful problem definition, decomposition, estimation and resolution
  • Experience with automation technologies like Puppet, Chef or similar, and an understanding of container technologies like Docker
  • Experience with software development technologies and an understanding of how software gets made and delivered: SDLC, IDEs, code repositories like Stash, QA/testing environments, build servers and release management

Why PROS? PROS culture and the truly extraordinary people who work here are at the very core of our success. We have a passion for what we do, and we won't stop until we've delivered on our promises. We're committed to the success of our customers. That's why we think harder and dream bigger - so our customers can go even further than they ever imagined possible.
This is a unique opportunity to join a company that has 30+ years of proven success with a long runway of more success. Our people make PROS stand out from the rest. If you want to be a part of something truly extraordinary, come help us shape the future of how companies compete and win in their markets.

Work Environment: Most work activities are performed in an office or home-office environment and require little to moderate physical exertion. Work activities may require periods of extended hours, critical deadlines and stressful situations. To successfully complete the tasks of this position, individuals must be able to communicate clearly (in writing and orally), comprehend business terminology, and interpret numerical data.

This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. This job description is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities or working conditions associated with the position.
          


JFML: A Java Library to Design Fuzzy Logic Systems According to the IEEE Std 1855-2016

 Cache   
Soto-Hidalgo, Jose Manuel; Alonso, José M.; Acampora, Giovanni; Alcalá Fernández, Jesús. Fuzzy logic systems are useful for solving problems in many application fields. However, these systems are usually stored in specific formats and researchers need to rewrite them to use them in new problems. Recently, the IEEE Computational Intelligence Society has sponsored the publication of the IEEE Standard 1855-2016 to provide a unified and well-defined representation of fuzzy systems for problems of classification, regression, and control. The main aim of this standard is to facilitate the exchange of fuzzy systems across different programming systems, in order to avoid the need to rewrite available pieces of code or to develop new software tools that replicate functionalities already provided by other software. In order to make the standard operative and useful for the research community, this paper presents JFML, an open source Java library that offers a complete implementation of the new IEEE standard and the capability to import/export fuzzy systems in accordance with other standards and software. Moreover, the new library has an associated website with complementary material, documentation, and examples in order to facilitate its use. In this paper, we present three case studies that illustrate the potential of JFML and the advantages of exchanging fuzzy systems among available software.
          

Questions/Answers about ArcGIS Urban

 Cache   
Following the first presentations of ArcGIS Urban at SIG2019, I collected a set of questions that you asked me about the new urban planning solution of the ArcGIS platform. Hoping this will be useful to you, here is a summary of the questions and answers concerning the current 1.0 release and version 1.1 (planned before the end of the year).

What are the main capabilities of Urban?
  • Display a 3D digital representation of your territory in which all urban development projects are visualized from a single database centralized by your GIS, enabling collaboration between the various actors, partners and stakeholders.
  • Visualize regulatory zoning in 3D: convert the legal text into a 3D cartographic representation that can be used for detailed scenario planning down to the parcel level.
  • Generate plausible building volumes and analyze the impact of plans using automatically generated capacity indicators and by comparing different design scenarios.

How do ArcGIS Online and ArcGIS Urban work together?
ArcGIS Urban is built on ArcGIS Online. It uses hosted web layers to display and manage the data published in your ArcGIS Online organization. The application uses the usual ArcGIS Online concepts of groups and users to control access to these layers.

Can ArcGIS Urban be used on an ArcGIS Enterprise portal?
No, ArcGIS Urban is currently available only on ArcGIS Online. However, ArcGIS Urban can consume certain web layers from an ArcGIS Enterprise portal.

How do ArcGIS Pro and ArcGIS Urban work together?
The ArcGIS Urban database is stored as hosted web layers (and tables), which can therefore be opened in ArcGIS Pro when you connect to the ArcGIS Online organization where they were created. The feature layers can be displayed and edited through ArcGIS Pro. However, no dedicated integration is available.

How do CityEngine and ArcGIS Urban work together?
A dedicated integration is available between CityEngine and Urban. CityEngine can automatically connect to an urban model and load plans and projects from Urban. Edits made in CityEngine can be synchronized back to Urban, and new scenarios can be created.

How does ArcGIS Urban differ from CityEngine?
ArcGIS Urban and CityEngine have many capabilities in common (for example, they use the same procedural engine to generate 3D models), but they also differ. In CityEngine you can write your own custom CGA rules. ArcGIS Urban provides a built-in rule that cannot be modified. What you can modify in ArcGIS Urban are the parameters that govern the rule, such as the maximum height, the allowed floor area ratio, or the setback distances from parcel boundaries. ArcGIS Urban is a web application, whereas CityEngine is a locally installed desktop tool. In general, ArcGIS Urban is a platform focused on urban planning workflows that facilitates communication between stakeholders. CityEngine supports the creation of detailed urban design scenarios and generates sophisticated 3D models for elaborate scenarios.
  
   
Do I still need CityEngine if I use ArcGIS Urban?
ArcGIS Urban can work perfectly well without being coupled with CityEngine. By coupling the two applications, you can rework the plans created in ArcGIS Urban within CityEngine to give them more detail and realism. Once modified in CityEngine, it is very easy to publish them to your portal for use, in particular, in ArcGIS Urban. Since CityEngine 2019.1, an interface dedicated to these workflows is built into the application.

Can ArcGIS Urban be used with CAD data (for example, AutoCAD)?
Yes, there are essentially two workflows for importing CAD data into ArcGIS Urban:
  • CAD integration via CityEngine: the tight integration between ArcGIS Urban and CityEngine makes it possible to import data in various 3D formats (dxf, dae, fbx, glTF, kml/kmz, obj) into CityEngine; from there, the models can be published and directly attached to a project or plan scenario.
  • CAD integration via ArcGIS Pro: ArcGIS Urban can display 3D object scene layers (multipatch) and building scene layers (BIM) published through ArcGIS Pro. These layers can come from CAD software as well as from BIM software such as Revit. After the layers are published from ArcGIS Pro, they can be used as global context layers or attached as external sources in a project or plan scenario.

Can BIM models be added to ArcGIS Urban? If so, how many, and what are the recommendations for best performance?
Yes, BIM models can be added to ArcGIS Urban, but they must first be published as building scene layers. For best results, it is recommended to publish a lightweight version of the BIM model (keeping only the relevant layers). Several BIM models can be attached to different scenarios, but for performance reasons they will not be loaded/displayed at the same time.

Adding a BIM model to a project
Adding a textured 3D model to a project
  
Can I work offline with ArcGIS Urban?
No, ArcGIS Urban is a web application and requires a stable Internet connection.

Which browsers are supported by ArcGIS Urban?
Like many applications that rely on WebGL, ArcGIS Urban works best with Chrome, but Firefox, Safari and Edge are also supported. Only Internet Explorer cannot be used with ArcGIS Urban, because it does not support the technology underlying the procedural modeling of 3D buildings.

How much does Urban cost and how can I buy it?
An ArcGIS Urban license must be associated with a Creator user type in your ArcGIS Online organization. An ArcGIS Urban license is an annual subscription; to find out the price, contact your usual Esri sales representative.

How does using ArcGIS Urban affect my ArcGIS Online credits? Does a large territory consume more credits?
A large city does not necessarily consume more credits. Credit consumption depends on the number of plans, projects and indicators stored in your urban area. The more features stored in the database, the more credits are needed to store them. To anticipate ArcGIS Online credit consumption, you can consult this page. Additional credits are used in ArcGIS Urban when using address search and for elevation queries (when you are not using a custom elevation layer).
  
Can students use ArcGIS Urban without having to buy a license?
Yes, ArcGIS Urban is part of the agreement for educational institutions. With the education license, universities and schools can make licenses available to students for their courses.

Does ArcGIS Urban support greenfield urban development?
Yes, greenfield development is supported in ArcGIS Urban. New parcels can be added and generated on the fly with the 3D drawing tools. You can draw new parcels in ArcGIS Urban, and you can also subdivide large parcels directly in the application. For very large areas, you can also open the plan in CityEngine and automatically generate parcels based on the street network imported into or drawn in the desktop application.

Besides urban planning, can other sectors benefit from using ArcGIS Urban?
Yes, the concepts underlying ArcGIS Urban can also be used to address other problems, such as:
  • Architecture, engineering and construction (AEC): how can we keep track of all our proposed, planned, ongoing or completed projects?
  • Real estate: how can we manage and display all the projects in progress in our organization?
  • Education: how can we explain density and urban development trends to students interactively, with 3D tools?

And many others… let us know if you have an idea for an application in your business domain or industry.

Can external data from different data sources be added to ArcGIS Urban?
Yes, external data can be added to ArcGIS Urban. To do so, use the data manager built into ArcGIS Urban to load data from existing feature layers or spreadsheets (csv or Excel, for example). Existing layers such as basemaps, existing 3D buildings, parcels or regulatory zones can be added. In the plan and project configuration, external layers can be attached to each scenario. For context indicators, external web scenes can be prepared using the SmartMapping capabilities and then added to your ArcGIS Urban project.

Are there built-in environmental indicators in Urban?
Yes, the indicators from the ArcGIS Online Living Atlas include the Trust for Public Land (TPL) indicator called ParkServe®. The ParkServe® platform provides information on park systems and the associated percentage of residents of cities, towns and communities located within a 10-minute walk of a park. This indicator is currently available only in the United States. However, this type of indicator can be built with ArcGIS Pro and the data in your GIS.

Can the built-in capacity indicators be customized in ArcGIS Urban?
Yes, the built-in capacity indicators (population, households, jobs) can be configured to match your city's standards. The default values correspond to an average for the United States as a whole and can be used as a reference if you do not know indicators such as the average living area per inhabitant. These assumptions are tied to the space-use types in ArcGIS Urban and can be configured in the data manager.

Example of a context indicator on the evolution of the number of dwellings
Example of a context indicator on access to green spaces

Can I export data from ArcGIS Urban?
Yes, you can use the feature layers and tables of the ArcGIS Urban database to export the data and use it in other tools or for analysis. An advanced export function will be added in the upcoming 1.1 release of ArcGIS Urban. This upcoming capability will make it possible to export capacity indicators to an Excel sheet, as well as to export complete scenarios to a web scene (3D) or a web map (2D). In addition, ArcGIS Urban lets you generate high-resolution screenshots within the application.

What kind of data does ArcGIS Urban generate?
The application creates so-called "plausible" building volumes, as well as envelopes maximizing the regulatory constraints defined by the zones or parcels. The application computes capacity indicators based on the gross floor area (of each floor) and the space-use type of the buildings. The capacity indicators currently available are the number of people, the number of households and the number of jobs. The indicators are calculated on the basis of assumptions such as the average usable area per inhabitant, the number of jobs per activity type, and so on. These assumptions can be configured by the user. The capacity indicators are displayed in a dashboard that can be filtered on the whole study area or on a selection made by the user.

Plausible building volumes and capacity indicators
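As a back-of-the-envelope illustration of how such capacity indicators can be derived from gross floor area and configurable per-use assumptions (the formula and all the numbers below are invented for illustration, not Urban's actual defaults or implementation):

```python
# Hypothetical assumptions: square meters per inhabitant, inhabitants per
# household, square meters per job. ArcGIS Urban lets you configure values
# like these per space-use type; the figures here are made up.
ASSUMPTIONS = {
    "residential": {"m2_per_person": 40.0, "persons_per_household": 2.2},
    "office": {"m2_per_job": 20.0},
}

def capacity_indicators(floor_areas_m2):
    """floor_areas_m2: mapping of space-use type -> gross floor area in m2."""
    res = ASSUMPTIONS["residential"]
    people = floor_areas_m2.get("residential", 0) / res["m2_per_person"]
    households = people / res["persons_per_household"]
    jobs = floor_areas_m2.get("office", 0) / ASSUMPTIONS["office"]["m2_per_job"]
    return {"people": round(people), "households": round(households), "jobs": round(jobs)}

# A plan scenario with 8,800 m2 of residential and 2,000 m2 of office space.
print(capacity_indicators({"residential": 8800.0, "office": 2000.0}))
# → {'people': 220, 'households': 100, 'jobs': 100}
```

Changing the assumptions (for example, the average living area per inhabitant mentioned above) shifts all derived indicators, which is why Urban exposes them in the data manager.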
  
What other types of data can ArcGIS Urban report in addition to the capacity indicators?
Various analytical models (some proprietary, some open source) are used for planning purposes. ArcGIS Urban can provide input content for these models. Often these models are also geolocated, which allows ArcGIS Urban to supply more detailed information (past, current and future) to them (parcels, buildings, land use, ...). So if you have the model, you can use the results generated by ArcGIS Urban to analyze the impact of different scenarios on the various other indicators that interest you, such as travel, energy needs or parking requirements.

Can custom widgets be added to ArcGIS Urban?
No, custom widgets cannot be added to ArcGIS Urban. If you want to build your own web application, you can use Web AppBuilder for ArcGIS, ArcGIS Experience Builder or the ArcGIS API for JavaScript.

Does ArcGIS Urban work for any territory?
Yes, Urban is designed to work on small towns as well as large metropolitan areas, and, to a certain extent, at the level of departments or regions.

How large can the urban area be?
The amount of content that ArcGIS Urban can display is not limited, because the data (buildings, for example) is delivered as a continuous web stream and loaded at different levels of detail adapted to the display extent and zoom level. When creating new plans in Urban (a development zone such as a ZAC, for example), it is recommended not to cover more than 50,000 parcels (or buildings) per plan, for performance reasons and for an optimal user experience. However, a city can contain an unlimited number of plans.
Note also that the ArcGIS Online scene layer technology lets you grow the size and 2D/3D data density of your territory, as features are loaded according to your extent and zoom level. The level of detail of the 3D buildings is continuously adjusted based on these parameters. In the example shown in my ArcGIS Urban demonstration at the SIG2019 conference, the territory covered was 530 km2 with more than 350,000 textured 3D buildings.
  
Do I need a 3D basemap to start using ArcGIS Urban?
No, you do not need a model of the existing 3D buildings. Of course, it is recommended to create a 3D basemap for a better experience, so that more accurate planning decisions can be made. With ArcGIS Pro, you have all the functionality needed to generate a more or less detailed 3D basemap, depending on the data sources you have. Contact your Esri distributor for advice. At a minimum, Esri France can offer a LOD1 3D model covering the whole of France, built from IGN's BDTopo.

How does Urban handle vegetation (trees, parks, open spaces, etc.)?
3D trees can be imported as a context web layer if you have them. A 3D tree layer can be produced if you have access to a lidar dataset or if you have a point layer for your trees. Layers for flood zones, green spaces, socio-demographic data, etc., can be integrated into ArcGIS Urban as an indicator or context layer if they are hosted as web layers on your portal. Their 2D or 3D rendering is first prepared in your portal's scene viewer. With the upcoming 1.1 release, it will be possible to add to your Urban database a layer of existing 3D trees (a scene layer) which, as with buildings, can be masked by new developments.

Are professional services required for ArcGIS Urban? Do I need a jumpstart package?
No, but a few days of assistance are recommended for the initial configuration of ArcGIS Urban. You can, however, configure ArcGIS Urban yourself using the application's data manager.

If you have other questions, feel free to ask them on this blog, to your usual technical/sales contact, or in the GeoNet space dedicated to ArcGIS Urban.

          

SpamAssassin: Welcome to SpamAssassin

Welcome to the home page for the open-source Apache SpamAssassin Project. Apache SpamAssassin is the #1 open source anti-spam platform, giving system administrators a filter to classify email and block spam (unsolicited bulk email).
          

Ignite 2019: the first stable release of the Chromium-based Edge arrives early next year, an RC with the new logo is available for download

Ignite 2019: the first stable release of the Chromium-based Edge arrives early next year
An RC with the new logo is available for download

In December of last year, Microsoft announced its intention to adopt the open source Chromium project for the development of Microsoft Edge on the desktop in order to "create better web compatibility for its customers and reduce web fragmentation for all web developers." The first official builds...
          

Executive: Program Manager, DevOPS - Houston, Texas

Program Manager, DevOPS Reporting to the Senior Director, Product Development, the Program Manager, DevOps is responsible for running and supporting PROS enterprise applications including problem management, incident management, change management, configuration management, capacity planning, monitoring, and alert management. This position will be a key role in the product organization that will interact with Professional Services, Cloud, Support, Engineering and Product Management teams at PROS. As an influencer of an engineering team, you are responsible and accountable for maintaining and improving service uptime, cost of ownership, resiliency, and service stability. To be successful, you must have the ability to dive deep into current systems, utilizing tools and data to measure performance as well as envision, scope, and design future systems. The right person will be highly technical, analytical, and have experience interacting and influencing technical teams. You must be able to work with internal teams to ensure that provided services are suitable to meet the needs to the service. PROS is powering modern commerce with dynamic pricing science! The Company - PROS: PROS Holdings, Inc. (NYSE: PROS) is leading the shift to modern commerce, helping competitive enterprises create a personalized and frictionless experience for their customers. Powered with Dynamic Pricing Science, PROS solutions make it possible for companies to price, configure and sell their products and services with speed, precision and consistency across all sales channels. Our customers lead their markets across more than 10 sectors, and benefit from 30 years of accumulated knowledge and data science infused into our purpose-built industry solutions. PROS drives more than 200 million prices and 1.7 billion forecasts every day for enterprises in more than 30 industries around the globe. Our mission is to help companies and the people who work for them outperform. To learn more, visit pros.com. 
A Day in the Life of the ______________ - About the role:
--- Program manage active projects in the DevOps pipeline for the respective product family, from Contract Signing to Support Handoff
--- Manage product family "Factory Operations" using the Programs module in ITI
--- Improve "Factory Operations" within and across Product and Project core teams
--- Communicate to Product Core Teams and Project Core Teams on any needed improvements, such as Standard Packages, DevOps improvements, etc.
--- Ensure that the QE Lead for the product family oversees the proper creation of test cases throughout the project
--- Enforce strong compliance with our agreed Change, Incident, Request, Knowledge, and Problem Management processes
--- Work with other DevOps Program Managers and DevOps leadership to continue to improve our best practices over time
--- Work with Cloud, Engineering and PS to ensure clean delivery through the process
--- Communicate with Product Core Teams to ensure the right coverage from Engineering/Architecture
--- Ensure that SOWs are reviewed by PS, PM, and Dev to ensure that customizations are minimized and key features are added to our product
--- Oversee that proper ITSM and QA processes are followed by all teams
About you:
--- Are a seasoned engineering operations manager who can grow and inspire a team of talented engineers, working with a wide variety of open source technologies to support a web-based, multi-tier, cloud-based application environment
--- Have a deep understanding of modern web technologies -- in a customer-facing production environment -- with strong networking experience and advanced application troubleshooting skills
--- Have deep experience with configuration management, maintenance, and the CLI, and a strong understanding of ITIL principles and Agile practices
--- Have a strong understanding of and experience with virtualization technologies - experience using a variety of hypervisors and building on the cloud
--- Have experience tuning performance of Java applications in virtual environments
--- Have familiarity with service management techniques and tools, with experience with issue tracking/ticket/project/workflow software like Jira
--- Demonstrate an understanding of (and the importance of) using best practice and process, governance and oversight - collaborating with many diverse teams - to support a secure environment and adhere to compliance regulations
--- Are a natural team leader who can motivate and encourage technical excellence and have been successful working across organizational boundaries, bringing together people with diverse perspectives and experience to find solutions
--- Are capable of leading a discussion with executive management, display a mature mindset, and operate independently
--- Are comfortable working within a distributed team located in multiple time zones and can work cross-functionally with many different teams
--- Are able to participate in various company-wide projects, with occasional travel to other PROS offices, and be on a rotating on-call schedule participating in incident management and resolution

Skills & Personal Characteristics:
--- BS in Information Technology, Computer Science or equivalent experience
--- A minimum of 4-5 years of experience managing an operations organization in a 24x7 global infrastructure, as well as a record of individual technical achievement
--- A strong background in internet service deployments and understanding of web technologies: TCP/IP, HTTP, load balancers, web servers, relational & NoSQL databases
--- Familiarity with Linux system administration, preferably RHEL or CentOS
--- Multi-year experience supporting production instances of large-scale SaaS, cloud-based or mobile applications
--- Familiarity with Jira or other issue and ticket tracking/project/workflow software
--- Understanding of and experience with public cloud technologies - experience using Microsoft Azure is preferred
--- Understanding of logging, infrastructure management and monitoring tools
--- Experience with modern application technologies and design patterns including cloud infrastructure, distributed computing, horizontal scaling, & database (SQL and NoSQL) technologies
--- Experience with Agile and sprint/scrum-based working methodologies
--- Strong technical leadership including solid communication and analytical skills, with a thorough understanding of product development and successful problem definition, decomposition, estimation and resolution
--- Experience with automation technologies like Puppet, Chef or similar; understanding of container technologies like Docker
--- Experience with software development technologies and understanding of how software gets made and delivered: SDLC, IDEs, code repositories like Stash, QA/testing environments, build servers and release management

Why PROS? PROS culture and the truly extraordinary people who work here are at the very core of our success. We have a passion for what we do, and we won't stop until we've delivered on our promises. We're committed to the success of our customers. That's why we think harder and dream bigger - so our customers can go even further than they ever imagined possible.
This is a unique opportunity to join a company that has 30+ years of proven success with a long runway of more success. Our people make PROS stand out from the rest. If you want to be a part of something truly extraordinary, come help us shape the future of how companies compete and win in their markets.

Work Environment: Most work activities are performed in an office or home-office environment and require little to moderate physical exertion. Work activities may require periods of extended hours, critical deadlines and stressful situations. To successfully complete the tasks of this position, individuals must be able to communicate clearly (in writing and orally), comprehend business terminology, and interpret numerical data. This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. It is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities or working conditions associated with the position.
          

This week's Postgres news


#330 — November 6, 2019

Read on the Web

Postgres Weekly

Postgres 12 Initial Query Performance Impressions — We’ve been getting excited about Postgres 12 for ages here, but how does it really perform? Kaarel set up a stress test with various levels of scale and... it’s a mixed bag with no obvious conclusions to draw.

Kaarel Moppel

Building Columnar Compression in a Row-Oriented Database — How Timescale has achieved 91%-96% compression in the latest version of their TimescaleDB time-series data extension for Postgres.

Timescale

Hands-On PostgreSQL Training with Experts — Special rate for hands on PostgreSQL training with local 2ndQuadrant experts at 2Q PGConf 2019 in Chicago. Courses include: PostgreSQL Database Security, PostgreSQL Multi-master Replication, Postgres Optimization, PostgreSQL Business Continuity.

2ndQuadrant PostgreSQL Training sponsor

postgres-checkup: A Postgres Health Check Tool — A diagnostics tool that performs ‘deep analysis’ of a Postgres database’s health, detect issues, and produces recommendations for resolving any issues found. v1.3.0 has just been released.

Postgres.ai

Application Connection Failover using HAProxy with Xinetd — I’m a huge fan of haproxy, a powerful but easy to manage TCP and HTTP proxy/load balancer, so I’m looking forward to the rest of this series.

Jobin Augustine

Implementing K-Nearest Neighbor Space Partitioned Generalized Search Tree Indexes — K-nearest neighbor answers the question of “What is the closest match?”. PostgreSQL 12 can answer this question, and use indexes while doing it.

Kirk Roybal

Installing the PostgreSQL 12 Package on FreeBSD — You have to do some work since the final release of Postgres 12 isn’t in the quarterly package update yet.

Luca Ferrari

Installing Postgres on FreeBSD via Ansible

Luca Ferrari

📂 Code and Projects

PostgREST 6.0: Serve a RESTful API from Your Postgres Database — It’s not new, but it’s a mature project that’s been doing the rounds on social media again this week, so let’s shine a spotlight on it again :-)

Joe Nelson et al.

Take the Guesswork Out of Improving Query Performance — Based on the query plan, pgMustard offers you tips to make your query faster. Try it for free.

pgMustard sponsor

Managing PostgreSQL's Partitioned Tables with Ruby — pg_partition_manager is a new gem for maintaining partitioned tables that need to be created and dropped over time as you add and expire time-based data in your app.

Benjamin Curtis

Pgpool-II 4.1.0 Released — Adds connection pooling and load balancing to Postgres. 4.1 introduces statement level load balancing and auto failback.

Pgpool Global Development Group

supported by

💡 Tip of the Week

Putting multiple LIKE patterns into an array

A simple way to perform arbitrary searches over the contents of columns is by using the LIKE clause in your queries. For example, in a table of blog posts, this query could find all posts with a title containing the string 'Java':

SELECT * FROM posts WHERE title LIKE '%Java%';

If you want to create more elaborate queries, things can soon become unwieldy:

SELECT * FROM posts WHERE title LIKE '%Java%' OR title LIKE '%Perl%' OR title LIKE '%Python%';

Postgres supports two SQL operators called ANY (SOME is an alias meaning the same thing) and ALL that can be used to perform a single check across a set of values, and we can use this with LIKE queries.

ANY and ALL are more commonly used with subqueries, but we can put multiple LIKE match patterns into an array and then supply this to ANY or ALL like so:

SELECT * FROM posts WHERE title LIKE ANY(ARRAY['%Java%', '%Perl%', '%Python%']);

There's also a way to write array literals in a shorter style, if you prefer:

SELECT * FROM posts WHERE title LIKE ANY('{%Java%,%Perl%,%Python%}');

Naturally, while these queries will find any rows where title matches against any of the supplied patterns, you could also use ALL to ensure you only get back titles which contain all of the patterns.
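For instance, using the same hypothetical posts table as the earlier examples, the ALL form (returning only titles that contain every pattern) looks like:

SELECT * FROM posts WHERE title LIKE ALL(ARRAY['%Java%', '%Perl%', '%Python%']);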

This week’s Tip of the Week is sponsored by DigitalOcean. Find out how engineers at DigitalOcean built a scalable marketplace for developers on top of their managed Kubernetes service.

🗓 Upcoming Events

  • PG Down Under (November 15 in Sydney, Australia) — The second outing for this annual, Australian Postgres conference.
  • 2Q PGCONF 2019 (December 4-5, 2019 in Chicago) — A conference dedicated to exchanging knowledge about the world’s most advanced open source database: PostgreSQL
  • PgDaySF (January 21, 2020 in San Francisco) — Bringing the PostgreSQL international community to the heart of San Francisco and Silicon Valley.
  • PgConf.Russia (February 3-5, 2020 in Moscow, Russia) — One day of tutorials and two days of talks in three parallel sessions.
  • PGConf India (February 26-28, 2020 in Bengaluru, Karnataka, India) — A dedicated training day and a multi-track two-day conference.
  • pgDay Paris 2020 (March 26, 2020 in Paris, France) — Learn more about the world’s most advanced open source database among your peers.

          

lnxw48a1: lnxw48a1 favorited something by sean

lnxw48a1 favorited something by sean: While I have no qualms about Apache as a license, the BSL effectively says "You can't use our software to produce a competing service", which is actually a huge restriction on what you can do with the source code.

This is really fucking bad. It reeks of a company that wants to double down on its IP because they're struggling to balance running a business with being true to their Open Source roots.
          

WinUI 3.0 with Ryan Demopoulos

What's happening with Windows client-side development? Carl and Richard talk to Ryan Demopoulos about WinUI 3.0, the next version of the WinUI stack, which represents a major shift in how Windows applications are going to be built and supported in the future. Ryan starts the conversation focused on the current WinUI 2, which is open source, but largely focuses only on UWP. WinUI 3 expands the horizons to support .NET Core and more - the alpha bits shipped at Ignite, check it out!

          

Field Programmable Gate Array Engineering Analyst

Job Number: R0061036 Field Programmable Gate Array Engineering Analyst Key Role: Work as part of a diverse contract team to support a DoD scientific and technical intelligence client and provide the required expertise to perform in-depth technical analysis of the capabilities and vulnerabilities in foreign military systems. Author concise scientific and intelligence assessments in conformance with intelligence community (IC) analytic standards, which convey the results of that analysis to the client's DoD and IC partners and consumers. Conduct analysis on the Field Programmable Gate Arrays (FPGA) associated with military systems and communications devices and collaborate with multiple analytic partners across DoD and the IC. Basic Qualifications: -5 years of experience with the design, development, or technical analysis of field programmable gate array technology -2 years of experience with working in a classified environment -Knowledge of embedded hardware and software that allows FPGAs to communicate and function -Knowledge of programming languages, including Python -Ability to use Microsoft Office for word processing, developing, and maintaining spreadsheets and databases and preparing presentations -TS/SCI clearance -BA or BS degree Additional Qualifications: -Experience with all-source scientific and technical intelligence research, analysis and production or analytic support to DoD and US government agencies -Experience with hardware definition languages (HDL) -Experience with secure coding practices -Experience with reverse engineering, including Ida Pro or open source alternative -Knowledge of military systems and telecommunication systems design -Possession of excellent oral and written communication skills -BA or BS degree in EE, Computer Engineering, or a related field preferred; MS degree in EE or Computer Engineering a plus Clearance: Applicants selected will be subject to a security investigation and need to meet eligibility requirements for access to 
classified information; TS/SCI clearance is required. We’re an EOE that empowers our people—no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, veteran status, or other protected characteristic—to fearlessly drive change. #LI-AH1, APC3, CJ1
          

It's About Time Oracle and VMware Partnered Up

eWEEK NEWS ANALYSIS: VMware and Oracle—sworn enemies for years, though not in the open source world—are now playing nicely together in catering to all those thousands of customers globally who are using them both in the data center and in the cloud.
          

Java Developer

RESPONSIBILITIES: Kforce has a client in search of a Java Developer in Chandler, Arizona (AZ).

Start your career off on the right foot with one of 'America's Top 500 Companies', as awarded by Forbes. Our client is known as a place where people feel included, valued, supported and respected. Innovative thinking and industry-leading technology allow their associates to thrive and grow in their careers. And with a priority placed on their culture and company principles, their commitment to diversity, ethics and the communities in which they operate is their driving force.

REQUIREMENTS:


  • 5+ years of Java development with good Object Oriented foundation
  • Experience and familiarity building modern Spring applications with Spring Boot; Strong background with Spring and related projects (Boot, MVC, JDBC)
  • Experience with Pivotal Cloud Foundry and deploy cloud native applications to PCF
  • Experience with architecting and implementing 12 factor apps
  • Experience with designing and implementing highly available, scalable distributed systems
  • Microservices design and implementation
  • Development of Java Web Services REST/SOAP/WSDL/XML/SOA
  • Unit testing frameworks using Junit and Mockito
  • Employs agile development practices and mindset for design, architecture, coding, testing, managing source code, continuous delivery practices and quality reviews
  • Participates in sprint planning; Provides work estimates to deliver product stories; Owns development stories
  • DevOps, Continuous Delivery process refinement and tool sets such as Jenkins, monitoring tools such as Splunk, AppDynamics, etc.
  • Experience with open source tools and technologies
  • Oracle database development and tuning experience, SQL
  • Gradle or Maven build tools
  • Source control systems such as Git
  • Continuous integration and deployment using Jenkins or any other similar tool
  • Experience with Kafka


    Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
          

Intelligence Planner

JOB SUMMARY



Responsible for serving as a J2 Intelligence Planner providing support to USSOCOM J2 in Intelligence Planning/Joint Intelligence Preparation of the Operational Environment/Intelligence Preparation of the Battlefield (IP/JIPOE/IPB). Provide subject matter expertise; conduct strategic studies and assessments for analysis; and perform assessment and synchronization of the UCP CWMD mission during the transfer from USSTRATCOM to USSOCOM to more effectively synchronize the global DOD CWMD mission. Conduct CWMD mission analysis and planning, and oversee the transfer of resources to USSOCOM.

* Work with Intelligence Analysts and Operational Planners to develop CWMD Joint Intelligence Preparation of the Operational Environment (JIPOE).

* Synchronize full-intelligence support to the CWMD global campaign plan.

* Integrate Intelligence products/analysis into Joint Planning Group (JPG) to support campaign plan development.

* Travel to various CONUS and OCONUS non-hazardous locations as required to ensure support of the planning process.

* Performs other duties as assigned.

* Provide focused intelligence planning and analysis of priority threats related to the CWMD mission space in accordance with national level and DoD level guidance.

* Provide intelligence assessments and update analysis based on indicators and warning associated with the priority threats and emerging threats in the CWMD mission space in accordance with national level and DoD level guidance.

* Utilize the SOF Geospatial Enterprise (SGE) to develop geospatial displays to integrate operational data, intelligence reporting and open source data to create geospatial representations of the operating environment, and execute briefings to decision makers and other action officers.

* Develop User Defined Operational Pictures (UDOP) utilizing SGE and C2IE to collate and tailor geospatial representations of the operating environment to inform decision makers within the CWMD Fusion Cell and the Commander of USSOCOM.

* Support the USSOCOM effort to develop a global intelligence and operations picture (CIP/COP) in accordance with the priorities supporting United States Special Operations Command (USSOCOM).

* Integrate the Department of Defense Functional Campaign Plan for Countering Weapons of Mass Destruction (DoD FCP CWMD) into the Joint Staff directed Command and Control of the Information Environment (C2IE) platform in order to integrate the CWMD Fusion Cell into the Joint Staff Global Integration Process.

* Provide technical and analytical support to the Government Lead of the Situational Awareness section at the CWMD Coordination Conference; participate in multiple information sessions and working groups supporting the Situational Awareness section.

MINIMUM QUALIFICATIONS



Bachelor's Degree or ten (10) years of relevant experience OR an equivalent combination of relevant training and/or military experience

DEPARTMENTAL REQUIREMENTS



* Must possess an active Top Secret Clearance with Sensitive Compartmented Information (TS/SCI).

* Two (2) years of experience in doctrinal intelligence support leveraging intelligence community architectures.

* Two (2) years of experience in intelligence community analysis, production, and collection and warning requirements.

* Two (2) years of experience in CCMD plans support for Counter Weapons of Mass Destruction (CWMD) mission set required.

* Former experience as a senior O-4/O-5 or mid-grade GG-13 (step 5 or higher) with at least two (2) years of CWMD planning experience at the 4-Star CCMD level required.

* Ten (10) years relevant CWMD experience.



JOB CATEGORY



Administrative

ADVERTISED SALARY



$60,000 to $80,000

WORK SCHEDULE



Begin time: 8:30 AM
End time: 5:00 PM

WORKING CONDITION(S)



* Work in high, precarious places

* Overnight or extended duration travel

PRE-EMPLOYMENT REQUIREMENTS



* Clearance by the Department of Defense

* Criminal Background Check

* Driver's License Check

* Drug Screening

* Fingerprinting Check

* Reference Checks



OTHER INFORMATION



* Ability to travel locally and nationally.

* Ability to travel internationally.

* This is a time-limited appointment which may be terminated at any time with 30 days' notice.



HOW TO APPLY



Prospective Employee

If you have not created a registered account, you will be asked to create a username and password for use of the system. It is recommended that you provide an active/valid e-mail account as that will be the main source of communication regarding your status within the process. In this account, you are able to track your applicant status in "My Applications".

In order to be considered eligible for the position as an internal candidate, departmental staff must meet minimum requirements of the position, be in good performance standing, and have been continuously employed at the University for at least six months.

Before you begin the process, we recommend that you are prepared to attach electronic copies of your resume, cover letter or any other documents within the application process. It is recommended that you combine your cover letter and resume/curriculum vitae into one attachment. Attached documents should be in Microsoft Word or PDF format. All applicants are required to complete the online application including work history and educational details (if applicable), even when attaching a resume.

*This posting will close at 11:59 pm of the close date.

HOW TO APPLY



Current Employee

As a current employee, you must log into Employee Self Service (ESS) to apply for this and any other internal career opportunity of interest. In this account, you are able to track your applicant status in "My Applications".

In order to be considered eligible for the position as an internal candidate, departmental staff must meet minimum requirements of the position, be in good performance standing, and have been continuously employed at the University for at least six months.

Before you begin the process, we recommend that you are prepared to attach electronic copies of your resume/ curriculum vitae, cover letter or any other documents within the application process. It is recommended that you combine your cover letter and resume into one attachment. Attached documents should be in Microsoft Word or PDF format.

*This posting will close at 11:59 pm of the close date.

DISCLOSURES



Clery Notice

In compliance with the Jeanne Clery Disclosure of Campus Security Policy and Crime Statistics Act, the University Police department at Florida International University provides information on crimes statistics, crime prevention, law enforcement, crime reporting, and other related issues for the past three (3) calendar years. The FIU Annual Security report is available online at: https://police.fiu.edu/wp-content/uploads/sites/54/2016/04/Campus_Security_Report__Safety_Guide.pdf.

To obtain a paper copy of the report, please visit the FIU Police Department located at 885 SW 109th Avenue, Miami, FL, 33199 (PG5 Market Station).



Pay Transparency

Florida International University will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant.

FIU is a member of the State University System of Florida and an Equal Opportunity, Equal Access Affirmative Action Employer all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
          

Data Scientist Engineer

Octo is currently seeking a Platform Security Engineer to join a growing team on an exciting and highly visible project for a DoD customer. The project you will be working on is to define and design the data architecture and taxonomy in preparation for conducting extensive analysis of the data ingested via Air Force existing legacy applications to a more evolvable architecture that can better leverage a cloud environment to deliver better technology, reduce program sustainment costs, and achieve higher system reliability. Our approach is to transform legacy applications to be cloud native and reside on a Platform as a Service (PaaS). Additionally, we modernize current applications by breaking them down into loosely coupled microservices and leveraging a continuous integration / continuous delivery pipeline to enable an agile DevOps strategy. Octo Data Scientists on this project will have an opportunity to receive 6+ months of Pivotal Cloud Foundry training as part of the standard on-boarding process for this project.

You: As a Data Scientist at Octo, you will be involved in the analysis of unstructured and semi-structured data, including latent semantic indexing (LSI), entity identification and tagging, complex event processing (CEP), and the application of analysis algorithms on distributed, clustered, and cloud-based high-performance infrastructures. Exercises creativity in applying non-traditional approaches to large-scale analysis of unstructured data in support of high-value use cases visualized through multi-dimensional interfaces. Handles processing and index requests against high-volume collections of data and high-velocity data streams.
Has the ability to make discoveries in the world of big data.??Requires strong technical and computational skills??- engineering, physics, mathematics,??coupled with??the ability??to code design, develop, and deploy sophisticated applications using advanced unstructured and semi-structured data analysis techniques and utilizing high-performance computing environments. Has the ability to utilize advance tools and computational skills to interpret, connect, predict and make discoveries in complex data and deliver recommendations for business and analytic decisions.??Experience with software development, either an open-source enterprise software development stack (Java/Linux/Ruby/Python) or a Windows development stack (.NET, C#, C++). Experience with data transport and transformation APIs and technologies such as JSON, XML, XSLT, JDBC, SOAP and REST. Experience with Cloud-based data analysis tools including Hadoop and Mahout, Acumulo, Hive, Impala, Pig, and similar. Experience with visual analytic tools like Microsoft Pivot, Palantir, or Visual Analytics. Experience with open source textual processing such as Lucene, Sphinx, Nutch or Solr. Experience with entity extraction and conceptual search technologies such as LSI, LDA, etc. Experience with machine learning, algorithm analysis, and data clustering.Us?We were founded as a fresh alternative in the Government Consulting Community and are dedicated to the belief that results are a product of analytical thinking, agile design principles and that solutions are built in collaboration with, not for, our customers. This mantra drives us to succeed and act as true partners in advancing our client?s missions.What we?d like to see?
  • Full-stack software development experience with a variety of server-side languages such as Java, C#, PHP, or Javascript (NodeJS)
  • Experience with modern front-end frameworks like React, Vue, or Angular
  • Intimate knowledge of agile and lean philosophies and experience successfully leading software teams in the practice of these philosophies
  • Experience with Continuous Delivery and Continuous Integration techniques using tools like Jenkins or Concourse
  • Experience with test-driven development and automated testing practices
  • Experience with data analytics, data science, or data engineering, MySQL and/or Postgres, GraphQL, Redit, and/or Mongo
  • Experience with building and integrating at the application and database level REST/SOAP APIs and messaging protocols and formats such as Protobuf, gRPC, and/or RabbitMQ
  • Experience with Pivotal Cloud Foundry
  • Experience with Event/Data Streaming services such as Kafka
  • Experience with Enterprise Service Bus and Event Driven Architectures
  • Experience with prototyping front-end visualization with products such as ElasticStack and/or Splunk
  • Strong communication skills and interest in a pair-programming environment

Bonus points if you:

  • Possess at least one of the Agile Development Certifications
    • Certified Scrum Master
    • Agile Certified Practitioner (PMI-ACP)
    • Certified Scrum Professional
  • Have proven experience writing and building applications using a 12-factor application software architecture, microservices, and APIs
  • Are able to clearly communicate and provide positive recommendations of improvements to existing software applications

Years of Experience: 5 years or more
Education: Associates in a Technical Discipline (Computer Science, Mathematics, or equivalent technical degree)
Clearance: SECRET
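As a purely illustrative sketch of the JSON data-transformation experience this listing calls for (the record fields and the normalization rules here are invented for the example), in Python:

```python
import json

# Hypothetical input: newline-delimited JSON records, the kind of
# payload a REST endpoint or transport pipeline might hand over.
raw = '\n'.join([
    '{"id": 1, "name": "alpha", "score": "42"}',
    '{"id": 2, "name": "beta", "score": "17"}',
])

def transform(line):
    """Parse one JSON record and normalize its field types."""
    rec = json.loads(line)
    rec["score"] = int(rec["score"])   # cast the string field to int
    rec["name"] = rec["name"].upper()  # normalize casing
    return rec

records = [transform(line) for line in raw.splitlines()]
print(json.dumps(records))
```

The same shape of per-record cleanup scales out on the Hadoop/Spark tooling the listing names.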
          

An introduction to monitoring with Prometheus

 Cache   

Metrics are the primary way to represent both the overall health of your system and any other specific information you consider important for monitoring and alerting or observability. Prometheus is a leading open source metric instrumentation, collection, and storage toolkit built at SoundCloud beginning in 2012.
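As a rough sketch of what that collection model looks like: Prometheus scrapes targets that publish samples in a plain-text exposition format such as `http_requests_total{method="get"} 1027`. The helper below is a hand-rolled stand-in for illustration, not the official client library:

```python
def render_metric(name, value, labels=None):
    """Format one sample in Prometheus' plain-text exposition style,
    e.g. http_requests_total{method="get"} 1027."""
    if labels:
        body = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        return f"{name}{{{body}}} {value}"
    return f"{name} {value}"

print(render_metric("http_requests_total", 1027, {"method": "get"}))
print(render_metric("up", 1))
```

In real deployments an instrumented service exposes many such lines on a `/metrics` endpoint and Prometheus pulls them on a schedule.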


          

My first contribution to open source: Make a fork of the repo

 Cache   

Previously, I explained how I ultimately chose a project for my contributions. Once I finally picked that project and a task to work on, I felt like the hard part was over, and I slid into cruise control. I knew what to do next, no question. Just clone the repository so that I have the code on my computer, make a new branch for my work, and get coding, right?
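Those first steps (get the code locally, make a work branch) come down to a couple of git commands. A minimal sketch, using a throwaway local repository since the real fork URL depends on your project; in practice you would `git clone` your fork instead of `git init`:

```python
import subprocess
import tempfile

# Throwaway directory standing in for your fork's clone; a real
# contribution would start with `git clone <url-of-your-fork>`.
workdir = tempfile.mkdtemp()

def git(*args):
    """Run a git command in the work directory and return its stdout."""
    result = subprocess.run(["git", *args], cwd=workdir, check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

git("init")                      # stand-in for cloning the fork
git("checkout", "-b", "my-fix")  # new branch for this contribution
print(git("symbolic-ref", "--short", "HEAD"))  # prints the current branch
```

Doing the work on a dedicated branch keeps your fork's default branch clean for syncing with upstream.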


          

Software Engineer - AWS/DevOps (572) with Security Clearance

 Cache   
Must already have TS/SCI clearance (with Full Scope Polygraph) used in the past 24 months. 1-3 year US government contract. The Sponsor runs a portfolio of COTS products which together make up the Community's and the Sponsor's Access Control Services. The Sponsor is increasing its mission on the multifabric environments, particularly in relation to supporting the Sponsor's Open Source Data Layer Services and the Sponsor's Internet Network, to include O365 Integration. This JITR will establish a team to stand up the full suite of capabilities on the multifabric side to support these missions. These teams will be guided and informed by the high-side choices, and will be compliant with the Sponsor's Architecture; however, these capabilities will be deployed independent of the high-side baselines, in order to establish the capability, and will be brought together over time. Deliveries could be deployed on other networks supporting the Multi Fabric Initiative Strategies, and the target deployments for this JITR may be on any multifabric environment. The successful offeror will have demonstrated experience with establishing and accrediting COTS and Identity Access Management technology in the Sponsor's environment. The contractor team shall have the following required skills and demonstrated experience:
- Demonstrated experience with Container Technology such as Docker, Kubernetes, etc., or commitment to receive training within 45 days of award
- Demonstrated experience with OpenShift Technology, or commitment to receive training within 45 days of award
- Demonstrated experience working with DevOps
- Demonstrated experience working with Amazon Web Services Environments
- Demonstrated experience collaborating with management, IT customers and other technical and non-technical staff and contractors at all levels.
- Demonstrated experience working with ICD 508 Accessibility compliance
- Demonstrated experience working with COTS products in Containers deployment methods
- Demonstrated experience working with Amazon Web Services environments, including S3, EMR, SQS, and SNS, to design, develop, deploy, maintain, and monitor web applications within AWS infrastructures.
- Demonstrated experience providing technical direction to software and data science teams.
- Demonstrated experience with Apache Spark.
- Demonstrated experience with PostgreSQL.
- Demonstrated experience working with RDS databases.
- Demonstrated experience developing complex data transformation flows using graphical ETL tools.
- Demonstrated experience engineering large scale data-acquisition, cleansing, transforming, and processing of structured and unstructured data.
- Demonstrated experience translating product requirements into system solutions that take into account technical, schedule, cost, security, and policy constraints.
- Demonstrated experience working in an agile environment and leading agile projects.
- Demonstrated experience providing technical direction to project teams of developers and data scientists who build web-based dashboards and reports.
          

An overview of Skaffold for Kubernetes development

 Cache   


A year and a half ago, on March 5, 2018, Google released the first alpha version of its open source CI/CD project called Skaffold, whose goal is "simple and repeatable Kubernetes development," so that developers can focus on development itself rather than on administration. What makes Skaffold interesting? As it turns out, it has a few aces up its sleeve that can make it a powerful tool for developers, and perhaps for operations engineers as well. Let's get acquainted with the project and its capabilities.


          

ApacheCon 2019 keynote: James Gosling's journey to open source

 Cache   

At ApacheCon North America 2019 in Las Vegas, James Gosling spoke about his personal journey to open source. Gosling's key takeaways were: open source lets programmers learn by reading source code, developers should pay attention to intellectual property rights to avoid abuse, and projects can take on a life of their own.

By Anthony Alford Translated by Roberto Ueti
          

Cloud Software Engineer

 Cache   
Cisco Careers - San Jose, CA

Software Engineer, Cloud Security

About Our Business Group: Cisco Cloud Security is a leading provider of network security services, enabling the world to connect to the Internet with confidence on any device, anywhere, anytime. The Secure Internet Gateway team is building the next generation of firewall, proxy and inspection services as highly-available distributed systems using cloud engineering practices. We are looking for talented engineers to help us launch the next generation of cloud-delivered security services and deliver a stronger offering with unprecedented customer experience. Our engineering team is composed of highly skilled individuals who are comfortable working in a fast paced and technically challenging environment. Members are involved with all stages of the product development process, from solving complex engineering problems to working directly with customers.

About Us: We are building the future of cloud-delivered security at Cisco. Our team is responsible for the development and operations of security applications built on the Cisco Umbrella platform. Our cloud security services are mission critical to the success of the next evolution of Cisco Security. Our product is the Secure Internet Gateway, a new security paradigm using cloud engineering to change the way security is managed and delivered.

About You: You have a strong interest in developing cloud-native solutions to network and security problems. You have strong programming skills and are passionate about both building and running applications. You are a natural problem-solver and troubleshooter. You have the ability to think creatively and are self-motivated. You love working together as a team and have a desire to speak up and share ideas. You have an engineering degree (or equivalent) with 2+ years of software development experience. You have experience with: Linux internals and the open source stack; networking, routing/switching, L3/L4, TCP/IP; C programming; cloud-native architectures. Additional desirable knowledge: Python, Golang, Bash and Linux; security products such as firewalls, proxies, inspection engines, classification engines; continuous integration and continuous deployment; agile practices and iterative development.

About Cisco: We connect everything: people, processes, data, and things. We innovate everywhere, taking bold risks to shape the technologies that give us smart cities, connected cars, and handheld hospitals. And we do it in style with unique personalities who aren't afraid to change the way the world works, lives, plays and learns. We are thought leaders, tech geeks, pop culture aficionados, and we even have a few purple haired musicians. We celebrate the creativity and diversity that fuels our innovation. We are dreamers and we are doers. We Are Cisco.
          

Modern Data Engineer

 Cache   
About Us
Interested in working for a human-centered technology company that prides itself on using modern tools and technologies? Want to be surrounded by intensely curious and innovative thinkers?

Seeking to solve complex technical challenges by building products that work for people, meet and exceed the needs of businesses, and work elegantly and efficiently?

Modeling ourselves after the 1904 World's Fair, which brought innovation to the region, 1904labs is seeking top technical talent in St. Louis to bring innovation and creativity to our clients.

Our clients consist of Fortune 500 and Global 2000 companies headquartered here in St. Louis. We partner with them on complex projects that range from reimagining and refactoring their existing applications, to helping to envision and build new applications or data streams to operationalize their existing data. Working in a team-based labs model, using our own flavor of #HCDAgile, we strive to work at the cutting edge of technology's capabilities while solving problems for our clients and their users.

The Role
As a Modern Data Engineer you would be responsible for developing and deploying cutting edge distributed data solutions. Our engineers have a passion for open source technologies, strive to build cloud first applications, and are motivated by our desire to transform businesses into data driven enterprises. This team will focus on working with platforms such as Hadoop, Spark, Hive, Kafka, Elasticsearch, SQL and NoSQL/Graph databases as well as cloud-based data services.

Our teams at 1904labs are Agile, and we work in a highly collaborative environment. You would be a productive member of a fast paced group and have an opportunity to solve some very complex data problems.

Requirements
3+ years of progressive experience as a Data Engineer, BI Developer, Application Developer or related occupation.


  • Agile: Experience working in an agile team oriented environment
  • Attitude / Aptitude: A passion for everything data with a desire to be at the cutting edge of technology and consistently deliver working software while always keeping an eye on opportunities for innovation.
  • Technical Skills (you have experience with 2 or more of these bullet points):






      • Programming in Java (Or similar JVM language such as Scala, Groovy, etc) and/or Python
      • Architecting and integrating big data pipelines
      • Working with large data volumes; this includes processing, transforming and transporting large scale data using technologies such as: MR/TEZ, Hive SQL, Spark, etc.
      • Have a strong background in SQL / Data Warehousing (dimensional modeling)
      • Have a strong background working with and/or implementing architecture for RDBMS such as: Oracle, MySQL, Postgres and/or SQLServer.

      • Experience with traditional ETL tools such as SSIS, Informatica, Pentaho, Talend, etc.
      • Experience with NoSQL/Graph Data Modeling and are actively using Cassandra, HBase, DynamoDB, Neo4J, Titan, or DataStax Graph
      • Installing/configuring a distributed computing/storage platform, such as Apache Hadoop, Amazon EMR, Apache Spark, Apache Hive, and/or Presto
      • Working with one or more streaming platforms, such as Apache Kafka, Spark Streaming, Storm, or AWS Kinesis
      • Working knowledge of the Linux command line and shell scripting
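As a toy illustration of the map/shuffle/reduce pattern that engines like MR/TEZ and Spark implement at scale (this in-memory word count is only a sketch of the concept, not any of those engines):

```python
from collections import defaultdict
from itertools import chain

lines = ["spark hive kafka", "kafka spark spark"]

# Map: emit (word, 1) pairs from every input line.
mapped = chain.from_iterable(((w, 1) for w in line.split()) for line in lines)

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: sum the counts per word.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts)
```

Distributed engines parallelize exactly these three phases across a cluster, with the shuffle moving data between nodes.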







        Desired Skills



        • Analytics: Have working knowledge of analytics/reporting tools such as Tableau, Spotfire, Qlikview, etc.
        • Open Source: Are working with open source tools now and have a background in contributing to open source projects.


          Perks




          • Standard Benefits Program (medical, dental, life insurance, 401(k), professional development and education assistance, PTO).
          • Innovation Hours - Ten percent (10%) of our work week is set aside to work on our own product ideas in a highly collaborative and supportive environment. The best part: The IP remains your own. We are a high-growth culture and we know that when we help people focus on personal and professional growth, collectively, we can achieve great things.
          • Dress Code - we don't have one


            This job is located in St. Louis, MO. While we would prefer local candidates your current location is not the most important factor; please help us understand why you would like to call St. Louis home if you would be relocating.
          

Software Engineering Manager

 Cache   
THE CHALLENGE
Eventbrite's business continues to grow and scale rapidly, powering millions of events. Event creators and event goers need new tools and technologies that empower them to create/have some of the most memorable of life's moments: live experiences. One of our most important elements in achieving our company goals is our people. As an engineering manager you're responsible for the careers, productivity, and quality (among other things) of Eventbrite's builders.

THE TEAM
We're a people-focused Engineering organization: the women and men on our team value working together in small teams to solve big problems, supporting an active culture of mentorship and inclusion, and pushing themselves to learn new things daily. Pair programming, weekly demos, tech talks, and quarterly hackathons are at the core of how we've built our team and product. We believe in engaging with the community, regularly hosting free events with some of the top technical speakers, and actively contributing to open source software (check out Britecharts as an example!). Our technology spans across web, mobile, API, big data, machine learning, search, physical point of sale, and scanning systems. This role is based in Eventbrite's Nashville office. We're one of 5 Eventbrite engineering offices around the world. For a little taste of what the team is like and what Eventbrite's Nashville office has: http://bit.ly/NashEng

THE ROLE
We're looking for a people-focused manager to help support the career growth of our engineers and collaborate on improvement within our organization.

THE SKILL SET





    • Demonstrated experience in recruiting a well-rounded, diverse technical team
    • You have a strong technical background and can contribute to design and architectural discussions - coach first, player second.
    • You support your team in providing context and connecting it with how the team impacts the organization
    • Experience working with a highly collaborative environment, coaching a team who ships code to production often
    • With the help of other engineering managers, you develop a sustainable, healthy work environment which is both encouraging and challenging
    • In a leadership/management position for 2-5 years with demonstrated growth of high-functioning engineering teams



      ABOUT EVENTBRITE
      Eventbrite is a global ticketing and event technology platform, powering millions of live experiences each year. We empower creators of events of all shapes and sizes - from music festivals, experiential yoga, political rallies to gaming competitions - by providing them the tools and resources they need to seamlessly plan, promote, and produce live experiences around the world. Last year, the team served 795,000 creators hosting nearly 4 million experiences across 170 countries. Meet some of the Britelings that make it happen.

      IS THIS ROLE NOT AN EXACT FIT?
      Sign up to keep in touch and we'll let you know when we have new positions on our team.

      Eventbrite is a proud equal opportunity/affirmative action employer supporting workforce diversity. We do not discriminate based upon race, ethnicity, ancestry, citizenship status, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), marital status, registered domestic partner status, caregiver status, sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, genetic information, military or veteran status, mental or physical disability, political affiliation, status as a victim of domestic violence, assault or stalking, or other applicable legally protected characteristics. Applicant Privacy Notice

          

Google’s OpenTitan aims to Create an Open Source Secure Enclave

 Cache   

Google wants Android phones to have a Secure Enclave chip like iPhones. Its OpenTitan project aims to help design an open source one.

OpenTitan is loosely based on a proprietary root-of-trust chip that Google uses in its Pixel 3 and 4 phones. But OpenTitan is its own chip architecture and extensive set of schematics developed by engineers at lowRISC, along with partners at ETH Zurich, G+D Mobile Security, Nuvoton Technology, Western Digital, and, of course, Google.

The consortium will use community feedback and contributions to develop and improve the industry-grade chip design, while lowRISC will manage the project and keep suggestions and proposed changes from going live haphazardly.

You can view the OpenTitan Github repo here, but it’s not fully fleshed out yet.




          

DevOps Engineer

 Cache   
HemaTerra Technologies - Baltimore, MD 21230 - $90,000 - $120,000 a year

HemaTerra Technologies is seeking a DevOps engineer who will be responsible for supporting the delivery of cloud-based software platforms, services and solutions to our customers. Candidates in this role will be leading engineering efforts and advanced operational support in the areas of delivery of cloud-based solutions, customer support, process improvement, and product enhancements. This is a hands-on role and strong technical skills are expected.

Key Job Functions: Maintain and execute the process for application code deployments. Automate monitoring tools to monitor system health and reliability to support high uptime requirements. Troubleshoot production deployments and performance issues.

Specialized Knowledge & Skills: Minimum 3 years of experience of Linux administration, including system designs, configurations, maintenance, upgrades and administration. Experience with MySQL/MariaDB databases. Experience with AWS hosting (EC2, RDS, Aurora, etc.). Experience with enterprise monitoring tools. Experience creating cloud infrastructure. Ability to perform defect root cause analysis and provide resolution. Ability to use a variety of open source technologies and cloud services. Scripting experience with Linux shell environments. Experience in the installation, configuration, and maintenance of both open source licensed and commercial-off-the-shelf software tools. Experience with Git. Skills that are pluses: Ansible, containers, Jenkins, AWS CloudWatch.

Visit our website at ***************** to learn more about us. We are located in Federal Hill with free parking and remodeled offices. We offer a laid back atmosphere, casual dress, paid vacation, 401k and health insurance.

Job Type: Full-time. Salary: $90,000.00 to $120,000.00 /year. Experience: DevOps: 2 years (Preferred). Education: Bachelor's (Preferred). Location: Baltimore, MD (Required). Additional Compensation: Bonuses. Work Location: One location. Benefits: Health insurance, dental insurance, vision insurance, retirement plan, paid time off, tuition reimbursement.

This Job Is Ideal for Someone Who Is: Autonomous/Independent (enjoys working with little direction); Adaptable/flexible (enjoys doing work that requires frequent shifts in direction). This Company Describes Its Culture as: Innovative (innovative and risk-taking); Aggressive (competitive and growth-oriented).
          

Xaxxon’s OpenLIDAR sensor is tiny, inexpensive and open source

 Cache   

Xaxxon’s OpenLIDAR Sensor is a rotational laser scanner with open software and hardware, intended for use with autonomous mobile robots and simultaneous localization and mapping (SLAM) applications. Xaxxon Technologies is a Vancouver-based developer and manufacturer of open source robotic devices. Its most recent offering is a standalone OpenLIDAR sensor for robotic developers, educators and hobbyists. It consists of [...]

The post Xaxxon’s OpenLIDAR sensor is tiny, inexpensive and open source appeared first on SPAR 3D.


          


DevOps Engineer - Spaceflight Industries - Seattle, WA

 Cache   
Citizen or Green Card holder only. Experience with integrating open source software with internal development is highly desired. Experience with Python or GO.
From Spaceflight Industries - Fri, 25 Oct 2019 15:25:51 GMT - View all Seattle, WA jobs
          

Lead/Senior Software Engineer - Mr. Cooper - Milwaukee, WI

 Cache   
If you are comfortable with any modern open source OO architecture (Python, PHP, Node, Ruby), you'll do great. With releases about every two weeks (or less).
From Mr. Cooper - Wed, 23 Oct 2019 20:36:06 GMT - View all Milwaukee, WI jobs
          

🔧 #howto - Installing WordPress with Apache on Debian/Ubuntu and derivatives

 Cache   
We have already set up the LAMP stack on Debian/Ubuntu; now we want to open a website for our company, or simply create our personal portfolio. For this task we choose WordPress, a free and open source CMS usable by everyone, even the less experienced, given its simplicity.

In this guide we will see how to install WordPress behind the Apache web server.

Prerequisites

To install WordPress correctly on the distro you use, you need the LAMP stack. To find out how to add it on Debian, Ubuntu and derivatives, you can consult this guide.

To be more precise, the recommended prerequisites are the following:

  • Apache 2.4
  • MySQL 5.6 or MariaDB 10.0
  • PHP 7.3

Download

First of all, enter the directory where you want to host your WordPress website. (In this guide we will work in /var/www/wordpress/)

cd /var/www/

Once that's done, download the latest Italian version of WordPress using wget:

wget https://it.wordpress.org/latest-it_IT.tar.gz

Unpack the compressed archive:

tar -xzvf latest-it_IT.tar.gz

Enter the WordPress directory:

cd /var/www/wordpress/

Running the ls command, you will notice many files along with a few directories; but let's proceed in order.

Configuring Apache2

To be able to reach our WordPress site later on, we need to configure the web server we use, in this case Apache2, by creating a dedicated configuration file. Let's see how.

Create a new file in Apache's /sites-available/ directory:

sudo nano /etc/apache2/sites-available/wordpress.conf

Write the required content into the file:


<VirtualHost *:80>
        ServerName dominio.it

        ServerAdmin email@amministratore.com
        DocumentRoot /var/www/wordpress

        <Directory /var/www/wordpress>
            Options FollowSymLinks
            AllowOverride Limit Options FileInfo
            DirectoryIndex index.php
            Require all granted
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Exit the text editor and create a symbolic link in /sites-enabled/:

# For Debian and Ubuntu
sudo ln -s /etc/apache2/sites-available/wordpress.conf /etc/apache2/sites-enabled/

Enable our site:

sudo a2ensite wordpress.conf

Disable Apache's default page (if you haven't changed anything in it):

sudo a2dissite 000-default.conf

Reload Apache:

sudo systemctl reload apache2

Apache2 is now correctly configured to run WordPress.

Configuring the database

WordPress doesn't only need a directory on your server's hard disk to work properly: it also needs to write data to a database. Let's see how to create a user dedicated exclusively to the database the CMS will work on.

Creating a user

Log in to MySQL or MariaDB with the superuser account (or root, although that is not recommended):

sudo mysql -u nomeutentesuperuser -p

Create a new user:

CREATE USER nomeutente@localhost IDENTIFIED BY 'vostrapassword';

Creating a database

After creating the user, let's build the database:

CREATE DATABASE nomedatabasewordpress;

Grant the previously created user all the privileges it needs to work on the database without problems:

GRANT ALL PRIVILEGES ON nomedatabasewordpress.* TO nomeutente@localhost;

Inform MySQL or MariaDB of the changes and exit:

FLUSH PRIVILEGES;
EXIT;

Congratulations! You have correctly set up a user and a database for WordPress.

Configuring wp-config.php

We're not far from the actual installation of WordPress, but first we still need to fill in the configuration file, called wp-config.php.

First of all, copy the sample file:

cp wp-config-sample.php wp-config.php

Now let's tidy up the configuration file we just generated. As soon as we open it with our text editor we will notice some comments and parameters to modify, but let's proceed in order.

Change the WordPress database name to the one just created:

define('DB_NAME', 'nomedatabasewordpress');

Provide WordPress with the username and password of the database user generated earlier:

define('DB_USER', 'nomeutente');
define('DB_PASSWORD', 'password');

Now, there is nothing else to modify here; let's move further down to the "Authentication Unique Keys and Salts" section. For security reasons, it is highly recommended to generate strings to put in place of "Mettere la vostra frase unica qui" ("put your unique phrase here") in the appropriate values.

Generate the keys from this link, then copy and paste what WordPress provides, replacing the parameters already present. So this:

define('AUTH_KEY',         'Mettere la vostra frase unica qui');
define('SECURE_AUTH_KEY',  'Mettere la vostra frase unica qui');
define('LOGGED_IN_KEY',    'Mettere la vostra frase unica qui');
define('NONCE_KEY',        'Mettere la vostra frase unica qui');
define('AUTH_SALT',        'Mettere la vostra frase unica qui');
define('SECURE_AUTH_SALT', 'Mettere la vostra frase unica qui');
define('LOGGED_IN_SALT',   'Mettere la vostra frase unica qui');
define('NONCE_SALT',       'Mettere la vostra frase unica qui');

Will turn into:

define('AUTH_KEY',         'Value generated from the link');
define('SECURE_AUTH_KEY',  'Value generated from the link');
define('LOGGED_IN_KEY',    'Value generated from the link');
define('NONCE_KEY',        'Value generated from the link');
define('AUTH_SALT',        'Value generated from the link');
define('SECURE_AUTH_SALT', 'Value generated from the link');
define('LOGGED_IN_SALT',   'Value generated from the link');
define('NONCE_SALT',       'Value generated from the link');

Now, exit the text editor and open the WordPress installation file from our domain:

http://dominio.it/wp-admin/install.php

Configurazione di WordPress

Appena caricata la pagina di installazione di WordPress ci troveremo una cosa simile di fronte:

Pagina di installazione di WordPress

Inseriamo quanto richiesto (evitando di utilizzare nomi utenti come admin, amministratore, ecc e password deboli), clicchiamo il bottone Installa WordPress e... congratulazioni! Avete installato WordPress correttamente. Facciamo l'accesso con i nostri dati ed ecco a voi la dashboard.

Conclusion

Now it's up to you to learn how to use this powerful tool, setting everything up as you see fit. To further secure your website, you can consult this guide I wrote on the forum of FelineSec, a group that is part of the GenteDiLinux network.

For questions and clarifications, use our Telegram group.

Alexzan Mar, 11/05/2019 - 21:50
          

eyeOS

 Cache   
Description:
eyeOS is an open source web desktop following the cloud computing concept that leverages collaboration and communication among users. It acts as a platform for web applications written using the eyeOS Toolkit. It includes a Desktop environment with 67 applications and system utilities. Besides browser access on the PC it's also accessible by portable devices via its mobile front end.
Version:
1.9.0.1
Maintainer:
QNAPAndy & Christopher
Resource:
Download:

          

OpenLDAP

 Cache   
Description:
OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol (LDAP) and is packaged with phpLDAPadmin to provide the web-based administration functionality.
Version:
2.4.23
Maintainer:
QNAP Systems, Inc.
Resource:
Download:
ARM (x10/x12/x19 series) [TS-x10/ x12/ x19 / x19 P+ / x19P II series]
Intel x86 [TS-x39/ x59/ 509/ 809/ 809U-RP/ SS-x39/ x59 Pro+/ x59 ProII/ TS-x79 Series]

          

vtiger CRM

 Cache   
Description:
vtigerCRM is an open source CRM application ideal for small and medium businesses with comparable functionality to SugarCRM and Salesforce.com. It offers sales force & marketing automation, customer support service, inventory, security and activity management, group calendars, E-mail integration and many more features required for building a complete and production level CRM system.
Version:
5.2.0
Maintainer:
QNAP Systems, Inc.
Resource:
Download:

          

XDove

 Cache   
Description:
XDove, named after XMail & Dovecot, the two open source offerings combined to provide a complete set of email server functionality, is one-click installable on your QNAP NAS. XDove not only provides SMTP, POP3 and IMAP services, it also comes with a variety of features such as multiple virtual domains and accounts, and an AJAX webmail with extended functionality including personal folders, an address book, a calendar and real-time chat among users under the same mail domain. Besides the mail services, XDove offers scheduled backup and restore of your mailboxes from multiple domains, giving you extra protection on top of your RAID data redundancy.
Version:
1.3
Maintainer:
Ad"Novea & QNAPAndy
Resource:
Download:
ARM (x09 series) [TS-109/ 209/ 409/ 409U]
ARM (x10/x12/x19 series) [TS-x10/ x12/ x19 / x19 P+ / x19P II series]
Intel x86 (x39 series) [TS-x39/ x59/ 509/ 809/ 809U-RP/ SS-x39/ x59 Pro+/ x59 ProII/ TS-x79 Series]

          

Joomla

 Cache   
Description:
Joomla! is a free, open source content management system for publishing content on the world wide web and intranets. The system includes features such as page caching to improve performance, RSS feeds, printable versions of pages, news flashes, blogs, polls, website searching, and language internationalization.
Version:
1.5.20
Maintainer:
QNAP Systems, Inc.
Resource:
Download:

          

phpMyAdmin

 Cache   
Description:
phpMyAdmin is an open source tool written in PHP intended to handle the administration of MySQL over the Internet. Currently it can create and drop databases, create/drop/alter tables, delete/edit/add fields, execute any SQL statement, and manage keys on fields.
Version:
3.3.5
Maintainer:
QNAP Systems, Inc.
Resource:
Download:

          

MLDonkey

 Cache   
Description:
MLDonkey is a door to the "donkey" world: a multi-network, multi-platform open source P2P application used to exchange big files on the Internet. It presents most features of the basic Windows donkey client and additionally supports the Overnet, FastTrack, BitTorrent and Gnutella protocols (and more)! The core works best with Sancho, the premier graphical user interface for MLDonkey, which you can download here.
Version:
3.0.0
Maintainer:
Peter Piper
Resource:
Download:
ARM (x09 series) [TS-109/ 209]
ARM (x09 series) [TS-409/ 409U]
ARM (x10/x12/x19 series) [TS-x10/ x12/ x19 / x19 P+ / x19P II series]
Intel x86 [TS-x39/ x59/ 509/ 809/ 809U-RP/ SS-x39/ x59 Pro+/ x59 ProII/ TS-x79 Series]

          

Developers at maxcluster: an employee interview with Dennis

 Cache   

Developers are all nerds who surround themselves with dozens of monitors and live exclusively on cola and chips? Not at maxcluster. In this blog post we want to look at how our colleagues in development work and what they particularly like about the company and their tasks.

Dennis Hering has been a full-stack software developer at maxcluster since March 2018 and is thus involved in all development work at the company. In this interview he gave us a small insight into his day-to-day work, refuting every stereotype about developers along the way.

1. Who are you and what defines you?

My name is Dennis Hering, I'm 32 years old, married and have a three-year-old son. I'm a passionate developer, and even in my free time I can't be stopped when it comes to IT and technology: over the last two years I converted our house into a smart home that leaves nothing to be desired. In my case you can definitely say that I turned my hobby into my profession.

2. How did you come to maxcluster?

Through an unsolicited application. I had spent the previous years working for Microsoft, a global corporation with very pronounced hierarchies. Frequent travel and long working hours were the order of the day there. When my son was born, my focus shifted and I wanted to better reconcile family and career. So I specifically searched the region for young SMEs (small and medium-sized enterprises) for which family-friendliness is not a foreign concept. During my research I came across maxcluster, which won me over with what it had to offer. Within a week of my interview I had the acceptance, and I'm happy every day to work for this company.

3. What is special about your work?

Even if it may sound clichéd: I learn something new every day. Although I already had experience with open source platforms at Microsoft, for a long time I was the "lone ranger" there whenever I worked with them. Here at maxcluster we work (almost exclusively) in a Linux environment, and many of the things that bothered me about Windows simply don't exist here. The transition was challenging at first, but by now I'm genuinely relieved to be able to focus on open source and the advantages that come with it.

It's also nice that I'm allowed to do part of my work from home. Of course this requires self-motivation and self-discipline, but also a stable relationship of trust between my team lead, our management and me. It makes me very flexible, and I can react quickly to childcare problems or family emergencies, so I can reconcile family and career perfectly.

4. What has been the most exciting project so far?

At the moment I'm working on migrating our real-time server, since we're developing a new platform for our Managed Center. Existing systems have to be adapted and extended for this. I have to admit that until now I was reluctant about infrastructure work and didn't enjoy dealing with hardware and Bash scripts, but digging into the details of this project is real fun. I'm challenged to leave my comfort zone - but I also get the...


          

Ruby on Rails Podcast 293: Speed as a Feature with Gannon McGibbon

 Cache   
Gannon McGibbon is a Software Developer at Shopify. He primarily works on improving the codebase health of Shopify's monolithic Rails app. Gannon regularly contributes to open source, with commits on Rails, Ruby, and RuboCop. He joined Brittany to discuss his latest blog post, "How to Write Fast Ruby on Rails code".
          

Other: Solution Architect - Reston, Virginia

 Cache   
Summary / Description

We are seeking a motivated, career and customer oriented Solution Architect interested in joining our team in Reston, VA and exploring an exciting and challenging career with Unisys Federal Systems.

Duties:
* Participate in planning, definition, and high-level design of the solution and build in quality
* Actively participate in the development of the Continuous Delivery Pipeline, especially with enabler Epics
* Define architecture diagrams with interfaces
* Work with customers and stakeholders to help establish the solution intent information models and documentation requirements
* Collaborate with stakeholders to establish critical nonfunctional requirements at the solution level
* Work with senior leadership and technical leads to develop, analyze, split, and realize the implementation of enabler epics
* Participate in PI Planning and Pre- and Post-PI Planning, System and Solution Demos, and Inspect and Adapt events
* Define and develop value stream and program Enablers to evolve solution intent, work directly with Agile teams to implement
* Plan and develop the Architectural Runway in support of upcoming business Features and Capabilities
* Work with Management to determine capacity allocation for enablement work
* Design highly complex solutions with potentially multiple applications and high transaction volumes
* Analyze a problem from business and technical perspectives to develop a fit solution
* Document structure and behavior, and work to deliver a solution to a problem to stakeholders and developers
* Make recommendations about platform and technology adoption, including database servers, application servers, libraries, and frameworks
* Write proof-of-concept code (may also participate in writing production code)
* Keep skills up to date through ongoing self-directed training
* Advise senior management on how products and processes could be improved
* Help application developers to adopt new platforms through documentation, training, and mentoring
* Create architecture documentation
* Deep understanding of industry patterns for application architecture and integration
* Good written and verbal communication skills with the ability to present technical details
* Ability to come up with a detailed architecture that includes infra, security, and disaster recovery/BCP plans

Requirements
* BA or BS plus 10 years of experience
* 10+ years of experience in IT solution application design, development & delivery with a focus on application architecture
* 5+ years of experience in building multi-tier applications using an applicable technology skill set such as Java
* Experience working with complex data environments
* Experience using modern software development practices and technologies, including Lean, Agile, DevOps, Cloud, containers, and microservices
* Dev & unit test tools such as Eclipse, git, JFrog Artifactory, Docker, JUnit, SonarQube, Contrast (or Fortify)
* Expert in using relevant tools to support development, testing, operations and deployment (e.g. Atlassian Jira, Chef (or Maven), Jenkins, Pivotal Cloud Foundry, New Relic, Atlassian HipChat, Selenium, Apache JMeter, BlazeMeter)
* Experience architecting or creating systems around Open Source, COTS and custom development
* Experience in designing SSO solutions using the SAML and XACML protocols
* Experience on multiple application development projects with similar responsibilities
* Demonstrated experience in utilizing frameworks like Struts, Spring, and Hibernate
* Formal training or certification in Agile software development methods
* AWS Certified Solution Architect
* Training/certification in Enterprise Architecture preferred

About Unisys

Do you have what it takes to be mission critical? Your skills and experience could be mission critical for our Unisys team supporting the Federal Government in their mission to protect and defend our nation, and transform the way government agencies manage information and improve responsiveness to their customers.

As a member of our diverse team, you'll gain valuable career-enhancing experience as we support the design, development, testing, implementation, training, and maintenance of our federal government's critical systems. Apply today to become mission critical and help our nation meet the growing need for IT security, improved infrastructure, big data, and advanced analytics.

Unisys is a global information technology company that solves complex IT challenges at the intersection of modern and mission critical. We work with many of the world's largest companies and government organizations to secure and keep their mission-critical operations running at peak performance; streamline and transform their data centers; enhance support to their end users and constituents; and modernize their enterprise applications. We do this while protecting and building on their legacy IT investments. Our offerings include outsourcing and managed services, systems integration and consulting services, high-end server technology, cybersecurity and cloud management software, and maintenance and support services. Unisys has more than 23,000 employees serving clients around the world.

Unisys offers a very competitive benefits package including health insurance coverage from the first day of employment, a 401k with an immediately vested company match, vacation and educational benefits. To learn more about Unisys visit us at www.Unisys.com.

Unisys is an Equal Opportunity Employer (EOE) - Minorities, Females, Disabled Persons, and Veterans. #FED#
          

Other: Robotics Systems Engineer - Reston, Virginia

 Cache   
OVERVIEW

Draper is an independent, nonprofit research and development company headquartered in Cambridge, MA. The 1,800 employees of Draper tackle important national challenges with a promise of delivering successful and usable solutions. From military defense and space exploration to biomedical engineering, lives often depend on the solutions we provide. Our multidisciplinary teams of engineers and scientists work in a collaborative environment that inspires the cross-fertilization of ideas necessary for true innovation. For more information about Draper, visit www.draper.com.

Our work is very important to us, but so is our life outside of work. Draper supports many programs to improve work-life balance including workplace flexibility, employee clubs ranging from photography to yoga, health and finance workshops, off-site social events and discounts to local museums and cultural activities. If this specific job opportunity and the chance to work at a nationally renowned R&D innovation company appeal to you, apply now: www.draper.com/careers.

Equal Employment Opportunity

Draper is committed to creating a diverse environment and is proud to be an affirmative action and equal opportunity employer. We understand the value of diversity and its impact on a high-performance culture. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. Draper is committed to providing access, equal opportunity and reasonable accommodation for individuals with disabilities in employment, its services, programs, and activities. To request reasonable accommodation, please contact hr@draper.com.

RESPONSIBILITIES

Do you have a passion for cutting-edge systems such as spacecraft, unmanned aerial vehicles (UAV), and autonomous vehicles? Want to join the company that first brought mankind to the moon and has been a leader in innovation and technology since 1932? The Draper System Modeling and Assurance Division is seeking a Robotics Engineer with broad technical skills to work within multidisciplinary teams to conceptualize, develop, and analyze robotics systems with Model-Based Engineering (MBE) methodologies. The ideal candidate has interest and experience in all aspects of the robotic systems development cycle: from design and trade space analysis, to autonomy and simulation development, to field testing and troubleshooting. This position is in Reston, VA.

QUALIFICATIONS

Required Qualifications:
* BS, MS, or PhD in Engineering (Aerospace, EE, ME), Computer Science, or other relevant technical field of study
* Strong capability to program and debug in C++ and Python in a Linux environment
* Experience with MATLAB & Simulink

Preferred Qualifications:
* Familiarity with ROS (Robot Operating System) and Gazebo
* Familiarity with open source flight controllers including ArduPilot and PX4
* Experience with field testing of systems and sub-systems

Active Secret clearance required; Top Secret clearance with SCI eligibility preferred.
          

An overview of Skaffold for Kubernetes development

 Cache   


A year and a half ago, on March 5, 2018, Google released the first alpha version of its open source CI/CD project called Skaffold, whose goal is "simple and reproducible development for Kubernetes", so that developers can focus on development rather than administration. Why might Skaffold be interesting? As it turns out, it has several aces up its sleeve that can make it a powerful tool for developers, and perhaps for operations engineers as well. Let's get acquainted with the project and its capabilities. Read more →
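For context, Skaffold's workflow is driven by a single skaffold.yaml describing how to build and deploy the project. A minimal sketch (the image name and manifest path are placeholders, and the exact apiVersion depends on the Skaffold release in use):

```yaml
apiVersion: skaffold/v1beta13   # schema version: adjust to your Skaffold release
kind: Config
build:
  artifacts:
    - image: example/my-app     # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml              # placeholder path to Kubernetes manifests
```

With this in place, `skaffold dev` watches the sources and rebuilds and redeploys on every change, while `skaffold run` performs a one-off build-and-deploy.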
          

Introducing Tracee: Aqua Security's newest open source project. Tracee uses eBPF to trace events in containers. Read more on our blog: https://hubs.ly/H0lGbVf0  by @lizrice

 Cache   



          

A duck. Giving a look at DuckDB since MonetDBLite was removed from CRAN

 Cache   

You may know that MonetDBLite was removed from CRAN.
DuckDB is coming up.



Breaking change

> install.packages('MonetDBLite')
Warning in install.packages :
  package MonetDBLite is not available (for R version 3.6.1)

People who based their work on MonetDBLite may ask what happened and what to do now. Nobody wants to play a risky game with database and tool choices for future work… ("It's really fast, but we may waste some time if we have to replace it with another solution").

It’s the game with open source. Remember big changes in dplyr 0.7.
Sometimes we want better tools, and most of the time they become better. It’s really great.
And sometimes we don’t have time and energy to adapt our work to tools that became better in a too iterative way. Or in a too subjective way.
We want it to work, not break.
Keeping code as simple as possible (and avoid nebulous dependencies, so, tidy?) is one of the key point.
Stocking data in a database is another one.

All that we can say is that “we’re walking on works in progress”. Like number of eggshells, more works in progress here probably means more breaking changes.

Works in progress for packages, also for (embedded) databases!

From Monet to Duck

MonetDBLite's philosophy is to be like a "very, very fast SQLite". But it's time for a change (or so it seems).
So let's thank the MonetDBLite developers, as it was a nice adventure to play and work with MonetDB's speed!
As a question: is there another person, some volunteers, or any possibility to maintain MonetDBLite (a nice tool all the same)?
There is not much information for the moment about what happened, and that's why I write this post.

Here, I read that they are now working on a new solution, under the MIT License, named DuckDB; see here for more details.

As I’m just a R user and haven’t collaborate to the project, I would just say for short: DuckDB takes good parts from SQLite and PostGreSQL (Parser), see here for complete list, it looks promising. As in MonetDB, philosophy is focused on columns and speed. And dates for instance are handled correctly, not having to convert them in “ISO-8601 - like” character strings.

It can be called from C/C++, Python and R.

Here is a post about the Python binding.

I also put a link at the bottom of this page which gives some explanation about the name of this new tool and the DuckDB developers' point of view about data manipulation and storage1.

Beginning with DuckDB in R

Create / connect to the db

# remotes::install_github("cwida/duckdb/tools/rpkg", build = FALSE)

library(duckdb)
library(dplyr)
library(DBI)

# Create or connect to the db
con_duck <- dbConnect(duckdb::duckdb(), "~/Documents/data/duckdb/my_first.duckdb")
#con <- dbConnect(duckdb::duckdb(), ":memory:")

con_duck
<duckdb_connection bae30 dbdir='/Users/guillaumepressiat/Documents/data/duckdb/my_first.duckdb' database_ref=04e40>

iris

dbWriteTable(con_duck, "iris", iris)
tbl(con_duck, 'iris')

Put some rows and columns in db

> dim(nycflights13::flights)
[1] 336776     19
> object.size(nycflights13::flights) %>% format(units = "Mb")
[1] "38.8 Mb"

Sampling it to get more rows, then duplicating the columns, twice.

# Sample to get bigger data.frame
df_test <- nycflights13::flights %>% 
  sample_n(2e6, replace = TRUE) %>% 
  bind_cols(., rename_all(., function(x){paste0(x, '_bind_cols')})) %>% 
  bind_cols(., rename_all(., function(x){paste0(x, '_bind_cols_bis')}))
> dim(df_test)
[1] 2000000      76
> object.size(df_test) %>% format(units = "Mb")
[1] "916.4 Mb"

Write in db

tictoc::tic()
dbWriteTable(con_duck, "df_test", df_test)
tictoc::toc()

It takes some time compared to MonetDBLite (no real benchmark here; I just ran this several times and the timings were consistent).

# DuckDB      : 23.251 sec elapsed
# SQLite      : 20.23 sec elapsed
# MonetDBLite : 8.4 sec elapsed

All three are pretty fast.
Most importantly, if queries are fast, and they are, most of the time we're all right.

I want to say here that for now it's a work in progress; we have to wait for more communication from the DuckDB developers. I just write this to share the news.

Glimpse

> tbl(con_duck, 'df_test') %>% glimpse()
Observations: ??
Variables: 76
Database: duckdb_connection
$ year                                   <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,
$ month                                  <int> 11, 10, 3, 5, 12, 9, 7, 3, 9, 4, 7, 6, 1, 1, 9, 10, 9, 8, 4, 1, 4, 9, 6
$ day                                    <int> 29, 7, 1, 2, 18, 18, 20, 7, 15, 25, 22, 1, 29, 18, 30, 27, 27, 22, 19, 
$ dep_time                               <int> 1608, 2218, 1920, NA, 1506, 1917, 1034, 655, 1039, 1752, 2018, 1732, 82
$ sched_dep_time                         <int> 1612, 2127, 1920, 2159, 1500, 1900, 1030, 700, 1045, 1720, 1629, 1728, 
$ dep_delay                              <dbl> -4, 51, 0, NA, 6, 17, 4, -5, -6, 32, 229, 4, -9, -3, -4, -3, 9, 38, 34,
$ arr_time                               <int> 1904, 2321, 2102, NA, 1806, 2142, 1337, 938, 1307, 2103, 2314, 1934, 11
$ sched_arr_time                         <int> 1920, 2237, 2116, 2326, 1806, 2131, 1345, 958, 1313, 2025, 1927, 2011, 
$ arr_delay                              <dbl> -16, 44, -14, NA, 0, 11, -8, -20, -6, 38, 227, -37, -16, -12, -10, -39,
$ carrier                                <chr> "UA", "EV", "9E", "UA", "DL", "DL", "VX", "UA", "UA", "AA", "B6", "UA",
$ flight                                 <int> 1242, 4372, 3525, 424, 2181, 2454, 187, 1627, 1409, 695, 1161, 457, 717
$ tailnum                                <chr> "N24211", "N13994", "N910XJ", NA, "N329NB", "N3749D", "N530VA", "N37281…
$ origin                                 <chr> "EWR", "EWR", "JFK", "EWR", "LGA", "JFK", "EWR", "EWR", "EWR", "JFK", "
$ dest                                   <chr> "FLL", "DCA", "ORD", "BOS", "MCO", "DEN", "SFO", "PBI", "LAS", "AUS", "…
$ air_time                               <dbl> 155, 42, 116, NA, 131, 217, 346, 134, 301, 230, 153, 276, 217, 83, 36, 
$ distance                               <dbl> 1065, 199, 740, 200, 950, 1626, 2565, 1023, 2227, 1521, 1035, 2133, 138
$ hour                                   <dbl> 16, 21, 19, 21, 15, 19, 10, 7, 10, 17, 16, 17, 8, 14, 8, 19, 15, 16, 20
$ minute                                 <dbl> 12, 27, 20, 59, 0, 0, 30, 0, 45, 20, 29, 28, 35, 50, 25, 0, 35, 55, 0, 
$ time_hour                              <dttm> 2013-11-29 21:00:00, 2013-10-08 01:00:00, 2013-03-02 00:00:00, 2013-05
..                                                                                                                     
..                                                                                                                     
..                                                                                                                     
$ minute_bind_cols                       <dbl> 12, 27, 20, 59, 0, 0, 30, 0, 45, 20, 29, 28, 35, 50, 25, 0, 35, 55, 0, 
$ time_hour_bind_cols                    <dttm> 2013-11-29 21:00:00, 2013-10-08 01:00:00, 2013-03-02 00:00:00, 2013-05
$ year_bind_cols_bis                     <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,
$ month_bind_cols_bis                    <int> 11, 10, 3, 5, 12, 9, 7, 3, 9, 4, 7, 6, 1, 1, 9, 10, 9, 8, 4, 1, 4, 9, 6
$ day_bind_cols_bis                      <int> 29, 7, 1, 2, 18, 18, 20, 7, 15, 25, 22, 1, 29, 18, 30, 27, 27, 22, 19, 
..                                                                                                                     
..                                                                                                                     
..                                                                                                                     
$ distance_bind_cols_bind_cols_bis       <dbl> 1065, 199, 740, 200, 950, 1626, 2565, 1023, 2227, 1521, 1035, 2133, 138
$ hour_bind_cols_bind_cols_bis           <dbl> 16, 21, 19, 21, 15, 19, 10, 7, 10, 17, 16, 17, 8, 14, 8, 19, 15, 16, 20
$ minute_bind_cols_bind_cols_bis         <dbl> 12, 27, 20, 59, 0, 0, 30, 0, 45, 20, 29, 28, 35, 50, 25, 0, 35, 55, 0, 
$ time_hour_bind_cols_bind_cols_bis      <dttm> 2013-11-29 21:00:00, 2013-10-08 01:00:00, 2013-03-02 00:00:00, 2013-05

Count

> tbl(con_duck, 'df_test') %>% count()
# Source:   lazy query [?? x 1]
# Database: duckdb_connection
        n
    <dbl>
1 2000000

Dates

Compared to SQLite, it handles dates/times correctly. No need to convert them to character.

tbl(con_duck, 'df_test') %>% select(time_hour)
# Source:   lazy query [?? x 1]
# Database: duckdb_connection
   time_hour                 
   <dttm>                    
 1 2013-11-29 21:00:00.000000
 2 2013-10-08 01:00:00.000000
 3 2013-03-02 00:00:00.000000
 4 2013-05-03 01:00:00.000000
 5 2013-12-18 20:00:00.000000
 6 2013-09-18 23:00:00.000000
 7 2013-07-20 14:00:00.000000
 8 2013-03-07 12:00:00.000000
 9 2013-09-15 14:00:00.000000
10 2013-04-25 21:00:00.000000
# … with more rows
tbl(con_sqlite, 'df_test') %>% select(time_hour)
# Source:   lazy query [?? x 1]
# Database: sqlite 3.22.0 [/Users/guillaumepressiat/Documents/data/sqlite.sqlite]
    time_hour
        <dbl>
 1 1385758800
 2 1381194000
 3 1362182400
 4 1367542800
 5 1387396800
 6 1379545200
 7 1374328800
 8 1362657600
 9 1379253600
10 1366923600
# … with more rows

Some querying

Running some queries

dplyr

It already works nicely with dplyr.

> tbl(con_duck, 'iris') %>% 
+   group_by(Species) %>% 
+   summarise(min(Sepal.Width)) %>% 
+   collect()
# A tibble: 3 x 2
  Species    `min(Sepal.Width)`
  <chr>                   <dbl>
1 virginica                 2.2
2 setosa                    2.3
3 versicolor                2  
> tbl(con_duck, 'iris') %>% 
+     group_by(Species) %>% 
+     summarise(min(Sepal.Width)) %>% show_query()
<SQL>
SELECT "Species", MIN("Sepal.Width") AS "min(Sepal.Width)"
FROM "iris"
GROUP BY "Species"

sql

Run a query as a string

dbGetQuery(con_duck, 'SELECT "Species", MIN("Sepal.Width") FROM iris GROUP BY "Species"')
     Species min(Sepal.Width)
1  virginica              2.2
2     setosa              2.3
3 versicolor              2.0

As for all data sources with DBI, if the query is more complex, we can write it comfortably in an external file and launch it like this, for example:

dbGetQuery(con_duck, readr::read_file('~/Documents/scripts/script.sql'))

“Little” benchmarks

Collecting this big data frame

This makes little sense in itself but gives some idea of read speed. We collect df_test into memory, from DuckDB, MonetDB and SQLite.

> microbenchmark::microbenchmark(
+   a = collect(tbl(con_duck, 'df_test')),
+   times = 5)
Unit: seconds
 expr     min       lq     mean   median   
          

2018 formats: ready; and a cartographic digression

 Cache   

Using pmeasyr on the 2018 M03 to M12 data, while waiting for 2019. Also: a presentation of code to build an interactive world map with R.

Package ready for 2018 M03 - 2018 M12

That's it: the package can now import the 2018 M03 data. To do so, update it:

devtools::install_github('IM-APHP/pmeasyr')


Cartographic digression

Choropleth map from Insee COG codes

Among other new features, a new variable appears in the Anohosp files, PAYSPAT, which uses the Insee COG country codes.

I take this opportunity to include here some code to build a world map1 using:

  • the mapping from INSEE country codes to ISO country codes
  • the leaflet package
  • data simulated from a uniform distribution (but the code is designed to work with the codes contained in the 2018 ano files)


library(dplyr, warn.conflicts = FALSE)

# Download the INSEE country codes (COG)
httr::GET('https://www.insee.fr/fr/statistiques/fichier/2666684/pays2017-txt.zip',
          httr::write_disk(path = '~/Documents/data/insee/pays2017.txt.zip', overwrite = T))
# Import them
iso <- readr::read_tsv('~/Documents/data/insee/pays2017.txt.zip', locale = readr::locale(encoding = "latin1")) %>%
  filter(ACTUAL == 1) %>%
  dplyr::mutate(COG = ifelse(LIBCOG == 'FRANCE', "99100", COG)) %>%
  dplyr::mutate(PAYSPAT = substr(COG,3,5),
                n = round(runif(nrow(.), 1, 1e3), 0))


# Join
library(sp)
library(sf)
library(leaflet)
library(maps)
library(rworldmap)
am_map <- joinCountryData2Map(iso, joinCode = "ISO2", nameJoinColumn = "CODEISO2")
qpal <- colorQuantile(rev(viridis::viridis(5)), am_map$n, n = 5)

crs.molvidde <- leafletCRS(
  crsClass="L.Proj.CRS", code='ESRI:53009',
  proj4def= '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +a=6371000 +b=6371000 +units=m +no_defs',
  resolutions = c(65536, 32768, 16384, 8192, 4096, 2048))

l <- leaflet(
  am_map,
  options = leafletOptions(
    maxZoom = 5, attributionControl = FALSE, crs = crs.molvidde)) %>%
addGraticule(style= list(color= '#999', weight= 0.5, opacity= 1)) %>%
addGraticule(sphere = TRUE,
            style= list(color= '#777', weight= 1, opacity= 0.25)) %>%
 addPolygons(
   label=~stringr::str_c(LIBCOG, ' - ', n),
   labelOptions= labelOptions(direction = 'auto'),
   weight=1, color='#ffffff', opacity=1,
   fillColor = ~ qpal(n), fillOpacity = 1,
   highlightOptions = highlightOptions(
     color= ~ qpal(n), weight = 2,
     bringToFront = TRUE, sendToBack = TRUE)) %>%
 addLegend(
   "topright", pal = qpal, values = ~ n, labels = ~n,
   title = htmltools::HTML("Quintiles bidons"),
   opacity = 1 )

l

Announcement: the finess_etalab project

The finess_etalab project makes it easier to work with the finess data from data.gouv.fr in R. A presentation of this project is available here.

This project is open source, and its packaging as an R package is being carried out by Joris Muller, here!



  1. Two interesting posts for producing this type of map with R:


          


New: Open crowdsourced list of Society Journals – Our Research blog

 Cache   
Unpaywall Journals needed data on whether a given journal is associated with an academic society, to help inform librarians in their subscription decisions. Alas there was no open source of this information. Thanks to 60+ contributors over the last week, all Elsevier and Wiley journals have now been annotated with whether or not they are […]
          

Source: E-commerce guide — analysis, evaluation and comparison of open source webshop systems

 Cache   
Source: E-commerce guide — analysis, evaluation and comparison of open source webshop systems
          

GitHub Sponsors anche in Italia

 Cache   
Microsoft brings GitHub Sponsors to Italy, a platform that lets users support open source projects through monthly subscriptions. Read: GitHub Sponsors anche in Italia
          

Decentered Media Podcast 10 – Open Source Culture

 Cache   
This evening I met with Gareth Lapworth and Owen Williams to discuss the culture of open source software and ICT support for community-focussed organisations. We [...]
          

CiviCRM Bootcamp

 Cache   
A few weeks ago I went to the CiviCRM Bootcamp in London, which was an opportunity for me to find out more about the use and development of the CiviCRM system. CiviCRM is an open source Contact Relationship Management application that has been built by a community of contributors and supporters. CiviCRM is web-based software [...]
          

OpenStack Train Keeps Open Source Cloud Moving Forward

 Cache   

Much has happened with OpenStack since it got its start in July 2010, with vendors coming and going, but one thing that has remained strong is the level of contribution and activity in the open source ...
          

BeeBEEP 5.8.2

 Cache   

BeeBEEP is an open source instant messenger that uses a peer-to-peer network. The program allows files to be transferred freely between users within range of the local LAN.

BeeBEEP requires no installation on disk and therefore leaves no trace in the system. It supports group conversations and lets you send messages even to people who are currently logged out. All activity is saved in the history.


          

Practical AI 63: Open source data labeling tools

 Cache   

What’s the most practical of practical AI things? Data labeling of course! It’s also one of the most time consuming and error prone processes that we deal with in AI development. Michael Malyuk of Heartex and Label Studio joins us to discuss various data labeling challenges and open source tooling to help us overcome those challenges.



          

The Other Side of a Great Development Culture (6) – Pursuing an Open Strategy for a Company Developers Want to Join

 Cache   

Many developers change jobs for one reason or another, yet surprisingly few know much about the company they are joining. In interviews, candidates typically cite salary, benefits, the work they want to do, or a friend's recommendation as reasons for moving, while knowing little about the company's technology stack or working culture. In most cases they simply have had little or no chance to encounter that information.

When I joined Daum in early 2004, it had an ideally flat company culture and an outstanding development culture. It was almost odd that this was so little known outside; at the time the company was doing well and had no trouble hiring. I had started blogging in earnest in 2003, and stories about Mozilla, Firefox, and web standards, along with stories about Daum itself, were regular topics.

After moving to Jeju in 2006 I had a fair amount of spare time and was looking for something new alongside blog writing. Writing and subscribing to blogs was becoming the way developers kept up with technology trends, and my blog gratefully reached about 60,000 subscribers on Google Reader. Naturally, stories about Daum began to circulate outside the company.

Then, during developer interviews, we began to hear unexpected feedback when candidates were asked why they applied. More and more of them said things like "I read the honest company stories on Channy's blog and it seemed like a place with a good development culture" or "I realized it was a company that thinks hard about technology." Hearing this from finalists, management came to understand how important it is, over the long term, to speak honestly about the company to outside developers. For raising the company's profile I was even shortlisted for the year-end Star Award. Ha...

■ Build an internal writing culture
Publicizing your company's technology stack and development culture is indispensable for securing talent over the long term. Starting with Daum, the major Korean internet companies created engineering blogs and began running them in earnest. These days even startups set up tech blogs or encourage employees to write up their work on personal blogs.

Running a good tech blog is evidence that an internal sharing culture is thriving. If you launch an external blog just to follow the crowd and then beg a handful of in-house developers for posts, you will soon hit a limit. Among the internal developer programs I personally designed and ran, the flagship ones were engineering asset points and a writing-and-sharing program called Technote: every month each developer freely shared, in wiki form, one piece on a topic of interest, current work, or a simple technical tip.

Technotes published on the Daum DNA website (2007)
Writing good Technotes fed into KPIs and counted favorably in performance reviews. Of course, developers do not all comply just because they are told to, and team leads could grant exemptions during busy projects. Even so, with hundreds of people producing thousands of posts a year, the output was strong in quality as well as quantity. Some posts became talks at the internal developer conference, some became external blog posts, and some were reused as sessions at external developer events such as DevOn Daum.

A CTO email announcing that Technotes would count toward individual performance (2009)
Companies that run, or plan to run, an external tech blog should first ask whether they have a culture that encourages writing and sharing internally. If not, why not start there?

■ An open source strategy wins developers' hearts
Traditionally it is hard to imagine a for-profit company opening up its technical assets. But IT companies, and especially companies that hire developers, need to publicize their technology assets in many different ways. As open source software became mainstream in the dot-com era and most developers came to rely on it, successful companies naturally release open source, contribute to open source, or take part in open source communities. Even without naming examples, every global company and every major Korean IT company pursues this strategy.

An open source strategy is not merely for showing off. There are several reasons for it. First, when you disclose which open source projects you contribute to and release internal assets as open source, you inevitably create developer fans, and the number of people who want to join grows. As a result, the cost of retraining new hires also falls: a company's open-source-based best practices spread naturally among developers, and more people try to follow them.

One of the things I poured my energy into while working at Daum was the "3O" strategy: Open API, Open Standards, Open Source. Above all, we disclosed which open source we used in development, created various open source projects that could be released externally, and worked to build close relationships with open source communities.

Daum's open technology strategy and Java technology roadmap (2007)
When developers who love open source are hired, a sharing culture ends up taking root inside the company. What gets released externally is often the frameworks and libraries used by development teams across the company, which makes it much easier to gather internal feedback. It would be ideal to have a good sharing culture from the start, but a well-designed open source strategy can also create one.

■ Build long-term relationships with the community
Leaders of open source projects earn respect in developer communities inside and outside the company. Companies need a variety of role models for developers. A company where the ideal path for a developer appears to be becoming a manager or an executive has a cloudy future. Developers should be able to choose among many career paths: coding until their hair turns white; becoming a principal engineer or architect who advises individual projects; becoming an evangelist who communicates technology to the outside world; rising to a CTO-level role leading company-wide technology strategy; or becoming a star developer who contributes to open source.

What I am proudest of is opening Korea's first university open source course at Jeju National University and inviting community leaders to Jeju each year, which yielded high-quality contributions. Building long-term, trusting relationships with communities is crucial; it was on that foundation that DevOn, Daum's external developer conference, could be built together with the community.

Daum's developer conference DevOn 2013, held together with the community
A technology strategy that puts weight on the community has two sides. In the industry you sometimes hear of a developer who was well known in the community, was hired, and then turned out to perform below expectations or to communicate poorly. But the few who hop between companies in short cycles should not tarnish the many who steadily develop open source and contribute to communities. Most of these people are highly self-directed, learn quickly, and work well with others.

Finally, a word of advice to CEOs and CTOs who think "developers won't join us; we should try this kind of technology strategy; let's hire someone to do it." There are certainly people at your company who are already good at this. If no such people catch your eye, it is because you are the ones who made the development culture this way. Hiring a famous name from the industry to do this work rarely produces real results.

Instead, give the developers at your company 20 percent, or even 10 percent, of slack, and use that slack to build a culture of sharing, starting from the inside. Encourage the developers who share well to be active outside the company. Reward and praise them enough that they do not become targets of envy and backbiting inside the company.

Great developers will recognize that sincerity and follow naturally.

Series index


          

Making a mind map is easy!

 Cache   
Creating a mind map is no big deal. Paper and a few colored markers and you're set. The ideas come on their own as you go along. Oh, and an eraser is quite useful too. You don't succeed on the first try; in fact that's rather the point, it's about feeling your way. Good software, preferably open source, is an even better solution. Let's walk through it all with a short five-step method, an infographic, and plenty of examples. #management #business #developpementpersonnel [...]
          

Open Source Hardware Certifications For October

 Cache   

October Certified Open Source Hardware The Open Source Hardware Association (OSHWA) runs a free program that allows creators to certify that their hardware complies with the community definition of open source hardware.  Whenever you see the certification logo, you know that the certified hardware meets this standard. The certification site […]

Read more on MAKE

The post Open Source Hardware Certifications For October appeared first on Make: DIY Projects and Ideas for Makers.


          

Google open sources Cardboard as it retreats from phone-based VR

 Cache   

          

LANC Remastered: Open Source PS4 IP Grabber, Puller & Sniffer Tool

 Cache   
none
          

TV-B-Gone, the hidden-in-your-glasses edition

 Cache   

Facelesstech created a fun pair of "smart glasses" with an embedded miniature ATtiny85 Arduino controller, and followed it up with a pair that concealed a TV-B-Gone (Mitch Altman's open source hardware gadget that cycles through all known switch-TV-off codes, causing any nearby screens to go black). It's a much less creepy use than the spy glasses with embedded cameras sold by Chinese state surveillance vendors. I'd certainly buy a pair! (via JWZ)


          

How large an SSD should I buy for open source development?

 Cache   
@jronald 500GB will be more than enough in most cases. Do you know how much space you need when building ungoogled-chromium?
          

Candle - privacy friendly smart home

 Cache   
"Open source" home automation that respects your privacy
(Permalink)
          

Calling it quits: Google open-sources the Cardboard SDK and hands development over to the community

 Cache   

Google has announced that it is open-sourcing the Google Cardboard SDK on GitHub so that outside developers can carry development forward, under the Apache License 2.0.

Google's terms are fairly permissive: you may do anything with it except use the Google Cardboard name, which remains Google's trademark (although you may still state that an app works with Google Cardboard).

Google has shown little interest in the VR market lately, as seen in the end of Google Daydream headset support on the Pixel 4, and the open-sourcing of Cardboard states plainly that development is being handed over to the open source community, with Google itself no longer contributing much.

Source: Google, 9to5google



          

Survey and solutions for potential cost reduction in the design and construction process of nearly zero energy multi-family houses

 Cache   

The CoNZEBs project formed the main part of a session at the IAQVEC conference in Bari in September 2019. CoNZEBs project partner held in total 6 presentations including a project overview, different project results and one of the national exemplary NZEB buildings presented in the end-user brochure. The papers are published as open source documents in IOP Conference Series: Materials Science and Engineering, Volume 609.

 


          

Cost-efficient Nearly Zero-Energy Buildings (NZEBs)

 Cache   



          

IBM Bought RedHat

 Cache   

Good day;

Recently IBM bought Red Hat, whose Linux distribution is the base for most of the NI-supported Linux distributions for LabVIEW for Linux. This has possibly accelerated the release of Red Hat Enterprise Linux version 8, which uses a 4.x Linux kernel that won't work with LabVIEW (at least in my experience).

CentOS and Scientific Linux will eventually follow suit, as they use RedHat's open source repositories.

What is the migration path for LabVIEW to support the RHEL 8 and derivative distributions?

Are there discussions with IBM that could be reported here for the community?

Have any other community members evaluated the RHEL 8 public beta for compatibility with other (e.g. LabWindows/CVI) NI products?

Thanks for your viewing my questions, and especially for any answers.

 


          

C++ Developer at Infopulse, Lviv

 Cache   

Required skills

Excellent communication skills
Experience with C/C++ and Python (or desire to learn)
Knowledge of how to use Open Source project is desired
Demonstrable troubleshooting and debugging ability
Ability to cooperate in a distributed environment where peers are spread across regions
Four year BS/BA degree or equivalent in Computer Science or related technical area
3-5 years of industry experience (MS/MA in lieu of less direct industry experience).

Responsibilities

Develop and support Quantum’s NAS and Appliance Controller product
Participate in architectural decisions and implementation of new product capabilities
Collaborate with the team to deliver enterprise-ready products
Attend and actively participate in Agile “standup” meetings
Estimate schedule duration and report on progress towards goals

About the project

Infopulse welcomes talented professionals to join our project as a C++ Developer at our Lviv office.

Our client Quantum Corp. (NYSE:QTM) is a leading expert in data storage, archiving and protection. Infopulse has been cooperating with Quantum for many years on creating and testing software for robotized tape libraries and high-performance file systems used by NASA, Amazon and others, and ensuring storage of petabytes of data.

Quantum is looking for an engineer eager to develop and support their enterprise NAS product. NAS (Network Attached Storage) is more than just NFS and SMB, it’s any protocol or interface that can send data to network shares. Additionally, access to user data must be highly available, secure and scalable.

Our ideal specialist is a professional with a solid understanding of software engineering and best practices of developing extensible and maintainable enterprise-class software. Also, we want someone who’s eager to learn. Technologies used in our product include Samba, FTP, Webservices, directory services (Active Directory and LDAP), Kerberos and DNS.


          

Research Associate - China/East Asia Assessments

 Cache   

Overview:

The Institute for Defense Analyses (IDA) operates three Federally Funded Research and Development Centers supporting federal decision making – two serving the Department of Defense (DOD) and one serving the Office of Science and Technology Policy in the Executive Office of the President of the United States. IDA assists the United States Government in addressing important national security issues, particularly those requiring scientific and technical expertise.

This position is in IDA’s Intelligence Analyses Division (IAD). IAD supports the Department of Defense, the Intelligence Community, and other U.S. Government Agencies by providing analyses of critical intelligence and national security issues. IAD is involved in analytical activities across a wide range of cross disciplinary national security and intelligence matters including: programmatic assessments; technical evaluations; social science research; strategic analysis; physical, cyber, and human threat evaluations; and open source exploitation. Short and long term sponsoring arrangements with U.S. Government entities provide a steady stream of intellectual challenges for the IAD staff in the furtherance of national objectives and for identifying opportunities to improve public sector performance.

Responsibilities:

IDA seeks candidates for a Research Associate position within IAD. This position will work alongside highly experienced analysts who are working on projects for the DoD and the Intelligence Community. Primary work will be accomplished in Alexandria, VA at the IDA headquarters in a Sensitive Compartmented Information Facility.

Research Associates (RA) are members of our full-time professional staff. RAs typically work as members of study teams and at the direction of senior study leaders and analysts. A Research Associate must be capable of operating unsupervised on assignments for which they are qualified. RAs will generally perform fact-finding tasks, develop preliminary analyses, and shape findings and recommendations for the team. RAs are expected to be familiar with standard information collection, compilation, summary and analysis methods; able to develop or apply appropriate methods to support needs of particular studies or analyses; use strong communication (oral and written) and interpersonal skills to contribute effectively to a team approach, problem solving, and to interact with Government and industry representatives. The analyst is expected to make immediate, substantive contributions to the development and completion of written and oral deliverables for sponsoring organizations.

Qualifications:

  • Candidates must have a Bachelor’s degree with a focus on Chinese/East Asian: politics, foreign and/or military policy, military doctrine or national security policies in the post-World War II era.
  • Candidates with a Ph.D. will not be considered.
  • It is preferred that candidates have demonstrated reading/research skills in the Chinese (Mandarin) language through time spent living/working in the region.
  • Candidates must possess strong analytical skills in developing and implementing analytical methodologies and then fusing developed all-source information into clear and concise conclusions and recommendations.
  • Candidates must have strong oral and written communication skills, as well as good interpersonal skills.
  • Candidates must have a demonstrated ability to contribute effectively to a team approach to problem definition and resolution.
  • Candidates should have a working knowledge of the U.S. national security establishment and the U.S. Intelligence Community.
  • Candidates are REQUIRED to have an active TS clearance and recent SCI access (within the last 6 months)/current SSBI. Candidates without this level of clearance will not be considered for the position.
  • Candidates are encouraged to submit a brief cover letter (in addition to a resume) describing their career interests, skill sets, and specific abilities that are applicable to this job.

U.S. Citizenship is required

Ability to obtain and maintain a security clearance is required

Equal Opportunity Employer

APPLY HERE: https://chu.tbe.taleo.net/chu01/ats/careers/v2/viewRequisition?org=INSTITUTEDA&cws=39&rid=1539


          

Research Associate – Russia Assessments

 Cache   

Overview

The Institute for Defense Analyses (IDA) is a Federally Funded Research and Development Center (FFRDC) supporting the Department of Defense and other agencies that require rigorous and objective analysis of national security issues.

This position is in IDA’s Intelligence Analyses Division (IAD). IAD supports the Department of Defense, the Intelligence Community, and other U.S. Government Agencies by providing analyses of critical intelligence and national security issues. IAD is engaged in cross-disciplinary activities across a wide range of national security and intelligence issues including: programmatic assessments; technical evaluations; social science research; strategic analysis; physical, cyber, and human threat evaluations; and open source exploitation. Short and long term sponsoring arrangements with U.S. Government entities provide a steady stream of intellectual challenges for the IAD staff in the furtherance of national objectives and for identifying opportunities to improve public sector performance.

Responsibilities

IDA seeks candidates for a Research Associate (RA) position within IAD. This position will work as a member of a team of highly experienced, full-time professional staff members who are working on projects for the DoD and the Intelligence Community. Primary work will be accomplished in Alexandria, VA at the IDA headquarters in a Sensitive Compartmented Information Facility.

The RA is a member of our full-time professional staff. The RA will typically work as a member of one or more study teams under the leadership and direction of a senior study leader and other, more seasoned analysts. The RA is expected to be adaptable and self-motivated, demonstrating a capacity for independent thought, sound judgment, and creativity in applying quantitative and qualitative analytical methods to complex policy problems. The RA must be capable of operating unsupervised on assignments for which he/she is qualified. 

The RA will: contribute to research teams with information gathering research in primary sources, various analyses and databases; participate in various analytical fora, meetings and briefings; capture results and insights for future analysis and presentation; and draft reports and briefings meeting all deadlines. Accordingly, the RA is expected to demonstrate: knowledge of strategic and military issues in one or more geographic areas; proven academic research skills; excellent oral and written communication; and competency in the use of supporting information technologies. An academic or professional background with Russia is preferred.

Qualifications

  • Candidates must have a Bachelor’s degree.  A focus on Russian politics, foreign and/or military policy, military doctrine or national security policies in the post-World War II era is preferred.
  • It is preferred that candidates have demonstrated reading/research skills in the Russian language.
  • Candidates must possess proven analytical skills in researching primary sources and various databases and then fusing developed all-source information into clear and concise conclusions and recommendations. 
  • Candidates must have strong oral and written communication skills, as well as good interpersonal skills.
  • Candidates must have a demonstrated ability to contribute effectively to a team approach to problem definition and resolution.
  • Candidates should have a working knowledge of the U.S. national security establishment and the U.S. Intelligence Community.
  • Candidates are REQUIRED to have an active TS clearance and recent SCI access (within the last 6 months)/current SSBI. Candidates without this level of clearance will not be considered for the position.
  • Candidates are encouraged to submit a brief cover letter (in addition to a resume) describing their career interests, skill sets, and specific abilities that are applicable to this job.

U.S. Citizenship is required
Ability to obtain and maintain a security clearance is required
Equal Opportunity Employer

APPLY HERE: 

https://chu.tbe.taleo.net/chu01/ats/careers/v2/viewRequisition?org=INSTITUTEDA&cws=39&rid=1384 


          

Open Systems Pharmacology community - an open access, open source, open science approach to modeling and simulation in pharmaceutical sciences. - PubMed - NCBI

 Cache   
Abstract:  Systems Pharmacology integrates structural biological and pharmacological knowledge and experimental data enabling dissection of organism and drug properties and providing excellent predictivity. The development of systems pharmacology models is a significant task requiring massive amounts of background information beyond individual trial data. Qualification of models needs repetitive demonstration of successful predictions. Open Systems Pharmacology is a community that develops, qualifies and shares professional open source software tools and models in a collaborative open science way.
          

Comment on Piston Cloud Launches pentOS, An Enterprise OpenStack Distribution by Cloud Computing Report Card: Grading Our Predictions - CloudCow

 Cache   
[…] on in the academic and individual user community. In addition, at least two companies, Nebula and Piston, have received venture funding to create spin off distros of the OpenStack open source code […]
          

Comment on WordPress Customized Comment Form by Casper Reiff

 Cache   
WordPress is open source, which means all versions of WP are free. If you've paid for it, you've been scammed...
          

Farewell to Douglas C. Engelbart, the father of the mouse

 Cache   
Doug Engelbart, the father of the mouse, has died at the age of 88: the first model, made of wood and metal, was presented in the sixties. Doug never got rich from his invention, since the patent expired before the device spread around the world. Engelbart, who worked as a radar technician during the Second World War, was born in Portland, Oregon in 1925 and took a degree in engineering.
A visionary and innovator, he was one of the many who contributed to the evolution of computer science, in particular human-machine interaction. His research and his work underlie concepts such as graphical interfaces, hypertext, and computer networks, concepts that laid the foundations of the way we use PCs today. We can say he was one of the last scientists who truly changed the way we live and work. In the nineties he also founded the Bootstrap Institute, built around an idea of collaboration that helped inspire the 'open source' movement. Engelbart built the first mouse prototype in 1964 and obtained the patent on June 21, 1967. It looked like a small wooden box with two metal wheels. It was christened the 'X-Y position indicator for a display system'; the name mouse came later, and it was Engelbart himself who chose it, because the cable reminded him of a mouse's tail. In 1968, at the Joint Computer Conference at the San Francisco Convention Center, the project was demonstrated in public. Later Xerox produced the Star, the first computer equipped with a mouse; Steve Jobs saw the project, "borrowed" it (with authorization) and refined the idea. That is how the Apple Lisa and, above all, the Macintosh came to be: the first personal computer with a graphical interface and a mouse to achieve great commercial success. Engelbart was not only the inventor of the mouse; he contributed to many other projects. For example, he was part of the team that built ARPANET, the ancestor of today's internet, the worldwide computer network that allowed scientists and engineers to exchange information. In bidding farewell to Douglas C. Engelbart, it is only fitting to remember that he was one of the pioneers of the impossible, able to turn visions into reality.

          

Git 2.24 adopts Contributor Covenant code of conduct

 Cache   

The newest version of Git arrived on November 3, 2019. What’s new in the open source project? Git 2.24 includes a number of notable features, bug fixes, and changes, including commit graphs enabled by ...
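The commit-graph behavior mentioned above can be exercised directly from the command line. A minimal sketch against a throwaway repository (the repository name and commit message are made up for illustration; `git commit-graph write` and `verify` are stock Git subcommands, and since 2.24 the file is also written automatically during `git gc` because `gc.writeCommitGraph` defaults to true):

```shell
# Create a throwaway repository with a single empty commit
git init -q cg-demo
cd cg-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"

# Write the commit-graph file explicitly
git commit-graph write --reachable

# Check that the file exists and is internally consistent
ls .git/objects/info/commit-graph
git commit-graph verify
```

The commit-graph file caches commit metadata (parents, generation numbers), which speeds up history traversal in commands like `git log --graph` on large repositories.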
          

SD Times #News digest: New Relic acquires IOpipe, Git 2.24 released, and first public release of C++/CLI support for .NET Core 3.1

 Cache   

The open source Git project released Git 2.24, which includes new features such as the ability to opt into feature macros and commit graphs by default. Contributors can now also see what the project ...
          

On Technology

 Cache   
On Technology

Bill Venners: In an interview with CIPS Connections, you said, "I read a bunch of open source software source code, for example, Perl, Python, and many flavors of Lisp interpreter. I know they were needed to write Ruby." What benefit do you think programmers can derive from reading source code?

Yukihiro Matsumoto: Programmers can get a lot of benefit from reading source code. You can't simply tell people how to be good programmers. You can offer them some principles of good programming. You can describe some good design experiences you've had. But you can't give them a real knowledge of how to be a good programmer. I believe the best way for that knowledge to be obtained is by reading code. Writing code can certainly help people become good programmers, but reading good code is much better.

Bill Venners: Why?

Yukihiro Matsumoto: Because being a good programmer is a matter of experience. Code is an expression of the thoughts, attitudes, and ideas of the programmer. By reading code, you can not only figure out what particular task the programmers were trying to accomplish and understand how they did it, but you can also gain insight into how they were thinking. This is the reason that reading code makes programmers better.

And besides that, if you want to know how to accomplish something in code, you can open a computer science textbook. The textbook will explain the algorithm. But if you want to understand the algorithm very quickly, reading code is the best way. Moreover, you can execute code that implements the algorithm. You can use a debugger to watch the code as it performs the algorithm. And this is much better than just reading a textbook.

Learning Languages

Bill Venners: In the CIPS Connection interview, you gave ten tips for programmers. One of them was, "Learn more than one programming language, preferably many different styles, like scripting, object-oriented, functional, logic, etc." What is the benefit of learning multiple programming languages?

Yukihiro Matsumoto: Every language or system has its own culture. In the background of every language or system are some central ideas. Most of these ideas are good, but they are different. By learning many languages and systems, you get exposed to different ideas, and that enhances your point of view.

If you don't know Prolog, for example, you may not know the power of goal directed programming: programming by describing the problem to solve through specifying rules to apply. This is a very interesting concept. It is a very different way of thinking. And if you don't know Prolog, or the predicate logic, it's very difficult to discover this way of thinking by yourself. Knowing other systems and paradigms expands the world inside your brain. That's why I advise learning multiple languages.

Bill Venners: You also said in your ten top tips: "Don't focus too much on tools. Tools change. Algorithms and basic fundamentals don't." What did you mean by that?

Yukihiro Matsumoto: That was partly about focusing on humans instead of machines. Humans change very slowly, but systems change rapidly. 100 years ago, people were mostly the same as they are in the present time. 100 years ago we had no computers. 50 years ago we had computers, but they were very primitive. 20 years from now, I can't imagine how computers will be. But I can imagine how people 20 years from now will think.

Another example is mathematics. Mathematics has a very long history. It's a very mature science, but computer science is not. So it's good to retrieve ideas from mathematics.

Tools change very easily as time passes. If you focus too much on present-day tools, your efforts will give you only short-term returns. If you want benefits that will endure, you need to focus more on fundamentals. Focus on mathematics and human psychology. Focus on established sciences and established ways of thinking.

Being Lazy

Bill Venners: You also mentioned in your ten top tips: "Be lazy. Machines should serve human beings. Often programmers serve machines unconsciously. Let machines serve you. Do everything you can to allow yourself to be lazy." Why should I try to be lazy?

Yukihiro Matsumoto: You want to be lazy. You want to do anything to reduce your work. I work hard to reduce my work, to be lazy.

Bill Venners: I believe that.

Yukihiro Matsumoto: I work very eagerly to be lazy.

Considering Interface

Bill Venners: You also mentioned in your ten top tips: "Be nice to others. Consider interface first: man-to-man, man-to-machine, and machine-to-machine. And again remember the human factor is important." What do you mean by, "consider interface first?"

Yukihiro Matsumoto: Interface is everything that we see as a user. If my computer is doing very complex things inside, but that complexity doesn't show up on the surface, I don't care. I don't care if the computer works hard on the inside or not. I just want the right result presented in a good manner. So that means the interface is everything, for a plain computer user at least, when they are using a computer. That's why we need to focus on interface.

Some software people, such as weather forecasters and number crunchers, know more about the inside of things, but they work in a very limited field of computer science. Most programmers need to focus on the surface, the interface, because for them that is what matters most.

Bill Venners: You also mentioned machine-to-machine interfaces. Did you mean not just interfaces for users, but interfaces for machines as well?

Yukihiro Matsumoto: Not just user interfaces. When machines talk to each other through a protocol, they don't care how the other side is implemented internally. What matters most is that the right results are passed correctly through the proper protocol.

If your system has a good interface, and enough time and budget, you can keep working on it. If your system has bugs or is too slow, you can improve it. But if your system has a bad interface, you basically have nothing. It doesn't matter how skillful the internal implementation is; if a system has a bad interface, no one will use it. So the interface, the surface that a system presents, whether to users or to other machines, is extremely important.

Some software people—like weather forecasters, the number crunchers—feel that the inside matters most, but that is a very limited field of computer science. Most programmers need to focus on the surface, the interface, because that's the most important thing.

Bill Venners: You also mentioned machine-to-machine interfaces, so are you just talking about interfaces for users or also for machines?

Yukihiro Matsumoto: It's not just user interfaces. When machines are talking to each other via a protocol, they don't care how the other is implemented on the inside. The important thing is the proper output getting passed correctly via the proper protocol. That's what matters.

If you have a good interface on your system, and a budget of money and time, you can work on your system. If your system has bugs or is too slow, you can improve it. But if your system has a bad interface, you basically have nothing. It won't matter if it is a work of the highest craftsmanship on the inside. If your system has a bad interface, no one will use it. So the interface or surface of the system, whether to users or other machines, is very important.



Posted by woow, 2006-04-23 22:02

          

CCE 2019 - 3M, Shell, Halliburton and Unibap weigh in on their AI results to date

CCE 2019 - 3M, Shell, Halliburton and Unibap weigh in on their AI results to date Jon Reed Wed, 11/06/2019 - 10:24
Summary:
It's hardly unique to hear about AI and IoT from the enterprise stage. But it's rare to hear customers speak to results and live lessons. At Constellation Connection Enterprise '19, a real world AI/IoT panel was a highlight - here's my review.

[Image: CCE 2019 customer AI and IoT panel]

Despite my incessant buzzword bashing, I'll concede this much: it's important to grapple with next-gen tech via experts who actually know what they are talking about.

We got an earful on day one of the Constellation Research Connected Enterprise 2019 event. Example: most CXOs are not falling over themselves to launch quantum computing projects in 2019, but they do need to be aware of possible threats to RSA encryption.

Still, next-gen tech needs to be held to the fire of project results. Blockchain is a case in point. My upcoming podcast with blockchain panel moderator (and critic) Steve Wilson of Constellation will get into that in a big way. That's precisely why a day one CCE '19 highlight was "The Road to Real World AI and IoT Results." Moderated by Constellation's "Data to Decisions" lead Doug Henschen, the panelists shared AI lessons, as they bring tech to bear on logistics problems.

3M on AI - how does a 100+ year manufacturing company stay relevant?

Panelist Jennifer Austin, Manufacturing & Supply Chain Analytics Solutions Implementation Leader at 3M, told attendees why 3M is pursuing several AI-related initiatives. Start with the disruptions in the manufacturing sector:

We're looking at how, as a hundred-year-old plus manufacturing company: how do we stay relevant? How do we keep our products [in line] with consumer changes?

Joking about an earlier panel debate, Austin quipped:

I was also glad to hear that manufacturing is not dead.

As for those AI initiatives, one is a global ERP data standardization project:

As some of the speakers spoke of this morning, we struggle with our data, and we have a lot of self-sufficient organizations across the world. And so we don't have a standard way to represent our data. So we've been on a long journey to do that through our global ERP.

The next AI project? Smart products, such as 3M's smart air filter. The third? Manufacturing and supply chain pursuits, including an Industry 4.0 push:

The third [AI area] that I'm most focused on right now is in our manufacturing and supply organization. One aspect is Industry 4.0, which we're referring to as "digital factory."

We have over two hundred sites around the world, so we're trying to make sure that we have those all fully sensored, and that we're using the data that comes off of those sensors in a meaningful way -  to help us with things like capacity optimization, planning and cost reduction, and quality improvement for our customers.

Another aspect of the intelligent supply chain pursuit? Inventory optimization and other "customer value" projects.

The second portion of that manufacturing effort is connected to supply chain, so it's more transactional. That's where we're doing more of the machine learning activity right now. It's focused on things such as optimizing your inventory, by automatically determining what your safety stock should be. It's about minimizing and leaning out your value stream so you can deliver faster to our customers.

This is not a tiptoe into the AI kiddie pool:

We're starting to introduce some exciting new algorithms that are homegrown from our data scientists using, of course, open source models to scale that across the entire operation. So it's something we started out about 18 months ago. [At first], we didn't really think it was real, but it is very much real  - and driving results for our business.
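Neither Austin nor 3M published the actual models, but the kind of automatic safety-stock calculation she describes can be illustrated with the classic textbook formula: a service-level multiplier times demand variability, scaled by the square root of lead time. Everything below is an illustrative sketch, not 3M's code.

```python
import math
from statistics import stdev

def safety_stock(daily_demand, lead_time_days, service_z=1.65):
    """Classic estimate: z * sigma(daily demand) * sqrt(lead time).
    service_z=1.65 corresponds to roughly a 95% service level."""
    return service_z * stdev(daily_demand) * math.sqrt(lead_time_days)

history = [120, 135, 128, 110, 142, 125, 131]   # units shipped per day
print(round(safety_stock(history, lead_time_days=4)))  # ~34 units
```

The production versions Austin alludes to would layer in forecast error, supplier variability, and cost trade-offs, but the core idea — hold a buffer proportional to demand uncertainty over the replenishment window — is the same.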

Unibap AB - pursuing Industry 5.0 on earth, and in space

Next up? Frederick Bruhn from Unibap AB. Unibap is what you'd call a forward-thinking outfit. In a nutshell, they commercialize so-called "intelligent automation" - both on earth and in space. They've adopted the phrase Industry 5.0 to emphasize the shift from connected manufacturing (Industry 4.0) to intelligent automation.

AI-in-space sounds like a science fiction popcorn movie special. But as Bruhn told us, it's a reality today, and not as different from "on earth" as we might assume:

For us, automation is both in the factories of tomorrow, and in space. Because if you have a mining operation on the ground, or if you have a mining operation on the moon for instance, for us it's the same. So we actually build the server hardware for space, and on the ground, and we do have software to go with that.

One of the cases for Unibap: replacing humans in real-time production lines for painting and coating, assembly, welding, and drilling. No, there aren't any mining operations on the moon yet, but Bruhn says that will happen in about fifteen years. In the meantime, Unibap is supplying computers to customers like NASA, "for intelligent data processing in space."

Royal Dutch Shell on AI - serving customers better is the goal

Deval Pandya of Royal Dutch Shell told us that Shell already has predictive maintenance models in operation, "giving us insights which you can act on to make business decisions and operational decisions, that is generating immense value for us."

The renewable energy space is another AI playing field for Royal Dutch Shell, including solar batteries, and a project to optimize when to charge or discharge batteries. Many of these "AI" and/or IoT projects, despite their focus on automation and "smart" machines, ultimately come back to serving customers better. Pandya:

We've been driving this culture of customer-centricity, and Shell is one of the largest in energy retail. There is a lot of information, and we're just starting to extract value out of it.
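Shell's battery models are not public, but the charge/discharge timing problem Pandya mentions can be sketched with a toy heuristic: given an hourly price forecast, charge in the cheapest hours and discharge in the most expensive ones. A real optimizer would also enforce state-of-charge constraints (you cannot discharge energy you have not stored yet), which this sketch deliberately ignores; all names and numbers are illustrative.

```python
def schedule(prices, capacity=4):
    """Toy battery plan: charge (1 unit/hour) in the `capacity` cheapest
    hours, discharge in the `capacity` most expensive, idle otherwise."""
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    charge, discharge = set(order[:capacity]), set(order[-capacity:])
    return ["charge" if h in charge
            else "discharge" if h in discharge
            else "idle"
            for h in range(len(prices))]

hourly_prices = [30, 28, 25, 40, 55, 60, 52, 35]  # price forecast per hour
print(schedule(hourly_prices, capacity=2))
# prints: ['idle', 'charge', 'charge', 'idle', 'discharge', 'discharge', 'idle', 'idle']
```

Even this crude version shows where the value comes from: buy low, sell high, with the battery as the arbitrage vehicle.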

Getting AI projects right - talent and culture over tech

On diginomica, we've criticized digital transformation efforts that lack buy-in and total organizational commitment. Yet there is a need for small wins. In that context, how do you get AI projects right? Austin told Henschen: no matter how sexy the tech is portrayed, it's just a tool. 

I think that we have less of an AI strategy, than a commitment to delivering for our customers and our shareholders. So it's all about growth and innovation. AI/machine learning has become a tool that we're now more comfortable with. It's becoming a primary driver for helping us deliver on what our agenda is.

Pandya hit a similar note. Royal Dutch Shell has combined their digital technologies into a digital center of excellence:

A big portion is AI or machine learning, but a lot of it goes hand-in-hand. So in IoT, we are using a lot of this IoT data, and then applying AI to it.

I don't care how good your tech is, or how good your implementation partner is, you're still going to face adversity: your digital moment of truth. Henschen asked the panel: what is your biggest sticking point: talent, culture or technology platforms?

Halliburton's Dr. Satyam Priyadarshy says it's the talent. But for Halliburton, it's more of a training problem than a talent problem:

I call it talent transformation. Because we can't go and hire data scientists, right? A lot of us face the same challenge... We compete with Silicon Valley talent as well. The burning talent question for Halliburton is: can they transform the talent they have? The oil and gas industry has one of the most talented workforces scientifically, from geophysicists to geologists, right? So the question is: can they be turned and trained into data scientists? That has been highly successful; we have been training people globally.

Two companies on the panel, Halliburton and Shell, use hackathons as a means to spark new hires, or upskill. As Priyadarshy shared, their hackathons are a crash course for developers on industry issues:

Our hackathons, or what we call boot camp workshops, are very contextualized and customized. Everybody can go and take a class on Coursera on AI, right? But how do you apply it to oil and gas industry problems - that remains a challenge.

So Halliburton designs these boot camps to get geologists and drillers immersed in AI and IoT:

We have a big workforce of drillers; they are actually on the field. We are sitting in the office. So we have to understand their mindset. 

For Pandya, culture comes first, then talent, then tech: "culture sets the stage for everything else." But Pandya makes a critical point: if your workers don't feel free to fail, then your culture isn't ready for digital change.

This new technology is changing fundamentally the way we do business, the way we make decisions. And so it is a different mindset... The culture of failing fast and learning from failures is something which we have championed across Shell. It's okay to fail. And that's a huge, huge change in mindset, because when you are putting billions of dollars of investments [at stake], failure is usually not an option.

My take

Most of the panelists are investing in some type of AI/IoT COE (Center of Excellence). Give me a COE over a POC (Proof of Concept) anyway. A COE reflects a grittier commitment - and a recognition of the skills transitions needed. True, not all companies are able, or willing, to build data science teams, but it's instructive to see how approaches like COEs are holding up across projects.

A couple of panelists emphasized choosing the right implementation partner/advisory - that wisdom remains a constant. This panel was a welcome reminder that enterprise tech is at different maturity levels. It's our collective job to push beyond the marketing bombast and determine where we stand. Blockchain and quantum computing remain futuristic in an enterprise context, albeit with very different issues to conquer, whereas IoT, and now AI, have some live use cases to consider. Granted, none of the panelists offered up hard ROI numbers, but that's also a question that wasn't explored, and probably should have been.

Any discussion that comes back to data-powered business models must also return to issues of security, privacy, and governance. That wasn't a focus of this panel, but it was addressed in other Connected Enterprise sessions. My upcoming podcast with Steve Wilson on the persistent problem of identity will dig further.

Image credit - Photo of AI and IoT real world use case panel at Connected Enterprise 2019 by Jon Reed.

Disclosure - Constellation Research provided me with a press pass and hotel accommodations to attend Connected Enterprise 2019.


          

PHP developers

Applicants should have a good working background in HTML, JavaScript, PHP and MySQL on Linux and Windows platforms. XML and CSS knowledge is an added advantage. Applicants must hold a B.Tech/MCA with a minimum of 1 year of experience. Web application development using PHP 4/5; strong in PHP OOP and MySQL queries. Preference for applicants with experience in open source customizations such as osCommerce,...
          

Zapbuild Technologies - Senior PHP Developer - CodeIgniter/Zend/MySQL/CakePHP...

We have an urgent opening at our head office in Mohali, Chandigarh for a Sr. PHP Developer. Experience: 1-4 years. This is regarding a job opening at Zapbuild Technologies Pvt. Ltd., Mohali for the position of Software Engineer - Open Source Technologies. Location: Mohali, Chandigarh. Skills required: CakePHP, Laravel, CodeIgniter, Zend, MySQL, jQuery, APIs (RESTful services). Job...
          

The hot debate these days over value-added services

According to news from the Ministry of Information and Communications Technology, the ministry has decided to shut down value-added services, or "VAS," and the shutdown will take effect soon. The sticking point is that some insist there are jobs that could be wiped out if VAS is closed down.

What do value-added services do with people's money? Value-added services (VAS) cover everything beyond operators' standard offerings such as voice calls and ordinary SMS. In mobile VAS, channels such as SMS, calls, and data are used to deliver services — typically contests and voting, mobile advertising, notifications, mobile payments, ring-back tones, or online games.

Although subscribing to these services is supposed to be optional and chosen by users, it appears some VAS companies enrolled users without their consent. These services are usually expensive — for example, a single VAS text message can cost the user at least 300 tomans.

This abuse by some VAS companies led the Minister of ICT to decide to end the VAS program and to announce that decision. In his words: "Perhaps the time has come to shut down all value-added services for good, and soon. By now everyone is certain that I have no red lines when it comes to people's rights, especially when money is being stolen from their pockets."

According to the ministry, Mohammad-Javad Azari Jahromi recently said that smart-city work means creating the conditions for innovative technology companies to solve urban problems, adding: "Urban smartification must not become another story of VAS companies, which created added value only for a handful of investors, not for the people.

If added value is to be created in urban services by companies coming in to work, that value must be felt by the people — not turn into a hand reaching into their pockets." He added: "The existing structures must not create monopolies. A company should not gain an exclusive grip on services through its closeness to an operator, with the consequences we are now seeing in the VAS affair. Real added value means opening up urban data and letting a competition of creativity solve the city's and the people's problems."

Azari Jahromi continued: "Open data is now a necessity. Urban data must not sit in the hands of a select group — that breeds economic monopoly and corruption, and leaves people without benefit or satisfaction. The perfect symbol of this is VAS activity in this country, which earns very large sums every year while neither the source of the revenue nor how it is spent is transparent."

The minister pressed the point: "Name one VAS company you know. How can there be an annual market of 2,600 billion tomans in which you cannot name a single player? If you claim they have created 30,000 jobs, then introduce one company and the jobs it has created."

Responding to the debate, Reza Olfat-Nasab, a board member of the national union of online businesses, said of the claim that VAS creates jobs: "What has happened with VAS is that some of us were signed up to these services, at times without even knowing. Saying that value-added services created jobs, and that therefore they can do whatever they like, is no justification."

He added: "I don't think anyone has a problem with legitimate VAS. If it is something users actually need, there is nothing wrong with it. But today we see that many people were dragged into this without knowing — even through malware. A hand has gone into people's pockets. Creating 30,000 or 100,000 jobs while taking people's money without their knowledge is no achievement, and it has to be stopped."
          

How to run Android APKs directly through Google Chrome (Windows)


Android has become one of the best-known mobile operating systems. A smartphone running Android makes everyday work easier, and in practice it stopped being merely a wireless communication device long ago.

It has become a multifunctional smart device used for all sorts of purposes: multimedia entertainment, gaming, hobbies, and, of course, getting work done more easily.

Much of this is because Android itself keeps getting more capable and modern with each release, so it is no surprise that, for certain tasks, using an Android phone today is on par with using a computer.

One reason Android smartphones are so attractive and so widely relied on is the sheer abundance of features and apps.

Nobody can count them exactly, but there are likely hundreds of thousands, if not millions, of Android apps for every imaginable purpose, and a good share of them — office suites and the like — can make your work noticeably more effective and efficient.

That is a big part of why Android became so popular: many people genuinely need what the OS offers, so the number of Android users keeps growing.

The platform is now used across every class and profession, each person putting it to work for their own purposes.

Next, the most interesting part of Android is that it is open source.

Thanks to that open-source model, as we can see today, all sorts of new innovations and technologies arrive in Android's development every year.

That same openness also means Android can now run on multiple platforms, whether as dedicated installs or through emulators, all of which are easy to find and use today.

And if you are someone who genuinely needs Android across devices, you will want a tool that can run Android apps on other platforms, such as Windows.

That is entirely possible now, because there are plenty of Android emulators you can install on the Windows PC you are using right now.


Android Emulator (Windows)


As touched on above, you can easily run Android apps on a computer with nothing but Windows by using an Android emulator.

There are many emulators to choose from, such as Nox, BlueStacks, and MEmu, as well as dedicated options such as Remix OS, which runs a full Android system on a Windows machine via dual boot.

Android emulators for Windows are now fully usable by the general public, with solid features and performance. It is no surprise, then, that many people rely on them for all sorts of purposes: streamers, for instance, play Android games on Windows through an emulator to produce content.

Others use an Android emulator to multitask work on a single machine, which is really the main point of running one on Windows in the first place.

That said, there are a few things to know about using an Android emulator on Windows:


  1. It needs a fair amount of RAM. Ideally, allocate at least 2 GB of RAM to the emulator for it to run smoothly.
  2. It needs a GPU with decent performance, particularly if you enjoy playing Android games on Windows through an emulator.
  3. It needs a fast processor. Emulators are genuinely heavy to run, so for smooth use you want a PC processor with above-average speed.
  4. It will likely run an old Android version. This usually doesn't matter, since all you really need is for your app to run; most emulators today ship Android 4.4 (KitKat) or 5.1 (Lollipop).
  5. It will throw errors now and then. You have to accept this, since that is simply the nature of emulators.


In normal conditions, though, you can use these emulators smoothly with minimal problems, provided your Windows PC has decent specifications.

There are also two types of emulator you can use on Windows:


  1. A dedicated emulator application (standalone installer).
  2. An emulator that is a sub-app or feature of another Windows application (third-party emulator).


What we will cover this time is point 2: we will not use a standalone Android emulator application.

Instead, we will use an extension integrated directly into the Google Chrome browser. Is that even possible? It certainly is, and that is the focus of this article.


Android Emulator in Google Chrome (Windows)


One thing you should know about this tutorial: it only works in the Google Chrome browser on desktop computers (here, specifically on Windows).

So the method described below will only work in desktop Google Chrome. We have not tried it on operating systems other than Windows, so we cannot speak to the results there.

So how is this different from a typical Windows Android emulator?

The difference is obvious: this method is nothing like a regular emulator, because no installer application is needed at all. We only use the Extensions feature of the Google Chrome browser.

In other words, with nothing but Google Chrome for Windows, you can run Android apps easily and instantly.

Here are some of the advantages of running an Android emulator inside Google Chrome:


  1. Only the Google Chrome browser is needed to run an Android APK.
  2. Good, stable integration.
  3. Very easy, uncomplicated installation.
  4. No advanced settings required; once the extension is installed, it is ready to use.
  5. Easily available for free from the Chrome Web Store.
  6. Reliable for running almost any Android app.
  7. You can set the device layout yourself, portrait or landscape.
  8. It can run Android apps that need an Internet connection (online), not just offline ones.


Interested in using this Android emulator in Google Chrome?

Honestly, we have used this setup for a long time ourselves. Beyond its simplicity, all you need is Google Chrome: on top of browsing, it can now run Android apps directly, with no separate emulator application.

Still interested? Then you should know the extension's name: ARC Welder.


What is ARC Welder for Google Chrome?


As mentioned above, you can run Android apps with nothing more than the Google Chrome browser already installed on your Windows computer.

To run Android apps through Chrome, you need a special extension called ARC Welder.

In general, ARC Welder works by leveraging ARC (App Runtime for Chrome), a flagship feature of Chrome OS.

Don't worry if you are not familiar with ARC on Chrome OS, because when you install the ARC Welder extension, the ARC component is installed into your copy of Chrome automatically.

So all you have to do is install the extension and start using it. Easy, isn't it?

There are just a few things to check before using ARC Welder in your browser; read on to the next section.


Requirements for using ARC Welder in Google Chrome (Windows)


Since this extension is technical in nature, there are naturally some requirements to meet:


  1. Use the latest version of the Google Chrome browser.
  2. Prepare the Android app (APK) that will be opened through ARC Welder. You can download APKs online from the many sites that provide offline APK installers; make sure the file is kept somewhere easy to find.
  3. Wherever possible, use a reasonably capable computer: at least 4 GB of RAM, 64-bit Windows, and a fairly fast processor, even without a discrete GPU.
  4. Allow for a download of more than 100 MB. As noted above, during installation ARC Welder also automatically installs the ARC runtime for Chrome, which is why you need over 100 MB of Internet data; the ARC Welder extension itself is only about 12 MB.


If you understand the requirements above, you can install the Android emulator in Google Chrome by following the steps below.
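Before loading an APK into ARC Welder, it can help to confirm the download is not corrupt. An .apk file is just a ZIP archive containing (among other things) an AndroidManifest.xml, so a quick sanity check is possible from Python. This is merely a convenience sketch, independent of ARC Welder itself, and the demo file name is made up.

```python
import os
import tempfile
import zipfile

def looks_like_apk(path):
    """True if `path` is a ZIP archive containing an Android manifest."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as z:
        return "AndroidManifest.xml" in z.namelist()

# Demo on a stand-in archive; point it at your downloaded APK instead.
demo = os.path.join(tempfile.mkdtemp(), "demo.apk")
with zipfile.ZipFile(demo, "w") as z:
    z.writestr("AndroidManifest.xml", "<manifest/>")
print(looks_like_apk(demo))  # prints: True
```

If this returns False for a file you downloaded, the download was probably interrupted or the site served something other than a real APK.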


Installing the Android Emulator in Google Chrome (Windows)



  1. Go to the Chrome Web Store and download the extension named ARC Welder. To get there faster, open this URL directly: https://chrome.google.com/webstore/detail/arc-welder/emfinbmielocnlhgmfkkmkngdoccbadn
  2. Install it into Google Chrome and let the download and installation process finish.
  3. Done! ARC Welder is now installed in the Google Chrome you are using.


So, how do you use it? Here are the steps.


How to use ARC Welder in Google Chrome



  1. Open Chrome Apps, or visit this URL: chrome://apps
  2. There you will see an app called ARC Welder.
  3. Open that app.
  4. A prompt will appear; pick a location for the ARC Welder file and folder directory by clicking Choose.
  5. Click Select Folder once you have decided where the ARC Welder directory should live.
  6. Next, the main screen appears; to open an Android app, click Add your APK.
  7. Browse to the Android application (APK) you downloaded earlier and click Open.
  8. Wait until ARC Welder finishes loading the APK.
  9. A layout configuration page will then appear; adjust it as you like, such as whether to run it as a Tablet or Phone, and in Portrait or Landscape orientation.
  10. Once configured, click Test to run the APK.
  11. The Android application will then open.
  12. Use the Android application as you normally would.
  13. Done!
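Once ARC Welder has packaged an app, Chrome treats it like any other installed Chrome app, and Chrome apps can also be started straight from the command line with the `--app-id` switch instead of going through chrome://apps. Below is a minimal sketch of building that command; it assumes the Chrome binary is reachable on your PATH as `chrome` (on Windows it is typically `chrome.exe` under Program Files), and uses ARC Welder's extension ID taken from its Web Store URL above.

```python
# Sketch: launch an installed Chrome app (such as ARC Welder) directly
# via Chrome's --app-id command-line switch, skipping chrome://apps.
# Assumes the Chrome binary is on PATH as "chrome".

# ARC Welder's extension ID, from its Chrome Web Store URL above.
ARC_WELDER_ID = "emfinbmielocnlhgmfkkmkngdoccbadn"

def chrome_app_command(app_id, chrome_exe="chrome"):
    """Build the argument list that launches an installed Chrome app."""
    return [chrome_exe, f"--app-id={app_id}"]

cmd = chrome_app_command(ARC_WELDER_ID)
print(" ".join(cmd))  # chrome --app-id=emfinbmielocnlhgmfkkmkngdoccbadn
# To actually run it: subprocess.run(cmd) with the subprocess module.
```

This is handy if you open the same APK often and want a desktop shortcut instead of navigating to chrome://apps each time.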


Does this work for Android apps that need an internet connection? Absolutely! The walkthrough above was done with an Android app that does require an internet connection to work.

See how easy it is to use this Android emulator inside the Google Chrome browser?

We can confirm this method works well, as long as you are using the latest version of Google Chrome (Windows) and a PC / laptop with adequate specifications.

That said, this Android emulator for Google Chrome does have some drawbacks, namely:


  1. Freezes (possibly from running out of RAM or the processor choking).
  2. Very long load times (loading an APK feels heavy).
  3. Some applications (APKs) cannot be opened, for example the Play Store app, which requires additional core components.
  4. Not recommended for playing games, since there are no advanced features such as custom button mapping, and heavy graphics do not run well.
  5. The layout may be cut off and cannot be scrolled down.


Admittedly, in terms of performance ARC Welder sits far below standalone, dedicated Android emulators.

That is because the whole point of an Android emulator inside Google Chrome is simply to open an Android application (APK) quickly and instantly.

For advanced use, such as playing online games or graphics-heavy games, ARC Welder just cannot accommodate that.

However, if you do not need heavy-duty emulation and simply want an easy way to open Android apps without installing a dedicated Android emulator,

then the ARC Welder extension for Google Chrome (Windows) is one of the best options you can get right now.

Finally, we hope this article is useful to you, that you found what you were looking for, and that you have a pleasant day. P.AW ~ DRD
          

How to prevent OLED / AMOLED screen burn-in (shadow)

Android smartphones have evolved into reliable smart devices for all kinds of things: not just long-distance communication, but also entertainment, getting work done, and pursuing hobbies.

Beyond that, the technology and innovation packed into today's Android smartphones is remarkably advanced; lately most of them even ship with on-device machine intelligence, commonly referred to as Artificial Intelligence.

So the word "smartphone" really is reflected in Android devices, because they have become truly multifunctional.

Another interesting aspect of Android smartphones is their sheer variety. This comes from Android's open source nature: Google, as the developer of the Android OS, distributes it and licenses partnerships to smartphone makers.

No wonder, then, that there are so many Android smartphones with different designs, brands, specifications, innovations, and, of course, highly varied and competitive prices.

Moving on to more technical matters: even though they all run Android, Android smartphones can differ from one another. Generally those differences show up in:

  • Market segment
  • Specifications used
  • Innovations offered
  • Design & aesthetics
  • Branding

So no two Android smartphones are exactly alike, especially across brands. Take Oppo and Samsung, for example: both use Android as their main OS,

yet the two brands differ significantly, for the reasons explained above.

The specifications also clearly reveal a device's class. If a smartphone targets the high-end segment, the specs it carries are no joke.

But if it is aimed at the entry level, you will have to settle for bare-bones specifications, without any expensive technological innovations.

Judging by specifications, we can distinguish Android smartphones from one another using these points:

  1. Processor type and speed
  2. RAM and storage capacity
  3. Camera features
  4. Advanced innovations, usually exclusive to the smartphone maker in question
  5. Build quality
  6. Display panel

With the points above, you can easily identify an Android smartphone: if its specifications qualify as "premium",

you can assume the device sits at least in the upper mid-range, with a correspondingly high price tag.

That said, premium specifications are no guarantee that the hardware will keep working well, or performing well, for a long time.

And that is exactly what we will discuss this time: a problem with one particular part of an Android smartphone's specification, namely the display panel.

As you may know, modern Android smartphones use a variety of display panels. For details, you can read about the types of display panels used in Android smartphones on the page below:


To keep this discussion focused, we will only cover solutions for problems affecting OLED / AMOLED display panels.


OLED / AMOLED (Android display panels)


For your information, OLED / AMOLED is a premium display panel technology capable of producing spectacular, bright, detailed, color-rich images.

That is why this panel type is usually reserved for Android (and non-Android) smartphones targeting at least the mid-range segment: its production cost is somewhat higher than that of LCD panels (IPS / TFT).

Overall, an OLED / AMOLED panel is very satisfying to own and use, because your eyes are treated to rich, vivid, true-to-life color.

It is fair, then, that many people call OLED / AMOLED the best display panel for an electronic device, because it really is superior to other panel types.

You should also know that OLED / AMOLED panels come in several revisions, or, put simply, refined versions, namely:

  • P-OLED
  • Super AMOLED
  • Dynamic AMOLED
  • Fluid AMOLED

Each of these OLED and AMOLED revisions also has further "enhancement" versions with suffixes such as "+", "Plus", or "Advanced", but all of them descend from the parent panel technology, OLED.

*** AMOLED itself is a refined derivative of OLED. ***

Do you know which OLED-derived panel is the most widely used today?

If not, the answer is Super AMOLED. It has become the panel of choice for many Android smartphone makers wanting very bright, detailed, color-rich images.

Still, even though Super AMOLED gets very positive feedback from its users, it has its share of shortcomings.

For details, see the strengths and weaknesses of Super AMOLED displays on the page below:


And did you know there is one problem that is a genuine curse, one that is all but guaranteed to eventually appear on every Android smartphone with an OLED / AMOLED screen?

Yes! That problem is burn-in, often described as the "shadow" disease of AMOLED / OLED / Super AMOLED screens.

That is our topic this time: burn-in, or shadow. Below we lay out how to stave off this problem and minimize its chances of appearing.

But first, you need to understand: what exactly is OLED / AMOLED burn-in, or shadow?


OLED / AMOLED burn-in (shadow)


One thing you must know about this infuriating problem: it is permanent. It cannot be repaired unless you replace the panel with a brand-new part!

Of course, nobody likes permanent damage, especially on the screen of their Android smartphone; it makes the device genuinely unpleasant to look at.

This problem only affects Android (and non-Android) smartphones with OLED-type display panels and their derivatives, such as AMOLED, Super AMOLED, Dynamic AMOLED, and so on.

So if your Android smartphone uses an LCD panel such as IPS or TFT, you need not worry: burn-in / shadow will not happen on your device.

So what causes burn-in, or shadow, to appear?

  1. Burn-in (shadow) occurs when the RGB pixels of an OLED / AMOLED panel no longer respond properly: they can no longer reproduce the colors they should, and they leave behind a visible trace. This trace is what people call the "shadow".
  2. Burn-in (shadow) can also appear when the top glass layer of the OLED / AMOLED panel gets scorched by pixel heat. This is commonly found on devices run at full brightness all the time, and it leaves a printed mark with a reddish-black or reddish-purple tint.

There are various triggers: careless or excessive usage patterns, or a manufacturing defect in the smartphone's display itself.

Burn-in (shadow) cases can be categorized by severity as follows:

  1. Low-level burn-in (shadow): barely visible and still easy to overlook.
  2. Mid-level burn-in (shadow): already visible, but small in scale, and the burn-in or shadow color is not yet dense; it still looks translucent.
  3. High-level burn-in (shadow): the OLED / AMOLED display on your Android smartphone is no longer fit for use, because it is covered by burn-in or shadow that is plainly visible and disruptive.

For a clearer picture, see examples of each severity level of OLED / AMOLED burn-in / shadow in the images below:

[Image: low-level burn-in / shadow]
[Image: mid-level burn-in / shadow]
[Image: mid-level burn-in / shadow]
[Image: high-level burn-in / shadow]
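A practical way to check a panel for early burn-in is to display full-screen solid colors: on a uniform red, green, blue, or white field, any ghosted area stands out immediately. Below is a rough sketch that generates such test frames as plain PPM images using only the standard library; the file names and the 64x64 size are arbitrary choices for this example, and on a phone you would simply open the images full screen.

```python
# Sketch: generate full-screen solid-color test images (plain-text PPM,
# stdlib only). Viewing each color full screen makes ghosted areas
# (burn-in) easy to spot, since they cannot hide in a busy image.

def write_solid_ppm(path, rgb, width=64, height=64):
    """Write a plain-text (P3) PPM image filled with a single color."""
    r, g, b = rgb
    with open(path, "w") as f:
        f.write(f"P3\n{width} {height}\n255\n")
        for _ in range(width * height):
            f.write(f"{r} {g} {b}\n")

# Solid red, green, blue, and white frames reveal per-subpixel wear.
for name, color in [("red", (255, 0, 0)), ("green", (0, 255, 0)),
                    ("blue", (0, 0, 255)), ("white", (255, 255, 255))]:
    write_solid_ppm(f"test_{name}.ppm", color)
```

Checking each color separately matters because OLED subpixels age at different rates, so a shadow may be obvious on one color and invisible on another.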

Frightening, isn't it?
We doubt any of you wants this to happen; we would hate to run into it ourselves, because the repair cost is high and rarely worth it.

So before this problem strikes you, it is best to head off burn-in / shadow with the suggestions we have put together below.


Preventing burn-in / shadow on OLED / AMOLED screens


If you are worried about burn-in or shadow appearing prematurely, you can delay it with the tips below.

Rest assured: if you follow our suggestions, burn-in / shadow should not appear within the first year, and in general your Android smartphone's panel will keep performing well in the years after that.

The exception is if your Android smartphone has a poor-quality OLED / AMOLED panel, or a manufacturing defect; in that case the tips below will not help much.

So what should you do to prevent burn-in / shadow on an Android smartphone with an OLED / AMOLED screen?

Follow everything we describe below.


Avoid prolonged full brightness


The main trigger for burn-in / shadow on OLED / AMOLED screens is, in most cases, leaving brightness set to maximum for too long.

This overdrives the RGB pixels of the OLED / AMOLED panel, after which burn-in or shadow is almost certain to appear; the majority of burn-in / shadow cases stem from exactly this.

Worse still, if you routinely run full brightness in the midday sun, under direct sunlight, for long and frequent stretches, you can count on burn-in / shadow showing up in the near future.

That is why OLED / AMOLED screens that develop burn-in / shadow most often belong to users who spend long periods using their phones under direct sunlight.

There is actually no need to worry about this; all you have to do is the following:

  1. Set brightness manually when under direct sunlight. Do not max it out; just adjust it to a level comfortable for your eyes.
  2. Switch to Auto Brightness once you are back indoors or out of direct sunlight. This keeps the OLED / AMOLED panel responsive and adaptable to varying conditions.

Doing both of the above also pays off in another way: your Android smartphone's battery consumption becomes noticeably lower.


Avoid static visuals


You should know that burn-in / shadow on Android smartphones with OLED / AMOLED screens is usually triggered by on-screen visuals that are static and left unchanged for too long.

For example, if you keep your icons in the same positions and use the same wallpaper and launcher forever, burn-in / shadow becomes much more likely to appear.

In fact, burn-in / shadow most often occurs over the status bar and the navigation bar. Why?

The reason is simple: their placement is static; they never move or change over time.

You can therefore minimize the chances of burn-in / shadow by avoiding static visuals in the following ways:

  1. Change your launcher regularly.
  2. Change your wallpaper regularly.
  3. Live wallpapers are even better for this purpose, although they consume more battery.
  4. Periodically rearrange or move your icons.
  5. If you use Always on Display (AOD), make sure the AOD app is free of bugs and errors.
  6. If possible, rotate through themes whenever that option is available.

By applying the points above, you drastically reduce the potential for burn-in / shadow, because your screen no longer shows the same static visuals all the time.


Set the screen timeout to 30 seconds at most


This one is also important, and closely tied to the static-visual issue explained above.

To keep your Android smartphone's screen from sitting idle on an unchanging image for too long (simply because the phone is not in use), set the screen timeout to at most 30 seconds; even less, say 15 seconds, is better still.

That way, after at most 30 seconds of idle time your Android smartphone automatically goes back to sleep. This has a major impact in minimizing burn-in / shadow.


Use Dark Mode (Night Mode)


This is another trick for avoiding burn-in / shadow on Android smartphones with OLED / AMOLED screens.

The core mechanism is to turn most of the app / Android system UI black. As you may know, an OLED / AMOLED screen lasts much longer when the color scheme you use is predominantly black.

This is exactly what we exploit here. Enabling Dark Mode is generally easy, since the feature ships by default on Android 9.0 and above.

If you are on an older Android version, you can instead use one of the free apps available on the Google Play Store.

Usually, once you enable Dark Mode / Night Mode, a number of other Android apps switch to a dark color scheme as well.

As a bonus, besides extending the service life of the OLED / AMOLED panel, this also makes battery consumption remarkably low and efficient.


Use OLED / AMOLED-friendly icons, themes, and wallpapers


Next, if you cannot use the Dark Mode / Night Mode feature described above,

you can still minimize burn-in / shadow by using themes with a black or dark grey color scheme, where possible.

You should also use icon packs labelled OLED / AMOLED friendly, since these icons generally use soft colors that put far less strain on an OLED / AMOLED screen.

You can get such icon packs for free from the Google Play Store, and most Android launcher apps already support custom icon packs.

Likewise, wherever possible, use wallpapers that are kind to your OLED / AMOLED screen, meaning wallpapers with predominantly black colors.

If you can also enable Dark Mode / Night Mode on your Android smartphone, then combining it with OLED / AMOLED-friendly themes, icons, and wallpapers makes an excellent pairing that will significantly extend the life of the OLED / AMOLED screen you are using.


Hide the status bar and navigation bar


As we touched on above, burn-in and shadow problems most often appear because of the Status Bar and Navigation Bar, which are by nature fixed / static visuals.

You can avoid this by hiding the Status Bar and Navigation Bar; hiding the Status Bar may feel awkward, though, since you really need it to check the time and notifications.

Not so the Navigation Bar. There are 3 things you can do to prevent burn-in / shadow on the Navigation Bar:

  1. Change the Navigation Bar icon style and always use a black background.
  2. Make the Navigation Bar auto-hide when it is not in use.
  3. Replace the Navigation Bar with gestures.

We ourselves prefer gestures: besides removing the Navigation Bar panel entirely, gesture navigation also makes the phone look more modern and up to date.

Going further, if at all possible it is better to hide both the Status Bar and the Navigation Bar panel; if hiding the Status Bar is not an option, at least hide the Navigation Bar panel.
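For reference, on many Android versions (roughly Android 4.4 through 10) the bars can also be hidden system-wide over ADB using the `policy_control` setting. This is only a sketch, not part of the article's own steps: it requires USB Debugging, and the setting has been removed on newer Android releases, so whether it still works depends on your ROM:

```shell
# Hide only the navigation bar (immersive mode) in all apps
adb shell settings put global policy_control immersive.navigation=*

# Hide both the status bar and the navigation bar
adb shell settings put global policy_control immersive.full=*

# Undo and restore the default behavior
adb shell settings delete global policy_control
```
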


Use an AOD that is colorful and keeps moving


As you know, one advantage of an OLED / AMOLED panel is that it can run the Always on Display (AOD) feature properly with very little battery drain.

And if the phone in question has no notification LED, the AOD feature naturally becomes a favorite, because it can surface notifications in real time.

Even so, in several cases OLED / AMOLED burn-in / shadow problems have appeared precisely because of the AOD feature.

The reason is simple: AOD is always active while the phone is idle, so if the AOD implementation is not sophisticated and uses a bright color combination, burn-in / shadow will certainly appear after long-term use.

The root problem is the same, namely a static visual, so make sure you use an AOD that keeps moving to a different position every hour, so that no static image is displayed.

Next, avoid AOD layouts with solid colors, such as white, green, red, blue and so on.

Always use combined or gradient colors; if that is not possible or the feature is unavailable, at least use a colored background, so the OLED / AMOLED pixels are not lit only by that static AOD layout.

Keep in mind that this makes battery drain slightly higher than an AOD with a very minimal layout.

And with that, this discussion comes to an end. We suggest you simply follow everything we described above, because the tips we shared come from the advice of professionals.

In short, caring for an Android phone with an OLED / AMOLED panel is a bit tricky; you cannot be as carefree as with an IPS display.

But that is the price you pay to enjoy the truly beautiful, eye-pleasing visuals of an OLED / AMOLED display panel.

If you can apply the points above, your phone's OLED / AMOLED screen should survive without damage for at least the next 1-2 years or more.

Because if it fails too early, especially once the warranty has run out or the damage is not covered, the cost of replacing an OLED / AMOLED display part is substantial: a single display panel can run into millions of rupiah.

That is why you should take the best possible care of your OLED / AMOLED screen, so it does not end up costing you.

Finally, we hope this article is useful to you, that your hopes come true, and that your day is pleasant. P.AW ~ DRD

          

Download the Latest Android SDK Platform Tools


We think most of you already know what the Android SDK is. The Android SDK is a toolset that lets its users build an application (.apk) specifically for the Android OS, or anything else related to the Android operating system.

The Android SDK is an official toolset released by Google and updated constantly; since the Android OS is open source, it is only natural that the Android SDK is built and offered to the public completely free of charge.

Admittedly, using the Android SDK is not as easy as using generator-style tools, which are fast and simple. Still, most Android app developers out there who are truly fluent in coding prefer the pure Android SDK: besides being free, it is a premium, dependable toolset.

So what is the Android SDK exactly? It is the toolset where all Android software development begins; SDK stands for Software Development Kit, and it is a medium you can use for any purpose related to Android software.

The Android SDK does not ship just one platform component: besides core components such as the ADB and Fastboot drivers, it also includes dedicated tools for building an Android application.


The Difference Between the Android SDK and Android Studio


Many of you may wonder what the difference is between the Android SDK and Android Studio. Their functions are quite similar, but the Android SDK covers everything, while Android Studio was built specifically for developing Android software.

Put simply: building an Android app with the bare Android SDK requires genuine expertise, while Android Studio makes things much easier because it is based on an IDE (Integrated Development Environment), which is far friendlier to use than the raw Android SDK.

In practice, when you download Android Studio the full Android SDK is downloaded as well, because it is bundled into the Android Studio package.


Android SDK Platform Tools


Besides the Android SDK and Android Studio, Google ships one more package: the Android SDK Platform Tools. What does it contain?

In essence, SDK Platform Tools provides only the core drivers, with no other tools inside; it is just the core of the Android SDK. Confused?

Think of it this way: what makes a development kit work at all? The answer is the drivers and the other main core components, and that is exactly what the Android SDK Platform Tools package is.

The SDK Platform Tools package contains only a few components that are essential for Android development: ADB, Fastboot and Systrace.

No wonder the SDK Platform Tools download is so small, under 30 MB. And here is the key point: if you want the very latest ADB and Fastboot drivers, with support up to the newest Android version, SDK Platform Tools is the right choice.

Compared to the Minimal ADB and Fastboot driver installer, currently version 1.4.3 and possibly supporting only up to Android Marshmallow, the ADB and Fastboot drivers in the Android SDK Platform Tools support the latest Android, which at the moment is Android 9.0 (Pie).


Download the Latest Android SDK Platform Tools


Here we have several SDK Platform Tools versions you can use, free and fast; just use the latest version we have provided:

Password: dadroidrd.com

Usage is very easy: just download the package and extract it, and you can immediately use the latest ADB and Fastboot drivers without any significant bugs.

Before using the tools above, make sure your Android phone connects properly to your Windows PC; otherwise the tools above are useless. It is best to install the latest Android USB Driver below before moving on to the next step:


Here is a look at what is inside the Android SDK Platform Tools package:

As you can see, the SDK Platform Tools package contains many files, but for ordinary use, like ours, the most useful parts are the ADB and Fastboot drivers.

Just use the ADB and Fastboot drivers from this Android SDK Platform Tools package, since they are guaranteed to stay up to date and already support the latest Android OS versions.

With that, you can now use the ADB and Fastboot driver components in SDK Platform Tools directly via CMD; if you are not yet familiar with pointing CMD at the ADB and Fastboot driver folder, just follow the easy guide below.

How to quickly open CMD in a specific folder with a shortcut
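Once a Command Prompt is open inside the extracted platform-tools folder, a quick sanity check is to print the driver versions and list connected devices; these are standard platform-tools commands, and a device only shows up in the lists when it is properly connected:

```shell
adb version          # prints the installed ADB version
adb devices          # lists devices connected while Android is booted
fastboot devices     # lists devices connected in fastboot / bootloader mode
```
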

That is a brief explanation of the differences between the Android SDK, Android Studio and SDK Platform Tools, and a few reasons to use the ADB and Fastboot drivers from the Platform Tools package; we hope the discussion in this simple article makes sense.

If you do understand it, you will see the advantage of using the Android SDK Platform Tools over the Minimal ADB and Fastboot driver installers that are scattered all over the Internet these days. We hope this article is useful to you.

If the Universal ADB Driver application does not suit you, you can try one of the other ADB driver installer options, namely the following:

The only difference between one ADB driver application and another lies in the installation method: some use the Command Prompt, some are instant, some are copied and pasted manually, and some come with an installer. Pick whichever you prefer.

          

How to enable USB Debugging on an Android smartphone


Android smartphones now rule the market worldwide; a huge share of the world's population uses a smartphone running Google's Android OS, thanks in no small part to the international and local smartphone brands that ship Android as the main OS on the phones they sell.

Android phones are easy to own these days: starting at under a million rupiah you can take one home, since what distinguishes one Android phone from another is only the brand and the specifications.

The higher the specs, the higher the price, of course. What about the OS version? We think only a few people care, since most Android apps these days still support older Android versions.

And if you cannot update to the latest Android version because the OEM or vendor no longer supports your device, you can still enjoy the newest Android via a custom ROM, provided such a custom ROM exists for your particular phone model.


Android Is Open Source


The most interesting thing about Android, besides being developed by the world's largest Internet company, Google, is that it is open source, meaning anyone can take part in developing it.

Whether that means custom development, say custom ROMs and custom kernels, or building custom features, you can do all of it on Android, as long as you know what you are doing.

Because this is very technical work, not many people can do it themselves, but ordinary Android users still benefit from Android being open source, namely by installing the various custom builds that have been made, such as:

  • Custom ROM
  • Custom Kernel
  • ROOT
  • Unlock Bootloader
  • Custom Firmware
  • Custom Recovery
  • Flash
  • Custom Mod
  • and many other cool things

Because Android's open source nature is really aimed at a technical audience, it is no surprise that Developer Options is disabled by default by Google and by the OEMs who customize the pure Android source.

So to enjoy Developer Options you must enable it manually. Remember: this feature is not meant for casual users.


USB Debugging


Among the many features inside Developer Options, there is one option that is absolutely essential for custom development: USB Debugging.

As the name suggests, USB Debugging is the option that grants access, or authorization, to other devices connected to your Android phone, in this case a PC or computer.

This option can be used on almost every desktop OS available today: Linux, macOS and Windows.

The way it works is simple: from the device, in this case a computer, you can issue remote commands to do many things, for example unlocking the bootloader, installing a custom recovery or custom kernel, installing a ROM, even going as far as ROOT; all of this is possible as long as USB Debugging is enabled on your Android phone.

The usual connection medium is a USB cable, though it can now also be done wirelessly, even if we have not tried that ourselves. So is USB Debugging that important? In our view, extremely important for anyone who needs the things above.


How to Enable USB Debugging on Android


The explanation above gives the broad picture of USB Debugging; the question now is how to enable this feature.

If you have never heard of this term before, just follow the steps below:

  1. Open the Settings app >> System.
  2. Find the Build Number entry or the ROM version entry (for example, MIUI Version).
  3. Tap that entry 7 times.
  4. Developer Options is now enabled.
  5. Find it in the Settings menu; the easiest way is to type it into the Settings search box.
  6. Once found, enable Developer Options and then enable the USB Debugging option; see the screenshots below (taken from MIUI and stock Android):
  7. Done!

Note: Each Android brand may have its own way of enabling Developer Options; on some devices it is already enabled, on others it must be turned on manually. The broad outline, however, is the same as described above.


Android USB Driver


After enabling USB Debugging, there is one important thing to verify: that your Android phone connects properly to your Windows PC over the USB cable. If it does not, USB Debugging cannot function, so it is best to install the Android USB Driver below first:



What's next?


From here, adjust to your own needs: whether you need USB Debugging to use the Android SDK or Android Studio, or to install the custom mods we outlined above.

If the goal is to install a custom ROM or kernel, ROOT, unlock the bootloader, and so on, then you need a driver for USB Debugging to work, and that driver is ADB and Fastboot. Install the latest ADB and Fastboot drivers first; pick whichever version you want to use from the list we have provided below:

  1. Latest (Minimal) ADB + Fastboot Driver
  2. Latest Universal ADB Driver
  3. Latest 15 Second ADB Installer
  4. Instant ADB + Fastboot Driver (supports up to Android 10)
  5. Android SDK Platform Tools (the very latest ADB + Fastboot drivers)

Make sure those ADB and Fastboot drivers are properly installed on the Windows PC you are using, because if the phone cannot connect properly to the PC, the USB Debugging feature is meaningless.
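A quick way to confirm that USB Debugging and the drivers work together is the standard `adb devices` handshake; on the first run the phone shows an authorization prompt, which is normal ADB behavior:

```shell
adb devices
# First run: the phone shows an "Allow USB debugging?" prompt.
# Until you accept it, the device is listed as "unauthorized":
#   List of devices attached
#   0123456789ABCDEF    unauthorized
# After accepting the prompt, the same command lists it as "device".
```
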

With that, let us stop here for now; this discussion was really aimed at beginners, since for anyone used to the Android OS this option is something that simply must be enabled at all times. We hope it is useful.

          

WinUI 3.0 with Ryan Demopoulos

What's happening with Windows client-side development? Carl and Richard talk to Ryan Demopoulos about WinUI 3.0, the next version of the WinUI stack, which represents a major shift in how Windows applications will be built and supported in the future. Ryan starts the conversation focused on the current WinUI 2, which is open source but largely targets only UWP. WinUI 3 expands the horizons to support .NET Core and more - the alpha bits shipped at Ignite, check it out!
          


11:18 RT @dbrgn: An open source success story in the embedded space: Gardena golem.de/news/gardena-o… [Original]

17:53 RT @Diddielou: «He apparently does not take the public criticism of the last few months seriously and refuses to accept that this e-vot… twitter.com/i/web/status/1… [Original]


          

War in Raqqa: Rhetoric vs. Reality, thru Dec 20

Experience photographs, videos, open source investigations, and 360° Virtual Reality that document the assault on Raqqa, Syria by coalition forces in 2017. The show draws on Amnesty International's investigations, supported by students in UC Berkeley's Human Rights Investigations Lab and the Digital Verification Corps worldwide. Immerse yourself in video, testimonials, satellite imagery and maps that tell the stories of the families who lived and died.
          

Open Source: Monitoring solution Sentry switches to a proprietary license

Following many precedents such as MongoDB and Redis, the maker of the monitoring solution Sentry is now also switching from open source to a non-free license. The goal is to stop third parties from selling the software. (Open Source, Cloud Computing)
          

OpenTitan

  • Daily Crunch: Google announces open-source chip project

    The aim of the new coalition is to build trustworthy chip designs for use in data centers, storage and computer peripherals.

    The project will allow anyone to inspect the hardware for security vulnerabilities and backdoors. It comes at a time where tech giants and governments alike are increasingly aware that hostile nation states are trying to infiltrate and compromise supply chains in an effort to carry out long-term surveillance or espionage.

  • Google Is Helping Design an Open Source, Ultra-Secure Chip

    There are some parts of the OpenTitan design that won't be public, at least for the foreseeable future. These are all related to the actual physical fabrication of chips in a factory, categories like "foundry intellectual property," "chip fabrication," and "Physical Design Kit," among others. They hint at the immense challenges that exist in creating open source hardware—fabrication of which requires massive, specialized factories and proprietary silicon manufacturing processes, not just a laptop and an internet connection. If you don't own a silicon plant or have the leverage to convince existing fabricators to make OpenTitan chips for you, you won't be able to get them. And though device-makers have an incentive to save money on licensing fees with something like OpenTitan, silicon manufacturers who impose these fees may resist dropping them.

  • Google launches OpenTitan, an open-source secure chip design project

          

GUELPH POLITICAST #195 - Marshall on Transit


This weekend, the new Transit Action Alliance of Guelph (TAAG) will be holding their first annual Transit Summit and Town Hall. It will be a chance for the transit users and the transit curious to engage with City of Guelph employees and transportation advocates about how to make a transit-friendlier future. But why wait to get started? 

This week, we'll hear from Sean Marshall, who's a prolific writer and advocate for transit and will be speaking and presenting at the Transit Summit on Saturday. Marshall's a geographer by training, but he’s made a name for himself as one of the preeminent voices on transit issues, covering them on the TVO website, and on his own personal blog, by digging into the history and the challenges all around this province in creating more transit and better active transportation options.

These are interesting transit times. In Guelph, we're in the middle of creating a Transportation Master Plan. Up the road in Kitchener-Waterloo, the long-awaited LRT finally started running this past summer. In Toronto, the city council there finally reached a deal with the Government of Ontario to fund new subway expansion while retaining ownership of the present system. And Metrolinx is doing their damnedest to make two-way, all-day GO trains a reality on the Kitchener Line. So why does it still feel like we're going nowhere fast?

On this week's podcast, we'll consider that question in what we'll call a preamble to the summit. Marshall will talk about growing up as a transit nerd in Brampton, and how that much maligned GTA municipality might actually be a really positive example of transit expansion done right. We also talk about the transit challenges in Guelph and the region from Marshall’s perspective, and we get into the weeds about the Ontario Line, two-way, all-day GO and the transit plans from the Provincial Government. Finally, we'll talk about private operators and what role they can play in answering the transit needs of Ontario.

So let's get prepared to summit on transit with this week's Guelph Politicast!

You can find Sean Marshall's writing on TVO.org, and on his personal website. The first annual Transit Summit and Town Hall takes place this Saturday at St. Andrew's Presbyterian Church from 12 to 5:30 pm, and you can get tickets at TAAG's website.

The host for the Guelph Politicast is Podbean. Find more episodes of the Politicast here, or download them on your favourite podcast app at iTunes, Stitcher, Google Play, and Spotify.

Also, when you subscribe to the Guelph Politicast channel, you will also get an episode of Open Sources Guelph every Monday, and an episode of End Credits every Friday.


          

Wired.com: What Does Crowdsourcing Really Mean?


LCM note:  Karl Rove recently took to the WSJ to discuss how campaigns were changing.  One of his suggestions was that the traditional campaign structure would lose meaning and that individuals outside the structure would take a greater role.  Somewhat related, this is an old interview with Douglas Rushkoff about the original meaning and the very different popular understanding of the term "crowdsourcing."

From religion, novels and back again. The strength of community and the dangers of crowdsourcing

Sarah Cove Interviews Douglas Rushkoff via telephone on May 18, 2007

Douglas Rushkoff is an author, professor, media theorist, journalist, as well as a keyboardist for the industrial band PsychicTV. His books include Media Virus, Coercion, Nothing Sacred: The Truth About Judaism (a book which opened up the question of Open Source Judaism), Exit Strategy (an online collaborative novel), and a monthly comic book, Testament. He founded the Narrative Lab at New York University's Interactive Telecommunications Program, a space which seeks to explore the relationship of narrative to media in an age of interactive technology.

We spoke about the notion of crowdsourcing, Open Source Religion, and collaborative narratives.

Sarah Cove: What is crowdsourcing for you?

Douglas Rushkoff: Well, I haven't used the term crowdsourcing in my own conversations before. Every time I look at it, it rubs me the wrong way.

To read the rest of this interview click here.

 

 


          

Hire Dedicated Drupal Developers at an affordable price

MAAN Softwares is home to top Drupal developers, programmers, engineers, coders and consultants. Top companies and start-ups choose MAAN Softwares' Drupal freelancers for their mission-critical software projects. Our expert Drupal developers build solutions that suit your business requirements faster than other providers can deliver. MAAN Softwares boasts experienced, dedicated Drupal developers who use open source technology to deliver efficient, impressive solutions for clients' websites. Hire Drupal developers from MAAN Softwares and get exclusive, undivided attention on your projects. Drupal CMS can be used to build e-commerce sites, social networking sites and directory platforms for a range of business needs. Our team has delivered small, medium and enterprise-level Drupal solutions. Hire Drupal developers with us and get a Drupal website built to your requirements, at a very competitive price. Our Drupal development company follows an agile web app development process to ensure timely delivery to our clients. For more enquiries about our services, you can contact us through: Email id: info@maansoftwares.com Contact no.: 216-298-1665
          


G+D Mobile Security collaborates with lowRISC to support OpenTitan, a new open source project


G+D Mobile Security, a leading provider of connectivity and security in IoT, today announced it has partnered with lowRISC and Google in support of OpenTitan, an open source hardware root of trust (RoT) reference design and integration guidelines that enable chip producers and platform providers to build transparently implemented, high-quality hardware RoT chips tailored for data center servers and other devices.   Security begins with infrastructure, and OpenTitan will help ensure trans...

Read the full story at https://www.webwire.com/ViewPressRel.asp?aId=249616


          

Avail Custom Web Application Development Services

 Cache   
Jellyfish Technologies is a well-known custom web app development company that leverages the power of open source (Java, PHP) or Microsoft technologies to develop high-performance custom web applications. Our web app developers are trained to understand our clients' objectives and business logic, so that Jellyfish can deliver unique, top-notch custom web application development services and solutions. PS: Jellyfish Technologies is based in India, Canada, and the US. We offer outsourced front-end web development services to UK clients as well. Contact Us: Head Office: H-134, First Floor, Sector 63, Noida, Uttar Pradesh, India, 201301 Email: enquiry@jellyfishtechnologies.com Ph: +1 (801) 477-4541, +91 1204296782
          

Three-Dimensional Reconstruction of the Craniofacial Skeleton With Gradient Echo Magnetic Resonance Imaging (“Black Bone”): What Is Currently Possible?

 Cache   
Three-dimensional (3D) reconstructed computed tomography (CT) imaging has become an integral component of craniomaxillofacial patient care. However, with increasing concern regarding the use of ionizing radiation, particularly in children with benign conditions who require repeated examinations, dose reduction and nonionizing alternatives are actively being sought. The “Black Bone” magnetic resonance imaging (MRI) technique provides uniform contrast of the soft tissues to enhance the definition of cortical bone. The aim of this study was to develop methods of 3D rendering of the craniofacial skeleton and to ascertain their accuracy. “Black Bone” MRI datasets acquired from phantoms, adult volunteers and patients were segmented and surface and/or volume rendered using 4 commercially available or open source software packages. Accuracy was explored using a custom phantom (permitting direct measurement), CT and MRI. “Black Bone” MRI datasets were successfully used to create 3D rendered images of the craniofacial skeleton in all 4 software packages. Comparable accuracy was achieved between CT and MRI 3D rendered images of the phantom. The “Black Bone” MRI technique provides a viable 3D alternative to CT examination when imaging the craniofacial skeleton.
          

Devops Engineer Noida/J41191 ( 5-8 yrs. )

 Cache   
Noida
Job Post Date: Thursday, November 07, 2019
Other IT Experience
Have 5 years of relevant experience in a DevOps cloud environment (Git, Jenkins, AWS, Azure, Docker, Mesos, Kubernetes)
Have strong Scripting (Bash/Salt/Ansible/Terraform/Puppet/Chef) or Programming (Python/Ruby) skills
Have strong Linux, Networking and System Administration skills
Are involved in the DevOps Community and Open Source projects (visiting conferences, meetups, giving talks, etc.)
Like to take initiative, work in close collaboration with fellow developers and share your ideas and knowledge

          

TOP NODEJS WEB DEVELOPMENT COMPANY IN UK & INDIA

 Cache   
IIH Global is a top leading Node.js development company and a one-stop solution provider for building rich, high-performance and scalable web and mobile applications. We have experienced developers who have built everything from best-in-class eCommerce solutions and advanced Node.js programming to social networking and collaboration applications. With knowledge of the latest trends and an advanced skill-set, we are always prepared to create web-based Node.js apps. Node.js highlights: an open source, cross-platform environment; real-time applications for web and mobile; low-level APIs; inexpensive testing and cost-effective hosting; numerous packages and extensions; Google's V8 engine to boost app performance; high speed and scalability. Node.js technology gives you the power to build fast, secure and scalable real-time applications, with an event-driven and speedy Node.js back-end. Please feel free to send us an email at info@iihglobal.com or get in touch with us; our business development team will get back to you. Get A Free Quote: https://www.iihglobal.com/request-a-quote/
          

The best open source and free Nodejs and Angular platfom

 Cache   
With Spurtcommerce 2.0, open source NodeJS ecommerce platform and Angular ecommerce platform, develop a world-class ecommerce website with excellent UI.
          

SAP C/4HANA at Acuiti Labs

 Cache   
Using a combination of open source, SAP C/4HANA, SAP S/4HANA solutions and other enterprise applications, Acuiti Labs offers the best of the latest technologies and services.
          

Taxi Booking for Joomla 2.5/3.x

 Cache   

Taxi Booking software can be used for online taxi booking, online coach (shuttle, bus, boat, ferry, aeroplane) ticket booking, online limousine booking, online cab and minicab booking, as well as private hire vehicle booking, and is a complete distance-charge and flat-rate based online reservation and dispatch system.
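
The two pricing modes the description mentions (distance charge and flat rate) boil down to a small fare calculation. Here is a minimal sketch; the tariff shape, rate values and minimum-fare rule are hypothetical illustrations, not Taxi Booking's actual pricing logic:

```typescript
// Hypothetical fare calculation illustrating the two pricing modes:
// distance-based (charge per mile/kilometre) and flat rate.

interface DistanceTariff {
  kind: "distance";
  baseFare: number;    // pickup charge
  perUnit: number;     // price per mile or kilometre
  minimumFare: number; // never charge less than this
}

interface FlatTariff {
  kind: "flat";
  price: number;       // fixed price for the route
}

type Tariff = DistanceTariff | FlatTariff;

function fare(tariff: Tariff, distance: number): number {
  if (tariff.kind === "flat") {
    return tariff.price;
  }
  const metered = tariff.baseFare + tariff.perUnit * distance;
  return Math.max(metered, tariff.minimumFare);
}

// Example tariffs: £2.50 pickup + £1.20/mile with a £5.00 minimum,
// and a £45 flat-rate airport run.
const perMile: Tariff = { kind: "distance", baseFare: 2.5, perUnit: 1.2, minimumFare: 5 };
const airport: Tariff = { kind: "flat", price: 45 };

console.log(fare(perMile, 10)); // 2.5 + 1.2*10 = 14.5
console.log(fare(perMile, 0));  // metered 2.5 -> minimum fare 5
console.log(fare(airport, 37)); // flat 45 regardless of distance
```

A real dispatch system would layer surcharges, zones and time-of-day rules on top, but the core quote is this one function.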

Price: £199.99 (limited time price discount) fully open source unlimited domains license.

Get Taxi Booking

Looking for a discount? Contact us

Any Questions? Want to see online back end demo? Contact us


          

Share Cost

 Cache   

Share Cost is a Joomla component that allows you to raise capital towards a cause in a crowdfunding format; anyone interested can contribute.

Price: £89 £44.50 fully open source multi domain license.

Get Share Cost

Ideal for software developers, artistic types, local improvement projects, charities, NGO (non-government organisations) and any project that can generate crowd interest.

With Share cost you will turn your own website into a rich crowd funding platform.

Contact us with any questions.

Share Costs key features:

  • Fundraise through multiple contributors  - crowdfunding
  • Break a big project into small, easy-to-explain and easy-to-manage jobs, or have just one fundraising goal
  • Engage your current supporters and form a seed funding group should you decide to use a crowd funding platform in the future
  • Harness the power of collective thinking via the built in Comment and Feature request systems - Crowd sourcing within the crowd funding platform
  • Create project categories to fund raise for more than one project at a time
  • Interact with anyone interested via direct email
  • Fully responsive design to fit any device screen
  • Multi domain license fully open source GPL v.3
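
The contribution model in the feature list above amounts to summing pledges against a goal and reporting progress. A minimal sketch (the field names are illustrative, not Share Cost's actual schema):

```typescript
// Minimal crowdfunding arithmetic: sum contributions toward a goal
// and report percentage funded. Field names are hypothetical.

interface Contribution {
  contributor: string;
  amount: number;
}

// Total raised so far across all contributors.
function raised(contributions: Contribution[]): number {
  return contributions.reduce((total, c) => total + c.amount, 0);
}

// Progress toward the goal, capped at 100%.
function percentFunded(goal: number, contributions: Contribution[]): number {
  return Math.min(100, Math.round((raised(contributions) / goal) * 100));
}

const pledges: Contribution[] = [
  { contributor: "alice", amount: 50 },
  { contributor: "bob", amount: 25 },
  { contributor: "carol", amount: 125 },
];

console.log(raised(pledges));             // 200
console.log(percentFunded(500, pledges)); // 40
```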

Share Cost's documentation

Pre-sales questions? Contact us


          

Lead Magnet for Joomla

 Cache   

Lead Magnet for Joomla is the perfect content upgrade bait opt-in extension. 

Price: £19.99 fully open source multi-domain license.

Get Lead Magnet for Joomla

With Lead Magnet you can:

Collect user's email addresses in exchange for a Download.

Users are silently registered to your website so you can further communicate with them.

Sort users who sign up into Joomla groups.

Unlimited Download offers.

URL redirect instead of file Download.

 Inline form or Pop up form for email collection.

Multiple modules published anywhere on your website. 

Unlimited domains installation, fully open source GPL v.3 license.

Contact us if you have any questions or to see the back end demo.

Learn more about Lead Magnet for Joomla.


          

Taxi Booking Light

 Cache   

Taxi Booking Light is a lightweight version of the famous Taxi Booking for Joomla extension.

With Taxi Booking Light you can start selling your transportation services on your website in minutes.

Taxi Booking Light has the same front end booking form as Taxi Booking for Joomla, as well as a back-end management panel for Cars, Settings and Orders.

The main differences from Taxi Booking for Joomla are that Taxi Booking Light has only the built-in Address to Address* (price per mile/kilometre) service type and the Cash payment method.

Taxi Booking Light can be extended with the rest of the payment methods available for separate purchase.

Horizontal and Vertical Quick booking modules are also available.

We will be adding the rest of Taxi Booking for Joomla features as separate extensions that can enrich Taxi Booking light to meet your business needs.

Fully open source, multi-domain licensed software - install on as many websites as you like.

90 days of support and updates (can be extended).

Front-end Demo here. To request backend demo please contact us.

Price: £69.99 

See a front-end demo here

Get Taxi Booking Light here


          

Tour Booking

 Cache   

With Tour Booking you can start selling your tours on your website in a few minutes.

Tour Booking has the same great looking front end booking form as Taxi Booking for Joomla, as well as a back-end management panel for Cars, Settings and Orders.

Horizontal and Vertical Quick booking modules are also available.

Fully open source, multi-domain licensed software - install on as many websites as you like.

90 days of support and updates (can be extended).

Front-end Demo here. To request backend demo please contact us.

Price: £69.99 

Get Tour Booking here


          

Everything for Joomla - Developer's bundle

 Cache   

The Developer's bundle include access to download and support of all our Joomla extensions for one year.

All Joomla extensions are fully open source multi-domain GPL licensed so you can use them freely in your non-commercial and commercial projects.

Get Developer's bundle for Joomla

All current extensions are listed below. Extensions that are in development will also be added to this list and active subscribers will be able to download them.

Taxi Booking (£199.99)
Taxi Booking Drivers (£39.99)
Taxi Booking Invoice later payment gateway for business clients (£39.99)
Taxi Booking Stripe payment gateway (£39.99)
Taxi Booking moneta.ru payment gateway (£39.99)
Taxi Booking Mollie - iDeal payment gateway (£39.99)
Taxi Booking RedSys payment gateway (£39.99)
Taxi Booking SystemPay payment gateway (£39.99)
Taxi Booking CardSave payment gateway (£39.99)
Taxi Booking CECA payment gateway (£39.99)
Taxi Booking PayFast payment gateway (£39.99)
Taxi Booking Authorize.net payment gateway (£39.99)
Alpha User Points for Taxi Booking (£39.99)

Sports for Joomla 

Taxi Booking Light (£69.99)

Tour Booking (£69.99)

Progressive Web App for Joomla (£52.34)

Share Cost - Crowdfunding for Joomla (£44.50)

Lead Magnet (£39.99)

Autoresponder (£19.99)

Total price if you buy them separately: £976.67

Total price if you buy them in the bundle: £299.99

Get Developer's bundle for Joomla


          

Source Song Festival, Open Source: World Premieres

 Cache   
My newly-composed work "Voices of the City" will be premiered alongside works by Libby Larsen and David Evan Thomas as part of the Open Source: World Premieres Concert, the opening night of the Source Song Festival.

Program:
"Pharaoh Songs" by Libby Larsen
Alan Dunbar, baritone
Mark Bilyeu, piano

"To Joy" by David Evan Thomas
Mary Wilson, soprano
Clara Osowski, mezzo
Jacob Christopher, tenor
Tyler Duncan, baritone
Arlene Shrut & Erika Switzer, piano [...]
          

Node.js Application Development Outsourcing Services to the UK

 Cache   
Node.js is an open source platform that uses JavaScript on the server side. The single-threaded, non-blocking nature of Node.js is used to create highly scalable applications that can operate across distributed systems. Our team of proficient Node.js developers uses Node.js and its frameworks such as ExpressJs, SailsJs and ElectronJs for developing web applications, REST APIs and desktop applications. Jellyfish Technologies is one of the best Node.js application development companies. We have been providing Node.js application development outsourcing services to UK clients for the last 5 years. Contact Us: Head Office: G-76, First Floor, Sector 63, Noida, Uttar Pradesh, India, 201301 Email: enquiry@jellyfishtechnologies.com Ph: +1 (801) 477-4541, +91 1204296782 Website: https://www.jellyfishtechnologies.com/node.js-development-company.html
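
The single-threaded, non-blocking model described above can be illustrated with a short sketch (plain Node.js timers, no frameworks; the "request" names and delays are illustrative only). While one request waits on simulated I/O, the event loop is free to serve the others, so completion order follows I/O latency rather than start order:

```typescript
// Simulate three concurrent "requests", each awaiting non-blocking I/O.
// A single thread runs all of them: while one request is waiting,
// the event loop picks up the others.

const log: string[] = [];

// Non-blocking sleep: yields control back to the event loop.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function handleRequest(id: string, ioDelayMs: number): Promise<void> {
  log.push(`${id}:start`);
  await sleep(ioDelayMs); // simulated database / file / network call
  log.push(`${id}:done`);
}

async function main(): Promise<string[]> {
  // All three start immediately; none blocks the others.
  await Promise.all([
    handleRequest("a", 30),
    handleRequest("b", 10),
    handleRequest("c", 20),
  ]);
  return log;
}

const done = main();
// Prints: a:start b:start c:start b:done c:done a:done
done.then((order) => console.log(order.join(" ")));
```

The same property is what lets a Node.js server hold thousands of open connections on one thread, as long as each handler awaits its I/O instead of blocking.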
          

Top Node.js App Development Company in the UK

 Cache   
Are you struggling to find a leading Node.js web development company? Hire top Node.js web developers from Jellyfish Technologies, which provides outsourcing services to the UK, to pick up or start your projects with a dedicated team on an hourly or full-time basis. Jellyfish Technologies is a top Node.js app development company in the UK. At Jellyfish Technologies, our dedicated teams have proven expertise in Node.js development services, a versatile, open source, cross-platform runtime environment that offers unmatched results. Our Node.js development solutions have helped businesses achieve scalability easily through the use of the event loop, which is one of the best functional features of Node.js. If you're planning to build a feature-rich Node.js application and want to discuss the challenges of converting your idea into an innovation, get in touch with us; the experts at Jellyfish Technologies will help you with all your queries. Contact Us: Head Office: G-76, Third Floor, Sector 63, Noida, Uttar Pradesh, India, 201301 Ph: +91-1204296782, +1 (801) 477-4541 Business Email: enquiry@jellyfishtechnologies.com Website: https://www.jellyfishtechnologies.com/node.js-development-company.html
          

Wired.com: What Does Crowdsourcing Really Mean?

 Cache   

LCM note:  Karl Rove recently took to the WSJ to discuss how campaigns were changing.  One of his suggestions was that the traditional campaign structure would lose meaning and that individuals outside the structure would take a greater role.  Somewhat related, this is an old interview with Douglas Rushkoff about the original meaning and the very different popular understanding of the term "crowdsourcing."

From religion, novels and back again. The strength of community and the dangers of crowdsourcing

Sarah Cove Interviews Douglas Rushkoff via telephone on May 18, 2007

Douglas Rushkoff is an author, professor, media theorist, journalist, as well as a keyboardist for the industrial band PsychicTV. His books include Media Virus, Coercion, Nothing Sacred: The Truth About Judaism (a book which opened up the question of Open Source Judaism), Exit Strategy (an online collaborative novel), and a monthly comic book, Testament. He founded the Narrative Lab at New York University's Interactive Telecommunications Program, a space which seeks to explore the relationship of narrative to media in an age of interactive technology.

We spoke about the notion of crowdsourcing, Open Source Religion, and collaborative narratives.

Sarah Cove: What is crowdsourcing for you?

Douglas Rushkoff: Well, I haven't used the term crowdsourcing in my own conversations before. Every time I look at it, it rubs me the wrong way.

To read the rest of this interview, click here.

 

 


          

Web design and development company in london

 Cache   
Seawind Solution provides web design and development services, has a strong client base, and does a lot more than make a website work well and look great. Our expert designers work as a team to fulfil client requirements. Our developers are experts in Custom Website Development, Bespoke Website Development, Flash Website Development, PHP Web Development, ASP.Net Development, E-commerce and Online Store Development, Web Portal Development, WordPress, Joomla, Magento, Moodle, and other Open Source Development, providing best-quality web services in London, Birmingham, Glasgow, Liverpool, Edinburgh, Bristol, Manchester, Leicester, Cardiff, Derby, England, Scotland, Wales, UK.
          

Essentials of Biostatistics: An overview with the help of softwar

 Cache   
This book intends to provide an overview of biostatistics concepts and methodology through the use of statistical software. It helps clinicians and health care and biomedical professionals who need basic knowledge of biostatistics, as they come across clinical data related to patients, drugs and dosage requirements, and treatment modalities in day-to-day life, and are required to take clinical and health care decisions based on that data. The book covers basic concepts in the field of biostatistics, such as descriptive statistics, inferential statistics, and correlation and regression, along with advanced concepts such as factor analysis, cluster analysis, discriminant analysis and survival analysis. Each topic is explained with the help of the R statistical package (an open source package). One important note: the book does not discuss the formulas and equations behind the statistical concepts. The author assumes that readers have a basic understanding of Excel, as the sample datasets in the book are mostly Excel-based, and that they have some clinical background.
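
To make the first of those concepts concrete, here is the arithmetic behind descriptive statistics and Pearson correlation on a toy dose/response dataset. The book itself works in R; this TypeScript sketch only mirrors what R's mean(), sd() and cor() compute, and the data is invented for illustration:

```typescript
// Descriptive statistics and Pearson correlation on a toy dataset.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Sample standard deviation (n - 1 denominator, as in R's sd()).
function sd(xs: number[]): number {
  const m = mean(xs);
  const ss = xs.reduce((acc, x) => acc + (x - m) ** 2, 0);
  return Math.sqrt(ss / (xs.length - 1));
}

// Pearson correlation coefficient, as in R's cor().
function pearson(xs: number[], ys: number[]): number {
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Toy data: drug dose vs. observed response.
const dose = [10, 20, 30, 40, 50];
const response = [12, 24, 33, 46, 55];

console.log(mean(dose));              // 30
console.log(sd(dose));                // ~15.81
console.log(pearson(dose, response)); // close to 1: strong linear relationship
```

A correlation near 1 is what would motivate the next step the book covers, fitting a regression line to predict response from dose.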
          

Why to choose Laravel framework?

 Cache   
Are you searching for a PHP web application framework for your startup and wondering which one to choose? Laravel is a good choice: an open source PHP framework. It is extremely popular, and its innovative features make it the best choice for powerful and responsive application development. Laravel offers numerous advantages to developers, along with tools to build a variety of web applications for small and large organizations. If you are searching for the best Laravel developers to boost your business, get in touch with us.
          

Customize end-to-end kendo UI development services!

 Cache   
Get skilled & experienced Kendo UI developers from our company! Utilize the Kendo UI open source frameworks for a comprehensive and uniquely designed user interface for proprietary software while reducing development time. At Ansi Bytecode, our Kendo developers have expertise in custom software development and the dexterity to deliver distinctive applications using open source. Hire experienced & dedicated developers as per your needs by visiting http://ansibytecode.com/kendo-ui-dev-express-services/ or calling us at +91 98980 10589!
          

WinUI 3.0 with Ryan Demopoulos

 Cache   
What's happening with Windows client-side development? Carl and Richard talk to Ryan Demopoulos about WinUI 3.0, the next version of the WinUI stack, which represents a major shift in how Windows applications are going to be built and supported in the future. Ryan starts the conversation focused on the current WinUI 2, which is open source but largely focuses only on UWP. WinUI 3 expands the horizons to support .NET Core and more - the alpha bits shipped at Ignite, check it out!

          

Microsoft Edge (Chromium) will be available on January 15, 2020, with a range of new features

 Cache   
Microsoft has announced the release date of the final version of its new Edge (Chromium) browser. The launch is set for January 15, with a range of new features. It also has a new logo. Explanation: the rework of the Edge browser was announced in December 2018 with the move to the open source Chromium project. …
          

We Provide World Wide Cloud Services Hosted on Underground Facili

 Cache   
Delivering cutting-edge IT solutions across multiple sectors, X-ITM is an IT consultancy committed to providing high-quality, integrated, secure and reliable services, whatever your IT needs. Our experts know how to make technology work for you, offering a diverse range of advice and support, as well as a bespoke network design service that improves efficiency and drives productivity. From professional IT Outsourcing to Data Centre Migration and everything in between, we have the expertise to offer outstanding solutions that are the perfect fit for your business needs. We meet IT challenges with practical advice, explained in plain English. Our professionals are all leaders in their field and, as a team, everyone at X-ITM works together to offer the highest levels of customer satisfaction and care. Supporting Growth with Practical IT Solutions: with the brightest innovators in the industry, X-ITM provides practical solutions for the full range of client needs, including OpenStack, Linux, Amazon Web Services (AWS), Virtualization and professionally managed IT Outsourcing. We also excel in the provision of Cloud Solutions and Network Security, business strategies and innovative marketing. Combining network design with the latest advancements in IT technology is the core feature of our work, bringing together email, web and cloud solutions to deliver efficient, engaging technology that supports growth and productivity. We work with clients across a broad range of industries, both in the UK and abroad. Looking for a powerful on-demand IT solution? Amazon Web Services (AWS) offers cloud options that deliver sophisticated, flexible applications with scalability and reliability. However, to make the most of AWS features, you need to be able to grasp every stage of implementing services. This is where X-ITM experts come in.
It Doesn’t Have To Be Complicated: as one of the UK’s leading providers of IT infrastructure, we take care of the design, migration, security and operation of your cloud. Optimising every level, we manage the service to let you get on with the job of growing your business. Our architects and engineers have the expert know-how to design and deliver even the most complicated solutions. X-ITM Meets Complex Needs: you can trust X-ITM to provide solutions for more than 90 cloud services, including Big Data, Backup, Clustering, Business Analytics, Auto Scaling, High Availability, Data and Application Migration, Systems Monitoring and Security. Our professional solutions include everything from strategy planning based on in-depth assessment, design, implementation, optimisation and automation to support, governance and compliance. We also provide flexible computing with vertical and horizontal scalability and CDN (CloudFront - Content Delivery Network). Talk to us about Firewall and IAM security, S3 object storage, API integration, auto scaling and flexible load balancing. Our other AWS services include Route 53 routing and VPN (VPC, ACLs) to ensure high availability for Virtual Private Networks, CloudWatch and SNS, development and implementation management, and WorkDocs. Talk to a member of the X-ITM team for a full brief on our extensive portfolio of services. About Us: X-ITM knows how to make technology work for you, whatever your IT needs. We offer a comprehensive range of practical solutions for OpenStack, Linux, Amazon Web Services (AWS), Virtualization, IT Outsourcing, Cloud Services, Network Security and Network Design. Our Services: X-ITM meets diverse needs with a comprehensive range of solutions that improve efficiency and drive productivity at every level. Put your IT challenges in safe hands and let us deliver outstanding solutions while you get on with the job of growing your business. Client Support: let your business benefit from gold standard IT support 24/7.
The name trusted by clients across the UK, US and beyond, X-ITM provides outstanding support you can rely on. Don’t leave your IT support to chance; let X-ITM experts take care of it. Security You Can Trust: we take your security seriously, which is why X-ITM provides the best security hardware solutions on the market. Talk to us if you want robust IT security that:

  • Shields servers, desktops and laptops against malware
  • Defends businesses from major internet risks such as phishing
  • Secures Android devices with key features such as anti-theft and many more
  • Provides an extra defence for businesses using online banking
  • Stops the theft of sensitive information, such as employee and customer data
  • Makes security simple and offers mobile/remote management

X-ITM is focused on providing IT consulting, Open Source, web & application development solutions and support services. Serving many customers over the years, we have strengthened our competencies in Open Source, Web Development, application services, Portal Development, E-commerce and Network & Security systems. X-ITM has thus metamorphosed into a strong IT Solutions & Services company. Our intent is to apply practical answers to your concerns and provide them in an easily implementable manner. X-ITM is a company that puts long-term customer service above all else and has the right people in place to deliver it. The trust you place in us will be repaid with proven first-class support services from X-ITM. As a value-added solution provider, we serve the region through outstanding levels of support and unmatched relationships with our customers. X-ITM provides a wide portfolio of IT solutions and services in the area of Linux and Solaris platforms: complete solutions for mail, JBoss, Apache, Tomcat, network monitoring, storage, clustering, firewalls, SSO and security.
We deliver far-reaching support for our vendors and clients, including solution design, technical support, consultancy and integrated platforms. Some of the world’s leading vendors choose to work with X-ITM in the region, recognizing our ability to provide complete IT solutions, bringing together products and services that support the end-to-end Information Technology needs of organizations. X-ITM gives its clients added value by supplying technical expertise, solution design and training assistance to develop their overall capabilities, and provides high-quality IT products at a fair and competitive price. Solutions and services: Online Web Development, Hosting & Registration, Linux Servers, Solaris Servers, Application Servers, Databases, Spam Control, Security, CMS (Content Management Solutions), IT Consultancy, Web Services, Mail Services, JBoss (J2EE), Tomcat, Apache, Nginx, Linux and Solaris (complete list available), Network Monitoring, SEO & SEM, Online Marketing, Web Promotions, Payment Gateways, Outsourcing, Web Portals, E-commerce, Joomla (CMS) and others, Corporate Websites, Logo Design, Custom Applications, Maintenance of Portals, Domain Registration, Hosting Packages, Reseller Programs and Domain Transfers.
E-commerce, Web Portals and Custom Applications: e-commerce helps businesses seeking growth; integrating e-commerce solutions into your website opens great opportunities, reaching new customers and increasing sales revenue. A web portal is a website that provides many useful services and resources, such as online shopping, email and forums, in a way that helps users access services and interact with each other. Customized application development has many advantages, most importantly an application that fulfils all the requirements of the way you conduct your business, alongside business development strategies, innovative marketing and business restructuring. Cloud Computing - Consultancy - Development - Reverse Engineering - Nested Environments - High Availability. Email: support@x-itm.com Tel: +442037731220. London, Paris, Moscow, New York, Hong Kong, Amsterdam. We provide private point-to-point worldwide VPN encrypted networks. We provide private worldwide communications with 16-digit dial codes. We provide worldwide cloud services hosted on underground facilities. We provide migration support and consultancy services for infrastructures and installations.
          

SmartScope2: Simultaneous Imaging and Reconstruction of Neuronal Morphology.

 Cache   

Quantitative analysis of neuronal morphology is critical in cell type classification and for deciphering how structure gives rise to function in the brain. Most current approaches to imaging and tracing neuronal 3D morphology are data intensive. We introduce SmartScope2, the first open source, automated neuron reconstruction machine integrating online image analysis with automated multiphoton imaging. SmartScope2 takes advantage of a neuron's sparse morphology to improve imaging speed and reduce image data stored, transferred and analyzed. We show that SmartScope2 is able to produce the complex 3D morphology of human and mouse cortical neurons with six-fold reduction in image data requirements and three times the imaging speed compared to conventional methods.


          

Managed IT Services - Steppa

 Cache   
Would you like to develop a new cyber security solution or capability but are not sure how to start? You might not have a security professional in your organization; with Steppa, there is no need to hire full-time security staff. Simply pay for the service, on demand, via Steppa's CSaaS. Our experts plan, analyze, develop and deploy cyber security solutions based on state-of-the-art cyber security models.

We guide you through the development using proven programming approaches and techniques, open source technologies and other essentials of modern development.

This service can help you develop a cyber security program by focusing on the following elements: implementation, analytics and detection, monitoring, threat assessment, escalation, response and reporting, awareness, training and education.
          

Jitsi, open source, cross-platform video conferencing

 Cache   
When people talk about video conferencing, the same names often come to mind: Skype, Google Hangout, Slack video calls, even Facebook and YouTube Live. Well, think about adding the name Jitsi to that list. It is an open source project, 100% free and cross-platform (Windows, … More
          

Networks An Open Source Approach Solution

 Cache   
Networks An Open Source Approach Solution
          

Robocorp announces $5.6M seed to bring open-source option to RPA

 Cache   
Robotic Process Automation (RPA) has been a hot commodity in recent years as it helps automate tedious manual workflows inside large organizations. Robocorp, a San Francisco startup, wants to bring open source and RPA together. Today it announced a $5.6 million seed investment. Benchmark led the round, with participation from Slow Ventures, firstminute Capital, Bret […]
          

Create an Online Social Network with Elgg on Debian 9

 Cache   

Elgg is a free, open source social engine framework written in the PHP programming language. This tutorial shows you how to install and configure the latest Elgg version on Debian 9 to create a free online social network.

The post Create an Online Social Network with Elgg on Debian 9 appeared first on HowtoForge.


          

8 Best WordPress Hosting Solutions on the Market

 Cache   

If you’re like us, you love WordPress. The open source platform has been around for about 15 years now and powers web platforms across the globe. WordPress is great, but if you really want to get the most out of the service you need to choose the right tools and services to integrate with your […]

The post 8 Best WordPress Hosting Solutions on the Market appeared first on ReadWrite.


          

Wired.com: What Does Crowdsourcing Really Mean?

 Cache   

LCM note:  Karl Rove recently took to the WSJ to discuss how campaigns were changing.  One of his suggestions was that the traditional campaign structure would lose meaning and that individuals outside the structure would take a greater role.  Somewhat related, this is an old interview with Douglas Rushkoff about the original meaning and the very different popular understanding of the term "crowdsourcing."

From religion, novels and back again. The strength of community and the dangers of crowdsourcing

Sarah Cove Interviews Douglas Rushkoff via telephone on May 18, 2007

Douglas Rushkoff is an author, professor, media theorist, journalist, as well as a keyboardist for the industrial band PsychicTV. His books include Media Virus, Coercion, Nothing Sacred: The Truth About Judaism (a book which opened up the question of Open Source Judaism), Exit Strategy (an online collaborative novel), and a monthly comic book, Testament. He founded the Narrative Lab at New York University's Interactive Telecommunications Program, a space which seeks to explore the relationship of narrative to media in an age of interactive technology.

We spoke about the notion of crowdsourcing, Open Source Religion, and collaborative narratives.

Sarah Cove: What is crowdsourcing for you?

Douglas Rushkoff: Well, I haven't used the term crowdsourcing in my own conversations before. Every time I look at it, it rubs me the wrong way.

To read the rest of this interview, click here.

 

 


          

New cloud integrations help streamline big data use

 Cache   
As users and enterprises demand more capacity, particularly for their data-driven workloads, they are increasingly moving towards all-cloud or hybrid cloud environments. This requires cloud data orchestration to accelerate and synchronize data across different environments, and as a result users are turning to cloud data analytics services like Amazon's EMR and Google Cloud's Dataproc that reduce hardware spend, eliminate the need to overbuy capacity, and provide business agility. Open source cloud data orchestration company Alluxio has announced that its platform can now be seamlessly integrated with both of these leading cloud analytical services to speed up analytical jobs… [Continue Reading]
          

As outrage over Twitter India’s alleged ‘bias’ grows, journalists and activists ‘migrate’ to Mastodon: How to join this open source micro-blogging site

 Cache   

With suspensions and ineffectiveness to deal with Twitter trolls, Mastodon is gaining popularity in India.

The post As outrage over Twitter India’s alleged ‘bias’ grows, journalists and activists ‘migrate’ to Mastodon: How to join this open source micro-blogging site appeared first on Firstpost.


          

Survey and solutions for potential cost reduction in the design and construction process of nearly zero energy multi-family houses

 Cache   

The CoNZEBs project formed the main part of a session at the IAQVEC conference in Bari in September 2019. The CoNZEBs project partners held a total of 6 presentations, including a project overview, various project results and one of the national exemplary NZEB buildings presented in the end-user brochure. The papers are published as open source documents in IOP Conference Series: Materials Science and Engineering, Volume 609.

 


          

Cost-efficient Nearly Zero-Energy Buildings (NZEBs)

 Cache   

The CoNZEBs project formed the main part of a session at the IAQVEC conference in Bari in September 2019. The CoNZEBs project partners held a total of 6 presentations, including a project overview, various project results and one of the national exemplary NZEB buildings presented in the end-user brochure. The papers are published as open source documents in IOP Conference Series: Materials Science and Engineering, Volume 609.

 


          

Offer - Is Technology Conferences Is The Cost Conserving Deal - Richmond

 Cache   
Right off the bat, Ben 10 games online have become an instant hit with kids. With the availability of technology these days, it's very simple to change the makeover of any individual completely, at least in pictures. Perhaps then I wouldn't be so annoyed by the time my 3-year-old's Disney movie started. You will have numerous opportunities to pursue a new path in your career. If you have any type of questions relating to where and exactly how to make use of these services, you can call us at our own web-page.
          

Google Cardboard open sourced as active development on Google VR SDK stops

 Cache   

Last month, Google stopped selling Daydream View as modern Android phones — including the Pixel 4 — lack support. The company’s mobile virtual reality offerings are being further diminished today as Google Cardboard gets open sourced.

more…

The post Google Cardboard open sourced as active development on Google VR SDK stops appeared first on 9to5Google.


          

GitHub Sponsors also in Italy

 Cache   

Microsoft brings GitHub Sponsors to Italy, a platform that lets users support open source projects by taking out monthly subscriptions.

Read GitHub Sponsors also in Italy


          

TV-B-Gone, the hidden-in-your-glasses edition

 Cache   

Facelesstech created a fun pair of "smart glasses" with an embedded miniature ATtiny85 Arduino controller, and followed it up with a pair that concealed a TV-B-Gone (Mitch Altman's open source hardware gadget that cycles through all known switch-TV-off codes, causing any nearby screens to go black). It's a much less creepy use than the spy glasses with embedded cameras sold by Chinese state surveillance vendors. I'd certainly buy a pair! (via JWZ) Read the rest


          

Emodi, Nnaemeka Vincent (2019) Addressing climate change impact on the energy system: a technoeconomic and environmental approach to decarbonisation. PhD thesis, James Cook University.

 Cache   
Background: The provision of energy services is a vital component of the energy system. It is often considered emission-intensive and, at the same time, highly vulnerable to climate change conditions. This forms the fundamental objective of this thesis, which examines the technoeconomic and environmental implications of policy interventions targeted at cushioning the impacts of climate change on the energy system.

Aims: Four research queries are central to this work: (1) Review the literature on impacts of CV&C on the energy system; (2) Estimate the influence of seasonal climatic and socioeconomic factors on energy demand in Australia; (3) Model dynamic interactions between energy policies and climate variability and change (CV&C) impacts on the energy system in Australia and explore the technoeconomic and environmental implications; and (4) Identify the least-cost combination of electricity generation technologies and effective emissions reduction policies under climate change conditions in Australia.

Methods: A systematic scoping review method was first applied to identify consistent patterns of CV&C impacts on the energy system, while spotting research gaps in studies that met the inclusion criteria. The Scopus and Web of Science databases were searched, and snowballing references in published studies was adopted. Data was collated and summarised to identify the characteristic features of the studies and consistent patterns of CV&C impacts, and to locate research gaps to be filled by this study. The second study applied an autoregressive distributed lag (ARDL) model to estimate temperature-sensitive electricity demand in Australia. Estimates were used with projected temperatures from global climate models (GCMs) to simulate future electricity demand under climate change scenarios. The study further accounted for uncertainties in electricity demand forecasting under climate change conditions, in relation to energy efficiency improvement, renewable energy adoption and electricity price volatility. The estimates from the ARDL model and projections from GCMs were used for energy system simulation using the Long-range Energy Alternatives Planning (LEAP) system. It considered climate-induced energy demand in the residential and commercial sectors, alongside linking the non-climate-sensitive sectors with the energy supply sector. This model was vital to justifying the policy options under investigation. Further, the LEAP modelling analysis was extended by identifying effective emission reduction policies considering CV&C impacts. Here, the Open Source Energy Modelling System (OSeMOSYS) was used for optimisation analysis to identify the least-cost combination of electricity generation technologies and GHG emission reduction policies. In the third and final study, cost-benefit analysis and estimation of the long-run marginal cost of electricity were conducted, while decomposition analysis of GHGs was carried out in the third study alone. Data used in the ARDL model included socioeconomic data such as gross state product, as well as population and electricity prices from 1990-2016. The LEAP and OSeMOSYS models used 2014 as the base year, while several technological (power plant characteristics, household technologies), economic (energy prices, economic growth, carbon price) and environmental (emission factors, emission reduction target) variables were used to develop Australia's energy model.

Results: The literature search generated 5,062 articles, of which 176 studies met the inclusion criteria for the final literature review. Australian studies were scarce compared to other developed countries. Also, just a few articles attempted to examine decarbonisation under climate change. The ARDL model estimates and GCM simulations of future electricity demand under CV&C show that Australia had upward-sloping climate-response functions, resulting in an increase in electricity demand. The researcher also identified an annual increase in projected electricity demand for the states and territories in Australia, which calls for the need to scale up RET. The LEAP model results showed substantial impacts on energy demand, as well as impacts on power sector efficiency. Under the BAU scenario, CV&C will result in an increase in energy demand of 72 PJ and 150 PJ in the residential and commercial sectors, respectively. Induced temperature enlarges the non-climate BAU demand, which will increase threefold before 2050. Under the non-climate BAU, there is an expansion of installed capacity to 81.8 GW generating 524.6 TWh. Due to CV&C impacts, power output declines by 59 TWh and 157 TWh in the Representative Concentration Pathways (RCP) 4.5 and 8.5 climate scenarios. This leads to an increase in generation costs of 10% from the base year, but a decrease in sales revenue of 8% and 21% in RCP 4.5 and RCP 8.5, respectively. The LEAP-OSeMOSYS model suggests renewables and battery storage systems as the least-cost option; however, the configuration varied across Australia. A carbon tax policy was observed to be effective in reducing Australia's emissions and to foster large economic benefits when compared to the country's current emission reduction target policy. Also, renewable energy technologies increase electricity sales and decrease fuel cost better than fossil-fuel-dominated scenarios.

Conclusions: Data from this study reveal that seasonal electricity demand in Australia will be influenced by warmer temperatures. The study also identified the possibility of winter peaking, which is somewhat higher than summer peak demand in some states located in the southern regions of Australia. However, winter peaking is projected to decline by mid-century across the RCPs, while summer peak load is projected to increase, thereby causing power companies to expand their generation capacity, which may become underutilised. Owing to the increase in cooling requirements up to 2050, the policy uncertainty analysis recommends renewables to match an increasing future electricity demand. The energy model indicates that ignoring the influence of CV&C may result in severe economic implications, ranging from increased demand, higher fuel cost and loss in revenue from decreased power output, to increased environmental externalities. The study concludes that policy options to reduce energy demand and GHG emissions under climate change may be expensive in the short run, though they may secure long-run benefits in cost savings and emission reductions. It is envisaged that this could provide power sector management with initiatives that could be used to overcome short-term cost ineffectiveness. The modelling results make a case for renewable energy in Australia, as lower demand for energy and increased electricity generation from renewable energy sources presents a win-win case for Australia.
          

Helping build secure software is of utmost importance to GitHub

 Cache   
GitHub COO Erica Brescia explained how the platform is improving its security at the Open Source Summit Europe 2019.
          

Google Cardboard goes open source (as Google moves away from smartphone VR)

 Cache   

Folks who attended the Google IO conference in 2013 got a free Chromebook Pixel worth $1,300. A year later, Google gave attendees a piece of cardboard… But Google Cardboard was a surprise hit of Google IO 2014, because when you followed the company’s instructions to fold up the cardboard, insert glasses, and then put your […]

The post Google Cardboard goes open source (as Google moves away from smartphone VR) appeared first on Liliputing.


          

PinePhone “Brave Heart Edition” pre-orders open Nov 15th (Cheap Linux smartphone)

 Cache   

Pine64’s first smartphone is set to ship by the end of the year… sort of. The PinePhone is a $149 smartphone designed to run free and open source operating systems such as PostmarketOS, Ubuntu Touch, KDE Plasma Mobile, LuneOS, or Sailfish OS. First unveiled in January, the PinePhone has been under development ever since — and […]

The post PinePhone “Brave Heart Edition” pre-orders open Nov 15th (Cheap Linux smartphone) appeared first on Liliputing.


          

Best Practices for Using Open Source Code

 Cache   
A huge number of applications are now built with open source components, but what effects does this open source code have on your apps later on down the line? Take a look at this guide, Open Source Compliance, Security, & Risk Best Practices, to learn how to monitor your open source code and make sure you're using it safely. Published by: TechTarget
          

Kubernetes, CoreOS, and many lines of Python later.

 Cache   

Several months after my last post, and lots of code hacking, I can rebuild a CoreOS-based bare-metal Kubernetes cluster in roughly 20 minutes. It only took ~1300 lines of Python following Kelsey Hightower’s Kubernetes the Hard Way instructions. Why? The challenge. But really, why? I like to hack on code at home, and spinning up a new VM for another Django or Golang app was pretty heavyweight when all I needed was an easy way to push it out via container. And with various open source projects out on the web providing easy ways to run their code, running my own Kubernetes cluster seemed like a no-brainer. From github/jforman/virthelper: First we need a fleet of Kubernetes VMs. This script builds 3 controllers (corea-controller{0,1,2}.domain.obfuscated.net) with static IPs starting at 10.10.0.125 to .127, and 5 worker nodes (corea-worker{0,1,2,3,4}.domain.obfuscated.net) beginning at 10.10.0.110. These VMs use CoreOS’s beta channel, each with 2GB of RAM and 50GB of disk.

$ ./vmbuilder.py --debug create_vm --bridge_interface br-vlan10 --domain_name domain.obfuscated.net \
--disk_pool_name vm-store --vm_type coreos --host_name corea-controller --coreos_channel beta \
--coreos_create_cluster --cluster_size 3 --deleteifexists --ip_address 10.10.0.125 \
--nameserver 10.10.0.1 --gateway 10.10.0.1 --netmask 255.255.255.0 --memory 2048 \
--disk_size_gb 50 $ ./vmbuilder.py --debug create_vm --bridge_interface br-vlan10 \
--domain_name domain.obfuscated.net --disk_pool_name vm-store --vm_type coreos \
--host_name corea-worker --coreos_channel beta --coreos_create_cluster --cluster_size 5 \
 --deleteifexists --ip_address 10.10.0.110 --nameserver 10.10.0.1 --gateway 10.10.0.1 \
 --netmask 255.255.255.0 --memory 2048 --disk_size_gb 50

Once that is done, the VMs are running, but several of their services are erroring out, etcd among them. Why? They use SSL certificates for secure communication among the etcd nodes, and I decided to make that part of the kubify script below. I might revisit this later, since one should be able to have an etcd cluster up and running without needing Kubernetes. Carrying on…. From github/jforman/kubify:

$ /kubify.py --output_dir /mnt/localdump1/kubetest1/ --clear_output_dir --config kubify.conf --kube_ver 1.9.3 

Using the kubify.conf configuration file, this deploys Kubernetes version 1.9.3 to all the nodes, including Flannel (inter-node network overlay for pod-to-pod communication), the DNS add-on, and the Dashboard add on, using RBAC. It uses /mnt/localdump1/kubetest1 as the destination directory on the local machine for certificates, kubeconfigs, systemd unit files, etc. Assumptions made by my script (and config):

  • 10.244.0.0/16 is the pod CIDR. This is the expectation of the Flannel Deployment configuration, and it was easiest to just assume this everywhere as opposed to hacking up the kube-flannel.yml to insert a different one I had been using.
  • Service CIDR is 10.122.0.0/16.
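As a quick sanity check on those assumptions, the two ranges can be verified disjoint with a few lines of standard-library Python (the CIDRs are the ones listed above; nothing here is specific to kubify):

```python
import ipaddress

# CIDRs assumed by the cluster config described above
pod_cidr = ipaddress.ip_network("10.244.0.0/16")      # Flannel pod network
service_cidr = ipaddress.ip_network("10.122.0.0/16")  # Kubernetes service network

# Overlapping pod and service ranges cause confusing routing failures,
# so verify the two networks are disjoint before deploying.
assert not pod_cidr.overlaps(service_cidr), "pod and service CIDRs overlap"

print(pod_cidr.num_addresses)  # each /16 holds 65536 addresses
```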

Things learned:

  • rkt/rktlet as the runtime container is not quite ready for prime time, or perhaps its warts are not documented enough. rktlet/issues/183 rktlet/issues/182
  • kubelets crash with an NPE kubernetes/issues/59969
  • Using cfssl for generating SSL certificates made life a lot easier than using openssl directly. There are still a ton of certificates.
  • Cross-Node Pod-to-Pod routing is still incredibly confusing, and I’m still trying to wrap my head around CNI, bridging, and other L3-connective technologies.
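For what it's worth, the cfssl workflow boils down to feeding it small JSON certificate-signing-request configs. A sketch of generating one in Python (the CN, hosts, and organization below are illustrative, not taken from the actual kubify code):

```python
import json

def make_csr(common_name, hosts):
    """Build a cfssl-style CSR config dict for one cluster node.

    cfssl turns this into a key/cert pair with something like:
      cfssl gencert -ca ca.pem -ca-key ca-key.pem node-csr.json | cfssljson -bare node
    """
    return {
        "CN": common_name,
        "hosts": hosts,                       # SANs the certificate is valid for
        "key": {"algo": "rsa", "size": 2048},
        "names": [{"O": "system:masters"}],   # the O field maps to a Kubernetes group
    }

csr = make_csr("corea-controller0.domain.obfuscated.net",
               ["10.10.0.125", "corea-controller0.domain.obfuscated.net"])
print(json.dumps(csr, indent=2))
```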

End Result:

$ bin/kubectl --kubeconfig admin/kubeconfig get nodes
NAME STATUS ROLES AGE VERSION 
corea-controller0.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-controller1.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-controller2.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-worker0.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-worker1.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-worker2.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-worker3.obfuscated.domain.net Ready <none> 6h v1.9.3 
corea-worker4.obfuscated.domain.net Ready <none> 6h v1.9.3 

$ bin/kubectl --kubeconfig admin/kubeconfig get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-dns-6c857864fb-tn4r5 3/3 Running 3 6h 10.244.7.3 corea-worker4.obfuscated.domain.net 
kube-system kube-flannel-ds-dlczz 1/1 Running 2 6h 10.10.0.127 corea-controller2.obfuscated.domain.net 
kube-system kube-flannel-ds-kc45d 1/1 Running 0 6h 10.10.0.125 corea-controller0.obfuscated.domain.net
kube-system kube-flannel-ds-kz7ls 1/1 Running 2 6h 10.10.0.111 corea-worker1.obfuscated.domain.net
kube-system kube-flannel-ds-lwlf2 1/1 Running 2 6h 10.10.0.113 corea-worker3.obfuscated.domain.net 
kube-system kube-flannel-ds-mdnv8 1/1 Running 0 6h 10.10.0.110 corea-worker0.obfuscated.domain.net 
kube-system kube-flannel-ds-q44wt 1/1 Running 1 6h 10.10.0.112 corea-worker2.obfuscated.domain.net 
kube-system kube-flannel-ds-rdmr5 1/1 Running 1 6h 10.10.0.114 corea-worker4.obfuscated.domain.net 
kube-system kube-flannel-ds-sr26s 1/1 Running 0 6h 10.10.0.126 corea-controller1.obfuscated.domain.net 
kube-system kubernetes-dashboard-5bd6f767c7-bnnkm 1/1 Running 0 6h 10.244.1.2 corea-controller1.obfuscated.domain.net

          

Boston Barcamp 6, Day Two

 Cache   

Finally got this post out after having a bit of a busy week.  

Location based networking, Anurag Wakhlu (Coloci Inc) http://goo.gl/mxAtd

  • location based apps: where are you now? or where will you be?
  • where are you now: foursquare, gowalla, loopt, etc
  • where will you be: coloci, fyesa, tripit, plancast
  • interest based networking: the reason to talk to someone who is near you. Tie in an interest: sending someone a coupon when they are near Starbucks. If they aren't near Starbucks, what good is a coupon?
  • proactive coupons: don't wait for a check-in. If someone is 2 blocks from Starbucks, send them a notification for a coupon. ex// Minority Report: walk by a billboard, it recognizes you and tailors the ad specifically to you. 52% of US consumers are willing to share location for retail perks.
  • foursquare background check-in? Automatically check you in when you are in close enough vicinity to a location.
  • Do privacy concerns have a potential impact on services becoming more popular? ex// European privacy laws about broadcasting who you are, where you are, etc.
  • You have to trust your device: when you disallow authority to know your location, does it actually not broadcast where you are?
  • Trade-off of convenience versus privacy. A debit card is a lot more convenient than cash, so people are more than likely to give up privacy.
  • If you really want to not be tracked, you need to disconnect yourself from the computer. Go cash only. Re-education might help: "You might already be sharing this info somewhere else, so what difference is it now that you do it via your phone?"
  • Tracking someone's history via the CSS :visited tag. Firefox has supposedly fixed this issue so that websites cannot do this anymore.
  • Using EZPass, who is responsible for giving a ticket if you did 60 miles in faster than 60 minutes? Using your location to know you broke the law.

At the start, Anurag gave a wonderfully succinct history of location based networking, highlighting the current giants like Foursquare and Facebook Places. We talked about how the potential is there to enable your phone to alert you about consumer deals in your vicinity, giving networking more of a 'push' aspect, or to alert you when friends are near as well. Eventually though, the attendees turned the talk into a big privacy discussion. Not necessarily as flame-worthy as it could have been, but still a discussion of how much of our information we want to broadcast and allow to advertisers, including location and private information. Could the situation eventually get to the point, like Minority Report, where your phone is overtly or covertly broadcasting who you are to potential advertisers or other potentially nefarious people?

Economics of open source

  • Reputation is a kind of currency: there are ancillary benefits to 'being known.' ex// a popular github repo can get you a book deal, get you flown to conferences, etc.
  • Are we cheapening what we do by giving it away? Software produces so much cash for people. Not everything is OSS; you still need people to customize it and apply it.
  • Discussion: can donations kill a project? The comptroller decides who gets money, those who donate time but don't get paid feel slighted, and the project can take a nose dive.

The content of the presentation was a bit bland/dry, but the discussion was involved. War story: giving training away for free when a company charges for it. You are hurting the ecosystem by giving it away rather than having someone pay for it. This was fairly interesting, delving past the common topic of software being 'free as in beer.'

Interviewing well as a coder round table

  • Feel okay sitting there for a couple minutes thinking. Don't feel stressed to start writing code right away.
  • Some questions ask you to regurgitate syntax. What happens if you get confused between languages?
  • Design issues: "show us where you would add X feature." Stylistics versus code syntax.
  • Code portfolios: employers look at your github profile to see the code you've written. If your code is 'too good', the employer may want you to find bugs in their code.
  • How to practice your whiteboarding skills? Code kata: short programming problems.
  • Asking questions that there is no solution to. Can you be an asshole interviewing?
  • Be prepared for personal questions, because employers will google you and find your personal interests.
  • Spin negative questions as positive: what do you see improving in your work environment?
  • Questions back to the employer: what do you hope to improve for our company?
  • If you list a skill in your skills list, be ready to whiteboard the code.

Can the internet make you healthier? Jason Jacobs, RunKeeper founder

  • Convergence of health/athletic data and IT.
  • Virtual coaching: ahead/behind pace, in-app reminders to go faster or slower in their iOS app. The more data you have about what you're doing physically, the better you can react. How am I doing against my peers?

This was interesting, since Jason sees his company's first product, RunKeeper, as the jumping-off point to more athletic body-sensing applications. The point was raised about at what point the app, which suggests a certain pace while running, dances the line of giving medical advice. I think it is a good point that the app needs more information about your health before suggesting a certain distance or pace for exercise. I'll be curious myself, as I use the app more, how I am improving athletically.

Overall, I found the signal-to-noise ratio of the unconference to be very high. For my first Barcamp, I would suggest it to all technically-inclined folks who just want to let their interests and imaginations plot the course of which talks they attend. I know I will be a repeat attendee.


          

Mark Text: a simple and elegant Markdown editor focused on speed and usability according to its developer, available for macOS, Windows and Linux

 Cache   
Mark Text: a simple and elegant Markdown editor focused on speed and usability according to its developer,
available for macOS, Windows and Linux

Mark Text is a Markdown editor with a simple interface that displays a real-time preview thanks to the Snabbdom rendering engine. Mark Text is an open source project released under the MIT license. It is a free and open source Markdown editor offering a real-time preview, with support for the CommonMark Spec and GitHub Flavored Markdown...
          

Mautic by hartmut.io – the 2020 Mautic Marketing Automation Review

 Cache   

Are you looking for a Mautic Marketing Automation Review in 2020? Or are you wondering what marketing automation is all about in the first place? What has hartmut.io got to do with it all? Look no further! In this week’s blog article by Tenba Group, we tap into automated marketing, the open source solution called …

Mautic by hartmut.io – the 2020 Mautic Marketing Automation Review Read More »

The post Mautic by hartmut.io – the 2020 Mautic Marketing Automation Review appeared first on Global multilingual digitalization agency.


          

Going BOLD – Open Source in the Built Environment with Marcin Jakubowski

 Cache   
Marcin Jakubowski, Open Source Construction Developer. Show Notes – www.constructrr.com/09. Passion behind collaboration – open source vs. traditional collaboration in construction. Kickstarter Campaign: Open Building Institute: Eco Building Toolkit, comprising a library of downloadable designs, an immersion training program for builders, and an open source solar powered production facility. The pilot program will be doubled every year. Software Platform – Developers…
          

07 Nov 2019 19:00 : Tempe Ubuntu Hour

 Cache   
Come out and meet us, ask questions and learn about Linux and open source, or just hang out for a bit with other users!

This event has a video call.
Join: https://meet.google.com/tfh-tpjk-htz
+1 929-251-6037 PIN: 928116393#
          

GitHub's user survey reveals massive growth in open source

 Cache   

          

Google open sources Cardboard as it retreats from phone-based VR

 Cache   

          

Google launches Skaffold in general availability

 Cache   

In a recent survey of over 5,000 enterprise companies, 58% responded that they were using Kubernetes, the open source container-orchestration system for automating app deployment, scaling, and management, in production, while 42% said they were evaluating it for future use. The momentum was a motivating force behind Google’s Skaffold, a command line tool that facilitates […]

The post Google launches Skaffold in general availability appeared first on
Latest Technology News


          

The FLOSS ecosystem of PLC and robotics

 Cache   

Open Source robotics has now been present for more than 10 years and has achieved some success, at least in research and education. There are numerous projects for open source robotics

Although less known, it has been possible for more than a decade to deploy a complete  solution for industrial automation based on free software and open hardware. The first success case was demonstrated by SSAB in Sweden in a fairly large factory which produces steel.

This page tries to collect all success cases and initiatives related to open source robotics and industrial automation. It is a work-in-progress. Feel free to contribute by writing to sven (dot) franck (at) nexedi (dot) com or by suggesting new entries on Nexedi's contact page.

Success Cases

Lists

Software

Hardware

Integrators

Presentations

Tutorials

Articles

Standard

  • Modbus is one of the most open standards for PLC integration over TCP/IP, with I/O modules from Wago, Advantech (ADAM), or ICP DAS
  • DIN Rail is a standard format for industrial enclosures
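Because the Modbus application protocol is openly specified, a minimal client needs no vendor SDK, just byte packing. The sketch below is illustrative (the helper names and the example register layout are ours, not from any vendor listed above): it builds a Modbus TCP "Read Holding Registers" (function code 3) request frame and parses the matching response.

```python
# Hedged sketch: hand-building Modbus TCP frames per the open Modbus spec.
import struct

def build_read_request(transaction_id, unit_id, start_addr, count):
    """MBAP header + PDU for function code 3 (Read Holding Registers)."""
    pdu = struct.pack(">BHH", 3, start_addr, count)
    # MBAP: transaction id, protocol id (always 0), remaining byte count, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def parse_read_response(frame):
    """Return the register values carried by a function-code-3 response."""
    _tid, _pid, _length, _unit = struct.unpack(">HHHB", frame[:7])
    function_code, byte_count = struct.unpack(">BB", frame[7:9])
    if function_code & 0x80:  # high bit set means a Modbus exception reply
        raise IOError(f"Modbus exception code {frame[8]}")
    registers = struct.unpack(f">{byte_count // 2}H", frame[9:9 + byte_count])
    return list(registers)
```

Sending `build_read_request(1, 1, 0, 2)` over a TCP socket to port 502 would ask unit 1 for two registers starting at address 0; the 12-byte frame layout follows directly from the published MBAP header definition.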

          

A guide to open source for microservices

 Cache   

Microservices—applications broken down into smaller, composable pieces that work together—are getting as much attention as the hottest new restaurant in town. (If you're not yet familiar, dive into What Are Microservices before continuing here.)
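As a toy illustration of that idea (not taken from the article; the service, its data, and the ports are made up), the sketch below runs one tiny HTTP service that owns price data and a second function that composes it over the network:

```python
# Minimal sketch of microservices: small pieces composed over HTTP.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PriceService(BaseHTTPRequestHandler):
    """One small service: its only job is answering price lookups."""
    PRICES = {"widget": 3, "gadget": 5}

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "price": self.PRICES.get(item, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port):
    """Start the price service on a background thread."""
    server = HTTPServer(("127.0.0.1", port), PriceService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def order_total(port, items):
    """A second component composing the price service over the network."""
    total = 0
    for item in items:
        with urlopen(f"http://127.0.0.1:{port}/{item}") as resp:
            total += json.loads(resp.read())["price"]
    return total
```

Each piece stays small and replaceable: `order_total` only depends on the price service's HTTP contract, not its implementation, which is the composability the definition above describes.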


          

My first open source contribution: Keep the code relevant

 Cache   

Previously, I explained the importance of forking repositories. Once I finished the actual "writing the code" part of making my first open source pull request, I felt excellent. It seemed like the hard part was finally over. What’s more, I felt great about the code that I wrote.


          

Google teams up with security firms to curb malicious apps

 Cache   
SAN FRANCISCO: Google has partnered with mobile security companies ESET, Lookout and Zimperium to stop malicious apps from hitting the Play Store and harming Android users. The company’s new partnership initiative is called the App Defense Alliance.  “We’re excited to take this collaboration to the next level, announcing a partnership between Google, ESET, Lookout, and Zimperium. …

Check out more stories at The Siasat Daily


          

First Open Source Silicon Root of Trust Revealed

 Cache   
'Blind trust is no longer necessary', says Google head, as open source silicon root of trust project launches.
          

5 things Go taught me about open source

 Cache   
This is a keynote highlight from the O’Reilly Velocity Conference in Berlin 2019. Watch the full version of this keynote on the O’Reilly online learning platform. You can also see other highlights from the event.
          

Google makes VR software Cardboard open source

 Cache   
Google is making its VR software Cardboard open source, the company has announced. It promises to keep adding things to the software itself, but is leaving the further development of Cardboard to interested developers.
          

Comment on WNOP 108: Profession: Programmer, or how to become a developer and how much programmers earn – revealed by Maciej Aniserowicz; comment by Piotr

 Cache   
How do you find that kind of remote work for an American company? For over two years I have been working for an American client through a Polish firm. My working hours are the afternoon, evening, and night. I don't know how much the client pays for me, but after analyzing the market, I think that both for this client and for others it is roughly 3-4 times more than what the Polish firm pays the programmer. Unfortunately, I cannot work for the client directly because of the NDA I signed. I have sent my CV to hundreds of companies, but even for remote work they often want someone who lives in the USA and will be in the office a few days a month. On top of that, the competition is enormous - sometimes more than 100 candidates apply for one open position. It is also probably easier to find work on the front end, both web and mobile, because there is less variety in the stacks (largely React / React Native). I am a back-end developer writing in Node.js, and on the back end, depending on the project, it is Node.js, PHP, Python, Ruby, Java, .NET, Golang, and sometimes some Erlang or Haskell, so the Node.js market is quite limited. Moreover, the job search engines are quite weak, because searching for "Node.js" very often returns offers that are de facto Java, PHP, or pure front end, and even full stack is not what I am interested in (I can spend 10%, at most 30%, of my time writing front end, but I mainly want to focus on the back end). I have a rich open source portfolio and a blog, but somehow that does not convince American companies; they apparently prefer to hire someone who has a few more years of experience than me, or mutual acquaintances.
          

Google Cardboard is now open source to keep the mobile VR daydream alive

 Cache   
In 2015, Google launched its formal entry into the world of mobile-powered virtual reality. Cardboard was revolutionary in the way it empowered VR experiences with nothing but, well, a makeshift cardboard headset with special lenses. But while headsets were relatively easy to make, the software behind Cardboard stagnated to the point of being forgotten. Now Google seems to be stirring …
          

Alluxio Announces Advanced Cloud Service Integrations on Amazon AWS and Google Cloud

 Cache   
Advances to Amazon Select Technology Partner and Joins the Google Cloud Partner Advantage Program MOUNTAIN VIEW, CA – November 7, 2019 — /BackupReview.info/ — Alluxio, the developer of open source cloud data orchestration software, today announced at the first Data Orchestration Summit at the Computer History Museum, the availability of a range of cloud offerings [...]
          

Red Hat Shares ― Open processes, culture, and technology

 Cache   

Open source principles help improve the world in a variety of ways. See how you can use them to make a positive impact.

The Red Hat® Shares newsletter helps IT leaders navigate the complicated world of IT―the open source way.

 

 


          


LXer: Getting started with Pimcore: An open source alternative for product information management

 Cache   
Published at LXer: Product information management (PIM) software enables sellers to consolidate product data into a centralized repository that acts as a single source of truth, minimizing errors...
          

LXer: My first contribution to open source: Make a fork of the repo

 Cache   
Published at LXer: Previously, I explained how I ultimately chose a project for my contributions. Once I finally picked that project and a task to work on, I felt like the hard part was over, and I...
          

How to promote a new OpenSource software ?

 Cache   
Hello, I'm also a professional software developer and I've been using Linux and other open source products for the last 20 years. I've been working on a new open source software project and I'm looking for a...
          

Have you ever compiled the source code of an open source application?

 Cache   
We recently asked if you've ever modified the source code of an open source application...
          

LXer: My first contribution to open source: Impostor Syndrome

 Cache   
Published at LXer: The story of my first mistake goes back to the beginning of my learn-to-code journey. I taught myself the basics through online resources. I was working through tutorials and...
          

Senior .NET Developer for Gigya/SAP (190004VU) at Ciklum, Kyiv

 Cache   

On behalf of Gigya/SAP, Ciklum is looking for a Senior .NET Developer to join the Kyiv team on a full-time basis.

About Client:
SAP is the world leader in enterprise applications, has 404,000+ customers in more than 180 countries and 93,800+ employees in 130+ countries.

Video speaks more than words — feel free to watch our SAP videos to get a better understanding of what we are working on
· Life at SAP
· About our product in Ukraine ( Under Gigya-SAP Group)
· Ukrainian kids about SAP

You will be part of a team of professionals that develops our product and takes it to the next level by using the most cutting-edge big-data technologies in a SaaS cloud solution. Our product provides customer identity and access management (CIAM) to the biggest companies in the world.

Our system runs on our in-house open source microservices framework, Microdot (look it up!). Microdot is a powerful microservices solution that serves billions of users around the world on a daily basis.
In this position you will work with the best development teams in agile/scrum methodology to provide our customers with advanced security and privacy capabilities that should prevent account takeovers and bot attacks.

Responsibilities:
• Designing and developing a highly complex project on microservices architecture
• Ability to work in a fast-paced Agile environment in a tight collaboration with a multinational development team
• Facing performance and deployment challenges
• Working as a part of a multinational team in Kiev and Tel Aviv

Requirements:
• 5+ years of experience with application development using C# for server-side development on large scale projects
• Highly technical person that loves technology, innovation and creative thinking
• Experience in large-scale enterprise software development
• TDD and focus on testability
• Experience working with databases
• Great communication skills which reflect on good written and spoken English

Desirable:
• Passion for security
• Experience with big data
• Experience with Microsoft Orleans
• Experience with developing a SaaS solution

What’s in it for you?
• An opportunity to work on a highly technical solution that leads the market and not just follow it
• Competitive salary
• Be a part of a big and successful multinational company
• Variety of knowledge sharing and training opportunities

About Ciklum:
Ciklum is a top-five global Software Engineering and Solutions Company. Our 3,000+ IT professionals are located in the offices and delivery centers in Ukraine, Belarus, Poland and Spain.

As Ciklum’s employee, you’ll have the unique possibility to communicate directly with the client when working in Extended Teams. Besides, Ciklum is the place to make your tech ideas tangible. The Vital Signs Monitor for the Children’s Cardiac Center as well as Smart Defibrillator, the winner of the IoT World Hackathon in the USA, are among the cool things Ciklumers have developed.

Ciklum is a technology partner for Google, Intel, Micron, and hundreds of world-known companies. We are looking forward to seeing you as a part of our team!

Join Ciklum and “Cross the Borders” together with us!
If you are interested — please send your CV to hr@ciklum.com


          

Tech Lead Full Stack for Gigya/SAP (190004VT) at Ciklum, Kyiv

 Cache   

On behalf of Gigya/SAP, Ciklum is looking for a Full Stack Tech Lead to join the Kyiv team on a full-time basis.

About Client:
SAP is the world leader in enterprise applications, has 404,000+ customers in more than 180 countries and 93,800+ employees in 130+ countries.

Video speaks more than words — feel free to watch our SAP videos to get a better understanding of what we are working on
• Life at SAP
• About our product in Ukraine ( Under Gigya-SAP Group)
• Ukrainian kids about SAP

You will be part of a team of professionals that develops our product and takes it to the next level by using the most cutting-edge big-data technologies in a SaaS cloud solution.
Our product provides customer identity and access management (CIAM) to the biggest companies in the world.

Our system runs on our in-house open source microservices framework, Microdot (look it up!). Microdot is a powerful microservices solution that serves billions of users around the world on a daily basis.
We’re expanding our site in Kiev, which means you will have an amazing opportunity to build your team from the ground up and shape the product by contributing your knowledge, while your team will be responsible for our new enhanced security capabilities that should lead the market and differentiate us from competitors.

Responsibilities:
• Responsible for all parts of the product including coding Back End, Front End, infrastructure, testing, and production maintenance;
• Contribute technically (coding daily) to the product by taking on and completing key deliverables;
• Taking ownership of user interfaces and the appropriate tech stack;
• Direct communications with Owners and C-level Managers;
• Contribute to mentoring new recruits in Kyiv Team

Requirements:
• 3+ years of direct management experience of 4+ developers in a large company (tech lead doesn’t count)
• Experience working in an agile/SCRUM environment
• 3+ years of hands-on experience of C# server-side development
• 1+ years of hands-on experience of client side technologies using Angular / React
• Proven ability to mentor and push towards high-quality delivery
• Experience in defining architectures of solution that is composed of web servers, services and DB
• TDD and focus on testability
• Great communication skills which reflect on good written and spoken English
• B.Sc in computer science (or an equivalent degree)

Desirable:
• Experience with big data
• Experience with CI/CD/TDD/Micro Services
• Experience with developing a SaaS solution

Personal Skills:
• Strong communication skills — you know how to engage your employees, you strive to solve problems and raise flags on time
• Team player — you can hold yourself accountable, you are flexible and committed to team goals, you know how to bridge the gaps with interfaces
• Mentor — you lead by example, you make your employees grow by providing them with the technical and personal assistance

What’s in it for you?
• Lead the product to success, make your team members grow, innovate and express your skills on daily basis
• An opportunity to work on a highly technical solution that leads the market and not just follow it
• Competitive salary
• Be a part of a big and successful multinational company
• Variety of knowledge sharing and training opportunities

About Ciklum:
Ciklum is a top-five global Software Engineering and Solutions Company. Our 3,000+ IT professionals are located in the offices and delivery centers in Ukraine, Belarus, Poland and Spain.

As Ciklum’s employee, you’ll have the unique possibility to communicate directly with the client when working in Extended Teams. Besides, Ciklum is the place to make your tech ideas tangible. The Vital Signs Monitor for the Children’s Cardiac Center as well as Smart Defibrillator, the winner of the IoT World Hackathon in the USA, are among the cool things Ciklumers have developed.

Ciklum is a technology partner for Google, Intel, Micron, and hundreds of world-known companies. We are looking forward to seeing you as a part of our team!

Join Ciklum and “Cross the Borders” together with us!
If you are interested — please send your CV to hr@ciklum.com


          

Middle Front End Developer for Seeking Alpha (190004V4) at Ciklum, Kyiv

 Cache   

On behalf of Seeking Alpha, Ciklum is looking for a Middle Front End Developer to join the Kyiv team on a full-time basis.

About Client:
Seeking Alpha (seekingalpha.com) is the market leader for crowdsourced equity research in the USA. The company employs 150+ people across its operations in New York, Israel and India.

The company is the premier website for actionable stock market opinion and analysis, and vibrant, intelligent finance discussion. Each month our crowdsourced investment analysis draws an audience of 5.2MM+ monthly visitors to our real-time alerts products on email and mobile.

We handpick articles from the world’s top market blogs, money managers, financial experts and investment newsletters, publishing 500 unique article and news updates daily. The company gives a voice to over 5.5MM registered users, including 12,000+ contributors and individuals averaging 130,000+ comments a month, providing access to the nation’s savviest and most inquisitive investors.

Our site is the only free, online source for over 5,000 public companies’ quarterly earnings call transcripts, including the S&P 500. The company was named the Most Informative Website by Kiplinger’s Magazine and has received Forbes’ ‘Best of the Web’ Award.

Responsibilities:
• Develop Front-End (ReactJS, NodeJS) for modern financial media website
• Participate in design and planning discussions, contribute architecture ideas
• Develop and test new user-facing features
• Write highly scalable, reusable and testable code
• Optimize application for maximum speed and performance
• Collaborate with other team members

Requirements:
• Knowledge of Agile principles, open-source ecosystem, Test Driven Development (TDD)
• Experience in ReactJS, NodeJS.
• Experience with SQL or NoSQL database technologies (e.g. MySQL, ElasticSearch, CouchBase, Redis, etc.)
• Comfortable with source version control software (Git)
• Knowledge and understanding of server-side architecture best practices, good understanding of the HTTP protocol and networking
• 3+ years of front-end development experience in building complex and scalable high load websites (JavaScript, HTML5, and CSS3)
• Understanding of all major browsers and the special considerations required for various quirks. Knowledge of browser internals like Javascript engines, native DOM, Event APIs and ways to tune code for the best performance

Desirable:
• Experience working in a UNIX environment
• Familiarity with Angular/React/Vue or similar frontend frameworks
• Proficiency in ES6 and newer specifications of EcmaScript
• Having Github portfolio or link to open source work

Personal skills:
• Ability to focus on details
• Self-motivated, self-disciplined, goal-driven
• Analytical and problem-solving skills
• Good ability to learn fast
• Service and teamwork orientation

What’s in it for you:
• Very close cooperation with the client
• Possibility to propose solutions on a project
• Dynamic and challenging tasks.
• Ability to influence project technologies.
• Team of professionals: learn from colleagues and gain recognition of your skills.
• Low bureaucracy, European management style.

About Ciklum:
Ciklum is a top-five global Software Engineering and Solutions Company. Our 3,000+ IT professionals are located in the offices and delivery centers in Ukraine, Belarus, Poland and Spain.

As Ciklum’s employee, you’ll have the unique possibility to communicate directly with the client when working in Extended Teams. Besides, Ciklum is the place to make your tech ideas tangible. The Vital Signs Monitor for the Children’s Cardiac Center as well as Smart Defibrillator, the winner of the IoT World Hackathon in the USA, are among the cool things Ciklumers have developed.

Ciklum is a technology partner for Google, Intel, Micron, and hundreds of world-known companies. We are looking forward to seeing you as a part of our team!

Join Ciklum and “Cross the Borders” together with us!
If you are interested — please send your CV to hr@ciklum.com


          

Conquering documentation challenges on a massive project

 Cache   
Given the recent surge in popularity of open source data science projects like pandas, NumPy, and Matplotlib, it’s probably no surprise that the increased level of interest is generating user complaints about documentation. To help shed light on what’s at stake, we talked to someone who knows a lot about the subject: Thomas Caswell, the ...
          

"Libre à vous!" broadcast Tuesday, November 5, 2019 on radio Cause Commune – Women and computing – Google, the press and neighboring rights – Pact for the Transition

 Cache   

On the program: our main topic will be women and the professions and communities of computing and free software; the column "La pituite de Luk" on Google, the press and neighboring rights; and an interview about the Pact for the Transition.

Libre à vous!, the show for understanding and taking action with April, every Tuesday from 3:30 pm to 5 pm on the radio station Cause Commune (93.1 FM in Île-de-France and on the Internet).

On the program of the forty-third show:

Podcasts of the topics covered

The podcasts will be available after the show has aired.

Feel free to send us feedback on the content of our shows, telling us what you liked as well as what could be improved. You can contact us by email, on the webchat dedicated to the show (though we are not necessarily there all the time), or on our IRC channel (accessible via webchat).

Participants

The people who took part in the show:

Photo gallery

You can see some photos taken during the show.

References for the column "La pituite de Luk"

References for the segment on women and computing

References for the segment on the Pact for the Transition

References for the segment on miscellaneous announcements

Musical breaks

References for the musical breaks:


          

RE: Displayfusion for Linux?

 Cache   
Also a +1 from me for a Linux version. In addition, if you wanted to create DisplayFusion Linux as an open source tool, I'd be happy to contribute, and potentially even write some code for it.
          

After virtual reality, the cardboard kind is ending too. Google discontinues the Cardboard headset; the project will go open source

 Cache   
Google recently shut down its Daydream virtual reality environment, which was available as part of Android. Now Cardboard follows – the simple cardboard headset into which a phone was inserted. Google will, however, release the software tools to the community as open source. The principle behind how Cardboard worked was ...



© Googlier LLC, 2019