
          

IT / Software / Systems: .NET Developer - Miami, Florida

RESPONSIBILITIES: Kforce has a client in search of a .NET Developer in Miami, Florida (FL).

Summary: The .NET Developer will be part of the Data Engineering team, building and working on enterprise-grade applications to consume and provision data using the Microsoft .NET development stack. In this crucial role, the Developer will be involved with the design, development, testing, and support of a suite of applications ranging from web applications, WCF services, and RESTful APIs to cloud-native Azure microservices and the Serverless framework.

Responsibilities:
* The majority of the time will be spent writing code
* Around 25% of the time is spent in design meetings and ground processing
* Application development using C#, .NET, and Azure skills

REQUIREMENTS:
* Bachelor's degree in Computer Science preferred
* 5+ years of experience
* Heavy C# and .NET experience
* Experience working in a data-analytics-driven environment
* Azure native cloud service experience
* Microservices/serverless experience with Swagger/OAS
* Azure API Management Gateway
* Strong understanding of AAD/ADFS/JWT/OAuth
* Knowledge of or familiarity with container orchestration using Azure Kubernetes Service or Azure Service Fabric
* Experience with bots on the Azure stack

Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status. - provided by Dice ()
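The AAD/ADFS/JWT/OAuth requirement above usually means working with bearer tokens. A JWT is three base64url-encoded segments joined by dots; as a minimal, stdlib-only illustration (toy unsigned token, invented claims; real code must verify the signature against the issuer's keys, which this sketch deliberately skips):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (claims) segment of a JWT without verifying
    the signature; fine for inspection, never for authentication."""
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64url(obj: dict) -> str:
    """Encode a dict as an unpadded base64url JSON segment."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Build a toy unsigned token just to demonstrate the structure
header = _b64url({"alg": "none"})
claims = _b64url({"sub": "demo-user", "aud": "api://example"})
token = f"{header}.{claims}."

print(decode_jwt_payload(token))
```

The same decoding step is what libraries and gateways (including Azure API Management's `validate-jwt` flow) perform before checking signature, issuer, audience, and expiry.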
          

Senior Java Developer

We have partnered with one of the most innovative banking groups in the Netherlands, and they are currently in need of 2 x Senior Java Developers to work in their Amsterdam offices on an initial 1-year freelance contract.

What tech stack do you need:
• Minimum 3 years of experience with Java, Spring, and web services (SOAP / JSON)
• Experience with Spring Boot, Docker, OpenShift / Kubernetes, RabbitMQ, REST, microservices
• Experience with tools such as Git / Bitbucket, Maven, Jira, Confluence, Nexus
• Concrete experience with front-end development is a plus (Preact or similar)
• Knowledge of and ability to work with SQL, JPA, and Hibernate
• Knowledge of and experience with Jenkins pipelines
• Knowledge of automated testing and tools such as Cucumber and Selenium
• Knowledge of WebLogic is a plus
• Knowledge of Azure DevOps is a plus...
          

Senior Java Developer

We have partnered with one of our exclusive clients in the public sector, and they are currently in need of a Senior Java Developer. This is a freelance contract opportunity until November 2020, located in Utrecht. Candidates need to speak Dutch.

Tech requirements:
- Minimum 3 years of experience with Java, Spring, and web services (SOAP / JSON)
- Spring Boot, Docker, OpenShift / Kubernetes, RabbitMQ, REST, microservices
- Git / Bitbucket, Maven, Jira, Confluence, Nexus
- Concrete experience with front-end development is preferred (React or similar)
- Knowledge of and ability to work with SQL, JPA, and Hibernate
- Knowledge of and experience with Jenkins pipelines
- Knowledge of automated testing and tools such as Cucumber and Selenium
- Knowledge of WebLogic is a plus
- Knowledge of Azure DevOps is a plus

Interviews are happening at short notice, so apply now to not miss out on the opportunity!...
          

Red Hat’s Quarkus Java stack moves toward production release


The fast, lightweight, open source Quarkus Java stack will graduate from its current beta designation and become available as a production release at the end of November. Sponsored by Red Hat, the microservices-oriented Java stack supports both reactive and imperative programming models. 

Quarkus is a Kubernetes-native Java stack for cloud-native and serverless application development. Quarkus promises faster startup times and lower memory consumption than traditional Java-based microservices frameworks. It features a reactive core based on Vert.x, a toolkit for building reactive applications on the JVM, and the ability to automatically reflect code changes in running applications.



          

VMworld 2019: Tanzu and Kubernetes lead the future

VMworld 2019 opens new horizons: the Tanzu container platform integrates Kubernetes into vSphere, NSX, and vSAN.

We spoke about it with Michele Apa, Senior Manager Solutions Engineer at VMware, and with Raffaele Gigantino, Country Manager of VMware Italy.

Michele Apa reminded us that the American company's mission is to simplify technologies, which grow ever more complex over time.

Moreover, this complexity is amplified by the quantity of data, which has literally exploded in recent years. Managing such a large volume is not easy, but at the same time it is indispensable and offers competitive advantages.

Adopting solutions suited to the task therefore becomes necessary, and VMware is on the front line in this area as well.

Tanzu effectively marks VMware's entry through the front door into software development and Kubernetes, and the acquisition of Pivotal should be seen in this light.

Kubernetes plays an essential role for VMware, thanks to the beta version of Project Pacific (already being tested with a limited group of customers).

The presence of Project Pacific within vSphere makes it possible to extend the development platform into multi-cloud environments, with the addition of Tanzu's governance capabilities; significant gains in security and scalability round out the re-engineering of vSphere.

With regard to the Italian market, interest in VMware's offerings is consistent with the global picture. Significant evangelization work is being done with prospects, and here system integrators play an essential role, bringing to bear all the specific skills needed to best integrate the technologies into each customer company's particular environment.

What's more, the partner ecosystem plays a decisive role with organizations: having already built a solid relationship of trust, partners make it easier to adopt technologies that have a significant impact on day-to-day operations.

Michele Apa and Raffaele Gigantino

VMware's training activity in Italy is particularly lively and takes concrete form in initiatives such as Orizzonte Digitale.

SD-WAN and 5G will enable each other, amplifying the potential of connectivity and of Intrinsic Security, which is distributed across the hypervisors. In this way the impact on performance is eliminated, guaranteeing best-in-class security.

5G will allow VMware to expand its market presence, and it is no coincidence that its relationship with the telcos runs particularly deep: in the coming years it is easy to foresee ample room for growth in this segment.

The acquisition of Carbon Black, completed in mid-October, has brought a robust antivirus component to VMware's security solutions. Its integration into vSphere makes the latter de facto "secure by design".

The SMB world is also on VMware's mind: through cloud service providers, a small or medium-sized business can start its journey to the cloud without having to re-engineer its applications.

Gigantino wanted to underline VMware's philosophy: give customers the greatest possible choice. This is the best possible translation of the traditional motto "any device, any application, any cloud".

The importance of customer experience is decisive for an organization's success. The acquisitions of Bitnami and Pivotal should be read in this light.

Enabling applications to run on any cloud, and to be managed efficiently, is a clear competitive advantage for customer companies.

After all, managing complexity is part of VMware's DNA, which is why its investments in research and development and in acquisitions are among the highest in the market.

The article "Vmworld 2019, Tanzu e Kubernetes guidano il futuro" is original content from 01net.


          

VMworld 2019: Kubernetes and cloud enable organizations

In a hall packed with over 10,000 analysts, partners, and journalists from all over Europe, VMware CEO Pat Gelsinger delivered his opening keynote at VMworld 2019, being held in Barcelona. With the customary energy that characterizes his talks, he showed how, in the company's vision, new technologies will be decisive in creating what he called the Digital Life.

Apps will become ever more numerous, and technologies such as artificial intelligence, cloud, edge, and 5G will sharply accelerate the digital transformation of organizations and individuals alike.

VMware's vision is simple but effective: any device, any application, any cloud. Gelsinger, recalling how complex applications, clouds, and infrastructures can be to manage, pointed to this as a great business opportunity. Whoever can properly manage multi-cloud environments and structural complexity will hold a leadership position in the coming decade.

Pat Gelsinger, CEO of VMware

Moreover, the CEO continued, the push from large customers such as Sky, Porsche, and Maersk is a constant stimulus to improve VMware's already excellent solutions.

Kubernetes is increasingly the glue between IT operators and developers: Joe Beda, Principal Engineer at VMware, brilliantly described Kubernetes as improvised jazz.

Kubernetes brings great flexibility and just as much potential complexity; for this reason VMware announced VMware Tanzu, a portfolio of products and services to transform the way organizations develop modern apps.

Thanks to Tanzu, according to VMware, it will be possible to unlock the potential of Kubernetes, enabling organizations toward an ever more effective digital transformation.

Gelsinger also announced the launch of the beta version of Project Galleon, which effectively combines Bitnami with customers' own customizations, easing the interface with players such as AWS and Azure.

Project Pacific, moreover, unites vSphere with Kubernetes, extending vSphere to all modern apps, and with excellent performance: according to VMware, performance is 30% better than a Linux-based VM and 8% faster than bare metal.

In short, VMware Tanzu helps developers and IT managers run Kubernetes, in a cloud-neutral fashion and with all the scalability an enterprise could need.

Tanzu guarantees developers full freedom with their APIs, while at the same time giving IT operations the control required by policies and rules.

The vision shown during VMworld 2019 (as is VMware tradition) is of a high strategic and technological level, yet perfectly grounded in the concrete needs of organizations.

Gelsinger also cited the great success of CloudHealth, which counts more than 7,000 customers and delivers savings on the order of 25%.

VMware Cloud Foundation is a clear market leader, thanks to the growing success of vSphere, vSAN, and NSX, adopted by more than 300,000 customers worldwide.

VMware's path does not forgo strategic partnerships with the major public cloud players, positioning itself as a simplifying factor for organizations increasingly oriented toward multi-cloud strategies.

Gelsinger recalled that the partnership with AWS is expanding globally, embracing an ever greater number of large enterprise customers.

Migration to the cloud, if carried out through VMware Cloud, allows significant savings without any interruption to business continuity, the American executive continued.

The collaboration between VMware and Microsoft is also growing ever richer, with Azure at its center, of course. Here too, a wider territorial rollout is under way, with 2020 set to be the year of the Asian countries.

Edge computing also had its place in Gelsinger's speech; VMware approaches it in a systematic and organized way: it is not a potential future but a technology to adopt and manage today.

At the same time, 5G is proving to be a powerful growth enabler. Telcos are significant customers in this area, to whom VMware offers the tech preview of Project Maestro, a cloud orchestrator natively designed for 5G.

Uhana, meanwhile, is a predictive analytics system powered by artificial intelligence. This offering, too, is designed for the telco world.

The NSX portfolio expands with VMware NSX Distributed IDS/IPS, a software-defined intrusion detection system.

Security is a theme dear to VMware: in the American company's view, it is as vital as it is affected by objective fragmentation across a very large number of players.

Carbon Black is a solution that works in symbiosis with vSphere, with Workspace ONE, with NSX, and finally with Secure State.

A mention of Workspace ONE could not be missing: it is compatible with all desktop and mobile operating systems, with full compatibility with Windows 10 and Office 365.

In closing, Gelsinger spoke of the importance of using technology not merely to pursue profit, but also to improve people's lives.

This is an ethical theme dear to VMware, one that is gaining ground globally on the wave of ever greater awareness of and attention to organizations' corporate social responsibility.

The article "Vmworld 2019, Kubernetes e cloud abilitano le organizzazioni" is original content from 01net.


          

VMware Project Pacific: VMware's approach to uniting the worlds of virtual machines and containerized applications

Project Pacific was one of the main announcements at this year's VMworld conference, and VMware drew particular attention to it. This is no surprise: VMware is not supporting Kubernetes merely to tick a box. The point is that many users plan to deploy containerized application infrastructure on physical hardware; in that case there is no hypervisor or licensing overhead to bear, and the applications in containers run fast and come with built-in availability mechanisms.


          

Microsoft launches Visual Studio Online


#241 — November 6, 2019


StatusCode
Covering the week's news in software development, infrastructure, ops, platforms, and performance.

Recursive Sans and Mono: A Free Variable Type Family — This is a new ‘highly-flexible’ type family that takes advantage of variable font tech to let you pick the right style along five different axes. It’s pretty clever, well demonstrated, and very suitable for presenting data, code, or to be used in documentation and UIs.

Arrow Type

Microsoft Launches Visual Studio Online — It’s basically a collaborative version of VS Code that runs in the browser letting you develop from anywhere in a cloud-based environment. This isn’t a new idea but it’s great to see Microsoft’s might behind such an effort.

Visual Studio

Top CI Pipeline Best Practices — At the center of a good CI/CD setup is a well-designed CI pipeline. If your team is adopting CI, or your work involves building or improving a CI pipeline, this best practices guide is for you.

Datree.io sponsor

You Can't Submit an Electron 6 (or 7) App to the Mac App Store? — Electron is a popular cross-platform app development toolkit maintained by GitHub. The bad news? It uses Chromium which uses several ‘private’ Apple APIs and Apple aren’t keen on accepting apps that use them for a variety of reasons.

David Costa

Dart 2.6: Now with Native Executable Compilation — Dart began life as a Google built, typed language that compiled to JavaScript but is now a somewhat broader project. The latest version includes a new dart2native tool for compiling Dart apps to self-contained, native executables for Windows, macOS, and Linux.

Michael Thomsen

GitHub Sponsors Is Now Out of Beta in 30 Countries — GitHub launched its Sponsors program in beta several months ago as a way for open source developers to accept contributions for their work and projects more easily. It’s now generally available in 30 countries with hopefully more to follow.

Devon Zuegel (GitHub)

Quick bytes:

💻 Jobs

DevOps Engineer at X-Team (Remote) — Work with the world's leading brands, from anywhere. Travel the world while being part of the most energizing community of developers.

X-Team

Find a Job Through Vettery — Vettery specializes in tech roles and is completely free for job seekers. Create a profile to get started.

Vettery

📕 Tutorials and Stories

How Monzo Built Network Isolation for 1,500 Services — 1,500 services power Monzo, a British bank, and they want to keep them all as separate as possible so that no single bad actor can bring down their platform. Here’s the tale of how they’ve been working towards that goal.

Monzo

A Comparison of Static Form Providers — A high level comparison of several providers who essentially provide the backend for your HTML forms.

Silvestar Bistrović

▶  An Illustrated Guide to OAuth and OpenID Connect — A 16 minute video rich with illustrations and diagrams.

Okta

Intelligent CI/CD with CircleCI: Test Splitting — Did you know that CircleCI can intelligently split tests to get you your test results faster?

CircleCI sponsor
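Timing-based test splitting, as mentioned above, is at heart a scheduling problem: given per-file durations from previous runs, partition the files across containers so each finishes at roughly the same time. A minimal sketch of the idea (file names and durations invented; this is an illustration of the concept, not CircleCI's actual implementation):

```python
from heapq import heappush, heappop

def split_by_timing(durations: dict, containers: int) -> list:
    """Greedily assign the slowest tests first to the currently
    least-loaded container (classic longest-processing-time scheduling)."""
    heap = [(0.0, i, []) for i in range(containers)]  # (load, index, files)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, idx, files = heappop(heap)   # least-loaded container
        files.append(name)
        heappush(heap, (load + secs, idx, files))
    # Return the file groups in container order
    return [files for _, _, files in sorted(heap, key=lambda t: t[1])]

# Hypothetical timings harvested from a previous run
timings = {"test_api.py": 120.0, "test_db.py": 90.0,
           "test_ui.py": 60.0, "test_util.py": 30.0}
print(split_by_timing(timings, 2))
```

With these numbers both containers end up with 150 seconds of work, which is exactly the balance a CI system aims for when it splits by timing data rather than by file count.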

▶  Writing Maintainable Code Documentation with Automated Tools and Transclusion — A 37 minute podcast conversation between Robby Russell and Ana Nelson, the creator of Dexy, a documentation writing tool.

Maintainable Podcast podcast

▶  Git is Hard but Time Traveling in Git Isn't — A lightning talk from React Conf 2019 that flies through some interesting Git features in a mere 6 minutes.

Monica Powell

Highlights from Git 2.24 — Take a look at some of the new features in the latest Git release including feature macros and a new way to ‘rewrite history’.

GitHub

Create a Bookmarking Application with FaunaDB, Netlify and 11ty — Brings together FaunaDB’s serverless cloud database, the Netlify platform (which uses Lambda under the hood), and 11ty (a static site generator) to create a bookmark management site.

Bryan Robinson

File Systems Unfit As Distributed Storage Backends: Lessons From Ten Years of Ceph Evolution — You can’t help but be won over by a comment like “Ten years of hard-won lessons packed into just 17 pages makes this paper extremely good value for your time.”

the morning paper

An SQL Injection Tutorial for Beginners — This is not a tutorial for you to follow but more a look at what hackers will attempt to do to your systems, if you let them. The techniques used are sneaky and interesting.

Marezzi
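The core trick such tutorials demonstrate is untrusted input being concatenated into a query string so it can rewrite the query itself. A minimal demonstration with Python's built-in sqlite3 module (toy schema and data), contrasting the vulnerable pattern with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input become SQL syntax,
# turning the WHERE clause into an always-true condition
vulnerable = "SELECT count(*) FROM users WHERE name = '%s'" % attacker_input
print(conn.execute(vulnerable).fetchone()[0])  # matches a row: injection worked

# Safe: a parameterized query treats the input as a literal value
safe = "SELECT count(*) FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchone()[0])  # matches nothing
```

The parameterized form is the standard defense: the driver sends the value out of band, so no input can change the query's structure.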

🛠 Code and Tools

Stripe CLI: A Command Line Development Environment for Stripe Users — Stripe has become somewhat ubiquitous in the payment processing space and their focus on developers is pretty neat, not least in this new tool for building and testing integrations.

Tomer Elmalem

Mark Text: A Simple, Free Markdown Editor — Works on macOS, Windows, and Linux. Built in Node with Electron.

Luo Ran

Sell Your Managed Services and APIs to Millions of Developers

Manifold sponsor

Yumda: Yum Packages, but for AWS Lambda — Essentially a collection of AWS Lambda-ready binary packages that you can easily install. You can request new packages, build your own, or use the existing ones that include things like GraphicsMagick, OpenEXR, GCC, libpng, Ruby, TeX, and more.

LambCI

K-Rail: A Workload Policy Enforcement Tool for Kubernetes — A webhook-based policy enforcement tool built in Go that lets you define policies in Go code too.

Cruise

Gitql: A Git Query Language and Tool — Lets you query a git repository using a SQL-like syntax, e.g. select date, message from commits where date < '2014-04-10'

Claudson Oliveira
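The sample query above amounts to a filter plus a projection over commit records. A sketch of those semantics in plain Python, over an invented in-memory log (not Gitql's implementation, just the equivalent operation):

```python
from datetime import date

# Toy commit log standing in for a repository (dates and messages invented)
commits = [
    {"date": date(2014, 3, 1), "message": "Initial commit"},
    {"date": date(2014, 4, 9), "message": "Add parser"},
    {"date": date(2014, 5, 2), "message": "Fix query planner"},
]

# select date, message from commits where date < '2014-04-10'
result = [(c["date"].isoformat(), c["message"])
          for c in commits if c["date"] < date(2014, 4, 10)]
print(result)
```

The list comprehension's condition is the WHERE clause and the tuple it builds is the SELECT list, which is why SQL maps so naturally onto a git log.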


          

Deploying and running Kubernetes on VMware Cloud on AWS


Leveraging VMware PKS in VMware Cloud on AWS: Since the announcement of VMware Tanzu—a portfolio of products and services to help enterprises build, run and manage applications on Kubernetes—at VMworld US, our customers have been expressing their excitement. They are eager to drive adoption of Kubernetes and are looking to VMware to simplify the

The post Deploying and running Kubernetes on VMware Cloud on AWS appeared first on Virtualize Applications.


          

Portworx Enterprise Storage Platform for Kubernetes Achieves VMware Ready Status

LOS ALTOS, CA – November 4, 2019 — /BackupReview.info/ — Portworx, the container storage and data management company modern enterprises trust to manage data in containers, today announced that its Portworx Enterprise Storage Platform for Kubernetes has achieved VMware Ready™ status. This designation indicates that after a detailed validation process Portworx Enterprise 2.1.5 has achieved [...]

          

Infrastructure Engineer - Chantilly

Technology is constantly changing, and our adversaries are digitally exceeding law enforcement's ability to keep pace. Those charged with protecting the United States are not always able to access the evidence needed to prosecute crime and prevent terrorism. The Government has trusted Peraton to provide the technical ability, tools, and resources to bring criminals to justice. In response to this challenge, Peraton is seeking an Infrastructure Engineer to provide proven, industry-leading capabilities to our customer.

What you'll do:
- Provide day-to-day operational maintenance, support, and upgrades for operating systems and servers
- Perform software installations and upgrades to operating systems and layered software packages
- Schedule installations and upgrades and maintain them in accordance with established IT policies and procedures
- Monitor and tune the system to achieve optimum performance levels
- Ensure workstation/server data integrity by evaluating, implementing, and managing appropriate software and hardware solutions of varying complexities
- Ensure data/media recoverability by developing and implementing a schedule of system backups and database archive operations
- Plan and implement the modernization of servers
- Develop, implement, and promote standard operating procedures and schedules
- Conduct hardware and software audits of workstations and servers to ensure compliance with established standards, policies, and configuration guidelines
- Improve automation, configuration management, and DevOps processes

You'd be a great fit if:
- You've obtained a BS degree and have eight (8) years of relevant experience; however, equivalent experience may be considered in lieu of a degree
- You have ten (10) years of systems engineering/administration experience
- You possess five (5) years of experience with virtualization platforms
- You have five (5) years of experience coordinating activities of technology product and service vendors and leading technical infrastructure design activities
- You have a current Top Secret security clearance with SCI eligibility and the ability to obtain a polygraph

It would be even better if you:
- Understand high availability, failovers, backups, scaling, and clustering of operational systems
- Have experience with the following technologies: Windows networking and infrastructure; Microsoft SQL Server or similar; Microsoft PowerShell; configuration management tools (Puppet, Chef); continuous integration tools (Jenkins, CircleCI); container orchestration tools (Kubernetes, Docker Hub); cloud services (AWS, Azure); Linux operating systems (Red Hat, CentOS); other databases (MySQL, MongoDB, PostgreSQL, etc.); SharePoint 2013; DC/OS, Apache Mesos

What you'll get:
- An immediately vested 401(K) with employer matching
- Comprehensive medical, dental, and vision coverage
- Tuition assistance, financing, and refinancing
- Company-paid infertility treatments
- Cross-training and professional development opportunities
- Influence over major initiatives

*This position requires the candidate to have a current Top Secret security clearance and the ability to obtain a polygraph. Candidate must possess SCI eligibility.

We are an Equal Opportunity/Affirmative Action Employer. We consider applicants without regard to race, color, religion, age, national origin, ancestry, ethnicity, gender, gender identity, gender expression, sexual orientation, marital status, veteran status, disability, genetic information, citizenship status, or membership in any other group protected by federal, state, or local law.
          

Oho Group Ltd.: Senior Software Engineer

£45000 - £65000 per annum: Oho Group Ltd.: Java, Scala, Python, C++, AWS, GCP, Microservices, Docker, Kubernetes, Akka, React. Would you be interested in a position developing the next genera... Hertfordshire, South East
          

Student DevOps Engineer, CASS Software Development Group

Oregon State University - Corvallis, OR | Full-time, Part-time

This is a POOLED POSTING. That means it will stay open for several months, and we will pull applicants from the pool for interviews as openings occur, until the closing date. This recruitment will be used to fill part-time (a maximum of 20 hours per week) Student Information Technology positions for the College of Engineering at Oregon State University (OSU).

The Center for Applied Systems and Software (CASS) exists as a student experiential learning program. Hourly undergraduate students are hired and trained by full-time staff engineers or student managers to perform development projects. Student internships average 1-3 years in length, and most graduates are hired by technology companies. In addition to the technical and development skills students gain, they also learn how to work as a member of a team, gain an appreciation of the importance of deadlines, and experience other facets of running a business such as planning, budgeting, resource allocation, documentation, and communication.

The OSU CASS-SDG is looking for hourly students for Student DevOps Engineer positions. This position is part-time during the academic year and full-time during the summer, as long as the student is not taking any classes (in summer term only). Student DevOps Engineers are responsible for:
- Working with clients and managers to identify the best hosting providers and supplementary services (databases, logging solutions, etc.) to compose the foundation of development projects
- Configuring selected hosting providers and services to work in tandem with CASS-developed project solutions
- Designing and implementing continuous integration, continuous delivery, and continuous deployment workflows
- Supporting developers through automating tedious development and deployment processes
- Teaching and enforcing source control, code quality, and code testing standards throughout CASS-SDG

Applicants must have at least four (4) terms remaining in their program and be able to work during summer 2020. We prefer students who have more time left in their program to help reduce graduate turnover, so freshman applicants are welcome. This position is only open to students attending a college or university in Oregon. This position is responsible for reporting to work at scheduled times and reporting to their scheduled CASS manager.

Position Duties
While Student Developers are responsible for the frontend and server-side code that composes a software development project, Student DevOps Engineers are responsible for supporting Student Developers by managing other aspects of the software development process, outside of writing code. Student DevOps Engineers can expect to:
- Plan and scaffold project file structures within Git repositories
- Configure webpack and MSBuild processes for various JavaScript and C# projects
- Configure CircleCI and Azure Pipelines to run unit tests automatically when code is pushed
- Develop Helm charts to assist with deploying applications to Kubernetes clusters
- Manage the deployment of NPM packages and Docker images to their respective registries
They should be able to work in a team environment comprised of students and staff members.

Minimum Qualifications
Employment Eligibility Requirements (*****************************************************************************

Additional Required Qualifications
Must have:
- Experience using and navigating a command line interface
- Experience using Git and GitHub workflows
- Experience with either JSON or YAML configuration languages
- An understanding of the overall code development process, from design to deployment
- A passion for helping others
- A passion for process optimization
- The ability to work well in a team
- Strong problem-solving skills and the ability to learn quickly

Preferred (Special) Qualifications
Nice to have:
- Experience with the JavaScript and C# programming languages
- Experience with Node.js and NPM package management
- Experience with webpack and JS bundling
- Experience building and publishing Docker images
- Experience with Helm charts and Kubernetes deployment
- Experience using continuous integration tools

Working Conditions / Work Schedule
Typical work schedule is at least two-hour consecutive periods between 8am - 6pm, Monday - Friday. Students are expected to work on site at the Corvallis campus office. Must work a minimum of 15 hours per week during the academic year, and up to 40 hours in summer.

Posting Detail Information
Posting Number: P05956SE
Number of Vacancies: 1-2
Anticipated Appointment Begin Date: 05/25/2020
Anticipated Appointment End Date:
Posting Date: 10/24/2019
Full Consideration Date:
Closing Date: 06/30/2020
Indicate how you intend to recruit for this search: Competitive / Student - open to ALL qualified/eligible students

Special Instructions to Applicants
When applying you will be required to attach the following electronic documents:
1) A Resume/Vita
2) A cover letter indicating how your qualifications and experience have prepared you for this position.
For additional information please email: *************************

This is a POOLED POSTING. This position remains open for CASS to draw applicants from as needed for contracted work. You may or may not be contacted if you have not been selected.

OSU commits to inclusive excellence by advancing equity and diversity in all that we do. We are an Affirmative Action/Equal Opportunity employer, and particularly encourage applications from members of historically underrepresented racial/ethnic groups, women, individuals with disabilities, veterans, LGBTQ community members, and others who demonstrate the ability to help us achieve our vision of a diverse and inclusive community.

Note: All job offers are contingent upon Human Resources final approval.
          

Project Lead Site Reliability Engineer

Position Responsibilities:
- Develop a roadmap and plan for Continuous Feedback and Delivery initiatives.
- Build platforms that teams can leverage to accelerate innovation in the areas of reliability, scalability, and velocity.
- Develop a cross-functional code base for automation.
- Mentor team members in object-oriented programming strategies.
- Provide technical leadership for software development and software operation transformation initiatives.
- Implement and customize service mesh systems while leveraging strategies to package platforms and services.
- Build innovation in the areas of distributed system flow and resilience, and continuous feedback and delivery.
- Create efficiency and cultural transformation through the curation of new systems and capabilities.

Education/Experience: Bachelor's degree in Computer Science, Information Systems Management, Engineering, or a related field, or equivalent experience. 5+ years of experience in the field or in a related area. Experience with software engineering, enterprise operations support, object-oriented programming, automation, consulting with internal customers, enterprise-grade cloud systems management, and full stack engineering. Ability to quickly learn technologies such as Docker, Kubernetes, Ansible, Puppet, Nginx, HAProxy, Elasticsearch, MariaDB, GoLang, and Python.

Centene is an equal opportunity employer that is committed to diversity and values the ways in which we are different. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or other characteristics protected by applicable law.

Information Technology | USA-Missouri-Chesterfield | Corporate | Full-time
          

DevOps Engineer - MidLevel

 Cache   
Please email resumes directly to ******************** DevOps Engineer - MidLevel - Burbank, CA. The Mid-Level DevOps Engineer will be responsible for the automation of cloud infrastructure as well as the delivery of software. The Mid-Level DevOps Engineer must understand Continuous Integration and Continuous Delivery methodologies and technologies to allow Content Technology to rapidly innovate in support of business needs. Technical Knowledge/Skills in the following areas: - Experience with 2 or more scripting languages such as Python, Ruby, or JavaScript. - Experience with source code and knowledge repositories such as Git, Jira, or equivalent systems. - High-level proficiency using AWS CloudFormation or Terraform. - A solid understanding of containerization technologies such as Docker or Kubernetes. - Experience in AWS at scale leveraging core services such as Lambda, RDS, and EC2. - Expertise in build and deployment tools such as Jenkins, Chef, Ansible, or equivalent. - Expertise with operational tools such as Datadog, the ELK Stack, or AWS CloudWatch. - Proficiency in a Linux environment. - Experience with core DevOps principles, preferably in a production-like setting. - Proficiency in the SDLC in an agile environment.
          

Cloud Security Systems Engineer

 Cache   
Description Our Mission At Palo Alto Networks everything starts and ends with our mission: being the cybersecurity partner of choice, protecting our digital way of life. We have the vision of a world where each day is safer and more secure than the one before. These aren't easy goals to accomplish - but we're not here for easy. We're here for better. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are. Your Career As a Palo Alto Networks Systems Engineer - Prisma Cloud Security Specialist, you will be the authority on our cybersecurity offerings for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), including containers and serverless. You are the go-to resource for customer interactions that exceed the typical Systems Engineering requirements. This is a technical role that directly supports the sales delivery of quota. You are measured by your expertise and by your ability to lead customers to wins. There is also a requirement for close interaction with Product Management, Marketing and competitor intelligence to ensure that we continue to out-innovate our competition.
Your Impact Articulate to customers at all levels in the hierarchy, from engineer to CIO, the value proposition of the platform. Lead conversations about trends and emerging changes to the cloud security landscape that every customer needs to be aware of and planning for when utilizing public cloud IaaS and/or PaaS services for critical data, intellectual property, and applications. Discuss, with credibility, the competitive landscape and position our offering as the best alternative. Interact locally and remotely with customers in an equally persuasive manner. Help customers embrace our cloud security offerings. Be the technical voice of Sales for all things related to security and compliance in the cloud (AWS, Azure & Google Cloud) and containerized infrastructure. Be an evangelist to further bring Security, DevOps, and SecOps together (DevSecOps). Provide technical demos to, and lead deep-dive discussions with, prospective customers. Act as a conduit for customer feedback to Product Management, Technical Marketing, competitor intelligence, and R&D to create requirements and deliver product features for our customers. Provide design consultation, best practices, and mentorship for the rollout and implementation during the 'pre-sales' process for strategic opportunities, including 'proof of concept'. Provide product update and improvement training to other SEs in the region or theater. Assist in the training of new SEs in their designated regions. Your Experience Degree in CS or equivalent and 5+ years of experience in a highly technical customer-facing role: security architect, infrastructure architect, systems engineer, or solutions architect. Strong general infrastructure skills and specific knowledge of cloud platforms like AWS, Azure, and Google Cloud Platform.
Experience using APIs. Experience with CloudFormation, Terraform, Azure Resource Manager, or GCP Cloud Deployment Manager templates. Proven experience with AWS, Microsoft Azure and Google Cloud Platform configuration and administration of security features and services (including, but not limited to, identity and access management, service-related security features, networking, firewalls, encryption, and related best practices). In-depth experience in security and cloud services, e.g., serverless technologies. Expertise in container and DevOps technologies such as Kubernetes, Jenkins, Docker, and OpenShift. Security skills should include areas such as access control, runtime defense/anti-malware, and vulnerability management. Deep understanding of Unix/Linux and Windows operating systems as well as containers. Experience with IaaS and PaaS deployments, connectivity, network security, virtualization and compute. Experience working with customers, positioning, demonstrating, configuring and troubleshooting infrastructure security products. Travel within the designated region. The Team As part of our Systems Engineering team, you'll support the sales team with technical expertise and guidance when establishing trust with key clients. You won't find someone at Palo Alto Networks that isn't committed to your success - with everyone pitching in to assist when it comes to solutions selling, learning, and development. As a member of our systems engineering team, you are motivated by a solutions-focused sales environment and find fulfillment in working with clients to resolve incredibly complex cyberthreats. Our Commitment We're trailblazers that dream big, take risks, and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability.
If you require assistance or accommodation due to a disability or special need, please contact us at ***********************************. Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.
          

Principal Engineer SW Development

 Cache   
Principal Engineer SW Development - Verizon - Alpharetta, GA 30022. What you'll be doing... Verizon Connect is guiding a connected world on the go by automating, optimizing and revolutionizing the way people, vehicles and things move through the world. Our full suite of industry-defining solutions and services put innovation, automation and connected data to work for customers and help them be safer, more efficient and more productive. With more than 3,500 dedicated employees in 15 countries, we deliver leading mobile technology platforms and solutions. As a SaaS leader, we know our talent is the most important component to our success. We hire top talent and empower them to do their best work. As a division of Verizon, we combine the fun and excitement of a start-up environment with the resources, operational excellence, and brand recognition of an established tech giant. Be a part of the rapidly growing Connected Car SaaS industry, as you work alongside some of the sharpest minds in SaaS software and mobile app development. Our engineering teams take advantage of the latest technologies to design, build and test solutions that are literally transforming connected vehicles around the world. We are looking for a principal software engineer to help tackle the exciting technical opportunities ahead. As our Principal Engineer, you will work with cross-functional teams in the design and development of SaaS. Design, develop, and maintain the IoT gateway for fleet. Read through user stories in Jira and understand your commitments. Practice lean software development in day-to-day development. Write clean, fast code to implement new features. Make sure changed code has enough unit-test coverage. Deploy a stack into AWS with your code via the CI/CD stack. Do a code review or respond to yours. Coach junior staff engineers.
Follow, and over time set, best-in-class engineering practices by taking a leading role in defining coding standards to ensure a higher quality product. Assist the application support team with production issues. What we're looking for... You'll need to have: Bachelor's degree in Computer Science, Computer Engineering or related discipline, or four or more years of work experience. Six or more years of relevant work experience. Six or more years of development experience in .NET, C#, SQL Server, *******, Node.js. Experience in development with .NET, MVC, C#, SQL, *******, *******, WCF, Entity Framework or Node.js. Experience in the AWS stack and microservices. Even better if you have: Master's degree in Computer Science, Computer Engineering or related technical discipline. Ability to demonstrate delivery of major projects with a focus on quality and productivity in a continuous integration/delivery environment with Amazon ECS or Kubernetes. Good communication skills (including active listening and comprehending requirements). Ability to work independently with limited supervision. Knowledge of DevOps and cloud-native tooling (Docker, Kubernetes, AWS S3/Lambda/Gateway/CDN, OpenShift). Experience in debugging with an APM stack such as AppDynamics or Dynatrace. When you join Verizon... You'll have the power to go beyond - doing the work that's transforming how people, businesses and things connect with each other. Not only do we provide the fastest and most reliable network for our customers, but we were first to 5G - a quantum leap in connectivity. Our connected solutions are making communities stronger and enabling energy efficiency. Here, you'll have the ability to make an impact and create positive change. Whether you think in code, words, pictures or numbers, join our team of the best and brightest. We offer great pay, amazing benefits and opportunity to learn and grow in every role. Together we'll go far.
Equal Employment Opportunity We're proud to be an equal opportunity employer - and celebrate our employees' differences, including race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, and Veteran status. Different makes us better. Verizon
          

Lead Software Development Engineer- Mobile Applications

 Cache   
**Job Location:** United States : North Carolina : Cary **Role Value Proposition:** At MetLife we are seeking a highly motivated Lead Software Engineer / Developer to lead the technical delivery of a complex and highly visible Global Sales and Servicing Platform. The ideal candidate will possess prior technical leadership experience in the insurance industry and strong mobile app development skills. You will guide team members toward time-boxed delivery of the Sales and Servicing platform. **Key Responsibilities:** + Provides technical expertise to a project team that's building the Mobile Sales and Servicing Platform. + Performs proofs of concept on newer technologies and provides recommendations for different technologies. + Collaborates with the product line architect to review solution design and roadmap. + Takes a leading role in delivery of the Sales and Servicing platform in different countries in collaboration with the product team. + Strong organizational and problem-solving skills, attention to detail and the ability to manage multiple project assignments with tight deadlines. + Provides content and reviews documentation for releases or new features. **Essential Business Experience and Technical Skills:** **Required:** + Bachelor's degree with a minimum of 8 years of software development experience and a minimum of 4 years of hands-on mobile app development experience. + Must possess strong initiative and a get-it-done attitude. + Passion for coding, learning and adopting newer technology. + Mastery of at least one of the following two native mobile development frameworks: Objective-C/Swift for iOS or Java for Android. + Application performance optimization know-how. + Experience in Azure PaaS with components like AKS, Cosmos, KeyVault, Storage etc. + Experience developing and publishing applications for iOS and/or Android. + History of building high-level user interfaces using rapid prototyping methodologies.
+ Designing and developing RESTful APIs. + Experience creating technical system and design documents. + Knowledge of NoSQL databases (e.g., MongoDB). + Continuous Integration and Continuous Delivery (CI/CD) experience in an Agile environment. **Preferred:** + IBM MobileFirst Platform knowledge. + Experience in container and orchestration technologies like Docker, Kubernetes, Swarm etc. **At MetLife, we're leading the global transformation of an industry we've long defined. United in purpose, diverse in perspective, we're dedicated to making a difference in the lives of our customers.** MetLife is a proud equal opportunity/affirmative action employer committed to attracting, retaining, and maximizing the performance of a diverse and inclusive workforce. It is MetLife's policy to ensure equal employment opportunity without discrimination or harassment based on race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity or expression, age, disability, national origin, marital or domestic/civil partnership status, genetic information, citizenship status, uniformed service member or veteran status, or any other characteristic protected by law. MetLife maintains a drug-free workplace. **For immediate consideration, click the button. You will be directed to complete an on-line profile. Upon completion, you will receive an automated confirmation email verifying you have successfully applied to the job.** Requisition #: ****** Posted: 2019-10-31 Expires: 2019-12-01
          

Infrastructure and DevOps Engineer (IaC)

 Cache   
Infrastructure and DevOps Engineer (IaC) - Fiserv, Inc. - Lincoln, NE. What does a great Infrastructure and DevOps Engineer (IaC) do? As an Infrastructure and DevOps Engineer (IaC), you will establish yourself as a trusted technologist by helping ensure overall satisfaction with Fiserv solutions, which serve thousands of banks while processing trillions of dollars and billions of transactions a year. Product Development and Product Management will rely on you to provide performance test environment and test data management for the solutions that Fiserv provides. It is crucial that you become an advocate for our internal clients, our operating model and technology strategies, as you will be responsible for engineering automated methods of performance testing requirements. You will represent the Bank Solutions business unit as a designated IaC engineer for assigned projects. You should be comfortable creating and reviewing complex code and documentation across many disparate technologies, with a strong knowledge of cloud-enabled development and an aspiration to quickly understand complex proprietary infrastructure architecture. Your responsibility is to develop and maintain automated methods of managing performance test environments and test data for supported applications. This role will work closely with project teams to ensure performance testing requirements are completed and the results meet client expectations. This role must manage diverse system platforms and product lines within Fiserv and be able to adapt quickly to unfamiliar technologies. It is critical that you continually learn and maintain current knowledge of Fiserv Solutions and industry-standard development practices and technologies to provide meaningful and efficient performance test results of various Fiserv solutions.
At times, you will be called upon to prioritize and drive resolution on escalated incidents or problems that occur in production. You will be a key resource to drive efficiencies through process improvements while developing repeatable automated performance testing methodologies. Bank Solutions, and most importantly, our clients, expect high-quality applications. Success in your efforts will accordingly foster customer loyalty, and lead to improved client retention and overall client satisfaction with Fiserv. A key characteristic in the success of this role will be technical aptitude. The creation of new, and the maintenance of existing, Fiserv automated performance testing toolsets requires programming experience. It is critical that you know how to leverage Ansible, Jenkins, Docker and Kubernetes technologies in developing infrastructure-as-code playbooks that drive optimal performance test environment and test data management. You must also exercise strong problem-solving skills with limited assistance. This demanding yet exciting role allows you to establish and build relationships across Fiserv to facilitate timely, cost-effective and efficient testing practices of Fiserv solutions. You will become a Subject Matter Expert for the products assigned to you and a go-to technical resource for standard quality assurance best practices. This will bring a great sense of accomplishment for you personally in achieving the goals of Fiserv and most importantly our clients.
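The infrastructure-as-code duties above center on keeping performance-test environments reproducible from version-controlled definitions. As a rough illustration (not Fiserv's actual tooling; the group names, hosts, and variables below are hypothetical), a small Python helper can render an Ansible-style INI inventory for a performance-test environment from a plain dict:

```python
# Minimal sketch: render an Ansible-style INI inventory for a
# performance-test environment. All group/host/variable names are
# hypothetical examples, not real infrastructure.
from io import StringIO


def render_inventory(env: dict) -> str:
    """Turn {"group": [("host", {"var": "val"}), ...]} into INI inventory text."""
    out = StringIO()
    for group, hosts in env.items():
        out.write(f"[{group}]\n")
        for host, hostvars in hosts:
            vars_part = " ".join(f"{k}={v}" for k, v in hostvars.items())
            out.write(f"{host} {vars_part}".rstrip() + "\n")
        out.write("\n")  # blank line between groups
    return out.getvalue()


perf_env = {
    "app_servers": [("perf-app-01", {"ansible_user": "svc_perf"})],
    "db_servers": [("perf-db-01", {"mysql_buffer_pool": "8G"})],
}
print(render_inventory(perf_env))
```

An `ansible-playbook` run could then consume the rendered file, so the performance-test topology stays in source control alongside the playbooks.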
Job Related Experience: Experience and strong understanding of Infrastructure-as-Code (IaC) and DevOps. Experience designing, testing and automating using Ansible, Jenkins, Docker and Kubernetes. Troubleshooting network, OS, and application issues for complex distributed cloud-based applications. Successful track record of pipeline automation, analytical thinking, and problem solving. Must be able to understand the needs of businesses and develop solutions that cater to such needs. Work independently and prioritize time and projects appropriately. Think outside of the box to come up with creative solutions to ensure efficient and timely performance assurance. Handle multiple projects effectively while continually delivering exceptional quality. Basic Requirements: High School diploma and 7+ years of professional relevant work experience. Assist with the creation of the pipeline processes to ensure automation and performance testing are present. Evaluate and improve existing pipeline processes to enable automation, improve testing, and extend functionality. Assist with performance testing of existing and new products. Design tests for various application configuration scenarios that demonstrate performance response/capacity and quantify/analyze the performance results. Provide resource requirements for a given capacity or load scenario based on test results or production analysis.
Outline recommendations for performance-oriented deployment strategies and assist in the definition of application configuration standards and efficiencies. Preferred Skills, Experience, and Education: Bachelor's Degree in Computer Science or related technical field and 3+ years of relevant work experience. Experience designing and engineering solutions that use current technologies including C#, MySQL, NoSQL, and Redis. Experience with project and portfolio management tools, preferably ServiceNow, Jira, and ***************. Experience in application profiling and performance optimization. Experience in performance trending to be used for troubleshooting and forecasting of required resources. Experience in using system-level monitors such as Perfmon. Experience in network analysis using Wireshark. Experience in the complete software development lifecycle including associated deployment methodologies, QA processes, and performance tuning efforts. Who We Are: In a world moving faster than ever before, Fiserv helps clients deliver solutions in step with the way people live and work today - financial services at the speed of life. With 24,000 associates, we help more than 12,000 clients worldwide create and deliver solutions to enable today's consumer to move and manage money with ease, speed and convenience. Our Aspiration is to move money and information in a way that moves the world. As a FORTUNE 500 company and one of FORTUNE Magazine's World's Most Admired Companies for the sixth consecutive year, we are committed to excellence and purposeful innovation. In this role you will be aligned with solutions for our banking customers. We deliver comprehensive bank platforms and value-added products and services for community, mid-tier, and large financial institutions. We offer flexible technology solutions that enable financial institutions to quickly align to customers' expectations.
With a modular approach to delivery, financial institutions can preserve platform investments while delivering both updated functionality and a consistent experience across channels. From understanding consumer needs based on the latest research to analytics and advisory services that help identify growth opportunities from accounts, payments and industry data, we help clients access and act on data to create better outcomes. Fiserv bank platforms - Cleartouch, DNA, Precision, Premier, Signature - enable banks to efficiently manage a wide range of activities such as account opening, deposits, withdrawals, loans, customer information management, and general ledger and accounting tasks. Each Fiserv bank platform has unique capabilities, but they all help our clients improve customer service and streamline their back-office operations. We welcome and encourage diversity in our workforce. Fiserv is an Equal Opportunity Employer/Disability/Vet. Visit ********************************* for more information. Fiserv
          

DevOps Engineer

 Cache   
DevOps Engineer - Verizon - Alpharetta, GA 30022. What you'll be doing... At Verizon Connect (VZConnect), we guide a connected world on the go. We're in it to win it. Today we're the #1 global provider of fleet management solutions for both enterprise and small/medium businesses. Our consumer products, like Hum, create a more connected ride with vehicle diagnostics, emergency assistance, and WiFi. And to top it off, our partnerships with major car manufacturers help us care for more drivers with our connected technologies. As a top 20 SaaS leader, we know our talent is the most important component to our success. We hire top talent and empower them to do their best work. As a division of Verizon, we combine the fun and excitement of a start-up environment with the resources, operational excellence, and brand recognition of an established tech giant. Be a part of the rapidly growing Connected Car SaaS industry, as you work alongside some of the sharpest minds in SaaS software and mobile app development. The Continuous Integration/Delivery Engineer will be the cultural change agent, the custodian and the key driver in facilitating adoption of the CI/CD model, and will partner with build engineers, application and infrastructure teams, sponsors and stakeholders to manage seamless operations of all development and runtime platforms. This role will be part of the continuous delivery engineering practice and of our Platform Lean Delivery team, which is accountable for the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of its services. You will also work closely with Software Engineers to help them deploy their applications to various systems, including test and production systems. We are looking for an experienced and enthusiastic Senior DevOps Engineer.
As our new Senior DevOps Engineer, you will be in charge of the specification and documentation of CI/CD tools, and will work with build engineers to accommodate self-serve deployments. In addition, you will be developing new features and writing scripts for automation using Ansible/Puppet. Lead evaluation, design, and implementation of a container orchestration platform. Automation of systems provisioning/management and application deployment processes. Design, deploy, and maintain standards, best practices, and processes for production support, incident response and root cause analysis, capacity and performance management, health and security monitoring, disaster recovery, application building, packaging, configuration management, QA, and deployment. Design and implementation of Service Discovery/Registration systems with integration with software and hardware load balancers. Work with engineers and product management teams across multiple organizations to advise and influence architecture and technical strategies. Develop, and promote the development of, architectural/technical documentation, whitepapers, presentations, and proposals. What we're looking for... You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Three or more years of experience in one or more of the following programming/scripting languages - Java, Python, Groovy, Bash. Four or more years of experience with containerization technologies such as Kubernetes and/or Docker; with configuration management tools like Chef, Puppet and/or Ansible; and with CloudFormation and/or Terraform. Even better if you have: A degree. Bachelor's degree in Computer Science.
Knowledge of SonarQube, Jenkins, Twistlock, Artifactory, and cloud-based CI/CD PaaS such as AWS or Azure DevOps. Experience with EC2, VPC, S3, Glacier, ELB, EBS, RDS, Route 53, CloudFront, CloudWatch, CloudTrail, and more. Experience with DevOps concepts, code deployment processes, microservices, serverless architectures, etc. Five or more years of Linux experience with hands-on skills for administrative tasks. Experience with CI/CD systems such as Jenkins and experience in building release pipelines. Eight or more years of experience in the relevant field. Broad technical background in server, storage, network, virtualization, cloud, and DevOps areas. A highly energetic focus on constant learning. Experience automating things with shell scripts, the AWS CLI, or other tools. When you join Verizon... You'll be doing work that matters alongside other talented people, transforming the way people, businesses and things connect with each other. Beyond powering America's fastest and most reliable network, we're leading the way in broadband, cloud and security solutions, Internet of Things, and innovating in areas such as video entertainment. Of course, we will offer you great pay and benefits, but we're about more than that. Verizon is a place where you can craft your own path to greatness. Whether you think in code, words, pictures or numbers, find your future at Verizon. Equal Employment Opportunity We're proud to be an equal opportunity employer - and celebrate our employees' differences, including race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, and Veteran status. Different makes us better. Verizon
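The service discovery/registration work mentioned in the responsibilities above usually reduces to a registry of service instances kept alive by heartbeats. A minimal in-memory sketch, assuming a TTL-based health model (production systems such as Consul or Eureka persist and replicate this state; all names here are illustrative):

```python
# Sketch of TTL-based service registration: instances register themselves,
# periodically heartbeat, and are considered unhealthy once the TTL lapses.
import time


class ServiceRegistry:
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._instances = {}  # (service, address) -> last heartbeat timestamp

    def register(self, service: str, address: str) -> None:
        self._instances[(service, address)] = time.monotonic()

    heartbeat = register  # re-registering simply refreshes the heartbeat

    def healthy_instances(self, service: str) -> list:
        """Addresses whose last heartbeat is within the TTL window."""
        now = time.monotonic()
        return [addr for (svc, addr), seen in self._instances.items()
                if svc == service and now - seen <= self.ttl]


registry = ServiceRegistry(ttl_seconds=30.0)
registry.register("billing", "10.0.0.5:8080")
print(registry.healthy_instances("billing"))  # prints ['10.0.0.5:8080']
```

A load balancer integration would then poll `healthy_instances` (or subscribe to changes) to rebuild its backend pool.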
          

API Integration Engineer

 Cache   
OVERVIEW

Are you a problem solver, explorer, and knowledge seeker, always asking, "What if?"



If so, you may be the new team member we're looking for. Because at SAS, your curiosity matters whether you're developing algorithms, creating customer experiences, or answering critical questions. Curiosity is our code, and the opportunities here are endless.



What we do

We're the leader in analytics. Through our software and services, we inspire customers around the world to transform data into intelligence. Our curiosity fuels innovation, pushing boundaries, challenging the status quo and changing the way we live.



What you'll do

As an API Integration Engineer at SAS, you will collaborate with product managers, technical leads, developers, documentation team members, developer advocates, architects, and other stakeholders to determine tool and automation needs related to the design, development and publishing of APIs across all of SAS.



You will:

* Deliver, support, maintain, and continuously improve test and deployment automation that leverages OpenAPI 2/3 specifications across the entire API lifecycle and within our DevOps pipeline, including but not limited to:

* API Design tooling (openapi-gui, apicurio, stoplight studio).

* API Standards linting (spectral).

* Contract testing (dredd).

* Forward/backward compatibility testing.

* Governance adherence verification.

* Documentation rendering (slate/widdershins, swagger-ui, Redoc).

* Documentation deployment automation.

* Client and server SDK generation tools (openapi-generator, swagger-codegen).

* API conversion tools (REST to GraphQL like openapi-to-graphql or gRPC/protobufs like openapi2proto, OpenAPI 2 to 3 like swagger2openapi).

* Assess and make recommendations about the use of existing open source tooling that supports the API lifecycle.

* Solicit feedback and requirements from stakeholders and prioritize new functionality aligned with overall business needs.

* Regularly communicate work progress with management, identifying issues early and resolving them quickly to avoid or minimize impacts to projects.

* Anticipate time needed to complete projects and assist in project estimates/scheduling.

* Update job knowledge by independent and structured research.
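The forward/backward compatibility testing in the list above can be approximated by diffing two versions of an OpenAPI document: any operation present in the old spec but missing from the new one is a breaking change for existing clients. A minimal sketch, using illustrative spec fragments (not actual SAS APIs):

```python
# Sketch of a backward-compatibility check over two OpenAPI documents
# (already parsed into dicts). Flags operations the new version removed.
# The /reports and /jobs paths below are made-up examples.
def removed_operations(old_spec: dict, new_spec: dict) -> list:
    """Return (path, method) pairs present in old_spec but gone in new_spec."""
    removed = []
    for path, ops in old_spec.get("paths", {}).items():
        for method in ops:
            if method not in new_spec.get("paths", {}).get(path, {}):
                removed.append((path, method))
    return sorted(removed)


v1 = {"paths": {"/reports": {"get": {}, "post": {}}, "/jobs": {"get": {}}}}
v2 = {"paths": {"/reports": {"get": {}}, "/jobs": {"get": {}}}}
print(removed_operations(v1, v2))  # prints [('/reports', 'post')]
```

Wired into a CI gate, a non-empty result would fail the pipeline; a fuller check would also compare parameters, request bodies, and response schemas.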



What we're looking for

* You're curious, passionate, authentic, and accountable. These are our values and influence everything we do.

* You have a bachelor's degree in Computer Science, Engineering, or a related quantitative field.

* Experience with:

* Linux operating system, commands, and shell programming tools.

* Scripting languages and automation techniques for testing and deployment.

* Writing or leveraging OpenAPI (Swagger) 2.0 documentation.

* Familiarity with OpenAPI 3.x.

* Understanding of the API lifecycle and awareness of common existing tooling across that ecosystem (visit tools/ in a web browser).

* Understanding of DevOps principles and commonly used tooling in DevOps pipelines.



The nice-to-haves

* Master's degree or higher in Computer Science, Statistics, or related field.

* Experience integrating / interacting with SAS programmatically through one or more types of APIs found on https://developer.sas.com/home.html.

* Familiarity with usage and programming for cloud platforms such as AWS, Google Cloud, and Azure.

* Understanding of RESTful API principles.

* Experience with:

* Docker containers and/or Kubernetes.

* API Management Solutions (Apigee, API Connect).

* User and developer experience design (UX and DX).

* Developing and/or consuming HTTP APIs, especially REST.

* Contributing to open source projects.

* HTML, CSS, JavaScript to build, support, and maintain internal dashboards.
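As a hedged sketch of the RESTful principles listed above, the mapping from HTTP verbs to resource operations can be illustrated with a tiny in-memory resource store (the paths and payloads are invented):

```python
# Minimal in-memory "REST" resource store: HTTP-style verbs map to
# create/read/delete operations keyed by resource path.
resources = {}

def handle(method, path, body=None):
    if method == "PUT":
        resources[path] = body           # create or replace the resource
        return 201, body
    if method == "GET":
        if path in resources:
            return 200, resources[path]  # read the resource
        return 404, None
    if method == "DELETE":
        return 200, resources.pop(path, None)
    return 405, None                     # method not allowed

status, _ = handle("PUT", "/dashboards/1", {"name": "builds"})
print(status)  # 201
status, doc = handle("GET", "/dashboards/1")
print(status, doc["name"])  # 200 builds
```

The same uniform-interface idea underlies real HTTP APIs; here it is reduced to a dictionary for clarity.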



Other knowledge, skills, and abilities

* Professional software development experience.

* Familiarity with Agile methodologies.



Why SAS

* We love living the #SASlife and believe that happy, healthy people have a passion for life, and bring that energy to work. No matter what your specialty or where you are in the world, your unique contributions will make a difference.

* Our multi-dimensional culture blends our different backgrounds, experiences, and perspectives. Here, it isn't about fitting into our culture, it's about adding to it - and we can't wait to see what you'll bring.

#LI-TP1



SAS looks not only for the right skills, but also a fit to our core values. We seek colleagues who will contribute to the unique values that make SAS such a great place to work. We look for the total candidate: technical skills, a values fit, relationship skills, problem solving, clear communication and, of course, innovation. Candidates must be ready to make an impact.



Additional Information:

To qualify, applicants must be legally authorized to work in the United States, and should not require, now or in the future, sponsorship for employment visa status. SAS is an equal opportunity employer. All qualified applicants are considered for employment without regard to race, color, religion, gender, sexual orientation, gender identity, age, national origin, disability status, protected veteran status or any other characteristic protected by law. Read more: Equal Employment Opportunity is the Law. Also view the supplement EEO is the Law, and the notice Pay Transparency



Equivalent combination of education, training and experience may be considered in place of the above qualifications. The level of this position will be determined based on the applicant's education, skills and experience. Resumes may be considered in the order they are received. SAS employees performing certain job functions may require access to technology or software subject to export or import regulations. To comply with these regulations, SAS may obtain nationality or citizenship information from applicants for employment. SAS collects this information solely for trade law compliance purposes and does not use it to discriminate unfairly in the hiring process.



Want to stay up to date with life at SAS, products and jobs? Follow us on LinkedIn.
          

Software Engineer

Job ID: 76434

Required Travel: No Travel

Managerial - No
Who are we?
If you're a smartphone user, then you are part of an ever more connected and digital world. At Amdocs, we are leading the digital revolution into the future. From virtualized telecommunications networks, Big Data and Internet of Things to mobile financial services, billing and operational support systems, we are continually evolving our business to help you become more connected. We make sure that when you watch a video on YouTube, message friends on Snapchat or send your images on Instagram, you get a great service anytime, anywhere, and on any device. We are at the heart of the telecommunications industry working with giants such as AT&T, Vodafone, Telstra and Telefonica, helping them create an amazing new world for you where technology is being used in new ways every single day.
In one sentence
Responsible for software systems design, development, modification, debugging and maintenance, as well as software solutions integration, deployment and production support.
What will your job look like?





  • You will design, develop, modify, debug and maintain software code according to functional, non-functional and technical design specifications.

  • You will deploy and integrate Amdocs solutions into the customer telecom network, working directly with customers and 3rd party vendors
  • You will support customer functional and non-functional testing activities, providing technical expertise on Amdocs products and telco standards (LTE, 3G, SS7, Diameter)
  • You will work closely with off-shore and on-shore Amdocs teams while troubleshooting and investigating issues
  • You will support production systems for critical customers 24/7 in weekly shifts (on-call duty)
  • You will deliver training sessions and functional overviews of Amdocs products to customers and internal teams.
  • You will investigate issues by reviewing/debugging code, provide fixes and workarounds, and review changes for operability to maintain existing software solutions.
  • You will work within a team, collaborate and add value through participation in peer code reviews, provide comments and suggestions, work with cross functional teams to achieve goals.
  • You will follow Amdocs software engineering standards and the applicable software development methodology and release processes to ensure code is maintainable, scalable, and supportable, and demo the software products to stakeholders
  • You will assume technical accountability for your specific work products within an application and provide technical support during solution design for new requirements.
  • You will be encouraged to actively look for continuous improvement and efficiency in all assigned tasks.



    All you need is...





    • Bachelor's degree in Science/IT/Computing or equivalent
    • 3+ years of experience writing software in Java
    • 2-3 years of experience with Unix/Linux

    • Good knowledge in Object Oriented Design and development
    • Knowledge of telecom domain, network and protocols (Diameter and SS7 is a must)
    • Experience in maintaining complex distributed systems
    • Experience in DB operations and troubleshooting from application perspective
    • Application Server maintenance and troubleshooting experience

    • SLEE technology knowledge

    • OpenStack, Kubernetes, Docker, VMWare knowledge

    • Knowledge and understanding of main principles of online charging systems

    • Experience in development of five-nines (99.999% availability) solutions

    • JVM performance tuning and optimization to support telco grade application
    • Knowledge of main principles of 5G, IoT is a plus

    • Experience with troubleshooting techniques and tools for telco applications and protocols
    • Customer facing experience

    • Be able to work independently in solving / investigating issues
    • Be able to advocate for Amdocs systems and solutions while working on issue resolution with other application teams and vendors, collecting and providing evidence based on specifications, industry standards, traces, and platform logs.
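To illustrate the JVM tuning item above, a starting set of HotSpot flags for a latency-sensitive, telco-grade service might look like the following (heap sizes and the jar name are hypothetical, not an Amdocs configuration):

```shell
# Fixed heap to avoid resize pauses; G1 with a pause-time goal; GC logging.
java -Xms4g -Xmx4g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=50 \
     -Xlog:gc*:file=gc.log \
     -jar charging-service.jar
```

Actual values would be derived from load testing against the service's latency targets.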



      Why you will love this job:





      • You will be part of a talented team, working on challenging projects

      • You will have the opportunity to learn the industry's most recent trends

      • You will be challenged to develop non-standard solutions for continuously evolving environments and standards

      • You will have the opportunity for personal growth

      • You will have the opportunity to work with the industry's most advanced technologies




        Nearest Major Market: Champaign

        Nearest Secondary Market: Urbana
          

Senior Red Hat Delivery Manager

Perficient delivers mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too. We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled. Perficient currently has a career opportunity for a Sr. Red Hat Delivery Manager. Job Overview: A Delivery Manager is expected to be knowledgeable in Red Hat technologies. This resource may or may not have a programming background, but will have expert infrastructure architecture, client presales/presentation, team management, and thought leadership skills. You will provide best-fit architectural solutions for one or more projects; you will assist in defining the scope and sizing of work; and you will anchor proof-of-concept developments. You will provide solution architecture for the business problem, platform integration with third-party services, and design and development of complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in sales and various pursuits focused on our clients' business needs. You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading-edge solutions and consultative and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and will mentor delivery team members.
Responsibilities * Be part of the Sales team supporting Red Hat initiatives, providing technical credibility to our customers; master Red Hat OpenShift Container Platform and supporting technologies to assist in the sales of our offerings * Scope, design, develop, and present proofs of concept for Red Hat OpenShift Container Platform and supporting technologies * Conduct deep dive sessions and workshops to coach customers using Red Hat OpenShift Container Platform and supporting technologies * Provide feedback to product management and engineering teams on the direction of our offerings and customer applicability of features * Assist sales teams in answering technical questions, possibly in the form of requests for proposals (RFPs) and requests for information (RFIs) * Form relationships with the technical associates of our customers to identify new opportunities * Project and solution estimation and team structure definition. * Develop proof-of-concept projects to validate new architectures and solutions. * Engage with business stakeholders to understand required capabilities, integrating business knowledge with technical solutions. * Engage with Technical Architects and technical staff to determine the most appropriate technical strategy and designs to meet business needs.
Qualifications * Practical experience with Linux container and container clustering technologies like Docker, Kubernetes, Rocket, and the Open Container Initiative (OCI) project * 5+ years of experience working in enterprise application architecture; development skills * At least 3 years of experience in a professional services company, consulting firm, or agency * Container-as-a-Service (CaaS) and Platform-as-a-Service (PaaS) experience using Red Hat OpenShift, Pivotal Cloud Foundry (PCF), Docker EE, Mesosphere, or IBM Bluemix * Deep understanding of multi-tiered architectures and microservices * Ability to engage in detailed conversations with customers of all levels * Practical experience with Java development technologies like Spring Boot, WildFly Swarm, or JEE (Red Hat JBoss Enterprise Application Platform, WildFly, Oracle WebLogic, or IBM WebSphere) * Familiarity with Java development frameworks like Spring, Netflix OSS, Eclipse Vert.x, or Play and other technologies like Node.js, Ruby, PHP, Go, or .NET development * Practical experience with application build automation tools like Apache Maven, Gradle, Jenkins, and Git * Ability to deliver technical and non-technical presentations * Willingness to travel up to 50% * Experience working on multiple concurrent projects. * Excellent problem-solving skills. * Be independent and self-driven. * Bachelor's degree in Computer Science or a related field. Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient: Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions. Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index. Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
          

Senior Software Developer

Senior Software Developer - Palco - Little Rock, AR 72223. The Sr. Software Developer will primarily be involved in the design, development, and maintenance of web applications and will work as part of a small development team. This position also requires understanding core business requirements from staff and clients in order to improve, expand, and enhance the functionality of existing applications. Duties and Responsibilities: Designs, develops, and implements new applications to meet the business needs of Palco staff and clients. Collaborates with staff to create cohesive and effective software solutions. Works quickly and efficiently to solve issues within existing software solutions. Keeps up with current technologies to provide industry-standard solutions. Updates and maintains existing software/web applications. Assists with the generation of reports from the SQL database. Participates in requirement discussions and meetings. Effectively presents information to top management, public groups, and/or the board of directors. Other duties as assigned by management. Occasional travel may be required. Skills and Attributes: At least 10 years of development and integration experience with emphasis on JavaScript/React.js, HTML/XML, REST API/JSON, Elixir and ********** programming languages. At least 5 years of experience with relational databases, preferably MySQL. Knowledge of working in Windows Server environments and in Google Cloud Platform. Working knowledge of JIRA, GitHub, and Docker is required. Knowledge of or experience with Google Kubernetes Engine, Compute Engine, and App Engine is a plus. Understands all aspects of the software development life cycle (analysis, design, code, implement, test, document, etc.) and various development methodologies. Ability to design intuitive UI/UX for use by people with little to no computer skills. Expert troubleshooting and problem-solving skills.
Detail-oriented to the highest degree, with strong self-organization and time-management skills. Demonstrates good judgment in selecting methods and techniques for creating effective solutions. Communicates effectively and professionally with a range of individuals with varying backgrounds, abilities, and communication styles via written and oral formats. Education and Experience Required: Bachelor's degree in computer science, computer engineering, or a similar technical discipline. Minimum of 10 years of related hands-on experience. Experience with project management a plus. Supervisory experience a plus. Palco, Inc. is an Equal Employment Opportunity (EEO) employer and does not discriminate in any employer/employee relations based on race, color, religion, sex, sexual orientation, gender identity and expression, national origin, age, marital status, disability, veteran status, genetic information or any other basis.
          

Senior Software Engineer

JOB DESCRIPTION:

Broadcom creates software that fuels transformation for companies and enables them to seize the opportunities of the application economy. From planning to DevOps to security to systems management, our solutions power innovation and drive competitive advantage for businesses everywhere.

As a Senior Software Engineer you will be responsible for working closely with product management and architects to design, develop, and test highly complex and sophisticated software systems and applications. You will provide high-level research and analysis related to software design and development and solve complex problems for a product or family of products in the area of CA Application Performance Management.

RESPONSIBILITIES

Have a passion for designing and coding complex modules that meet functional and business requirements on schedule and within budget

Perform unit/module testing of software to find errors and confirm programs meet specifications.

Participate in design and code reviews with other developers.

Fix bugs, add enhancements and ensure that the product meets all functional and non-functional requirements

Assist in strategic research and design as directed

Evaluate impact of software performance and recommend changes to software design team.

Write and maintain documentation to describe program development, logic, coding, testing, changes, and corrections.

Escalate issues to management as appropriate.

Mentor, train, develop, and serve as a knowledge resource for less experienced Software Engineers

REQUIREMENTS

Bachelor's Degree or global equivalent in Computer Science or related field

7+ years of professional experience with application development

Strong coding skills in Java

Strong expertise in Object Oriented Analysis & Design, Design Patterns

Hands-on experience with web technologies such as XML, JSON, RESTful APIs, and SOAP

Strong working knowledge of Linux/Unix and Windows, Networking concepts and Databases

Practical experience with NoSQL and Big Data technologies such as RocksDB, ElasticSearch, Cassandra, Kafka, Hadoop HDFS

Working knowledge of Docker, Kubernetes or OpenShift

Knowledge in the Application Performance Management domain preferred

Experience with Agile Software Development (Scrum and/or Kanban)

Curious, a fast learner, and fast to respond

A team player with good social skills.
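As a small, hedged illustration of the JSON handling listed in the requirements above (the field names and payload shape are invented, not a CA APM format):

```python
import json

# A JSON payload such as a monitoring API might return (shape is hypothetical).
payload = '{"service": "apm-agent", "metrics": {"latency_ms": 12, "errors": 0}}'

# Parse, inspect, and derive a simple health flag from the metrics.
data = json.loads(payload)
healthy = data["metrics"]["errors"] == 0

# Round-trip a summary back to JSON for a downstream RESTful call.
summary = json.dumps({"service": data["service"], "healthy": healthy})
print(summary)
```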

If you want to fulfill your potential, be acknowledged for your achievements, and be given autonomy to make decisions for your business and customers; if you want to work with a company that respects you as an individual - recognizing both your needs at work and your responsibilities outside of it - then Broadcom is where you belong.

At Broadcom, your passion and expertise can directly impact the business, and you'll help offer our customers practical approaches to delivering new, innovative services and value through IT.

IF YOU ARE LOCATED OUTSIDE USA, PLEASE BE SURE TO FILL OUT A HOME ADDRESS AS THIS WILL BE USED FOR FUTURE CORRESPONDENCE.

Broadcom Inc. is committed to creating a diverse work environment and is proud to be an equal opportunity employer.
          

Cloud Platform Architect

Cloud Architect
( Jersey City, NJ )
The Sky Team within Core Engineering is responsible for enabling the use of public cloud services across the firm. As part of your role you will be partnering with core and business-aligned software engineering and SRE teams to deliver secure, resilient, and scalable cloud native solutions. Additionally, a key responsibility will be researching, architecting, and securing new cloud services, solutions, and features for general use by our Engineering organization. We are in the growth stage of adopting cloud native principles for our applications and you will be directly helping to architect and engineer the technology strategies that will give our business a competitive edge!
RESPONSIBILITIES AND QUALIFICATIONS

HOW YOU WILL FULFILL YOUR POTENTIAL
--- Collaborate with business software engineering teams on solving business problems via the architecture and engineering of cloud native applications
--- Engage with the larger Core Engineering organization to create and deliver usable, safe cloud native engineering patterns with associated guardrails and operational practices
--- Partner with our information security teams on the identification, analysis, and mitigation of risks related to cloud services
--- Create, communicate, and promote best practices for public cloud native development across the firm

SKILLS AND EXPERIENCE WE ARE LOOKING FOR
--- Experience architecting, designing, administering, or developing applications in Amazon Web Services, Google Cloud Platform, or Microsoft Azure (SaaS, PaaS, IaaS)
--- Ability to communicate technical concepts effectively, both written and orally, as well as the interpersonal skills required to collaborate effectively with colleagues across diverse technology teams
--- Engineer secure applications and solutions in a cloud native environment
--- Proficiency in designing, developing, and testing software in one or more of Python, Java, Groovy, or golang; open to using and learning multiple languages
--- Ability to reason about performance, security, and process interactions in complex distributed systems
--- Ability to understand and effectively debug both new and existing solutions

Preferred Qualifications
--- Familiarity with cloud network architectures and the integration with hybrid cloud networking
--- Experience with serverless platforms such as AWS Lambda and Google Cloud Functions
--- Experience with infrastructure and configuration as code solutions, including Terraform, Ansible, or cloud-init
--- Experience with container orchestration and service mesh architectures, including Kubernetes/Istio, Pivotal Cloud Foundry, or Consul
--- Familiarity with Linux OS engineering, configuration management, and troubleshooting
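To ground the infrastructure-as-code item above, a minimal Terraform sketch might look like the following (the provider region and bucket name are placeholders, not firm standards):

```hcl
# Hypothetical example: a single versioned S3 bucket declared as code.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-team-artifacts"
}

# Versioning is configured as a separate resource in AWS provider v4+.
resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

Declaring resources this way lets `terraform plan` show the exact change before anything is applied.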
          

Microsoft SQL Server 2019 15.0.2000.5 (RUS/ENG)


SQL Server 2019 delivers cutting-edge security and compliance capabilities, industry-leading performance, high availability, and advanced analytics for all your data, and now also adds support for big data workloads.

Benefits of SQL Server 2019:
Analyze any data. SQL Server is a hub for data integration. With the combined power of SQL Server and Spark, you can transform and analyze both structured and unstructured data.
Choice of language and platform. Build modern applications with innovative features using the platform and language of your choice. Windows, Linux, and containers are now supported.
Industry-leading performance. Take advantage of outstanding scalability, performance, and availability for mission-critical intelligent applications, data warehouses, and data lakes.
Advanced security features. Protect data at rest and in use. SQL Server has been recognized as the least vulnerable database for more than eight years running in vulnerability testing by the U.S. National Institute of Standards and Technology (NIST).
Make faster, better-informed decisions. With Power BI Report Server, you can build professional interactive reports and also use the reporting capabilities of SQL Server Reporting Services.
Big data clusters. SQL Server 2019 makes managing a big data environment much simpler. It includes the key elements of a data lake: the Hadoop Distributed File System (HDFS), Spark, and analytics tools, all tightly integrated with SQL Server and supported by Microsoft. You can easily deploy Linux containers on a Kubernetes cluster.
Data virtualization. Starting with SQL Server 2016, PolyBase and T-SQL queries let you access Hadoop data in a structured format without leaving SQL Server, and without copying or moving the data. The new release extends this data virtualization idea with new data sources, including Oracle, Teradata, MongoDB, and other SQL Server instances.
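As a hedged illustration of this data virtualization, a PolyBase external table over an Oracle source might be declared roughly like this (all object names, the host, and the credential are placeholders):

```sql
-- Hypothetical names throughout: query a remote Oracle table through
-- PolyBase without copying or moving the data.
CREATE EXTERNAL DATA SOURCE OracleSales
WITH (LOCATION = 'oracle://oraclehost:1521', CREDENTIAL = OracleCred);

CREATE EXTERNAL TABLE dbo.SalesOrders (
    OrderId INT,
    Amount  DECIMAL(18, 2)
)
WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

-- The remote data is then queried with ordinary T-SQL:
SELECT TOP (10) OrderId, Amount FROM dbo.SalesOrders;
```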

Editions available for installation:
• SQL Server 2019 Enterprise Core Edition
• SQL Server 2019 Enterprise Edition
• SQL Server 2019 Developer Edition
• SQL Server 2019 Standard Edition
• SQL Server 2019 Web Edition
          

Azure DevOps Engineer

Are you someone that likes to work with teams across the organization to help create high performing delivery teams? Do you enjoy implementing DevOps and Application Lifecycle Management (ALM) solutions to break down traditional silos between development, testing, project management, and operations to establish cohesive processes from requirements to deployments? Required experience and skills:
  • Understand how to manage Agile projects using Azure Boards
  • Experience customizing process templates to fit needs from portfolio to teams
  • Know how to surface and visualize Azure DevOps data to an organization through tools in Azure DevOps and Power BI
  • Experience extending Azure DevOps by creating custom extensions
  • Leverage Git version control for providing a workflow for helping to ensure quality through branch policies, CI, and branching strategies
  • Be able to explain various branching strategies and work with clients to fit their needs
  • Create CI process across multiple platforms to compile, execute unit tests, and perform code quality checks.
  • Create Software Delivery Pipelines (SDP) to deploy applications, infrastructure as code across the multiple environments including SDLC controls.
  • Experience including security controls into the SDP process including SAST, DAST, and 3rd party Open Source Software (OSS) scanning
  • Experience using Azure Artifacts to manage 1st party built libraries and upstream 3rd party libraries
  • Leverage Azure DevOps for Test Case Management
  • Experience with automated testing across all levels - unit, service level, and UI testing and incorporating these tests in the pipeline.
  • Experience scripting in PowerShell and developing in C#
  • Experience working with web applications, services, and containerized applications
  • Experience in building Infrastructure as Code for provisioning public cloud infrastructure
  • Building and architecting public cloud applications
  • Customer-oriented, diligent, proactive, focused on achieving customer's business objectives as a top priority
  • Able to work successfully both individually and as a team
  • Easy-going, friendly, communicative, strives to see opportunities rather than problems
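To ground the CI and pipeline items above, a minimal Azure Pipelines definition might be sketched as follows (the .NET commands and artifact name are illustrative assumptions, not a prescribed stack):

```yaml
# azure-pipelines.yml: build, unit-test, and publish on pushes to main.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: dotnet build --configuration Release
    displayName: 'Build'
  - script: dotnet test --configuration Release --no-build
    displayName: 'Unit tests'
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```

A real Software Delivery Pipeline would extend this with code-quality checks, SAST/DAST stages, and environment-specific deployment jobs.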

    Nice to Haves: Experience with Kubernetes; deep Azure experience


    Do you want to join the most talented team in the industry? We are looking for a creative, collaborative, entrepreneurial spirit who thrives on innovation and a fast-paced environment. The salary for this position is highly dependent on experience and negotiable for the right candidate. Green House Data offers outstanding, competitive benefits! Want to know more? Here are just some of the things that we can offer you:
    • Flexible Paid Time Off (YOU take the time you need)
    • 8 Day Holiday Pay
    • Paid Volunteer Day
    • Employer Contributed 3 Tier Medical Plan Options
    • Employer Contributed Dental Plan
    • 100% Employer-Paid Vision Plan
    • 100% Employer-Paid Short Term Disability Plan
    • 100% Employer-Paid $50,000 Life Insurance Plan including AD&D
    • Voluntary Long Term Disability Plan
    • Voluntary Benefits including; Accident, Critical Illness, and Medical Bridge Options
    • Additional Supplemental Life Insurance Plan including Spouse and Children
    • 3% Employer Match Simple Retirement IRA Plan
    • Life Assistance and Wellness Programs
    • Green Initiatives
    • Training and Development Programs
    • Employee Events
    • 100% Employer Paid Gym Membership
    • AND MORE...
          

Cloud Technology Leader

We believe work is not a place, but rather a thing you do. Our technology revolves around this core philosophy. We are relentlessly committed to helping people work and play from anywhere, on any device. Innovation, creativity and a passion for ever-improving performance drive our company and our people forward. We empower the original mobile device: YOU!
What we're looking for:


Citrix is a cloud company that enables mobile workstyles. We create a continuum between work and life by allowing people to work whenever, wherever, and however they choose. Flexibility and collaboration is what we are all about. The perks: We offer competitive compensation and a comprehensive benefits package. You will enjoy our workstyle within an incredible culture. We will give you all the tools you need to succeed so you can grow and develop with us.
Citrix is rapidly developing and expanding its portfolio of global-scale cloud applications across all product lines, including app/desktop virtualization, networking, and content collaboration. These applications must deliver enterprise-grade availability, performance, security, and compliance while leveraging consumer internet economies of scale in multiple clouds (and in hybrid scenarios that reach behind the enterprise firewall). This requires excellent architecture and DevOps process in software development, operations, and site reliability engineering.
We need a seasoned engineer with deep understanding and hands-on experience in these areas to deliver corporate-wide technical leadership across our 2,000+ strong engineering organization. The right candidate for this role excels in both outbound leadership (evangelism, leveraged communication, and executive influence) and inbound leadership (innovating, solving hard technical problems hands-on-keyboard, and rapidly learning and evaluating new technologies).

Responsibilities:
- Be a Citrix engineering authority on overall cloud architecture and site reliability engineering.
- Be the Citrix engineering authority on one or more special topics, such as Kubernetes-based service meshes, wide-column NoSQL, or global traffic management.
- Work with other senior technical leaders to develop, validate, and communicate the technical vision for our cloud operations (and how we can best deliver value to our customers).
- Serve as SME and design lead on project teams solving real-world problems in architecture, cloud resiliency and deployment at scale.
- Advise engineering and corporate leadership on cloud research investment and long-term direction.
- Ensure Citrix leads the industry in cloud application delivery.

Desired Skills and Experience:
- Expertise in multiple cloud application programming languages and platforms
- Demonstrable knowledge of multiple tier-1 cloud platforms (Azure, AWS, Google Cloud, etc.)
- Proven record of innovation combined with shipping real products
- Notable published research and patents in cloud application technology
- Industry software development experience in SaaS, large-scale web applications, and large-scale enterprise software
- Ability to design and execute on a research agenda
- Proven ability to deliver scalable, reliable, secure, and winning multi-tenant, cloud-native applications
- Cost engineering of large-scale SaaS applications to meet profit margin and earnings targets
- Examples of leveraging cloud services for product development
- Examples of developing micro-service system architectures (moderate to complex)
- Knowledge of SQL, NoSQL, big data, and object storage technologies (including global data replication)
- Practical knowledge of key web UI technologies (React, Angular, etc.)
- Design and implementation of cloud-based systems for global availability including practice in chaos engineering
- Practical knowledge of APM, monitoring, and logging at scale
- Experience with cloud-based security principles and crypto technologies including HSMs and key management
- Experience building SaaS solutions at scale using technologies and concepts such as CDN, messaging, caching, batch, FaaS, DDoS, GTM, and search
- Experience with cloud hosting and provisioning technologies such as Kubernetes, Istio, Terraform, Helm

Requirements:
- MS or Ph.D. in Computer Science or related technical field, or equivalent practical experience
- 7 years of industry experience in cloud at scale. #LI-RW1

What you're looking for:
Our technology is built on the idea that everyone should be able to work from anywhere, at any time, and on any device. It's a simple philosophy that guides everything we do, including how we work. If you're an engineer, we'll give you plenty of ways to test your skills on cutting-edge technology. We want employees to do what they do best, every day.

Be bold. Take risks. Imagine a better way to work. If this sounds like you, then we'd love to talk.

Functional Area: Cloud Ops
About us:

Citrix is a cloud company that enables mobile workstyles. We create a continuum between work and life by allowing people to work whenever, wherever, and however they choose. Flexibility and collaboration is what we're all about. The Perks: We offer competitive compensation and a comprehensive benefits package. You'll enjoy our workstyle within an incredible culture. We'll give you all the tools you need to succeed so you can grow and develop with us.

Citrix Systems, Inc. is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all federal, state and local laws that prohibit employment discrimination on the basis of age, race, color, gender, sexual orientation, gender identity, ethnicity, national origin, citizenship, religion, genetic carrier status, disability, pregnancy, childbirth or related medical conditions, marital status, protected veteran status and other protected classifications.

Citrix uses applicant information consistent with the Citrix Recruitment Policy Notice at https://www.citrix.com/about/legal/privacy/citrix-recruitment-privacy-notice.html

Citrix welcomes and encourages applications from people with disabilities. Reasonable accommodations are available on request for candidates taking part in all aspects of the selection process. If you are an individual with a disability and require a reasonable accommodation to complete any part of the job application process, please contact us at (877) 924-8749 or email us at ASKHR@citrix.com for assistance.

If this is an evergreen requisition, by applying you are giving Citrix consent to be considered for future openings of other roles of similar qualifications.
          

Senior DevOps Engineer

 Cache   
JOB DESCRIPTION

Polaris Alpha, a Parsons Company, has emerged as a leader in the development of cutting-edge solutions for the Department of Defense and Intelligence Community. Our tremendous success can be attributed to our people and our priorities. We hire the best, we make them a priority, and we never lose focus on the mission. It's why we're here. We have built this cultural legacy by working closely with analysts and operators to understand their needs and delivering meaningful value through innovative, cost-effective and intuitive software solutions.

Our Space Operations Directorate is passionate about making America the undisputed leader in Space because we understand that ensuring our nation's security for future generations depends on it. Polaris creates game-changing space solutions by teaming highly respected subject matter experts with brilliant technologists. Are you an experienced Software or DevOps Engineer looking for an opportunity to grow your skillset? Do you want to be part of a team that is helping the government solve major national security challenges in the space domain? We need your help.

We are supporting a game-changing software development approach in support of the United States Air Force (USAF) and the larger Space Community through robust DevSecOps pipelines and containerized deployments to help deliver new capabilities to everything from operations centers to F-16 platforms in support of the warfighter. Our team is looking for an experienced Software Engineer or DevOps Engineer with a broad enterprise DevOps background who can work in a dynamic, fast-paced environment. In this position, you will be a member of a highly collaborative, multi-contractor support team while also being embedded directly with government customers. We're looking for team players who are willing to embrace pair programming and possess strong communication skills.

You will be supporting the development and deployment of DevOps capabilities as well as space mission applications for a wide range of government customers. Flexibility to work across different job roles such as IT Support, DevOps or Software Development is essential. Physical location for this work will be at Catalyst Campus in downtown Colorado Springs, Colorado. Catalyst Campus, located in the historic train station in downtown Colorado Springs, provides an open, collaborative work environment that inspires creative problem solving where engineers can work hard and play hard. Occasional offsite support may also be required at Schriever AFB.

REQUIRED SKILLS

* Bachelor's degree in Computer Science or an engineering field with at least 10 years of technical experience. Relevant experience may be accepted in place of a degree

* Experience working in an Agile Software Development environment using the Scrum methodology

* Willingness to participate in a pair-programming work environment

* Experience with DevOps tools (e.g. Gitlab, Artifactory, Jenkins, SonarQube, Docker)

* Experience with Amazon Web Services (AWS) to include services such as VPC, EC2, IAM, S3, Lambda, CloudWatch

* Experience with one or more scripting languages (e.g. Bash, Python, PowerShell)

* Ability to support and troubleshoot issues on common Operating Systems (e.g. MacOS, Windows, Linux, yum/brew)

* Great interpersonal and communications skills with a desire and ability to work in a highly collaborative environment

* Must be comfortable working in a fast-paced, flexible environment and possess a willingness to take the initiative to learn new tools and concepts quickly

* Excellent communication skills in both spoken and written English

* Must be a US Citizen due to DoD contract

DESIRED SKILLS

* Experience with configuration management tools (Ansible, Puppet)

* Experience with Infrastructure as Code (IaC) tools (Terraform, CloudFormation)

* Experience with object-oriented programming languages

* Experience with application deployment in Docker Containers

* Experience with Kubernetes, KNative, Istio and other container orchestration tools

* Experience with client account technical support (Windows Active Directory, DNS, application upgrades)

* Foundational understanding of networking concepts (Firewall, WiFi and VPN setup)

* Top Secret clearance with DCID eligibility for SCI

Must be eligible to obtain and maintain, or currently possess Prescreen Required clearance.

Ready for action? We're looking for the kind of people who see this opportunity and don't hesitate to act. Parsons is a leader in the world of Technical Services and Engineering. We hire people with a broad set of technical skills who have proven experience tackling some of the greatest challenges. Take your next step and apply today.

Parsons is an equal opportunity, drug-free employer committed to diversity in the workplace. Minority/Female/Disabled/Protected Veteran/LGBT.
          

Why You Don't Need to Fear Kubernetes

 Cache   

Kubernetes is absolutely the simplest, easiest way to meet the needs of complex web applications.

Digital creative of a browser on the internet

Working on big websites in the late 1990s and early 2000s was fun. My experience reminds me of American Greetings Interactive: on Valentine's Day, we had one of the top 10 sites on the internet (measured by web traffic). We served e-cards for AmericanGreetings.com, BlueMountain.com, and others, and delivered e-cards for partners like MSN and AOL. Veterans of the organization still fondly remember the epic stories of the great battles with other e-card sites like Hallmark. As an aside, I also ran large websites for Holly Hobbie, Care Bears, and Strawberry Shortcake.

I remember it like it was yesterday: the first time we had a real problem. Normally, we had about 200Mbps of traffic coming in through our front door (routers, firewalls, and load balancers). But suddenly, the Multi Router Traffic Grapher (MRTG) graphs spiked to 2Gbps in a matter of minutes. I was running around like crazy. I understood our entire technology stack, from the routers, switches, firewalls, and load balancers, through the Linux/Apache web servers, to our Python stack (a meta-version of FastCGI), and the Network File System (NFS) servers. I knew where all of the configuration files were, I had access to all of the administrative interfaces, and I was a seasoned, battle-hardened sysadmin with years of experience troubleshooting complex problems.

But I couldn't figure out what was happening…

Five minutes feels like an eternity when you are frantically typing commands across a thousand Linux servers. I knew the site could go down at any moment, because it's fairly easy to overwhelm a thousand-node cluster when it's divided up into smaller clusters.

I quickly ran over to my boss's desk and explained the situation. He barely looked up from his email, which frustrated me. He glanced up, smiled, and said, "Yeah, marketing is probably running an ad campaign. This happens sometimes." He told me to set a special flag in the application that would offload traffic to Akamai. I ran back to my desk, set the flag on a thousand web servers, and within minutes the site was back to normal. Disaster averted.

I could share 50 more stories like this one, but there's probably a bit of curiosity in your mind now: "Where is this style of operations going?"

The point is that we had a business problem. Technical problems become business problems when they stop you from being able to do business. In other words, you can't handle customer transactions if your website isn't accessible.

So, what does all of this have to do with Kubernetes? Everything! The world has changed. Back in the late 1990s and early 2000s, only large websites had large, web-scale problems. Now, with microservices and digital transformation, every business faces a large, web-scale problem, and likely more than one.

Your business needs to be able to manage a complex, web-scale property with many different, often complex, services built by many different people. Your websites need to handle traffic dynamically, and they must be secure. These properties need to be API-driven at every layer, from the infrastructure to the application layer.

Enter Kubernetes

Kubernetes isn't complex; your business problem is. When you want to run applications in production, there is a minimum level of complexity required to meet the performance (scaling, jitter, etc.) and security requirements. Things like high availability (HA), capacity requirements (N+1, N+2, N+100), and eventually consistent data technologies become a requirement. These are the production requirements of every company that is digitally transforming, not just the big websites like Google, Facebook, and Twitter.

In the old days, while I was still at American Greetings, every time we onboarded a new service, it looked something like this. All of it was handled by the web operations team, and none of it was offloaded to other teams through ticketing systems. This was DevOps before DevOps existed:

  1. Configure DNS (often internal service layers and public-facing external)
  2. Configure load balancers (often internal services and public-facing)
  3. Configure shared access to files (large NFS servers, clustered file systems, etc.)
  4. Configure clustering software (databases, service layers, etc.)
  5. Configure the web server cluster (could be 10 or 50 servers)

Most of this was automated with configuration management, but configuration was still complex because every one of these systems and services had different configuration files with completely different formats. We investigated tools like Augeas to simplify this, but we determined that using translators to try and standardize a bunch of different configuration files was an anti-pattern.

Today, with Kubernetes, onboarding a new service essentially looks like:

  1. Configure the Kubernetes YAML/JSON.
  2. Submit it to the Kubernetes API (kubectl create -f service.yaml).

Kubernetes vastly simplifies onboarding and managing services. The service owner, be it a sysadmin, developer, or architect, can create a YAML/JSON file in the Kubernetes format. With Kubernetes, every system and every user speaks the same language. All users can commit these files in the same Git repository, enabling GitOps.
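As a concrete, purely illustrative sketch, such a service.yaml might look like the following; the names, labels, and ports here are hypothetical:

```yaml
# service.yaml - a hypothetical example; all names and ports are illustrative
apiVersion: v1
kind: Service
metadata:
  name: greeting-cards        # hypothetical service name
  namespace: web              # everything is scoped to a namespace
spec:
  selector:
    app: greeting-cards       # routes traffic to pods carrying this label
  ports:
    - port: 80                # port exposed inside the cluster
      targetPort: 8080        # port the application listens on
```

Submitting it is the single API call mentioned above: kubectl create -f service.yaml.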

Moreover, deprecating and removing services is possible. Historically, it was terrifying to remove DNS entries, load-balancer entries, web-server configuration, and so on, because you would almost certainly break something. With Kubernetes, everything is namespaced, so an entire service can be removed with a single command. Although you still need to make sure no other application uses it (a downside of microservices and functions-as-a-service [FaaS]), you can be more confident that removing a service won't break the infrastructure environment.

Building, managing, and using Kubernetes

Too many people focus on building and managing Kubernetes instead of using it (see Kubernetes is a dump truck).

Building a simple Kubernetes environment on a single node isn't markedly more complex than installing a LAMP stack, yet we endlessly debate the build-versus-buy question. It's not that Kubernetes is hard; it's that it runs applications at scale with high availability. Building a complex, highly available Kubernetes cluster is hard because building any cluster at this scale is hard. It takes planning and a lot of software. Building a simple dump truck isn't that complex, but building one that can carry 10 tons of dirt and handle well at 200mph is complex.

Managing Kubernetes can be complex because managing large, web-scale clusters can be complex. Sometimes it makes sense to manage this infrastructure; sometimes it doesn't. Since Kubernetes is a community-driven, open-source project, it gives the industry the ability to manage it in many different ways. Vendors can sell hosted versions, while users can decide to manage it themselves if they need to. (But you should question whether you actually need to.)

Using Kubernetes is by far the easiest way to run a large-scale website. Kubernetes is democratizing the ability to run sets of large, complex web services, just as Linux did with Web 1.0.

Since time and money are a zero-sum game, I recommend focusing on using Kubernetes. Spend your time and money on mastering Kubernetes primitives, or on the best ways to handle liveness and readiness probes (another example demonstrating that large, complex services are hard). Don't focus on building and managing Kubernetes; a lot of vendors can help you with that.

Conclusion

I remember troubleshooting countless problems like the one described at the beginning of this article: NFS in the Linux kernel at that time, our homegrown CFEngine, redirect problems that only showed up on certain web servers, and so on. There was no way for developers to help me troubleshoot any of them. In fact, unless a developer had the skills of a senior sysadmin, it wasn't even possible for them to get into the system and help as a second set of eyes. There was no console with graphics or "observability"; observability lived in my brain and in the brains of the other sysadmins. Today, with Kubernetes, Prometheus, Grafana, and others, that's all changed.

The point is:

  1. The times are different. All web applications are now large, distributed systems. As complex as AmericanGreetings.com was back in the day, the scaling and HA requirements of that site are now expected of every website.
  2. Running large, distributed systems is hard. Period. This is a business requirement, not a Kubernetes problem. Using a simpler orchestrator is not the solution.

Kubernetes is absolutely the simplest, easiest way to meet the needs of complex web applications. This is the era we live in, and it is where Kubernetes shines. You can debate whether you should build or manage Kubernetes yourself. There are plenty of vendors that can help you build and manage it, but it's hard to deny that it is the easiest way to run complex web applications at scale.


via: https://opensource.com/article/19/10/kubernetes-complex-business-problem

Author: Scott McCarty; Topic selection: lujun9972; Translator: laingke; Proofreader: wxy

This article was originally translated by LCTT and proudly presented by Linux China.


          

Comment by srujanreddy for

Same issue.

2016-11-25 16:21:24.062 13858 ERROR heat.engine.resource [req-51007baf-2b3c-45b0-bfad-613a542b12de 16bfd673590e48069db821c9128a7c72 527da6b91b9c49afb97a2c286fa5d2f9 - - -] Resource type OS::Neutron::RouterInterface unavailable 2016-11-25 16:21:24.062 13858 ERROR heat.engine.resource Traceback (most recent call last):

 Cache   
I got the same issue with all 3 COEs. With Kubernetes, kube_masters stays in "create_in_progress" until the timeout and then changes to "create_failed". With Swarm, it's "swarm_masters". I even created a bug, as shown below: https://bugs.launchpad.net/magnum/+bug/1720816 Did anyone find a solution?
          

Comment by proceonmw for

Same issue.

2016-11-25 16:21:24.062 13858 ERROR heat.engine.resource [req-51007baf-2b3c-45b0-bfad-613a542b12de 16bfd673590e48069db821c9128a7c72 527da6b91b9c49afb97a2c286fa5d2f9 - - -] Resource type OS::Neutron::RouterInterface unavailable 2016-11-25 16:21:24.062 13858 ERROR heat.engine.resource Traceback (most recent call last):

 Cache   
Had the same issue... make sure that the DNS server you set up for your template can resolve where your heat process is running (i.e. the controller if you are following the docs). Also make sure that the Swarm or Kubernetes node has access to talk back to the controller to notify heat (i.e. public net).
          

Kasten K10 2.0 provides enhanced security and greater ease-of-use for cloud-native apps adoption

 Cache   

Kasten, a provider of cloud-native data management solutions, announced the general availability of Kasten K10 2.0. Purpose-built for Kubernetes, K10 provides enterprise operations teams with an easy-to-use, scalable and secure system for backup & restore, disaster recovery and mobility of Kubernetes applications. The new release includes significant security features and greater ease-of-use for accelerated adoption of cloud-native applications. “As many enterprises embrace cloud-native applications and adopt Kubernetes, they are finding that the transition can be … More

The post Kasten K10 2.0 provides enhanced security and greater ease-of-use for cloud-native apps adoption appeared first on Help Net Security.


          

IT / Software / Systems: Java Software Engineer - Web Services - Mid Level - Plano, Texas

 Cache   
Purpose of Job

We are currently seeking talented Java Software Engineers - Mid Level for our Plano, TX facility. USAA Java Software Engineers create and maintain APIs for the business software applications that our members use. Done well, many may never realize or appreciate how critical these APIs are or how we've made them simpler. Faster. Safer. Yet they help make our members' lives better. It's a great challenge and responsibility. Here, you will invent. You'll design and test. You'll take risks. You'll create new technology, but the impact you'll make on our members will be far more significant. As a Java Engineer, you are expected to be able to function in a fast-paced environment, driving innovation through rapid prototyping and iterative development and ensuring quality is built into all solutions by leveraging TDD. Your responsibilities will require you to be knowledgeable in API development. You will be a part of teams of developers, guiding on engineering and architecture best practices as well as demonstrating the ability to partner and engage with other engineers and architects. You will work closely with business clients and UI designers to analyze user requirements, code applications, and customize and/or integrate commercial software packages for both internal employees and external member-facing applications across multiple platforms. As a Java Engineer, it will be your responsibility to maintain code integrity and quality. This job posting is for multiple openings available in 2019/2020.

Job Requirements

ABOUT USAA
USAA knows what it means to serve. We facilitate the financial security of millions of U.S. military members and their families. This singular mission requires a dedication to innovative thinking at every level. In each of the past five years, we've been a top-40 Fortune 100 Best Companies to Work For, and we've ranked among Victory Media's Top 10 Military Friendly Employers 13 years straight. We embrace a robust veteran workforce and encourage veterans and veteran spouses to apply.

ABOUT USAA IT
Our most important qualification isn't technical, it's human. Here, we don't just sit in front of a screen. We stand behind our 11 million members who rely on us every day. We are over 3,000 employees strong, a passionately supportive and collaborative team built on Agile principles. We've been a top-two Computerworld 100 Best Places to Work in IT five years in a row and were recently named a Top 50 Employer for Minority Engineers & IT by Workforce Diversity Magazine. See what it's like to work for a company where your passion meets our purpose: click here to watch the USAA Java Software Engineer - Web Services Spotlight Video and USAA Information Technology: A Realistic Preview.

PRIMARY RESPONSIBILITIES
With limited guidance, performs defect correction (analysis, design, code) on less complex issues and/or codes applications of medium complexity. With guidance, begins to install, customize, and integrate commercial software packages. Works with more tenured peers to gain understanding of systems while conducting root cause analysis of issues, reviewing new and existing code, and/or performing unit testing. Learns to create system documentation/playbooks and attends requirements, design, and code reviews. Receives work packages from manager and/or delegates. Understands and assists in gathering and analyzing customer requirements and may respond to outages following the appropriate processes. Partners with experienced team members to develop accurate estimates on work packages. May begin to identify issues that impact availability.

MINIMUM REQUIREMENTS
Bachelor's degree, or 4 additional years of IT experience beyond the minimum required may be substituted in lieu of a degree, AND 2+ years of software engineering/development experience utilizing Java with experience working with Web Services (REST or SOAP).

PREFERRED
2+ years in REST frameworks with a focus on API development. 1+ years in Agile methodology. 1+ years of experience working with JavaScript. 1+ years of experience integrating with backend services like JMS, J2C, ORM frameworks (Hibernate, JPA, JDO, etc.), JDBC. Ability to implement container-based APIs using container frameworks such as OpenShift, Docker, or Kubernetes. Relational database design and optimization with Oracle, DB2, MySQL, Postgres. Familiar with Gradle, Git, GitHub, GitLab, etc. around continuous integration and continuous delivery infrastructure. Experience testing REST services. Experience designing and developing automated test frameworks.

DESIRED CHARACTERISTICS
USAA Java Engineers create innovative solutions that impact our members. Collectively, we are: curious and excited by new ideas; energized by a fast-paced environment; able to understand and translate business needs into leading-edge technology; comfortable working as part of a connected team, but self-motivated; community-focused, dependable and committed; exceptionally detail-oriented.

The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job. At USAA our employees enjoy one of the best benefits packages in the business, including a flexible business casual or casual dress environment, comprehensive medical, dental and vision plans, along with wellness and wealth building programs. Additionally, our career path planning and continuing education will assist you with your professional goals. Relocation assistance is not available for this position. ()
          

This week's Postgres news

 Cache   

#330 — November 6, 2019

Read on the Web

Postgres Weekly

Postgres 12 Initial Query Performance Impressions — We've been getting excited about Postgres 12 for ages here, but how does it really perform? Kaarel set up a stress test with various levels of scale and… it's a mixed bag with no obvious conclusions to draw.

Kaarel Moppel

Building Columnar Compression in a Row-Oriented Database — How Timescale has achieved 91%-96% compression in the latest version of their TimescaleDB time-series data extension for Postgres.

Timescale

Hands-On PostgreSQL Training with Experts — Special rate for hands on PostgreSQL training with local 2ndQuadrant experts at 2Q PGConf 2019 in Chicago. Courses include: PostgreSQL Database Security, PostgreSQL Multi-master Replication, Postgres Optimization, PostgreSQL Business Continuity.

2ndQuadrant PostgreSQL Training sponsor

postgres-checkup: A Postgres Health Check Tool — A diagnostics tool that performs ‘deep analysis’ of a Postgres database’s health, detect issues, and produces recommendations for resolving any issues found. v1.3.0 has just been released.

Postgres.ai

Application Connection Failover using HAProxy with Xinetd — I’m a huge fan of haproxy, a powerful but easy to manage TCP and HTTP proxy/load balancer, so I’m looking forward to the rest of this series.

Jobin Augustine

Implementing K-Nearest Neighbor Space Partitioned Generalized Search Tree Indexes — K-nearest neighbor answers the question of “What is the closest match?”. PostgreSQL 12 can answer this question, and use indexes while doing it.

Kirk Roybal

Installing the PostgreSQL 12 Package on FreeBSD — You have to do some work since the final release of Postgres 12 isn’t in the quarterly package update yet.

Luca Ferrari

Installing Postgres on FreeBSD via Ansible

Luca Ferrari

📂 Code and Projects

PostgREST 6.0: Serve a RESTful API from Your Postgres Database — It’s not new, but it’s a mature project that’s been doing the rounds on social media again this week, so let’s shine a spotlight on it again :-)

Joe Nelson et al.

Take the Guesswork Out of Improving Query Performance — Based on the query plan, pgMustard offers you tips to make your query faster. Try it for free.

pgMustard sponsor

Managing PostgreSQL's Partitioned Tables with Ruby — pg_partition_manager is a new gem for maintaining partitioned tables that need to be created and dropped over time as you add and expire time-based data in your app.

Benjamin Curtis

Pgpool-II 4.1.0 Released — Adds connection pooling and load balancing to Postgres. 4.1 introduces statement level load balancing and auto failback.

Pgpool Global Development Group

supported by

💡 Tip of the Week

Putting multiple LIKE patterns into an array

A simple way to perform arbitrary searches over the contents of columns is by using the LIKE clause in your queries. For example, in a table of blog posts, this query could find all posts with a title containing the string 'Java':

SELECT * FROM posts WHERE title LIKE '%Java%';

If you want to create more elaborate queries, things can soon become unwieldy:

SELECT * FROM posts WHERE title LIKE '%Java%' OR title LIKE '%Perl%' OR title LIKE '%Python%';

Postgres supports two SQL operators called ANY (SOME is an alias meaning the same thing) and ALL that can be used to perform a single check across a set of values, and we can use this with LIKE queries.

ANY and ALL are more commonly used with subqueries, but we can put multiple LIKE match patterns into an array and then supply this to ANY or ALL like so:

SELECT * FROM posts WHERE title LIKE ANY(ARRAY['%Java%', '%Perl%', '%Python%']);

There's also a way to write array literals in a shorter style, if you prefer:

SELECT * FROM posts WHERE title LIKE ANY('{%Java%,%Perl%,%Python%}');

Naturally, while these queries will find any rows where title matches against any of the supplied patterns, you could also use ALL to ensure you only get back titles which contain all of the patterns.
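For example, this illustrative query (in the same spirit as the ones above) only returns rows whose title matches every pattern:

```sql
SELECT * FROM posts WHERE title LIKE ALL(ARRAY['%Java%', '%Script%']);
```

A post titled 'JavaScript Tips' matches both patterns and is returned, while 'Java Basics' matches only the first pattern and is filtered out.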

This week’s Tip of the Week is sponsored by DigitalOcean. Find out how engineers at DigitalOcean built a scalable marketplace for developers on top of their managed Kubernetes service.

🗓 Upcoming Events

  • PG Down Under (November 15 in Sydney, Australia) — The second outing for this annual, Australian Postgres conference.
  • 2Q PGCONF 2019 (December 4-5, 2019 in Chicago) — A conference dedicated to exchanging knowledge about the world’s most advanced open source database: PostgreSQL
  • PgDaySF (January 21, 2020 in San Francisco) — Bringing the PostgreSQL international community to the heart of San Francisco and Silicon Valley.
  • PgConf.Russia (Febuary 3-5, 2020 in Moscow, Russia) — One day of tutorials and two days of talks in three parallel sessions.
  • PGConf India (Febuary 26-28, 2020 in Bengaluru, Maharashtra, India) — A dedicated training day and a multi-track two-day conference.
  • pgDay Paris 2020 (March 26, 2020 in Paris, France) — Learn more about the world’s most advanced open source database among your peers.

          

Senior Automation Consultant

 Cache   
About the Position: How can we make the world's best networking equipment better? By enabling our customers to build, update, and operate their networks faster, easier, and more efficiently by building automation solutions rooted in open-source technologies and a hefty dose of innovative application. As a Network Automation Consultant, you'll have an opportunity to develop in Python and Go, and use technologies like Docker, Kubernetes, Argo, and Ansible while being a valued member of a global, high-performance team. Innovation is key at Juniper and is in our DNA in Professional Services. The Network Automation team is at the forefront of driving change in the industry, building automation solutions that both enable the company to deliver solutions quicker and allow our customers to implement, test, and scale their services faster than ever before. Our customers include service providers, mobile operators, search engine giants, social media pioneers, banks, universities and governments. Our team works on projects that benefit millions of Internet users worldwide. Responsibilities: Automation technical lead for major network deployments and migrations. Produce design, validation, and migration plans for test, build, and event-driven automation frameworks. Work in a consultative manner with customers, sales, and systems engineering to scope and deliver Professional Services projects. Lead automation implementation and/or execution of those plans. Work with the internal global delivery team on automation projects. Understand the customer requirements and convert them to high-level and low-level work items.
Work in a consultative manner to deliver automation solutions to customers. Follow the Juniper PS engagement and delivery processes, working closely with the Project Manager. Personal on-site and/or remote delivery of Professional Services, including creation of high-level and detailed design documentation, migration planning, product integration and acceptance, and Demos/Trials/Proofs of Concept. Develop and maintain strong relationships with the customers' technical teams. Work closely with colleagues and customer personnel to help test, design, and plan for deployments of new products and features. Participation in requirements-gathering workshops, discussions, and meetings with the customer's Design, Test and/or Engineering teams. Reviewing customer-provided (technical) information. Providing customers with technical information and documentation related to Juniper Networks solutions. Assisting with hand-over to Operations or to a Juniper Networks Operational RE. Preferred Qualifications: Experience with at least one programming language; Python is preferred. Should be able to use a programming language to write basic scripts (e.g. log parsing, sending API requests, automating everyday tasks). Expertise with Ansible, ROBOT, Jenkins, Git and other DevOps tools. CI/CD and Agile methodology. Expert with Unix and Docker/containers. RESTful API knowledge is a must. Contrail or similar technology knowledge. Expert communication and interpersonal skills; this means you like talking to people... yes, in person, too!
A mature 'customer first' focus so that you personally can close out operational challenges on both sales and delivery with CSS, Sales, Partners or other Juniper players. Commitment to delivering a remarkable customer experience. Experience in working with international companies. Effective consultancy, planning and communication skills, excellent interpersonal skills, and the ability to work on multiple projects and manage customers appropriately. Must be fluent in the English language and possess excellent written and verbal communication skills. International travel is a job requirement. Experience/Background Desired: 10+ years' experience in the telecommunications industry. More than 3 years of hands-on experience on Juniper devices. 5 years of automation-related experience. Minimum of 3 years in customer-facing/consultative roles. Good-to-have Qualifications: Strong JUNOS skills and excellent hands-on and planning capabilities with MX Series products, PTX Series, EX Series, E Series, MX BNG. Basic understanding of TCP/IP, MPLS, Traffic Engineering, IGP, BGP, Internet peering/exchanges. Basic network design experience. Extensive experience in a technical role, including strong experience supporting large SP networks and mobile backhaul solutions. Hands-on experience with Linux servers and network services (DHCP, SNMP, FTP, TFTP, DNS, NTP). Firewalling via access control lists, Juniper SRX, and stateful firewalls and clustering. Other Information: Travel requirement for the position is up to 25%.
          

Principal Automation Engineer

 Cache   
Principal Automation Engineer - NTT DATA Services - Irving, TX

Req ID: 67811

At NTT DATA Services, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence, and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA Services and for the people who work here. NTT DATA Services currently seeks a Principal Automation Engineer to join our team in Irving, Texas (US-TX), United States (US).

Role Responsibilities/Accountabilities:
* Identify all audit and compliance deliverables to be converted into Ansible playbooks for remediation
* Create Epics, user stories, backlogs, and sprints; organize scrum calls and bring in key stakeholders
* Build integration with internal compliance tools
* Develop automated migration from Tectia to OpenSSH
* Develop integrations with external PKI infrastructure for Citi certificates
* Develop an automated pipeline to create custom audit logs
* Implement Infrastructure as Code (IaC) using Terraform

Basic Qualifications:
* 2+ years working on OpenShift/Kubernetes and Docker
* 3+ years working on at least two of Chef/Ansible/Puppet/SaltStack
* 3+ years working on at least one cloud platform (AWS/Azure/GCP)
* 2+ years with scripting or programming languages (Ruby, Perl, Python, Shell, PowerShell, etc.)
* 2+ years understanding and implementing RESTful APIs using IBM API Connect or Kong
* 3+ years Red Hat system administration (RHCSA certification preferred)
* 2+ years working with Jenkins, Git, SonarQube, Atlassian developer tools, JFrog Artifactory

Preferences:
* Experience working with Terraform
* Knowledge of log mining and analytics tools (Splunk or ELK)
* Experience in SQL (MySQL/SQL Server/Oracle)
* Comfortable with frequent, incremental code changes, testing, and deployment

This position is only available to those interested in direct staff employment opportunities with NTT DATA, Inc. or its subsidiaries. Please note: 1099 or corp-to-corp contractors or the equivalent will NOT be considered. We offer a full comprehensive benefits package that starts from your first day of employment.

About NTT DATA Services: NTT DATA Services partners with clients to navigate and simplify the modern complexities of business and technology, delivering the insights, solutions, and outcomes that matter most. We deliver tangible business results by combining deep industry expertise with applied innovations in digital, cloud, and automation across a comprehensive portfolio of consulting, applications, infrastructure, and business process services. NTT DATA Services, headquartered in Plano, Texas, is a division of NTT DATA Corporation, a top 10 global business and IT services provider with 118,000+ professionals in more than 50 countries, and NTT Group, a partner to 88 percent of the Fortune 100. Visit ******************* to learn more.

NTT DATA, Inc. (the Company) is an equal opportunity employer and makes employment decisions on the basis of merit and business needs. The Company will consider all qualified applicants for employment without regard to race, color, religious creed, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other class protected by law. To comply with applicable laws ensuring equal employment opportunities to qualified individuals with a disability, the Company will make reasonable accommodations for the known physical or mental limitations of an otherwise qualified individual with a disability who is an applicant or an employee unless undue hardship to the Company would result.
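The first responsibility above, converting audit and compliance deliverables into Ansible playbooks for remediation, might look like the following minimal sketch. The host group, file path, and the specific finding (disabling SSH root login) are illustrative assumptions, not details from the posting:

```yaml
# Hypothetical remediation playbook: enforce one audit finding and
# restart the affected service. Adjust hosts/paths to your inventory.
- name: Remediate audit finding - disable SSH root login
  hosts: linux_servers
  become: true
  tasks:
    - name: Ensure PermitRootLogin is set to "no" in sshd_config
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```

Each compliance finding becomes one idempotent task, so re-running the playbook reports (rather than re-applies) already-remediated hosts.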
          

IT / Software / Systems: Full-stack Software Developer - Salt Lake City, Utah

 Cache   
Mavenlink is looking for talented software developers to join our Salt Lake office. We have a great team in Salt Lake, and you'll be joining in the early days, with a chance to influence the culture. You'll step into a supportive environment praised by our engineers for its focus on continuous learning. Here's how our engineering culture will support your career growth: Pair Programming "We work twice as fast and produce better code because, with two minds working together, you find solutions you wouldn't have seen if you were working by yourself." Amanda Holl, Software Engineer Continuous Learning "I don't just want to learn so I can be the best, I want to learn so I can teach the people sitting next to me-so we can all grow." - Adam Ellsworth, Software Engineer Coaching and Mentorships "Every engineer here has a 'coach' - an active, practicing engineer providing mentorship and support to help their 'coachees' grow their careers." - Andy Leavitt, Director of Engineering Open Communication "We teach team members how to give and receive feedback. I feel like the things I say are heard and acted on, and I have an opportunity to act on them myself." - Paulette Luftig, Software Engineer Full-stack development "We have open architecture meetings that everyone is invited to, which we can do because everyone is full stack, and we all know how the pieces fit together. There are very few blind spots this way." -Maggie Sheldon, Senior Director of Product As our product and customer base grow, we're seeing interesting technical challenges. 
We've recently finished: moving from Sprockets to Webpack/Yarn; real-time streaming of all database events to time-sensitive application systems; automated containerized staging deployment of every green developer build. Upcoming challenges include: developing a rich, sophisticated React component architecture; evolving from a single Rails app to cohesive, decoupled services; auto-scaled, self-healing production Kubernetes. When you join Mavenlink, we'll guide you toward the challenges that interest you. Skills & Requirements: Though we will eventually re-open our recruiting to early-career candidates, at this time we're only considering candidates with 2+ years of experience. Experience in our stack is not a requirement. We value empathy, communication, and care for our colleagues.
          

AWS Network Engineer

 Cache   
Description: Relatient is a leading provider of integrated messaging solutions for practices, hospitals, and health care systems. We take a patient-centered approach to engagement, utilizing the power of real-time clinical data to deliver timely messages between patients and their care providers. Named one of Deloitte's 2018 Technology Fast 500 and a 2019 Red Herring Top 100 winner, Relatient is changing the way healthcare providers engage with their patients. We are looking for a talented AWS Network Engineer to join our dynamic team to implement and maintain our AWS infrastructure. We expect an engineer who will work hard, keep up in a fast-paced start-up environment, and have fun! If you are looking to be challenged daily, make a huge impact, and help define the future of patient engagement, we would love to have you join our team!

Responsibilities:
  • Utilizing Agile methodology, ensures that plans are followed, and issues resolved in a manner that results in a successful implementation.
  • Functions in a consultative role using advanced problem-solving and analytical skills to implement, upgrade and support complex application systems.
  • Serves as a technical liaison to development teams for technology rollouts.
  • Acts as a general internal consultant on system architecture design initiatives.
  • Collaborates with leadership on development of infrastructure standardization.
  • Responsible for providing/setting direction on infrastructure architecture.
  • Supports standardization of documentation for system maps.
  • Evaluates new AWS technology with cost benefits to the company.
  • Oversees efforts with key vendors to understand future infrastructure plans in conjunction with leadership.
  • Oversees complex configuration of applications based on user and vendor requirements on major application environments.
  • Works closely with managers, project managers and business partner leaders to define and develop or implement major software applications.
  • Works with staff, business partners and leadership to help them understand potential application functionality, development approaches, possible enhancements and process improvements.
  • Works with enterprise architecture teams to integrate application architecture into enterprise architecture.
  • Stays connected with industry best practices and vendor specific application methodologies.
  • Will require on-call coverage responsibilities.

Requirements:
    • Bachelor's degree in a related field, or may substitute an equivalent combination of education and experience.
    • 7+ years of total IT/application experience required.
    • Demonstrated history of cloud-based computing experience is required; AWS experience with RDS, SQS, EC2, Lambda, Route 53, and VPC is preferred.
    • Experience with Linux and shell scripting is required.
    • Understanding of programming languages such as PHP, Python, JavaScript/Node is required.
    • Experience with container technology such as Docker and/or Kubernetes required.
    • Knowledge of Security Concepts and Technology (SSL/TLS, SSH, SFTP, VPN) is required.
    • Knowledge of Version Control (Git) is required.
    • Knowledge of Network Routing and DNS resolution is required.
    • Knowledge of Logging, Monitoring, and Alerting is recommended.
    • Knowledge of CI/CD technologies (Jenkins, GitLabCI, TravisCI) is recommended.
    • Knowledge of Web Server Configuration (Apache, NGINX) is recommended.
    • Experience with Healthcare and HIPAA security policies is recommended.
    • Experience with Asterisk is recommended.
    • Experience in Agile methodology and the Software Development Life Cycle (SDLC) is recommended.

About Relatient: Founded in 2014, our offices are located in historic Franklin and Cookeville, TN, and offer a casual, dynamic work environment with a spirit of innovation and growth. Our team consists of smart, driven, and creative people who are motivated to impact the way in which healthcare providers engage with their patients. Our platform takes a patient-centered approach to engagement, utilizing the power of real-time clinical data to deliver timely messages between patients and their care providers. It effortlessly automates appointment reminders, patient billing and payment collection, satisfaction surveys, self-check-in, non-medical transportation, and on-demand messaging.

What We Offer:
      • Base salary plus incentives
      • Medical, dental, and vision insurance
      • Employer HSA match contribution
      • Employer paid Life Insurance
      • 100% employer paid long-term disability insurance
      • 401k
      • Generous PTO policy which includes 3 weeks paid time off plus all 9 paid holidays
      • Complimentary perks such as an annual employee awards banquet, holiday parties, monthly lunches, bottomless snacks and coffee
      • Casual culture with approachable leadership
      • Great office environments located in historic Franklin, TN and beautiful Cookeville, TN.

To learn more about our organization, please visit www.relatient.net. Relatient is an equal opportunity employer.
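The AWS services required above (Lambda in particular, fronted by services like Route 53 and API Gateway) come together in serverless request handlers. A minimal, hypothetical sketch of such a handler follows; the event shape, names, and reminder message are invented for illustration and are not Relatient's actual code:

```python
# Hypothetical AWS Lambda handler sketch returning an API Gateway-style
# response. Only the standard library is used; in a real deployment this
# function would be packaged and wired to a trigger via Lambda itself.
import json


def handler(event, context):
    """Echo back a patient-reminder acknowledgement as a JSON response."""
    patient = event.get("patient", "unknown")
    body = {"message": f"Appointment reminder queued for {patient}"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }


print(handler({"patient": "Jane Doe"}, None)["statusCode"])
```

The return shape (statusCode/headers/body with a JSON-encoded string body) is what API Gateway's Lambda proxy integration expects.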
          

An introduction to monitoring with Prometheus

 Cache   

Metrics are the primary way to represent both the overall health of your system and any other specific information you consider important for monitoring and alerting or observability. Prometheus is a leading open source metric instrumentation, collection, and storage toolkit built at SoundCloud beginning in 2012.
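Prometheus represents metrics in a plain-text exposition format: a metric name, an optional label set, and a sample value per line. The following hand-rolled parser is an illustrative sketch using only the standard library (real instrumentation uses an official client library, and scraping is done by the Prometheus server); the sample data is invented:

```python
# Parse a tiny, hypothetical Prometheus text-format scrape into
# (metric_name, labels, value) tuples.
SCRAPE = """\
# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3
"""


def parse_exposition(text):
    """Return a list of (metric_name, labels, value) samples."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE metadata
        name_part, value = line.rsplit(" ", 1)
        labels = {}
        if "{" in name_part:
            name, raw = name_part.split("{", 1)
            for pair in raw.rstrip("}").split(","):
                key, val = pair.split("=", 1)
                labels[key] = val.strip('"')
        else:
            name = name_part
        samples.append((name, labels, float(value)))
    return samples


for sample in parse_exposition(SCRAPE):
    print(sample)
```

This data model (name + labels + value) is what PromQL queries, alerting rules, and dashboards are ultimately built on.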


read more
          

Software Engineer - AWS/DevOps (572) with Security Clearance

 Cache   
Must already have TS/SCI clearance (with Full Scope Polygraph) used in the past 24 months. 1-3 year US government contract. The Sponsor runs a portfolio of COTS products which together make up the Community's and the Sponsor's Access Control Services. The Sponsor is increasing its mission in multifabric environments, particularly in relation to supporting the Sponsor's Open Source Data Layer Services and the Sponsor's Internet Network, to include O365 integration. This JITR will establish a team to stand up the full suite of capabilities on the multifabric side to support these missions. These teams will be guided and informed by the high-side choices and will be compliant with the Sponsor's architecture; however, these capabilities will be deployed independent of the high-side baselines, in order to establish the capability, and will be brought together over time. Deliveries could be deployed on other networks supporting the Multi Fabric Initiative strategies, and the target deployments for this JITR may be on any multifabric environment. The successful offeror will have demonstrated experience with establishing and accrediting COTS and Identity Access Management technology in the Sponsor's environment. The contractor team shall have the following required skills and demonstrated experience: - Demonstrated experience with container technology such as Docker, Kubernetes, etc., or commitment to receive training within 45 days of award
- Demonstrated experience with OpenShift Technology, or commitment to receive training within 45 days of award
- Demonstrated experience working with DevOps
- Demonstrated experience working with Amazon Web Services Environments
- Demonstrated experience collaborating with management, IT customers and other technical and non-technical staff and contractors at all levels.
- Demonstrated experience working with ICD 508 Accessibility compliance
- Demonstrated experience working with COTS products in Containers deployment methods
- Demonstrated experience working with Amazon Web Services environments, including S3, EMR, SQS, and SNS, to design, develop, deploy, maintain, and monitor web applications within AWS infrastructures.
- Demonstrated experience providing technical direction to software and data science teams.
- Demonstrated experience with Apache Spark.
- Demonstrated experience with PostgreSQL.
- Demonstrated experience working with RDS databases.
- Demonstrated experience developing complex data transformation flows using graphical ETL tools.
- Demonstrated experience engineering large scale data-acquisition, cleansing, transforming, and processing of structured and unstructured data.
- Demonstrated experience translating product requirements into system solutions that take into account technical, schedule, cost, security, and policy constraints.
- Demonstrated experience working in an agile environment and leading agile projects.
- Demonstrated experience providing technical direction to project teams of developers and data scientists who build web-based dashboards and reports.
          

Associate Software Engineer

 Cache   
Associate Software Engineer

REF#: 34964

CBS BUSINESS UNIT: CBS Interactive

JOB TYPE: Full-Time Staff

JOB SCHEDULE:

JOB LOCATION: Louisville, KY

ABOUT US:

CBS Interactive is the premier online content network for information and online operations of CBS Corporation as well as some of the top native digital brands in the entertainment industry. Our brands dive deep into the things people care about across entertainment, technology, news, games, business and sports. With over 1 billion users visiting our properties every quarter, we are a global top 10 web property and one of the largest premium content networks online.

Check us out on [1] The Muse, [2] Instagram and [3] YouTube for an inside look into 'Life At CBSi' through employee testimonials, office photos and company updates.

References

Visible links


  • https://www.themuse.com/companies/cbsinteractive

  • https://www.instagram.com/cbsinteractive/?hl=en

  • https://www.youtube.com/channel/UCAvGapyifCtUlmNTagAl_sQ

    DESCRIPTION:

    Division Overview:

    We are an enthusiastic group leading the future of consumer and business technologies. Our brands include Gamespot, Giant Bomb, CNET, Roadshow, TVGuide, Metacritic, ZDNet, and TechRepublic, just to name a few!

    Role Details:

    The Associate Software Engineer (ASE) is primarily responsible for designing, developing and maintaining PHP, JavaScript, and CSS code for high-volume, high-traffic, end-user facing global web properties.

    Your Day-to-Day:

    The ASE will collaborate with Engineering peers, Product Managers, Designers, Product Marketers, QA, Operations and Editors, to guide and produce functional specifications and implement final production-quality code directly within our live site code base. The ASE may lead small to mid-sized development projects, in addition to performing functional change requests and other minor engineering tasks.

    Key Projects:


    • E3 - [1] E3 is the largest video game conference in the world. The GameSpot and Giant Bomb editors bring you complete coverage of E3, interviews with the top game developers, and live daytime and after dark shows. If something interesting happens in video games, the CBSi Games group is there to cover it.

    • Daily Livestreams - Every day [2] GameSpot and [3] Giant Bomb host live shows covering the latest in the gaming world as well as original content. The CBSi Games engineering team is responsible for creating and managing a full set of livestream and user interaction tools.

    • Huge Video Game and Comic Book Wikis - The CBSi Games group is responsible for two of the largest Video Game And Comic Book wikis in the world. With over 600,000 comic books and over 60,000 video games the Games group maintains some of the most comprehensive wikis covering these subjects. We build and maintain the wiki editing tools, APIs, and the browsing experience.

      References

      Visible links


      • https://www.e3expo.com/

      • https://www.gamespot.com/

      • https://www.giantbomb.com/

        QUALIFICATIONS:

        What you bring to the team:

        You have -


        • At least 1 year of development experience working with the following technologies:

        • JavaScript (Native or Frameworks), AJAX (w/ Restful API access)

        • Node, PHP or equivalent server-side web development language (C#, Java, Python etc.)

        • Web Standards, the DOM, cross-browser & cross-platform HTML

        • CSS positioning, HTML/CSS architecture

        • XML or JSON

        • Ability to work as part of a close-knit cross-functional team

        • Excellent communication, problem-solving, and organizational skill

        • Excellent attention to detail and a high-level of accountability

        • Ability to estimate development schedules

          You might also have -


          • Some experience or knowledge with the following technologies:

          • Experience with jQuery, RequireJS, SASS, LESS, Symfony, Laravel, Build Systems

          • Experience with GCP or other cloud platforms

          • Experience with Docker or Kubernetes

          • Knowledge of Agile and/or similar development methodologies

            EEO STATEMENT:

            Equal Opportunity Employer Minorities/Women/Veterans/Disabled
          

Systems Analyst

 Cache   
Location Alpharetta, GA

UST Global is hiring for an end-to-end DevOps Engineer.

Count: 3
Key skill: Azure DevOps engineers with experience in Jenkins

Required Skills
o Thorough understanding of Azure cloud technologies implementation and deployment
o Experience with DevOps toolchains (Azure DevOps, Jenkins)
o Experience with SCM and branching strategies
o Experience with implementing CI/CD concepts and orchestration pipelines
o Experience with automation for build, test, configuration, and deployment in complex environments, from development to production
o Experience with database DevOps tools (Liquibase or Redgate)
o Experience with integration to test automation tools (Tosca)
o Excellent communication and organization skills
o Familiarity with agile development processes
o Technical scripting experience to complement project delivery

Additional Skills
o Experience with Docker and Kubernetes
o Knowledge of Cyber security requirements
o Knowledge of T-SQL and SQL packages
o Software Development experience using .Net

Responsibilities
o Develop and Maintain build deployment systems to facilitate build automation and deployment
o Support a regular cadence of production and non-production updates and hotfixes for live service of multiple products
o Work directly with engineers, QA testers, and developers to ensure coordinated releases
o Triage and troubleshoot live issues to resolution with attention to detail and optimization
o Collaborate with and educate engineers on proper use of build and source control systems

UST Global celebrates diversity and inclusion. We are an Equal Employment Opportunity employer. We welcome candidates to apply for employment from diverse backgrounds and believe your unique perspective adds to the richness of our company culture. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
          

An overview of Skaffold for Kubernetes development

 Cache   


A year and a half ago, on March 5, 2018, Google released the first alpha version of its open source CI/CD project called Skaffold, whose stated goal was "simple and repeatable Kubernetes development", so that developers can focus on development itself rather than on administration. What makes Skaffold interesting? As it turns out, it has a few aces up its sleeve that can make it a strong tool for developers, and perhaps for operations engineers as well. Let's get acquainted with the project and its capabilities. Read more →
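Skaffold is driven by a single config file at the project root. The following minimal sketch reflects the v1-era schema of the tool; the image name and manifest path are placeholders, not from the article:

```yaml
# skaffold.yaml: on each source change, Skaffold rebuilds the image
# and redeploys the listed manifests to the current cluster context.
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: example/app        # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml              # placeholder path to Kubernetes manifests
```

With this in place, `skaffold dev` runs the watch-build-deploy loop continuously during development, which is the "focus on development, not administration" workflow the article describes.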

Read the full article


          

Sr Program Manager - ISV and Community GTM

 Cache   
Job Summary

NetApp provides software, systems, and services to manage, store, and share data via on-premises and private and public clouds to customers worldwide. NetApp is emerging as a leader in providing public cloud services on all major cloud platforms, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These cloud services, such as Cloud Volumes Service, NetApp SaaS Backup, Cloud Sync, Cloud Secure, and NetApp Kubernetes Service, enable customers to consider many more options for how and where to manage their critical data, for optimal business and financial outcomes. The NetApp Cloud Data Services business unit is at the forefront of our effort to transform NetApp into a leader in the cloud market.

Job Requirements

The Cloud Go-to-Market Program Manager will drive cloud enablement programs for all of NetApp field and partner organizations. Developing structured programs for sales motions, use case-based content requirements, defining success metrics and program execution would be the primary objectives for this individual. You will build relationships across Sales, Partner, Marketing and Product Management organizations and ensure all internal stakeholders are aligned with our cloud go-to-market strategy. You will define processes for tracking sales motions through the sales cycle and measuring organizational performance across these motions. You will align closely with product management and product marketing to ensure successful execution of our cloud GTM programs.

KEY RESPONSIBILITIES

* Develop and execute programs to drive key Go-to-Market sales motions across multiple systems and stakeholders

* Develop and execute programs to drive field sales and partner enablement

* Define success metrics and measurement/tracking processes and manage program execution for the entire sales cycle

* Act as an expert on NetApp's cloud GTM tactics across all internal stakeholders at NetApp through process and execution excellence

Education

* 10+ years of experience in driving enterprise GTM programs and sales motions

* 5+ years of experience in managing performance metrics across various systems

* Understanding of enterprise cloud solutions and related sales motions

* Experience in sales or marketing operations management

* Experience with top cloud providers, consulting organizations and/or global system integrators is required

* Collaborative personality; comfortable working across many functions and teams

* A creative go-getter who is full of ideas and can execute on those ideas in a fast-paced, demanding environment with excellent interpersonal and social skills

* A bachelor's degree is required. MBA is a plus.


TRAVEL REQUIRED - 40% (includes domestic and int.)

KEY RELATIONSHIPS -Reports to Director Cloud GTM Strategy

Equal Opportunity Employer Minorities/Women/Vets/Disabled.
          

DevOps Engineer

 Cache   
DevOps Engineer =============== Apply now Date: Oct 11, 2019 Location: Milwaukee, WI, 53202 At Northwestern Mutual, we are strong, innovative and growing. We invest in our people. We care and make a positive difference. What's the role? We are looking for a DevOps engineer to join our team. The role requires a dynamic agile engineering mindset, patience and persistence to solve complex problems, and strong engineering skills. A strategic thinker, you move between diverse tasks and help to bring out the best in those around you. As a DevOps leader, you will collaborate and teach others on the team to elevate the productivity and effectiveness of all to deliver outcomes for our business. Essential Duties for Role: * Work in a DevOps-oriented environment using automated testing, continuous integration, automated infrastructure, and monitoring, using GitLab CI CI/CD pipelines * Be willing and able to adapt to new technology trends by learning and incorporating new technology into existing systems * Educate team members on DevOps best practices that need to be addressed as part of product design * Identify, provide feedback on, and implement ways to improve the risk policies and processes * Document decisions during design and implementation of processes and controls * Be a fast learner and self-starter, willing to take ownership of the outlined goals and make things happen * Take initiative and work with minimal supervision, yet actively interact with other team members in person or over the Internet (chat, video conference, email) * Work collaboratively on creative solutions with engineers, product managers, and designers in an agile-like environment Desired Skills and Experience: * Experience writing/debugging automation technology (Ansible, Terraform, Packer) with Amazon Web Services ("AWS").
* Proficient in at least one programming language (Python/JavaScript preferred) * Experience with Git and/or GitLab * Strong sense of ownership and an ability to work through ambiguity * Demonstrated ability to create a vision for security and create buy-in for a larger audience at multiple levels within the organization * Understand DevOps practices/culture in a DevOps environment Additional Skills a Plus: * Able to learn new languages and concepts related to: Kubernetes, Containerization, Docker, Ansible, Terraform * Existing certifications or willingness to obtain AWS Certified Developer/Architect * Existing certifications or willingness to obtain risk certifications a plus (CCSP, CSSLP, CRISC, Security+) Grow your career with a best-in-class company that puts our client's interests at the center of all we do. Get started now! We are an equal opportunity/affirmative action employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity or expression, sexual orientation, national origin, disability, age or status as a protected veteran, or any other characteristic protected by law. Req ID: 26038 Position Type: Regular Full Time Education Experience: Bachelor's Desired Employment Experience: 3-5 years Licenses/Certifications: FLSA Status: Exempt Posting Date: 09/04/2019 Apply now
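The GitLab CI pipelines mentioned in the duties above are defined in a `.gitlab-ci.yml` file. The following is a minimal sketch under stated assumptions: the stage names, base image, and commands are illustrative, not Northwestern Mutual's actual pipeline, and `CI_REGISTRY_IMAGE`/`CI_COMMIT_SHORT_SHA` are GitLab's predefined CI variables:

```yaml
# Hypothetical .gitlab-ci.yml sketch: test on every push, then build and
# push a container image tagged with the commit SHA.
stages:
  - test
  - build

test:
  stage: test
  image: python:3.9          # placeholder base image
  script:
    - pip install -r requirements.txt
    - pytest

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Keeping the pipeline in the repository alongside the code is what makes the "automated testing, continuous integration, automated infrastructure" loop reproducible for every branch.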
          

Site Reliability Lead (Google Cloud)

 Cache   
Site Reliability Lead (Google Cloud) Our client, a leader in their industry, has an excellent opportunity available for a Site Reliability Lead to work on a six-month contract-to-hire position in Alpharetta, GA. The pay rate is in the $85/hour to $95/hour range, available for W-2. The perm salary target is in the $150,000/year range. Responsibilities of the Site Reliability Lead: The Site Reliability Lead will be responsible for the design of Google Cloud platform solutions, helping mentor junior-level SREs, building out CI/CD pipelines, helping with orchestration efforts with Kubernetes, and helping optimize the overall infrastructure. This effort will help establish and drive SRE processes for the firm globally. This is a high-visibility role, and this Site Reliability Lead will be a major contributor to the overall transformation effort. This client is scaling their Agile model using the Spotify model for Agile, and this person will be a key player in impacting major changes across the organization. Essential Functions: Responsible for the design of Google Cloud infrastructure Working with senior-level executives to build a cohesive SRE environment across all business units Design and deployment of containers and orchestration using Docker and Kubernetes Provide leadership, career development, and mentoring to team members Proactively look for opportunities to improve system stability and performance of the platform Help deploy Infrastructure as Code into the environment Requirements for the Site Reliability Lead: 10 or more years of experience in SRE, Cloud, Automation, or DevOps-related work Experience working at the lead or architect level (hands-on designing and implementing technology and mentoring junior-level staff) Strong Google Cloud Platform (GCP) design and deployment experience Docker or Kubernetes experience Infrastructure as Code: Terraform, Ansible, Chef, Puppet, SALT, etc.
Preferred Skills for the Site Reliability Lead: Cloud migration experience to GCP CI/CD pipeline deployment and automation experience with Jenkins or equivalent Please send your MS Word resume to Ben Crosby at bcrosby@eliassen.com or call me at 770-399-4508 for immediate consideration. No 3rd Parties Position not eligible for Visa Sponsorship or Transfer Job #: 331616 Why Choose Eliassen Group? Working as an Eliassen Group contractor gives you exceptional benefits! Our consultants receive medical, dental, vision, disability, life and prescription drug coverage through Blue Cross Blue Shield of MA. We also offer a 401(k) plan through Fidelity with matching, direct deposit, weekly payment, and a $1000 referral bonus plan. Eliassen Group also has a Consultant Advocacy Program with specialized consultant care professionals dedicated to serving you once you start working with us. We are currently achieving World Class Net Promoter Score status and are one of Inc. Magazine's 50 Best Places to Work. Locally, the Atlanta Business Chronicle has also rated us one of the Best Places to Work in Atlanta in 2017. We have over 300 clients in 22 offices and have access to the best companies and most sought-after IT career and consulting opportunities in America. Apply with Eliassen Group today to see how we can serve you! - provided by Dice
          

Devops-Engineer---Devops-Lead---Devops-Architect

 Cache   
Title: DevOps Lead/Architect (Docker/Kubernetes) Location: Irving, TX Duration: 1 year. Need good experience in Kubernetes, Docker, Ansible, and CI/CD. Description: As a Senior DevOps Engineer, you will be an integral member of the DevOps team, whose responsibilities include prototyping, designing, developing, and supporting a highly scalable container orchestration solution based on Docker and Kubernetes. Thanks & regards, Kumar Beeram Tel: ************ Email: ******************** Website: ****************** Job Snapshot: Location US-TX-Irving; Employment Type Contractor; Pay Type Year; Pay Rate N/A; Store Type IT & Technical. Company Overview: Infovision Consultants Inc. InfoVision was founded in 1995 by technology professionals with a vision to provide quality and cost-effective IT solutions worldwide. InfoVision is a global IT Services and Solutions company with primary focus on Strategic Resources, Enterprise Applications and Technology Solutions.
          

Pull image from private registry

 Cache   
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ Create a Secret based on existing Docker credentials A Kubernetes cluster uses the Secret of docker-registry type to authenticate with a container registry to pull a private image. If you already ran docker login, you can copy that credential into Kubernetes: If you need more control (for example, to set a namespace or a label on the…Continue reading Pull image from private registry
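Following the linked task, the flow can be sketched as below; the names (`regcred`, `registry.example.com`, the image tag) are placeholders:

```yaml
# The Secret of type docker-registry is created first, e.g.:
#   kubectl create secret docker-registry regcred \
#     --docker-server=registry.example.com \
#     --docker-username=<user> --docker-password=<password>
# A Pod then references it via imagePullSecrets so the kubelet can
# authenticate to the registry when pulling the private image:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-demo
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0
  imagePullSecrets:
    - name: regcred
```

If you already ran `docker login`, the same credential can instead be imported from `~/.docker/config.json`, as the linked page describes.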
          

cloud security

 Cache   
Get up-to-date news and recommendations for cloud security platforms, cloud workload protection, container security, workload security, DevOps security tools, and more for AWS, GCP, Azure, Kubernetes, Docker, and other cloud platforms, for enhancing enterprise cybersecurity for IT & DevSecOps teams.

          

Software Engineer Stf

 Cache   
Description: An opportunity for candidates with a BS or MS degree in computer science, computer/electrical engineering, or a related discipline interested in putting their skills to work as an experienced Software Engineer. You will have the opportunity to participate in all phases of the software development lifecycle. Proficiency in Object Oriented Programming (OOP) using C++, C# or Java is required. Other desirable skills include DevSecOps, Python, JavaScript, and web or database technologies. At Lockheed Martin Rotary and Mission Systems, we are driven by innovation and integrity. We believe that by applying the highest standards of business ethics and visionary thinking, everything is within our reach and yours as a Lockheed Martin employee. Lockheed Martin values your skills, training and education. Come and experience your future!

This requisition is for the Dark Ash program starting in October 2019. Dark Ash is a Software Defined Radio (SDR) based opportunity and requires skills primarily in Systems Engineering and Software Design, Development, and Testing. Leveraging a predecessor prototype solution as a springboard, this program will produce a deployable SDR software/hardware platform over the next five years.

Applicants selected will be subject to a government investigation and must meet eligibility requirements for access to classified systems.

Basic Qualifications:

--- Experience with C++, C#, Java or another OOP language

--- Experience developing software in a Linux environment

--- Experience with automated software testing, integration, and deployment

--- Strong analysis and mathematical skills

--- Willingness to learn and adapt to new technologies

--- Effective oral and written communication skills

--- Ability to work effectively in a rapid-paced, team environment

--- Candidates must have the ability to obtain a government security clearance (TS/SCI)

Desired Skills:

--- Experience with DevSecOps tools including Docker, Kubernetes, Gitlab, Ansible, ELK (e.g. ElasticStack), etc.

--- Experience with Software Defined Radios

--- Experience designing microservice-based architectures

--- Experience with relational and/or NoSQL databases

--- Experience with JavaScript and Python

--- Experience working on an Agile team using SAFe, Scrum, or Kanban with an understanding of common Agile tools and methodologies

--- Background developing, debugging, and/or testing of web applications, web services, and Linux application processes/threads

--- Desire to lead or able to self-direct


Lockheed Martin is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.

Join us at Lockheed Martin, where your mission is ours. Our customers tackle the hardest missions. Those that demand extraordinary amounts of courage, resilience and precision. They're dangerous. Critical. Sometimes they even provide an opportunity to change the world and save lives. Those are the missions we care about.

As a leading technology innovation company, Lockheed Martin's vast team works with partners around the world to bring proven performance to our customers' toughest challenges. Lockheed Martin has employees based in many states throughout the U.S. and internationally, with business locations in many nations and territories.

EXPERIENCE LEVEL:

Experienced Professional
          

Java Software Engineer - Web Services - Mid Level

 Cache   
Purpose of JobWe are currently seeking talented Java Software Engineers - Mid Level for our Plano, TX facility.


USAA Java Software Engineers create and maintain APIs for our business software applications that our members use. Done well, many may never realize or appreciate how critical these APIs are or how we've made them simpler. Faster. Safer. Yet they help make our members' lives better. It's a great challenge and responsibility.

Here, you will invent. You'll design and test. You'll take risks. You'll create new technology, but the impact you'll make on our members will be far more significant.

As a Java Engineer, you are expected to be able to function in a fast-paced environment driving innovation through rapid prototyping and iterative development ensuring quality is built into all solutions leveraging TDD. Your responsibilities will require you to be knowledgeable in API development. You will be a part of teams of developers guiding on engineering and architecture best practices as well as demonstrate the ability to partner and engage with other engineers and architects. You will work closely with business clients and UI designers to analyze user requirements, code applications and customize and/or integrate commercial software packages for both internal employees and external member-facing applications across multiple platforms.

As a Java Engineer, it will be your responsibility to maintain code integrity and quality.

This Job Posting is for multiple openings available in 2019/2020.

Job Requirements

ABOUT USAA
USAA knows what it means to serve. We facilitate the financial security of millions of U.S. military members and their families. This singular mission requires a dedication to innovative thinking at every level. In each of the past five years, we've been a top-40 Fortune 100 Best Companies to Work For, and we've ranked among Victory Media's Top 10 Military Friendly Employers 13 years straight. We embrace a robust veteran workforce and encourage veterans and veteran spouses to apply.

ABOUT USAA IT
Our most important qualification isn't technical, it's human. Here, we don't just sit in front of a screen. We stand behind our 11 million members who rely on us every day. We are over 3,000 employees strong, a passionately supportive and collaborative team built on Agile principles. We've been a top-two Computerworld 100 Best Places to Work in IT five years in a row and were recently named a Top 50 Employer for Minority Engineers & IT by Workforce Diversity Magazine.

See what it's like to work for a company where your passion meets our purpose: watch the USAA Java Software Engineer - Web Services Spotlight Video and USAA Information Technology: A Realistic Preview.

PRIMARY RESPONSIBILITIES
  • With limited guidance performs defect correction (analysis, design, code) on less complex issues and/or codes applications of medium complexity.
  • With guidance begins to install, customize and integrate commercial software packages.
  • Works with more tenured peers to gain understanding of systems while conducting root cause analysis of issues, reviewing new and existing code and/or performing unit testing.
  • Works with experienced team members to conduct root cause analysis of issues, review new and existing code and/or perform unit testing.
  • Learns to create system documentation/play books and attends requirements, design and code reviews. Receives work packages from manager and/or delegates.
  • Understands and assists in gathering and analyzing customer requirements and may respond to outages following the appropriate processes.
  • Partners with experienced team members to develop accurate estimates on work packages. May begin to identify issues that impact availability.

MINIMUM REQUIREMENTS
    • Bachelor's degree, or 4 additional years of IT experience beyond the minimum required may be substituted in lieu of a degree.
    • AND 2+ years of software engineering/development experience utilizing Java, with experience working with Web Services (REST or SOAP)

PREFERRED
      • 2+ years in REST frameworks with focus on API development
      • 1+ years in AGILE methodology
      • 1+ years experience working with JavaScript
      • 1+ years experience integrating with backend services like JMS, J2C, ORM frameworks (Hibernate, JPA, JDO, etc.), JDBC.
      • Ability to implement container-based APIs using container frameworks such as OpenShift, Docker, or Kubernetes.
      • Relational database design and optimization with Oracle, DB2, MySQL, Postgres
      • Familiar with Gradle, Git, GitHub, GitLab, etc. around continuous integration and continuous delivery infrastructure
      • Experience testing REST services
      • Experience in designing and developing automated test frameworks

DESIRED CHARACTERISTICS

USAA Java Engineers create innovative solutions that impact our members. Collectively, we are:
        • Curious and excited by new ideas -
        • Energized by a fast-paced environment
        • Able to understand and translate business needs into leading-edge technology -
        • Comfortable working as part of a connected team, but self-motivated -
        • Community-focused, dependable and committed -
        • Exceptionally detail-oriented

The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job. At USAA our employees enjoy one of the best benefits packages in the business, including a flexible business casual or casual dress environment, comprehensive medical, dental and vision plans, along with wellness and wealth building programs. Additionally, our career path planning and continuing education will assist you with your professional goals. Relocation assistance is not available for this position.
          

Infrastructure Developer C#/SQL

 Cache   
Location: Columbus, OH Description: Our client is currently seeking an Infrastructure Engineer with .NET/C#/PowerShell/configuration tools. NO C2C or 1099. W2 Only. This is a contract to hire with a Fortune 15 Company. Apply online or send resumes directly to bpiper@judge.com and reference job # 636728. This job will have the following responsibilities:
  • Experience developing & supporting with C# programming language
  • Experience in writing Web Applications in ASP.NET MVC and REST APIs in ASP.NET Web API2.
  • Understanding of micro services architecture
  • Good understanding of SOLID principles and design pattern
  • Expert level experience in scripting with PowerShell for Windows systems
  • Experience and working knowledge of build automation, building tools and processes around CI/CD pipelines involving integrations with Jenkins, testing frameworks, github, etc
  • Experience with Windows Infrastructure automation: Windows automated Build & Imaging, Networking, Storage, Clusters etc.
  • Experience with SQL procedures including writing complex SQL queries and stored procedures, and query optimization.
  • Proficiency in debugging and analyzing complex software systems, willingness to deep-dive into all layers of the technology stack including a database, network, and operating system
  • Excellent problem solving skills with a strong attention to details
  • Ability to perform well under pressure, meet deadlines, and manage multiple tasks simultaneously. Strong written and oral communication skills
  • Perform root cause analysis of production impacting issues, including opening problem cases with vendors and driving them to conclusion
  • Maintaining and bug-fixing existing applications and infrastructure
  • Familiarity with configuration management and server deployment automation, preferably SCCM or PowerShell DSC or Puppet or Ansible Qualifications & Requirements:
    • Infrastructure automation and deployment tools (Terraform, Packer, Salt/Chef/Puppet, etc.)
    • Automated infrastructure testing (Pester, Test Kitchen, InSpec, ServerSpec, etc.)
    • Containerization and scheduling (Docker, Kubernetes, etc.)
    • Knowledge of .NET Core 2.x

Contact: This job and many more are available through The Judge Group. Find us on the web at - provided by Dice
          

An overview of Skaffold for Kubernetes development

 Cache   


A year and a half ago, on March 5, 2018, Google released the first alpha version of its open-source CI/CD project called Skaffold, whose goal is "simple and reproducible development for Kubernetes", so that developers can focus on development itself rather than on administration. What makes Skaffold interesting? As it turns out, it has several aces up its sleeve that can make it a powerful tool for developers, and perhaps for operations engineers as well. Let's get acquainted with the project and its capabilities.
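As a rough illustration of the workflow the article describes, a minimal skaffold.yaml ties an image build to a kubectl-based deployment. The schema version, image name, and manifest path below are assumptions for the sketch, not taken from the article:

```yaml
# Minimal sketch of a Skaffold config: `skaffold dev` watches the source,
# rebuilds the image, and re-applies the manifests on every change.
apiVersion: skaffold/v1        # assumed schema version
kind: Config
build:
  artifacts:
    - image: example/app       # hypothetical image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml             # hypothetical manifest location
```

With a config like this in place, `skaffold run` performs a one-off build-and-deploy, while `skaffold dev` keeps the loop running for iterative development.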
          



New location added! We're tagging along with @awscloud, @Rancher_Labs, @gitlab, & @portwx for roadshow events throughout EU - this time in Stockholm! These events are aimed to help you understand how your org can embrace #Kubernetes in your #enterprise. https://hubs.ly/H0lDSTz0 pic.twitter.com/UQMdSkucU8

 Cache   



          

Senior Systems Engineer - Denver, CO

 Cache   
Required Security Clearance: TS/SCI with an ability to get CI poly Required Certifications: N/A Required Education: Bachelor's degree in technology or the sciences is preferred, with 12 years' experience. Required Experience: 12 years' experience with a BA/BS. We can modify the education requirement based on the length of experience and actual education level. Functional Responsibility: Support high performance computing (HPC) and accelerated compute environments from the ground up. Help create and maintain a DevOps process for program efforts, from basic data collection and pre-processing to building and training AI and machine learning models within an R&D environment. Apply experience in DevOps, high-performance compute, GPU processing, and cluster management. This is not a system administration position; it is an engineering role focused on optimization. You will be working as a direct engineer. Qualifications: Experience working on Linux systems. Experience with building and deploying containerized, GPU-enabled applications in Docker, Singularity, or Kubernetes. Experience with orchestration and cluster management tools such as Slurm, Mesos, or Moab. Experience with deploying systems in both on-premise and cloud environments (AWS, Azure, Google). Preferences: Strong preference for those with experience with AI and machine learning development tool sets (Jupyter, Keras, TensorFlow, MPI, OpenMP, OpenCL, CUDA). Working Conditions: Work is typically based in a busy office environment and subject to frequent interruptions. Business work hours are normally Monday through Friday, 8:00am to 5:00pm; however, some extended or weekend hours may be required. Additional details on the precise hours will be communicated to the candidate by the Program Manager/Hiring Manager. Physical Requirements: May be required to lift and carry items weighing up to 25 lbs. Requires intermittent standing, walking, sitting, squatting, stretching and bending throughout the work day.
Background Screening/Check/Investigation: Successful completion of a background screening/check/investigation will or may be required as a condition of hire. Employment Type: Full-time / Exempt Benefits: Metronome offers competitive compensation, a flexible benefits package, and career development opportunities that reflect its commitment to creating a diverse and supportive workplace. Benefits include (not all-inclusive): Medical, Vision & Dental Insurance, Paid Time-Off & Company Paid Holidays, Personal Development & Learning Opportunities. Other: An Equal Opportunity Employer: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status or disability status. Metronome LLC is committed to providing reasonable accommodations to employees and applicants for employment, to assure that individuals with disabilities enjoy full access to equal employment opportunity (EEO). Metronome LLC shall provide reasonable accommodations for known physical or mental limitations of qualified employees and applicants with disabilities, unless Metronome can demonstrate that a particular accommodation would impose an undue hardship on business operations. Applicants requesting a reasonable accommodation may make a request by contacting us.
          

Serverless: Is It The Kubernetes Killer?

 Cache   

A few years ago, many believed OpenStack was going to take over the world. It was inevitable, and no one questioned it. Docker shifted the landscape, and then along came Kubernetes, which was another ...
          

Practical AI 63: Open source data labeling tools

 Cache   

What’s the most practical of practical AI things? Data labeling of course! It’s also one of the most time consuming and error prone processes that we deal with in AI development. Michael Malyuk of Heartex and Label Studio joins us to discuss various data labeling challenges and open source tooling to help us overcome those challenges.

Discuss on Changelog News

Sponsors

Featuring

Notes and Links


          

Corporate Puff Pieces From Linux Foundation at KubeCon San Diego

 Cache   

          

heise online News: VMware sets itself an ambitious goal: Kubernetes should become boring

 Cache   
At VMworld, VMware showed its Project Pacific: the company wants to integrate Kubernetes into the ESXi world and have it administered via the vSphere GUI.
          

"Senior Python Developer" wanted (Moscow)

 Cache   
The company Avito is looking for a strong specialist for the position of "Senior Python Developer". Russia, Moscow. Full time. Required skills: #backend, #senior, #Python, #high-load-systems, #SQL, #Redis, #Docker, #Kubernetes.
          

"Python Jedi" wanted (from 140,000 RUB)

 Cache   
The company Amixr Inc. is looking for a strong specialist for the position of "Python Jedi". From 140,000 RUB. Full time. Remote work possible. Required skills: #backend, #Django, #Python, #MySQL, #Rabbitmq, #Redis, #Docker, #Kubernetes, #Git.
          

IT / Software / Systems: Full-stack Software Developer - Salt Lake City, Utah

 Cache   
Mavenlink is looking for talented software developers to join our Salt Lake office. We have a great team in Salt Lake, and you'll be joining in the early days, with a chance to influence the culture. You'll step into a supportive environment praised by our engineers for its focus on continuous learning. Here's how our engineering culture will support your career growth: Pair Programming "We work twice as fast and produce better code because, with two minds working together, you find solutions you wouldn't have seen if you were working by yourself." Amanda Holl, Software Engineer Continuous Learning "I don't just want to learn so I can be the best, I want to learn so I can teach the people sitting next to me-so we can all grow." - Adam Ellsworth, Software Engineer Coaching and Mentorships "Every engineer here has a 'coach' - an active, practicing engineer providing mentorship and support to help their 'coachees' grow their careers." - Andy Leavitt, Director of Engineering Open Communication "We teach team members how to give and receive feedback. I feel like the things I say are heard and acted on, and I have an opportunity to act on them myself." - Paulette Luftig, Software Engineer Full-stack development "We have open architecture meetings that everyone is invited to, which we can do because everyone is full stack, and we all know how the pieces fit together. There are very few blind spots this way." -Maggie Sheldon, Senior Director of Product As our product and customer base grow, we're seeing interesting technical challenges. 
We've recently finished: moving from Sprockets to Webpack/Yarn; real-time streaming of all database events to time-sensitive application systems; automated containerized staging deployment of every green developer build. Upcoming challenges include: developing a rich and sophisticated React component architecture; evolving from a single Rails app to cohesive, decoupled services; auto-scaled, self-healing production Kubernetes. Joining Mavenlink, we'll guide you toward the challenges that interest you.

Skills & Requirements: Though we will eventually re-open our recruiting to early-career candidates, at this time we're only considering candidates with 2+ years of experience. Experience in our stack is not a requirement. We value empathy, communication, and care for our colleagues.
          

Back-End Developer ( 75000 PARIS, France )

 Cache   

Inspired by the Spotify model, our tech team of 75 people is organized into tribes, squads and guilds. All of our teams work in agile mode with 2-week sprints.

You will join one of our first-class teams: B2C, B2B or the supply tribe, each made up of around 15 members.

Technical environment:

  • 140 microservices deployed in production

  • Front-end frameworks: Node.js + TypeScript, Python, Go and React

  • Swift and Kotlin for mobile applications

  • MongoDB, PostgreSQL and Elasticsearch databases

  • More than 30 deployments per day

  • Fully automated CI/CD with CircleCI, Docker and Kubernetes

  • Complete monitoring of the production stack (ELK, Prometheus, Grafana, APMs...)

  • High development standards: TDD, code metrics, peer review, load testing...


          

DevOps Engineer

 Cache   
DevOps Engineer - Other

One of Argyll Scott's clients based out in Tallinn, are looking for a DevOps Engineer for Migration Support, to join them on one of their projects for an initial 6 month basis.

They are looking for the following -
Essential Skills:
. Deep working knowledge of Terraform, Helm, Kubernetes
. Deep working knowledge of Jenkins 2.x, Artifactory, and git/gitlab
. PostgreSQL
. AWS and good knowledge of the Well-Architected Framework
Desirable skills:
. Chef and Inspec
. Azure
. Sonarqube
If you would like to find out more information, please do contact me directly or apply to this advert.

    Company: Argyll Scott Technology
    Job type: Other

          

Kubernetes Online Training Kubernetes Training in Hyderabad

 Cache   
DevOps Online Hub is an online e-learning portal in Hyderabad. Its Docker & Kubernetes training provides complete corporate-level training from real-time industry experts. We provide online training in Hyderabad, Bangalore, Chennai, Noida, Delhi, etc. Call us @ +91-9676446666.
          

Full-stack Node.js / Vue.js Developer at Virtido, Lviv

 Cache   

Required skills

We are looking for an experienced full-stack software engineer comfortable with newest JavaScript technologies, relevant skills include:

— Fluent in JavaScript logic on the back-end
— Experience with Node.js
— Experience with Vue.js and other front-end frameworks
— Experience building REST services
— Experience with Docker, Linux and/or Kubernetes
— Experience in load balancing, reverse-proxies and networking
— Knowledge about security and how to secure the client and ideally also server endpoints
— Experience with git-based workflows
— Experience with Scrum and Kanban methods for development
— Passionate about high quality code, professional attitude, and high standards.
— Excellent verbal and written communication skills in English.

We offer

— Medical insurance.
— Flexible working hours.
— Ability to work from home 20% of your time.
— 25 business days of fully paid annual leave, 10 business days of fully paid sick leave (annual).
— Courses of English and German.
— Comfortable and friendly environment at the office.

Responsibilities

Your role in our team

— You work on our client’s newest strategic projects with short development iterations
— You are part of a small, cross-functional team where everyone has a voice
— You collaborate with the customer to find the best suitable design and architecture to build a high-quality software solution

About the project

About Virtido

Virtido is an entrepreneurial and innovative IT company headquartered in Zurich, Switzerland. We realize ideas and projects — from strategic concept to technical implementation closely alongside our dynamic clients with a strong focus on start-up or fast-growing companies. Since inception in 2015, we have grown rapidly to currently 90+ professionals in Switzerland and Western-Ukraine.

About our Client

Our client is a technology start-up company that improves the global food value chain. They provide professional services for industrial food production sites through a global crowd service platform, using smart real-time technologies to match service technicians and customers. Services provided to customers are ranging from inspections and maintenance to repairs and spare parts as well as specific data-based services. Service technicians in the field are digitally enabled along the whole process through mobile and cloud technology.

Our client is looking for a Full-stack Node.js / Vue.js Developer to join our professional and committed development team that creates and maintains state-of-the-art web applications as well as mobile apps. They will extend and strengthen their backend to enable new, modern and innovative business models. In addition, knowledge management is a big topic, as information needs to be provided to service technicians on-site (offline on mobile device) and as training material.

Upcoming projects

Our client’s goal is to establish a web application as a platform for their customer and service partners to order and orchestrate service products.
For every participant/user they want a suitable web app or mobile app to communicate with the backend. They want to start with a mobile app for supervisors to coordinate teams.
Their technology stack is based on Vue.js and JavaScript on the front-end, with nginx/Node.js as the backend, integrated (currently) with various SAP systems through the newest Kyma (Kubernetes cloud). As for the mobile framework, Flutter or NativeScript is being discussed.
They believe in simplicity and elegance to create attractive and responsive interfaces.


          

Senior DevOps Engineer at Gemicle, Vinnytsia

 Cache   

Required skills

● B.Sc. Degree in Computer Science (or related degree in Computer field).
● Minimum 2 years of hands-on DevOps or system software development experience.
● Great software development skills; experience in Golang and/or Node.JS is an advantage. High level of bash scripting
● Experience and deep understanding of Docker
● Experience and deep understanding of Kubernetes
● Sysadmin experience with Linux and cloud providers aws | gcp | azure, etc
● Great troubleshooting skills
● Deep understanding of web based applications and microservice architecture, working with different http apis
● Understanding Continuous Integration and Continuous Deployment processes
● Experience with monitoring tools

Nice to have

● Experience with Golang and/or Node.JS.
● Experience with Prometheus
● Experience with i/o performance testing and tuning
● Experience with Helm

About the project

We are looking for Senior DevOps Engineer to participate in architecture, development and maintaining of Codefresh infrastructure. DevOps engineer will work with Docker, Kubernetes, Prometheus, Amazon, Google Cloud, Azure, Ceph, Helm, Terraform, Ansible, Consul and other modern DevOps tools. A person that will be part of an innovative team that builds the next generation of Container Development Tools. We are looking for a mindset of collaboration, technological innovation and responsibility.


          

IBM develops the first public cloud for financial services

 Cache   
IBM has announced that it has developed the world's first public cloud ready for financial services. IBM welcomes financial services institutions and their suppliers to join this public cloud. Bank of America is the first committed customer of the platform, which is built on IBM's public cloud. The bank will host key applications and workloads to support the requirements and the privacy and security expectations of its 66 million banking customers. The financial-services-ready public cloud is designed to help meet financial institutions' requirements for regulatory compliance, security and resilience. This will help financial institutions transact with technology vendors that have met the platform's requirements. According to IBM, it is the only industry-specific public cloud platform that offers preventive controls for regulated financial services workloads, with multi-architecture support and proactive, automated security with the highest levels of encryption. To help develop the control requirements for the platform, IBM worked closely with Bank of America. The collaboration with IBM marks the next step in Bank of America's seven-year cloud journey and reflects the bank's commitment to the security and privacy of its banking customers. The financial services public cloud can enable Independent Software Vendors and SaaS providers to focus on their core offerings to financial institutions. Only ISV or SaaS providers that demonstrate compliance with the platform's policies will be eligible to offer services through the platform. The financial services public cloud is expected to run on IBM's public cloud, which uses Red Hat OpenShift as its primary Kubernetes environment for managing containerized software.
It includes more than 190 API-driven, cloud-native PaaS services for building new and improved cloud-native apps. More information: IBM
          

Microsoft SQL Server 2019 offers data virtualization

 Cache   
At the Ignite conference in Orlando, Microsoft presented SQL Server 2019. Microsoft positions SQL Server 2019 as a unified data platform on which enterprise data can be stored in a data lake and queried with SQL and Spark. This release extends the capabilities of previous releases, such as the ability to run on Linux and in containers and the PolyBase technology for connecting to big data storage systems. SQL Server 2019 uses PolyBase v2 for full data virtualization and combines the Linux/container compatibility with Kubernetes to support the new Big Data Clusters technology. Big Data Clusters implements a Kubernetes-based multi-cluster deployment of SQL Server and combines it with Apache Spark, YARN and the Hadoop Distributed File System to deliver a single platform that facilitates OLTP, data lakes and machine learning. It can be deployed on any Kubernetes cluster, on-premises and in the cloud, including Microsoft's own Azure Kubernetes Service. With SQL Server 2019, Microsoft also wants to simplify the ETL process through data virtualization: applications and developers can use the T-SQL language to access both structured and unstructured data from sources such as Oracle, MongoDB, Azure SQL, Teradata and HDFS.
Azure Data Studio
Microsoft also offers the GUI tool Azure Data Studio, a cross-platform database tool for data professionals. Azure Data Studio was previously in preview as SQL Operations Studio, and offers a modern editor experience with IntelliSense, code snippets, source control integration and an integrated terminal. With Azure Data Studio, Big Data Clusters can be accessed through interactive dashboards, and it also offers SQL and Jupyter Notebook access. Read all the details in the extensive blog by Asad Khan, Partner Director of Program Management, SQL Server and Azure SQL. More information: Microsoft
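As a rough sketch of what the T-SQL side of data virtualization looks like, the snippet below assembles the kind of PolyBase statements a client would submit to SQL Server 2019. Every object name here (MongoSource, dbo.Orders, the connection string) is hypothetical, and the exact WITH options vary by source type; the Python only assembles the statements, which a client such as sqlcmd or pyodbc would actually execute:

```python
# Hypothetical sketch of PolyBase data virtualization in SQL Server 2019.
# All object names are made up for illustration; a real deployment would
# also configure credentials for the external source.

create_source = """
CREATE EXTERNAL DATA SOURCE MongoSource
WITH (LOCATION = 'mongodb://mongo-host:27017');
"""

create_table = """
CREATE EXTERNAL TABLE dbo.Orders (
    order_id INT,
    amount   DECIMAL(10, 2)
)
WITH (LOCATION = 'sales.orders', DATA_SOURCE = MongoSource);
"""

# Once the external table exists, it is queried like any local table:
query = "SELECT order_id, amount FROM dbo.Orders WHERE amount > 100;"

for stmt in (create_source, create_table, query):
    print(stmt.strip())
```

The point is that after the one-time CREATE statements, the MongoDB collection is addressable with plain T-SQL, with no ETL copy required.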
          

Containers for Microservices: Kubernetes and Docker Recipes

 Cache   
Image: https://i90.fastpic.ru/big/2019/1021/75/eefc166bfb4398ef30ba0a102e942775.jpg MP4 | Video: AVC 1920x1080 30fps | Audio: AAC 48KHz 2ch |...
          

Devops Engineer Noida/J41191 ( 5-8 yrs. )

 Cache   
Noida
Job Post Date: Thursday, November 07, 2019
Other IT Experience
Have 5+ years of relevant experience in a DevOps cloud environment (Git, Jenkins, AWS, Azure, Docker, Mesos, Kubernetes)
Have strong scripting (Bash/Salt/Ansible/Terraform/Puppet/Chef) or programming (Python/Ruby) skills
Have strong Linux, networking and system administration skills
Are involved in the DevOps community and open source projects (visiting conferences, meetups, giving talks, etc.)
Like to take initiative, work in close collaboration with fellow developers and share your ideas and knowledge

          

DevOps Engineer - Native Kubernetes & AWS

 Cache   
Negotiable: Project People: DevOps Engineer - Native Kubernetes & AWS. Contract, 6 months+. Croydon, Bracknell or Basingstoke. Security Clearance will be required. My client has an urgent Croydon
          

Ubuntu Blog: Canonical collaborates with NVIDIA to accelerate enterprise AI adoption in multi-cloud environments and at the edge

 Cache   

Enterprises currently face the challenge of how to adopt and integrate AI and ML into their operations effectively, at scale and with minimum complexity. In tandem, today’s AI workloads have become increasingly advanced and the compute power required to support them has exponentially increased. 

Canonical and NVIDIA have collaborated to help enterprises accelerate their adoption of AI and ML with Ubuntu 18.04 LTS certified on the NVIDIA DGX-2 AI system. This combination brings unprecedented performance, flexibility and security to enterprises’ AI/ML operations. With the ability to run the entire line of DGX systems either stand-alone or as part of a Kubernetes cluster on Ubuntu, enterprises can unlock containerised and cloud-native development of GPU-accelerated workloads.  

The NVIDIA DGX-2 offers unprecedented levels of compute, with 16 of the world’s most advanced GPUs delivering 2 petaFLOPS of AI performance. With the combination of DGX-2  and Ubuntu 18.04 LTS, data scientists and engineers can move faster and at a greater scale using their chosen operating system, allowing them to deliver portable AI workloads on-premises, in the cloud and at the edge.

“Ubuntu is the preferred AI and ML platform for developers and the No. 1 operating system for Kubernetes deployments on-premises and in the public cloud. This collaboration with NVIDIA enables enterprises to enhance their developers’ productivity and incorporate AI more quickly through development stages to production,” said Stephan Fabel, Director of Product at Canonical. “The combination of DGX-2 and Ubuntu helps organisations to realise the vast potential of AI, allowing them to develop and deploy models at scale via the world’s most powerful AI system.”

“DGX-2 was built to solve the world’s most complex AI challenges in a purpose-built solution,” said Tony Paikeday, Director, AI Systems, NVIDIA. “DGX-2 and Ubuntu bring the best of both worlds together, giving AI developers the power to explore without limits in a solution that enterprises can easily manage.”

Ubuntu 18.04 LTS with NVIDIA DGX-2 builds upon a long-standing collaboration with Canonical enabling NVIDIA GPU hardware in a consistent, performant and seamless manner across private and public infrastructure. Canonical’s Charmed Kubernetes fully automates the installation and enablement of NVIDIA GPUs and is tightly integrated with public cloud Kubernetes offerings, where similar enablement on the worker nodes running Ubuntu provide a uniquely portable multi-cloud experience for AI and ML use cases.


          

DevOps Engineer

 Cache   
DevOps Engineer. For a project in the DevOps area of a major multinational, a client company of ours in Milan, we are looking for a DevOps Engineer with a passion for technology and for building reliable, scalable software systems. If you want to put your DevOps technical skills to good use, you are the right person for us. The ideal candidate preferably holds a degree in computer science or has an equivalent background. Essential qualities for joining our team are dynamism, a passion for technology, drive and initiative. Required skill set:
  • In-depth knowledge of Unix-like operating systems
  • In-depth knowledge of object-oriented programming languages (Python, Java, …) and software design
  • Knowledge of Kubernetes or Docker
  • Knowledge of CI/CD methodologies and processes and of tools such as Jenkins, Gitlab, ArgoCD, Jenkins-X or similar.
A plus:
  • Knowledge of configuration management and provisioning tools such as Ansible, Terraform, Puppet, Chef or similar
  • Knowledge of cloud computing and PaaS/IaaS services.
Great if you have:
  • Good communication and interpersonal skills,
  • An orientation toward achieving goals,
  • The desire to join a stimulating and innovative environment.
Itconsulting offers:
  • A permanent employment contract (CCNL Metalmeccanico)
  • Collaboration on international projects,
  • In-house professional development courses on innovative technologies.
The contractual level will be assessed on the basis of actual experience. The ad is also open to professionals with a VAT number (partita IVA). The ad is addressed to candidates of both sexes (L.903/77). Interested candidates can send an up-to-date CV to job@itconsultingsrl.it. Remember to include authorization for the processing of the personal data contained in your CV pursuant to Legislative Decree 196/2003 and Art. 13 GDPR (EU Regulation 2016/679) for the purposes of recruitment and selection.
          

IT / Software / Systems: Full-stack Software Developer - Salt Lake City, Utah

 Cache   
Mavenlink is looking for talented software developers to join our Salt Lake office. We have a great team in Salt Lake, and you'll be joining in the early days, with a chance to influence the culture. You'll step into a supportive environment praised by our engineers for its focus on continuous learning. Here's how our engineering culture will support your career growth: Pair Programming "We work twice as fast and produce better code because, with two minds working together, you find solutions you wouldn't have seen if you were working by yourself." - Amanda Holl, Software Engineer Continuous Learning "I don't just want to learn so I can be the best, I want to learn so I can teach the people sitting next to me - so we can all grow." - Adam Ellsworth, Software Engineer Coaching and Mentorships "Every engineer here has a 'coach' - an active, practicing engineer providing mentorship and support to help their 'coachees' grow their careers." - Andy Leavitt, Director of Engineering Open Communication "We teach team members how to give and receive feedback. I feel like the things I say are heard and acted on, and I have an opportunity to act on them myself." - Paulette Luftig, Software Engineer Full-stack development "We have open architecture meetings that everyone is invited to, which we can do because everyone is full stack, and we all know how the pieces fit together. There are very few blind spots this way." - Maggie Sheldon, Senior Director of Product As our product and customer base grow, we're seeing interesting technical challenges.
We've recently finished:
  • Moving from Sprockets to Webpack/Yarn
  • Real-time streaming of all database events to time-sensitive application systems
  • Automated containerized staging deployment of every green developer build
Upcoming challenges include:
  • Developing a rich & sophisticated React component architecture
  • Evolution from a single Rails app to cohesive, decoupled services
  • Auto-scaled, self-healing production Kubernetes
Joining Mavenlink, we'll guide you toward the challenges that interest you. Skills & Requirements: Though we will eventually re-open our recruiting to early-career candidates, at this time we're only considering candidates with 2+ years of experience. Experience in our stack is not a requirement. We value empathy, communication, and care for our colleagues. ()
          

Accelerating cloud-native application development in the enterprise

 Cache   

Each day more and more organizations experience the benefits of cloud native development. Using products like Azure Kubernetes Service (AKS), they’re able to build distributed applications that are more resilient and dynamically scalable, while enabling portability in the cloud and at the edge. Most of all, organizations want to use Kubernetes and cloud native technology […]

The post Accelerating cloud-native application development in the enterprise appeared first on Cloudmovement.


          

Red Hat Security Advisory 2019-3722-01

 Cache   
Red Hat Security Advisory 2019-3722-01 - Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. This advisory contains the openshift-enterprise-hypershift container image for Red Hat OpenShift Container Platform 4.1.22. Issues addressed include a cross-site scripting vulnerability.
          

Load balanced Kubernetes Ingress. So metal.

 Cache   

Kubernetes has some incredible features, one of them being Ingress. Ingress can be described as a way to give external access to a Kubernetes-run service, typically over HTTP(S). This is useful when you run webapps (Grafana, Binder) in your Kubernetes cluster that need to be accessed by users across your network. Typically, Ingress integrates with automation provided by public cloud providers like GCP/GKE, AWS, Azure, Digital Ocean, etc., where the external IP and routing is done for you. I've found bare-metal Ingress configuration examples on the web to be hand-wavy at best. So what do you do when there are so many approaches and you're not sure which one to pick? You make your own. Below is how I configured my bare-metal Ingress on my CoreOS-based Kubernetes cluster to access Grafana.

Starting from the outside in, I spun up another VM (Debian) on my network, and ran haproxy on it, the only non-Kubernetes piece of the solution. This Haproxy instance is what will load balance the incoming HTTP (port 80) and HTTPS (port 443) connections to the nginx-ingress-controller pods running on the controller nodes. This is the only single point of failure in the whole chain, and one I could further improve by running multiple VMs with identical Haproxy configurations sharing a single IP via a VIP. An optimization for another time. Ports 80 and 443 load balance to ports 30080 and 30443 respectively on one of the three Kubernetes controller nodes. Why do this and not just open up those ports on the Controller nodes themselves? The kubelet on each of the three controllers, and all nodes, does not use low-number ports (<1024), since it does not run as root, or use elevated permissions. I'd rather use haproxy's proven track record of load balancing to distribute connections, and not the kubelet/kube-proxy. And given the automatic-update nature of CoreOS, if one controller node restarts for an update, the other two nodes can route Ingress requests.
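The round-robin distribution haproxy performs over those three controller nodes can be sketched in a few lines of Python. This is a toy model of the behaviour (including skipping backends that fail their health check, like haproxy's `check` option), not how haproxy itself is implemented:

```python
from itertools import cycle

# The three Kubernetes controller nodes from the haproxy backend below.
BACKENDS = ["10.10.0.125:30080", "10.10.0.126:30080", "10.10.0.127:30080"]

def make_picker(backends, is_healthy):
    """Return a function that yields the next healthy backend, round-robin."""
    rotation = cycle(backends)

    def pick():
        # Try each backend at most once per call before giving up.
        for _ in range(len(backends)):
            candidate = next(rotation)
            if is_healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backends")

    return pick

# Pretend the middle controller is down for a CoreOS update: traffic
# keeps flowing, alternating between the two survivors.
down = {"10.10.0.126:30080"}
pick = make_picker(BACKENDS, lambda b: b not in down)
print([pick() for _ in range(4)])
```

This is the same property the paragraph above relies on: when one controller reboots, the other two keep routing Ingress requests.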

proxy1% cat /etc/haproxy/haproxy.cfg
defaults
 log global
 mode tcp
 option tcplog

listen stats
 mode http
 bind 10.10.2.14:8081
 stats enable
 stats hide-version
 stats refresh 30s
 stats show-node
 stats auth haproxyadmin:haproxyadmin
 stats uri /haproxy_stats
 
frontend http-in
 bind 10.10.2.120:80
 default_backend http-out
 
backend http-out
 server corea-controller0 10.10.0.125:30080 check
 server corea-controller1 10.10.0.126:30080 check
 server corea-controller2 10.10.0.127:30080 check

frontend https-in
 bind 10.10.2.120:443
 default_backend https-out
 
backend https-out
 server corea-controller0 10.10.0.125:30443
 server corea-controller1 10.10.0.126:30443
 server corea-controller2 10.10.0.127:30443

frontend kubernetes-api-in
 bind 10.10.2.119:443
 default_backend kubernetes-api-out
 
backend kubernetes-api-out
 server corea-controller0 10.10.0.125:6443
 server corea-controller1 10.10.0.126:6443
 server corea-controller2 10.10.0.127:6443

HTTP requests for Grafana have now hit port 30080 on one of 10.10.0.{125,126,127} at the ingress-nginx-service. This service routes those requests to pods with the label 'app: nginx-ingress-controller', on port (targetPort) 80. 'Port' in the Service specification refers to the port of the internally-accessed Kubernetes service. Ports 30443 and 443 can be substituted as before for HTTPS instead of HTTP.

% kubectl get service ingress-nginx-service -o wide -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx-service NodePort 10.122.81.28 <none> 80:30080/TCP,18080:30081/TCP,443:30443/TCP 6d app=nginx-ingress-controller

% kubectl describe service ingress-nginx-service -n ingress-nginx
Name: ingress-nginx-service
Namespace: ingress-nginx
Labels: service=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"service":"ingress-nginx"},"name":"ingress-nginx-service","namespace":"ingre...
Selector: app=nginx-ingress-controller
Type: NodePort
IP: 10.122.81.28
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30080/TCP
Endpoints: 10.244.0.10:80,10.244.1.10:80,10.244.2.10:80
Port: nginxstatus 18080/TCP
TargetPort: 18080/TCP
NodePort: nginxstatus 30081/TCP
Endpoints: 10.244.0.10:18080,10.244.1.10:18080,10.244.2.10:18080
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 30443/TCP
Endpoints: 10.244.0.10:443,10.244.1.10:443,10.244.2.10:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

At this point requests are now in the Pod network and will stay there (not using externally-accessible IPs and Ports) for the remainder of the http request. One of the controllers will now access its Ingress configuration for Grafana, using the FQDN configured for this service to select where to send the http request itself.
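That host-header lookup is the essence of what the nginx ingress controller does. A toy Python stand-in, using the Grafana rule from this cluster (the logic is an illustration, not nginx's actual virtual-host matching), looks like this:

```python
# Toy stand-in for the Ingress controller's host-based routing:
# map the HTTP Host header to an internal (service, port) backend.
INGRESS_RULES = {
    "grafana.obfuscated.domain.net": ("grafana-service", 3000),
}
DEFAULT_BACKEND = ("default-http-backend", 80)

def route(host_header):
    """Pick the backend for a request, like an Ingress rule match."""
    return INGRESS_RULES.get(host_header.lower(), DEFAULT_BACKEND)

print(route("grafana.obfuscated.domain.net"))  # matched rule -> Grafana
print(route("unknown.domain.net"))             # falls through to default
```

Requests whose Host header matches no rule land on the default backend, which is exactly what `kubectl describe ing` shows below.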

% kubectl get ing -o wide --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
default grafana grafana.obfuscated.domain.net 80 3d


% kubectl describe ing grafana -n default
Name: grafana
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
grafana.obfuscated.domain.net
/ grafana-service:3000 (<none>)

Kubernetes will now route requests to service:grafana-service, destined for port 3000 on the matching pods. This is where the beauty of cloud infrastructure pays off. I can reboot the worker and controller nodes to my heart's content, and requests will continue to be serviced. You do have more than one replica of your service and ingress, right?

% kubectl describe service grafana-service -n default
 Name: grafana-service
 Namespace: default
 Labels: <none>
 Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"grafana-service","namespace":"default"},"spec":{"ports":\[{"port":80,"protocol"...
 Selector: app=grafana
 Type: ClusterIP
 IP: 10.122.184.150
 Port: <unset> 80/TCP
 TargetPort: 3000/TCP
 Endpoints: 10.244.4.8:3000,10.244.5.15:3000
 Session Affinity: None
 Events: <none>

% kubectl get pod -l app=grafana -n default -o wide
 NAME READY STATUS RESTARTS AGE IP NODE
 grafana-667fc96676-rcxdh 1/1 Running 0 3d 10.244.4.8 corea-worker1
 grafana-667fc96676-tjpkg 1/1 Running 0 3d 10.244.5.15 corea-worker2

Warts: The ingress-nginx service runs on all Kubernetes nodes (workers and controllers), which I don’t love, since I was trying to reduce the surface at which requests can be made into the Kubernetes cluster. This surface could be reduced by firewalling off those nodes/ports from the rest of the network. But this is just how Kubernetes works. A service exposed via NodePort listens on all cluster hosts (so kube-proxy can route you there). Some apps don’t play nice with the round-robin nature of http(s) requests within the ingress if you are not using any sort of session/cookie stickiness.


          

Kubernetes, CoreOS, and many lines of Python later.

 Cache   

Several months after my last post, and lots of code hacking, I can rebuild a CoreOS-based bare-metal Kubernetes cluster in roughly 20 minutes. It only took ~1300 lines of Python following Kelsey Hightower's Kubernetes the Hard Way instructions. Why? The challenge. But really, why? I like to hack on code at home, and spinning up a new VM for another Django or Golang app was pretty heavyweight, when all I needed was an easy way to push it out via a container. And with various open source projects out on the web providing easy ways to run their code, running my own Kubernetes cluster seemed like a no-brainer. From github/jforman/virthelper: First we need a fleet of Kubernetes VMs. This script builds 3 controllers (corea-controller{0,1,2}.domain.obfuscated.net) with static IPs starting at 10.10.0.125 to .127, and 5 worker nodes (corea-worker{0,1,2,3,4}.domain.obfuscated.net) beginning at 10.10.0.110. These VMs use CoreOS's beta channel, each with 2GB of RAM and 50GB of disk.

$ ./vmbuilder.py --debug create_vm --bridge_interface br-vlan10 --domain_name domain.obfuscated.net \
--disk_pool_name vm-store --vm_type coreos --host_name corea-controller --coreos_channel beta \
--coreos_create_cluster --cluster_size 3 --deleteifexists --ip_address 10.10.0.125 \
--nameserver 10.10.0.1 --gateway 10.10.0.1 --netmask 255.255.255.0 --memory 2048 \
--disk_size_gb 50 $ ./vmbuilder.py --debug create_vm --bridge_interface br-vlan10 \
--domain_name domain.obfuscated.net --disk_pool_name vm-store --vm_type coreos \
--host_name corea-worker --coreos_channel beta --coreos_create_cluster --cluster_size 5 \
 --deleteifexists --ip_address 10.10.0.110 --nameserver 10.10.0.1 --gateway 10.10.0.1 \
 --netmask 255.255.255.0 --memory 2048 --disk_size_gb 50

Once that is done, the VMs are running, but several of their services are erroring out, etcd among them. Why? They use SSL certificates for secure communication among the etcd nodes, and I decided to make that part of the kubify script below. I might revisit this later, since one should be able to have an etcd cluster up and running without expecting Kubernetes. Carrying on… From github/jforman/kubify:

$ /kubify.py --output_dir /mnt/localdump1/kubetest1/ --clear_output_dir --config kubify.conf --kube_ver 1.9.3 

Using the kubify.conf configuration file, this deploys Kubernetes version 1.9.3 to all the nodes, including Flannel (inter-node network overlay for pod-to-pod communication), the DNS add-on, and the Dashboard add-on, using RBAC. It uses /mnt/localdump1/kubetest1 as the destination directory on the local machine for certificates, kubeconfigs, systemd unit files, etc. Assumptions made by my script (and config):

  • 10.244.0.0/16 is the pod CIDR. This is the expectation of the Flannel Deployment configuration, and it was easiest to just assume this everywhere as opposed to hacking up the kube-flannel.yml to insert a different one I had been using.
  • Service CIDR is 10.122.0.0/16.
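Both CIDR assumptions are easy to sanity-check with the stdlib ipaddress module. The /24-per-node split shown here is the usual Flannel behaviour, assumed for illustration rather than taken from the script:

```python
import ipaddress

POD_CIDR = ipaddress.ip_network("10.244.0.0/16")
SERVICE_CIDR = ipaddress.ip_network("10.122.0.0/16")

# The two ranges must not overlap, or pod and service routing will clash.
assert not POD_CIDR.overlaps(SERVICE_CIDR)

# Flannel typically hands each node its own /24 slice of the pod CIDR.
per_node = list(POD_CIDR.subnets(new_prefix=24))
print(len(per_node), per_node[0], per_node[1])

# A pod IP like 10.244.1.2 then lands in the second node's slice.
assert ipaddress.ip_address("10.244.1.2") in per_node[1]
```

Carving the pod CIDR this way is also why pod IPs in the `kubectl get pods -o wide` output below group by node (10.244.7.x on one worker, 10.244.1.x on one controller, and so on).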

Things learned:

  • rkt/rktlet as the container runtime is not quite ready for prime time, or perhaps its warts are not documented enough. rktlet/issues/183 rktlet/issues/182
  • kubelets crash with an NPE kubernetes/issues/59969
  • Using cfssl for generating SSL certificates made life a lot easier than using openssl directly. There are still a ton of certificates.
  • Cross-Node Pod-to-Pod routing is still incredibly confusing, and I’m still trying to wrap my head around CNI, bridging, and other L3-connective technologies.

End Result:

$ bin/kubectl --kubeconfig admin/kubeconfig get nodes
NAME STATUS ROLES AGE VERSION 
corea-controller0.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-controller1.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-controller2.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-worker0.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-worker1.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-worker2.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-worker3.obfuscated.domain.net Ready <none> 6h v1.9.3
corea-worker4.obfuscated.domain.net Ready <none> 6h v1.9.3

$ bin/kubectl --kubeconfig admin/kubeconfig get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-dns-6c857864fb-tn4r5 3/3 Running 3 6h 10.244.7.3 corea-worker4.obfuscated.domain.net 
kube-system kube-flannel-ds-dlczz 1/1 Running 2 6h 10.10.0.127 corea-controller2.obfuscated.domain.net 
kube-system kube-flannel-ds-kc45d 1/1 Running 0 6h 10.10.0.125 corea-controller0.obfuscated.domain.net
kube-system kube-flannel-ds-kz7ls 1/1 Running 2 6h 10.10.0.111 corea-worker1.obfuscated.domain.net
kube-system kube-flannel-ds-lwlf2 1/1 Running 2 6h 10.10.0.113 corea-worker3.obfuscated.domain.net 
kube-system kube-flannel-ds-mdnv8 1/1 Running 0 6h 10.10.0.110 corea-worker0.obfuscated.domain.net 
kube-system kube-flannel-ds-q44wt 1/1 Running 1 6h 10.10.0.112 corea-worker2.obfuscated.domain.net 
kube-system kube-flannel-ds-rdmr5 1/1 Running 1 6h 10.10.0.114 corea-worker4.obfuscated.domain.net 
kube-system kube-flannel-ds-sr26s 1/1 Running 0 6h 10.10.0.126 corea-controller1.obfuscated.domain.net 
kube-system kubernetes-dashboard-5bd6f767c7-bnnkm 1/1 Running 0 6h 10.244.1.2 corea-controller1.obfuscated.domain.net

          

What I read today:

 Cache   

I’d like to (try to) keep a running tab of all the technical, and non-technical, bits of information I pick up day to day. I’m hoping it might provide some insight into what I’m interested at the time, or little tidbits of helpful information I find laying around the web. Pain(less) NGINX Ingress

Once I get my Kubernetes cluster back up at home, I want to create separate environments for promotions. Right now the deployment I have running is much more pets than cattle, and I want to change that. I want to treat each piece as completely replaceable and interchangeable, and that only happens by having a setup that is not one big snowflake that you are afraid to touch.

How we grow Junior Developers at the BBC

All of this one rang true as an SRE trying to write more code. Mentoring others, while getting mentored are crucial characteristics I feel to being part of a productive team. You can’t just sit behind your monitoring with headphones on and expect to build relationships and have impact.


          

Kubernetes, the slow way.

 Cache   

It all started when I began hearing about this container thing outside of work. I’ve been a Google SRE going on 6 years, but knowing that the way we do containers internally on Borg is probably not how the rest of the world does reliable, scalable, infrastructure. I was curious, how hard could it be to spin up a few containers and play around like I do at work? Little did I know, it would take two months, a few hours a few nights a week, to get the point where I was able to access a web service inside my home grown Kubernetes cluster. Below are the high level steps, scripts, and notes I kept during the process.

Step One: Build the CoreOS cluster.

Using my virtbuilder script, I built a five-node CoreOS VM cluster on top of a Ubuntu host. I wanted enough VMs to have quorum, with enough leftover to needlessly restart, to watch pods migrate from host to host.

$ ./vmbuilder.py create_vm --bridge_interface vlan12 --domain_name foo.local.net --disk_pool_name vm-store --vm_type coreos --host_name coreA --cluster_size 5 --coreos_create_cluster --debug --ip_address 10.10.2.121 --nameserver 10.10.2.1 --gateway 10.10.2.1 --netmask 255.255.255.0 --memory 2048

Lots of VM builds happen.

core@coreD1 ~ $ etcdctl cluster-health
member abc1234 is healthy: got healthy result from http://10.10.2.124:2379
member abc1235 is healthy: got healthy result from http://10.10.2.122:2379
member abc1236 is healthy: got healthy result from http://10.10.2.121:2379
member abc1237 is healthy: got healthy result from http://10.10.2.125:2379
member abc1238 is healthy: got healthy result from http://10.10.2.123:2379
cluster is healthy

Having a healthy etcd cluster is a prerequisite to building Kubernetes.

Step Two: Install Kubernetes

There are a ton of guides online explaining how to deploy a Kubernetes cluster on AWS or GCE, but not many on bare-metal. The ones I found for bare-metal were based on using vagrant (felt too turnkey), or minikube (what good is a single node?) to marshal the VMs. Given I already had my own custom way to deploy VMs on a host machine, I had to splice in my own workflow. I wanted to run on CoreOS given its tight integration with Docker and containers, and based most of my installation workflow on CoreOS's Kubernetes documentation. After performing many manual installs of Kubernetes on my CoreOS cluster, I wrote rudimentary shell scripts to make it a bit easier. The repository of my scripts is at github.com/jforman/kubernetes (better documentation is forthcoming). These scripts create the necessary certificates, systemd unit files, and Kubernetes manifests. The final step is to deploy them to both the master and workers. It's possible to wrap this in Ansible and do it that way, but trying to over-engineer my first rollout in some other framework felt like premature optimization before I really felt like I 'knew' the install.

core@coreD1 ~ $ ./kubectl cluster-info
Kubernetes master is running at https://10.10.2.121

core@coreD1 ~ $ ./kubectl get pods --namespace=kube-system
NAME                                  READY  STATUS   RESTARTS  AGE
kube-apiserver-10.10.2.121            1/1    Running  45        41d
kube-controller-manager-10.10.2.121   1/1    Running  12        41d
kube-proxy-10.10.2.121                1/1    Running  7         41d
kube-proxy-10.10.2.122                1/1    Running  9         41d
kube-proxy-10.10.2.123                1/1    Running  9         41d
kube-proxy-10.10.2.124                1/1    Running  9         41d
kube-proxy-10.10.2.125                1/1    Running  10        41d
kube-scheduler-10.10.2.121            1/1    Running  12        41d

Step 3: Configure Addons

Kubernetes add-ons make the whole system a lot more usable and, in my opinion, perhaps functional at all. The Dashboard provides a UI for both viewing and changing the state of the cluster. I found it invaluable in getting a sense of the interconnectedness of the concepts of Kubernetes (nodes to pods to replica sets to deployments).

Step 4: Set up Ingress

This is the step in the process where the light bulb of Kubernetes went off over my head. What is Ingress? It is a way to route external requests to services running on the Kubernetes cluster. It watches services moving around the Kubernetes cluster, and directs traffic to them internally based upon external requests. This bit of the infrastructure is what connected the box diagrams of an externally-accessible IP and port, to an internal service running in the service subnet. I used the yaml templates from the kubernetes/ingress/examples/deployment/nginx Github repo, modifying only for the namespace. Why did I modify the namespace? Ingress currently only will route to services in the same namespace as it runs. Since I run my containers in the ‘default’ namespace, and not kube-system (where I try to keep more infrastructure-type pods), I modified the templates accordingly. Then to route to my service based upon a name-based virtual host, the ingress yaml looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  namespace: default
spec:
  rules:
  - host: foo.server.localdomain.net
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-service
          servicePort: 80

Things I learned

Being able to access a command line in a busybox container in a pod on the Kubernetes cluster is very helpful. Why? It helped clear up the fact that you can’t just ping or nmap service IPs from outside the cluster, or even from the host VM. It just didn’t make sense until:

$ kubectl exec -ti busybox -- /bin/sh

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.11.0.2
Address 1: 10.11.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.11.0.1 kubernetes.default.svc.cluster.local
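The names resolved above follow Kubernetes' fixed convention of service, then namespace, then svc, then the cluster domain. A tiny helper makes the pattern explicit (the function is illustrative, and the cluster.local default is the usual convention rather than something specific to this cluster):

```python
def cluster_dns_name(service, namespace="default",
                     cluster_domain="cluster.local"):
    """Build the in-cluster DNS name Kubernetes assigns to a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# The two names resolved in the busybox pod above:
print(cluster_dns_name("kubernetes"))               # kubernetes.default.svc.cluster.local
print(cluster_dns_name("kube-dns", "kube-system"))  # kube-dns.kube-system.svc.cluster.local
```

This is why the lookups only make sense from inside the cluster: the names resolve via kube-dns, which is itself only reachable on the service network.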

The nginx-ingress-controller is the service that is actually externally accessible (to hosts NOT in the Kubernetes cluster). The ingress controller's IP is where DNS entries for a particular name-based virtually-hosted FQDN need to point. If that node gets restarted, there is the potential for the controller to move to another host, therefore breaking your FQDN-to-IP mapping. Follow-up action item: spread the nginx-ingress-controller to every node (or a pool of nodes that run as inbound proxies) and assign your DNS entry to all those IPs. That 'should' work through node reboots.


          

Google launches Skaffold in general availability

 Cache   

In a recent survey of over 5,000 enterprise companies, 58% responded that they were using Kubernetes, the open source container-orchestration system for automating app deployment, scaling, and management, in production, while 42% said they were evaluating it for future use. The momentum was a motivating force behind Google's Skaffold, a command line tool that facilitates […]

The post Google launches Skaffold in general availability appeared first on
Latest Technology News


          

Day Two Cloud 022: Day Two Cloud Scales Out And Up!

 Cache   

The Day Two Cloud podcast is getting a new co-host, Ethan Banks; and becoming a weekly show! Now you can get deep dives on cloud and infrastructure topics including Kubernetes, cloud security, design and deployment, and more every week.

The post Day Two Cloud 022: Day Two Cloud Scales Out And Up! appeared first on Packet Pushers.


          

Kasten Introduces K10 2.0 with Enhanced Security and Operational Simplicity for Data Management of Kubernetes Applications

 Cache   
New K10 release provides comprehensive security and increased operational simplicity for cloud-native data management for enterprises LOS ALTOS, CA – November 5, 2019 — /BackupReview.info/ — Kasten, a provider of cloud-native data management solutions, today announced the general availability of Kasten K10 2.0. Purpose-built for Kubernetes, K10 provides enterprise operations teams with an easy-to-use, scalable [...] Related posts:
  1. Kasten Launches K10 on Google Cloud Platform Marketplace for Data Management of Stateful Applications on Kubernetes
  2. In the Age of Kubernetes, Kasten Emerges to Reinvent Data Management for Enterprises
  3. Kasten Secures $14 Million Series A
  4. DataCore Simplifies Migration of Enterprise Applications to Kubernetes and Docker
  5. Robin.io Announces Technology Collaboration with Red Hat to Bring Advanced Data Management to Power Stateful Applications on Red Hat OpenShift

          

Red Hat drives future of Java with cloud-native, container-first Quarkus

 Cache   

Today, Red Hat and the Quarkus community announced Quarkus 1.0. Quarkus is a Kubernetes-native Java stack that is crafted from best-of-breed Java libraries and standards, and tailored for containers and cloud deployments. The overall goal of Quarkus is to bring Java into a cloud-native application development future and enable it to become a leading platform for serverless, cloud and Kubernetes environments. With Quarkus, we believe Java can be better equipped to scale in the modern application development landscape, while also improving at a faster clip.


          


Senior Software Engineer for blokapi (190003E2) at Ciklum, Kyiv

 Cache   

On behalf of blokapi, Ciklum is looking for a talented Senior Software Engineer to join the Kyiv team on a full-time basis.

About Client:
The blokapi.io platform provides enterprises with innovative new digital means of engaging with customers and other enterprises. It reduces costs and brings new efficiencies that were never available before.
Led by a successful, experienced team, the blokapi solution opens up a myriad of new digital business models for enterprises.
Our integrated platform consists of a Verified-Digital-Identity wallet, a digital Smart-Vault, ready-made ERP-to-DLT API adapters and Business-Workflow-Templates.
We are looking for an experienced software engineer to join the blokapi founding team and lead backend development efforts, building our platform from scratch.

Client’s website: www.blokapi.io

Responsibilities
• Take an active part in research of new and exciting blockchain technologies
• Design and develop server-side logic, analyze requirements and work closely with the management team
• Design and optimize scalable, robust services that will run at scale
• Be in charge of complete releases to production, including testing, monitoring and measuring results
• Collaborate with others in an agile environment
• Develop products using Golang or Scala (depending on the part of the project)

Requirements
• BSc in Computer Science or equivalent
• 5+ years of engineering work experience in the design and development of scalable production systems
• Proficiency in programming languages such as Scala/Java, Go or JavaScript, along with NoSQL DBs, high-scale architecture and design patterns, is required
• Work experience with Docker, Kubernetes or similar; good understanding of automation and CI/CD principles
• Excellent analytical and communication skills, with sensitivity to cross-cultural, cross-audience communications
• Demonstrated ability to develop creative/non-traditional solutions
• Familiar with blockchain technology, a big advantage
• Experience with web/mobile development is an advantage
• Fluent English speaker

Soft Skills
• Diligent, responsible, communicative and proactive; always passionate about what you do
• Strong & clear communication skills
• Flexible engineer, interested in new technologies and approaches

What’s in it for you?
• Possibility to join one of the most innovative projects ever
• Be a part of self-sovereign identity product development
• Very close cooperation with the client
• Possibility to propose solutions on a project
• Dynamic and challenging tasks
• Ability to influence project technologies
• Team of professionals: learn from colleagues and gain recognition of your skills

About Ciklum:
Ciklum is a top-five global Software Engineering and Solutions Company. Our 3,000+ IT professionals are located in the offices and delivery centres in Ukraine, Belarus, Poland and Spain.
As Ciklum’s employee, you’ll have the unique possibility to communicate directly with the client when working in Extended Teams. Besides, Ciklum is the place to make your tech ideas tangible. The Vital Signs Monitor for the Children’s Cardiac Center as well as Smart Defibrillator, the winner of the IoT World Hackathon in the USA, are among the cool things Ciklumers have developed.
Ciklum is a technology partner for Google, Intel, Micron, and hundreds of world-known companies. We are looking forward to seeing you as a part of our team!

Join Ciklum and “Cross the Borders” together with us!
If you are interested — please send your CV to hr@ciklum.com


          

IT / Software / Systems: Senior Software Engineer - ERSI - Las Vegas, Nevada

 Cache   
Energize Your Career! Join NV Energy, the State of Nevada's premier provider of energy. NV Energy has provided Nevada homes and businesses with safe, reliable energy for more than a century. NV Energy delivers electricity to 2.4 million Nevadans and a state tourist population of more than 40 million annually. NV Energy's mission is to be the best energy company in serving our customers, while delivering sustainable energy solutions.

Do you thrive in an environment where you can create and champion advanced technical solutions to solve really important problems? This position provides an opportunity to play a pivotal role in the evolution of our software and systems at NV Energy. We're looking for talented individuals to join our diverse team working to expand and improve a broad range of technologies based upon the fundamental aspects of resilient distributed computing. Your work will make a difference to millions of people in Nevada and help realize our vision for our customers, partners and colleagues.

Total Rewards Compensation Package Includes:
- Competitive salaries with opportunity for career advancement
- 401(k) plan with generous matching
- Robust benefits protection package including med/dent/vision, wellness program, company-paid life, disability, identity, legal and pet insurance
- Work-life balance and flexible work arrangements
- 10 company-paid holidays and a front-loaded PTO plan
- Tuition reimbursement
- Relocation assistance available

Basic Purpose
Creates and communicates technical solutions to address business problems. Works with IT and Business Leadership to promote innovative and integrated solutions across the organization and demonstrates proficiency in all aspects of the System Development Life Cycle (SDLC) through requirements gathering, project leadership, architecting, developing, testing, modifying and supporting enterprise application systems for internal and external customers as well as vendor partners.
Essential Duties and Responsibilities

TECHNOLOGY
Holds primary technical responsibility for software development and enhancements as well as system reliability of business-critical applications and integration processes.
- Acts as the technical lead for projects related to the development of new systems, architecture, applications or technology capabilities in support of business goals
- Works with stakeholders on interpretation/translation of functional requirements into system requirements
- Designs and develops web and mobile applications along with related internal and vendor partner system integrations and services, utilizing distributed computing fundamentals and reactive principles as required
- Creates appropriate technical artifacts to support development and operations support within SDLC guidelines, including application/architecture diagrams and logic flows
- Writes quality code that meets standards and delivers desired functionality using the technology selected for the project, and delivers easy-to-operate systems by performing unit, system and automated testing and post-deployment validation; coordinates user acceptance testing
- Adheres to and drives modern software engineering by applying Agile and DevOps methodologies with an iterative development approach
- Migrates or transforms legacy solutions to microservices/cloud native
- Implements architecture, solution design, and development of the core platform
- Integrates monitoring, logging and metrics frameworks into every application and platform effort
- Troubleshoots application issues by diagnosing and debugging issues within production systems and performing thorough root cause analysis
- Ensures alignment with corporate standards and strategic technology decisions; maintains and improves technology proficiency with evolving technologies to achieve desired technical and business outcomes
- Stays current on technologies, platforms, and relevant certifications
- Provides training and mentoring to other IT staff and business users at all levels of the organization
- Finds ways to spread learning across the organization (gives technical talks, presentations, etc.) and mentors lower-level engineers
- Learns, evaluates, recommends and adapts to new technologies and techniques
- Requires availability for periodic on-call responsibilities

PROJECT MANAGEMENT
Leads the evaluation, planning and implementation of applications/systems and programming needs for operating departments.
- Works closely with IT and Business Area leadership to define and implement IT-wide application and related infrastructure vision and long-term strategy in support of business objectives
- Leads software projects from department-specific to enterprise-wide and customer- and vendor-facing implementations
- Estimates projects, including assessing and mitigating risk
- Manages project budgets as well as internal and consulting resources for projects or software of any size
- Performs project planning, system analysis, software design and coding, testing, documentation, implementation and research activities as necessary for software engineering projects
- Provides technical leadership and leads proofs of concept, from development of valuation matrices through final recommendation, including hands-on execution where needed
- Develops business case(s) to recommend system solutions and architectures and performs ongoing capacity planning for critical infrastructure
- Leads RFP efforts from gathered business and system requirements
- Develops customized presentations, demonstrations, prototypes, and architecture diagrams to prove a solution's business value to technical and business stakeholders
- Conducts security architecture design reviews and threat modeling
- Leads the IT application area effort on security audits and compliance reviews
Ensures all compliance aspects of the position are known and followed; understands and complies with all policies, codes and regulations applicable to the position and company. Performs related duties as assigned.

Requirements

Essential Education, Skills, and Environment

Education and Work Experience
Bachelor's degree from an accredited school and 5 years of related progressive work experience in software development. Candidates who do not possess a bachelor's degree must have a minimum of 9 years of related work experience, with a minimum of 5 years of software development experience.

Specialized Knowledge and Skills
Demonstrated knowledge of:
- At least 5 years of experience with ESRI's GIS technology (v10 or higher) implementations in the role of programmer
- Thorough knowledge in the use of GIS software such as ArcGIS Enterprise, ArcGIS Desktop/Pro, ArcGIS Online, ArcGIS Web AppBuilder and the ArcGIS Portal environment
- Experience in designing and implementing UI components for the ArcGIS API for JavaScript spanning charting, forms, mapping controls, navigation controls, analysis tools, etc.
- Experience developing and maintaining GIS applications using JavaScript, JSON, XML, web services, the ArcGIS API, the ArcGIS Runtime SDK and Python
- Experience with developing/implementing ArcGIS Web/Portal using HTML5 and JavaScript frameworks
- Experience with developing/implementing mobile GIS field applications (Windows and iOS devices) using the ArcGIS Runtime SDK
- Experience with developing/implementing ArcGIS desktop applications using the .NET framework
- Proficiency with mobile mapping tools including ESRI's Collector app
- Experience in developing, editing and maintaining geospatial datasets and databases
- Data-centric web and mobile development; data modeling concepts
- Knowledge of ESRI utility data models aligned to North American standards will be an advantage
- Production support experience dealing with a wide spectrum of business stakeholders
- Familiarity with process integration of GIS with other utility enterprise operational systems such as IBM Maximo and ABB Service Suite
- Past experience as a technical lead or senior developer for a large-scale ESRI implementation, preferably in the electric, gas or water domains, preferred
- Modern and reactive programming concepts, including cloud-native design, microservices, containerization, cloud model architectures, and distributed high-volume computing design
- Application and service monitoring, and code instrumentation
- Working with event-driven, streaming architectures using Kafka
- Working with deployment topologies, ensuring applications are designed for resiliency and continuous availability
- Working with containers and container management such as Kubernetes
- Hands-on technical knowledge of:
  - Java + Spring Framework and the related Spring ecosystem
  - Development and deployment of Apple and Android mobile apps
  - Test-driven development, DevOps via CI/CD, automated configuration, provisioning and deployment
  - Application security and identity management and related development techniques, including secrets management
  - Relational (Oracle, SQL, PL/SQL, PostgreSQL) and NoSQL database (Couchbase) development
  - Testing automation tools and approaches

Demonstrated skills such as:
- Communicating effectively via multiple channels (written and verbal communication skills, including composing and delivering executive-level presentations) with technical and non-technical staff
- Interpersonal, analytical and problem-solving skills, initiative, and the ability to thrive under pressure and with changes in requirements
- Project management, with the ability to prioritize and handle multiple tasks and projects concurrently

Equipment and Applications
General PC and office suites, and various software applications as outlined above.

Work Environment and Physical Demands
General office environment. No special physical demands required.
Some same-day or short-duration travel to area field or business offices is required.

Compensation
Annual Salary: $101,300 (Min) to $123,500 (Mid); up to 10% Short-Term Incentive Plan opportunity at the discretion of the company. This is a non-represented position.

Benefits
Medical, Dental, Vision, Life Insurance, Wellness, Flexible Spending Accounts, Tuition Assistance, 401(k) with a Company match provision, Retirement and more; visit our Corporate website for detailed information.

Note
Depending on qualifications of applicants, this position may be filled at a lower level than that which is posted, such as Software Engineer II or Software Engineer I.
          

Kubernetes Pocket Reference

 Cache   
Kubernetes, which has established itself as the standard container platform technology, joins the "Pocket Reference" series. In recent years attention to container technology has grown, and adoption in real projects is advancing. As the first reference book devoted to Kubernetes, this book comprehensively covers kubectl commands and resources. It also covers Docker basics for beginners and an introduction to adopting Kubernetes, as well as practical usage, common errors and troubleshooting that a pure reference cannot cover. In addition, a fold-out "cheat sheet" giving a bird's-eye view of Kubernetes usage is included as an appendix. From beginners to advanced users, this is an essential book for developers working with Kubernetes.



© Googlier LLC, 2019