
          Network Break 179: Microsoft Targets Edge Computing; HCI Revenues Boom
Today's Network Break examines Microsoft's and HPE's excitement about edge computing, Intel's sale of Wind River, the boom in HCI sales, and more tech news.
          What is malware? Viruses, worms, trojans, and beyond

Malware—a blanket term for viruses, worms, trojans, and other harmful computer programs—has been with us since the early days of computing. But malware is constantly evolving and hackers use it to wreak destruction and gain access to sensitive information; fighting malware takes up much of the day-to-day work of infosec professionals.

Malware definition

Malware is short for malicious software, and, as Microsoft puts it, "is a catch-all term to refer to any software designed to cause damage to a single computer, server, or computer network." In other words, software is identified as malware based on its intended use, rather than a particular technique or technology used to build it.



          How Edge Computing Gives You an Edge Over Cloud Computing

Edge computing is a term that’s regularly coming up in technology conversations, and it is being touted as the next big thing after cloud computing. There may be a great deal of truth to that: a MarketsandMarkets report forecasts the edge computing market to grow at a CAGR of 35% and reach $6.72 billion by 2022.

So, what exactly is edge computing?

Research firm IDC describes edge computing as a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet”.

Basically, it’s processing data at the edge of a network, at or near the point of origin. This ‘micro data center’ could be the sensor device itself or a device with predefined computational power that locally processes time-sensitive data and relays results back into the system. The rest of the data is then moved to the cloud for further processing.

How is it better than cloud computing?

Edge computing brings in certain distinct benefits over cloud computing, but that requires us to first acknowledge the rising importance of IoT across industries.

IoT demands a complete ecosystem of connected sensors and devices that are continuously gathering a massive volume of data. Giving this data a round-trip to the cloud/central data centre is slow and costly. What follows is where edge computing has an ‘edge’ over cloud computing.

Lower Latency

Sending and receiving data from the cloud, especially when data centres are physically located miles apart, can be slow. The delay may be imperceptible to individual users, but it is significant for enterprises whose businesses run on the speed of data processing.

With IoT, real-time data processing has become a requirement. Sensors monitoring manufacturing lines, or cars navigating via GPS and sensor data, need to process information in real time to respond correctly to situations. Waiting for data to make the round trip to the cloud is simply not feasible.

With edge computing, critical data is processed near these IoT devices. This minimizes latency and vastly improves response times across enterprise operations.

Cost Savings

It’s predicted that by 2020 there will be over 50 billion IoT devices, collecting more than 1.44 billion data points per plant, per day. First, that is a massive amount of data to transfer over the network, which increases load. Second, even where it is feasible, it is hugely expensive: businesses must acquire additional bandwidth to move that volume of data, and then invest more in load balancing and in frequent maintenance of the network and data centres.
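A back-of-the-envelope calculation makes the scale concrete. The 100-byte size per data point below is an illustrative assumption, not a figure from the prediction:

```python
# Rough estimate of the daily transfer volume implied by the figures above.
# bytes_per_point is an assumed value (timestamp + sensor ID + reading).
points_per_plant_per_day = 1_440_000_000  # 1.44 billion data points
bytes_per_point = 100                     # assumption for illustration

daily_bytes = points_per_plant_per_day * bytes_per_point
daily_gb = daily_bytes / 1e9              # decimal gigabytes

print(f"~{daily_gb:.0f} GB per plant, per day")  # ~144 GB
```

Even at a modest per-point size, that is on the order of a hundred-plus gigabytes per plant every day, which is the load edge processing is meant to avoid.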

With edge computing, on the other hand, the majority of time-sensitive data is processed locally and relayed back to the IoT devices for further action. That leaves a manageable amount of data to transfer to the cloud, avoiding huge bandwidth expenditure.
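The local-versus-cloud split described above can be sketched in a few lines. The threshold, reading values, and summary format here are illustrative assumptions, not part of any specific product:

```python
# Minimal sketch of an edge node: act on time-sensitive readings locally,
# and forward only a compact summary to the cloud.
from statistics import mean

ALERT_THRESHOLD = 90.0  # assumed limit, e.g. a temperature in degrees C

def process_batch(readings):
    """Handle critical readings at the edge; summarize the rest for the cloud."""
    alerts = [r for r in readings if r > ALERT_THRESHOLD]  # immediate local action
    # Only this small summary crosses the network, not every raw data point.
    cloud_payload = {"count": len(readings), "mean": mean(readings), "max": max(readings)}
    return alerts, cloud_payload

alerts, payload = process_batch([72.1, 95.3, 68.0, 91.7])
print(len(alerts), payload["count"])  # 2 4
```

The design choice is the point: the full stream never leaves the edge, so the cloud link carries a fixed-size summary regardless of how many sensors feed the node.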

Reliability

With data travelling long distances to reach the cloud, there is a greater chance of it being corrupted along the way. This can cause data loss, system crashes, and financial loss from processing incorrect data.

In the case of edge computing, the path from an IoT device to the micro data centre is extremely short, so there is far less opportunity for data corruption or for network jitter to deliver packets unevenly. Data transmission is more reliable, and hence so are the resulting processing and insights.

Security

Edge computing gives businesses the option of not just processing sensitive data locally, but also storing it locally. This is a step beyond storing data in public or hybrid clouds, where it is somewhat more exposed to security risks.

Because edge computing capabilities are always close to the data source, they are almost always within premises controlled by the enterprises themselves. This means they can build and deploy custom security measures as per their standards of compliance.

Will edge computing replace cloud computing?

While edge computing definitely has its upside, one must remember that it’s a network of ‘micro’ data centres. They are great for processing time-sensitive data or storing critical data; but that is only a portion of the data being produced by enterprises. There’s still a huge amount of data that needs to be stored and processed at normal speeds. And there’s nothing better than cloud computing to achieve that.

So no, cloud computing will definitely not be replaced by edge computing. What will happen, though, is that both methods will become indispensable, complementary parts of an enterprise data strategy. It is therefore advisable for businesses to start understanding and investing in edge computing capabilities now, to get a head start in the game.

Author Bio Sriram Sitaraman: Practice Head for Analytics and Data Science at Srijan Technologies. With over 20 years of experience in designing and delivering innovative business solutions, Sriram leverages his expertise in machine learning, statistical modelling, and business intelligence to enable digital transformation in industries as diverse as healthcare, manufacturing, retail, banking, and more.


          Google Launches TensorFlow-Based Vision Recognition Kit for the Raspberry Pi Zero W

Google has released a $45 “AIY Vision Kit,” a TensorFlow-based vision recognition development kit that runs on the Raspberry Pi Zero W and uses a “VisionBonnet” board equipped with a Movidius chip.

Designed to accelerate neural networks on the device itself, the AIY Vision Kit follows the earlier AIY Projects voice/AI kit for the Raspberry Pi, which was given away with the May issue of MagPi magazine. Like the voice kit and Google’s older Cardboard VR viewer, the new AIY Vision Kit ships in a cardboard enclosure. Unlike the Cloud Vision API, which was demonstrated in 2015 on a Raspberry Pi-based GoPiGo robot, it runs entirely on local processing power and requires no cloud connection. The AIY Vision Kit is available for pre-order now at $45 and is due to ship in December.

 

The AIY Vision Kit, fully assembled (left), and the Raspberry Pi Zero W

Aside from the required Raspberry Pi Zero W single-board computer (built around a 1GHz, ARM11-based Broadcom BCM2835 SoC), the kit’s main processing component is Google’s new VisionBonnet RPi accessory board. The VisionBonnet pHAT board uses a Movidius MA2450, a version of the Movidius Myriad 2 VPU. On the VisionBonnet, the processor runs Google’s open source TensorFlow machine learning library for neural networks, enabling visual processing at up to 30 frames per second.

The AIY Vision Kit requires a user-supplied Raspberry Pi Zero W, a Raspberry Pi Camera v2, and a 16GB microSD card for downloading the Linux-based OS image. The kit includes the VisionBonnet, an RGB arcade-style button, a piezo speaker, a wide-angle lens kit, and the cardboard enclosure that wraps around everything, plus cables, standoffs, mounting nuts, and connecting components.

 

AIY Vision Kit components (left) and the VisionBonnet accessory board

Three neural network models are available: a general-purpose model that recognizes 1,000 common objects; a face-detection model that scores “joy” on a scale from “sad” to “laughing”; and a model that detects whether an image contains a dog, a cat, or a person. The 1,000-object model derives from Google’s open source MobileNets, a family of TensorFlow-based computer vision models designed for resource-constrained mobile and embedded devices.

MobileNet models are low-latency, low-power, and parameterized to meet the resource constraints of different use cases. Google says the models can be used to build classification, detection, embedding, and segmentation applications. Earlier this month, Google released a developer preview of TensorFlow Lite, a mobile-friendly library for Android and iOS devices that is compatible with MobileNets and the Android Neural Networks API.

AIY Vision Kit packaging

Beyond the three supplied models, the AIY Vision Kit provides basic TensorFlow code and a compiler, so users can develop their own models. In addition, Python developers can write new software to customize the RGB button colors, the piezo element’s sounds, and the 4x GPIO pins on the VisionBonnet, which can drive additional lights, buttons, or servos. Potential models include recognizing food, opening a dog door based on visual input, sending a text message when your car leaves the driveway, or playing particular music based on a recognized person’s facial expression.

 

Myriad 2 VPU block diagram (left) and reference board

The Movidius Myriad 2 processor delivers trillions of floating-point operations per second within a nominal 1 Watt power envelope. Before Movidius was acquired by Intel, the chip first appeared on the Project Tango reference platform, and it powered the Ubuntu-driven, USB-stick Fathom neural network compute stick that Movidius debuted in May 2016. According to Movidius, the Myriad 2 is already “in millions of devices on the market.”

Further information

The AIY Vision Kit can be pre-ordered from Micro Center for $44.99, with shipping expected in early December (2017). For more information, see the AIY Vision Kit announcement on the Google Blog, as well as the Micro Center shopping page.

via: http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/

Author: Eric Brown. Translator: qhwdw. Proofreader: wxy.

This article was originally translated by LCTT and proudly presented by Linux中国 (Linux.cn).


          A Quantum Computing Startup Tries to Live Up to the Hype
No quantum computer has achieved "quantum advantage," so Rigetti Computing wants everyone to just lower their expectations.
          Book Lovers Day Shortbread
I'm on my 3rd Kindle, a used 6" Oasis that I snagged for cheap. It's so much better than a "real" book.
PC hardware and computing

Intel SSD 660p 1 TB SSD review @ PC Perspective
Seasonic PRIME Ultra 850 W power supply review @ HardOCP
Intel SSD 660p review @ HotHardware
Kingston UV500 960 GB SSD review @ KitGuru
Intel SSD 660p 1 TB SSD review with QLC NAND flash @ Legit Reviews
The Intel SSD 660p SSD review @ AnandTech
Games, culture, and VR

The one huge problem with Dan Simmons' sci-fi mystery Hyperion @ Quarter To Three



          Java Front Office Developer
NY-NEW YORK CITY, A top financial services firm is seeking a strong Java Developer to help develop their next-generation trading systems platform. Qualifications: 3+ years of commercial development experience in Java; must have Java 1.8 experience; strong software development skills with J2EE technologies using Java, Spring framework, Hibernate, JBoss container, FIX protocol, Distributed Grid Computing; must have exp
          Plant Assistant - Pete Lien & Sons, Inc - Frannie, WY
Complex Computing and Cognitive Thinking Y. An hourly employee who at the direction of the Plant Operator provides plant operational support duties as requested...
From Pete Lien & Sons, Inc - Wed, 08 Aug 2018 23:21:23 GMT - View all Frannie, WY jobs
          Windows System Engineer
FL-Tampa, Requirements: Job Summary – Windows System Engineer Manages systems and infrastructure components for mission-critical computing environments under occasional guidance. Provides support services for systems administration, networking, performance tuning, monitoring and capacity planning while adhering to enterprise ITIL process and procedures. Builds infrastructure to support business environment.
          Senior Information Security Consultant - Network Computing Architects, Inc. - Bellevue, WA
With the ability to interface and communicate at the executive level, i.e. CIO's, CTO's and Chief Architects....
From Network Computing Architects, Inc. - Mon, 11 Jun 2018 23:15:53 GMT - View all Bellevue, WA jobs
          Senior Software Developer - Chemical Computing Group - Montréal, QC
Plan to become an expert in SVL, the scientific vector language, and to also acquire knowledge of computational and medicinal chemistry and biologics....
From Chemical Computing Group - Sat, 14 Jul 2018 09:14:18 GMT - View all Montréal, QC jobs
          Scientific Software Developer - Chemical Computing Group - Montréal, QC
Plan to become an expert in SVL, the scientific vector language, and to deepen or acquire expertise across the domains of computational and medicinal chemistry...
From Chemical Computing Group - Sat, 14 Jul 2018 09:14:16 GMT - View all Montréal, QC jobs
          Fundamental Challenges of Public Blockchains (51)
2018-08-10 05:53:49

8. Quantum computing threat

One of the looming threats to cryptocurrency and cryptography is the issue of quantum computers. Although quantum comp..
          Blockchain firm Soluna to build 900MW wind farm in Morocco: CEO
Blockchain company Soluna plans to build a 900-megawatt wind farm to power a computing center in Dakhla in the Morocco-administered Western Sahara, its chief executive John Belizaire said in an interview.

          Facilities Systems Administrator - Newgistics, Inc. - Grapevine, TX
Achieve functional expertise with computing systems in use at Newgistics while acquiring and maintaining current knowledge of relevant product offerings and...
From Newgistics, Inc. - Fri, 13 Jul 2018 15:58:28 GMT - View all Grapevine, TX jobs
          Supply Chain Drones and Fog Computing
Given the high costs associated with almost all forms of delivery—shipping, trucking and aviation—it’s not surprising that industries are exploring how drones can augment traditional delivery methods to reduce costs.
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY
Company Overview: Do you want to be part of a fast paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Sun, 29 Jul 2018 06:18:46 GMT - View all Cheyenne, WY jobs
          Senior Reverse Engineer - Irdeto - Ottawa, ON
Our Security Assurance team is an Ethical Hacking group within Irdeto bringing its considerable knowledge of the dark corners of computing to bear against...
From Irdeto - Sat, 23 Jun 2018 08:49:24 GMT - View all Ottawa, ON jobs
          Lead Times Are A Risk, But ON Semiconductor Seems Undervalued
Can I really complain about the performance of ON Semiconductor (ON) over the past three months or on a year-to-date basis when the shares are up 50% over the past year and have thumped not only the SOX, but peers like Texas Instruments (TXN), Infineon (OTCQX:IFNNY), and STMicroelectronics (STM)? Even so, these shares haven’t done so well lately, and I believe that’s largely due to concerns that rising lead times are signaling some weak orders and weaker revenue in the not-so-distant future.

Maybe this time will be different and the industry will navigate back to more normal lead times without major order/revenue disruptions. I don’t like to count on “maybe it will be different”, though, and the awful performance of Renesas (OTCPK:RNECY) highlights how unforgiving the market can be when companies go through an “adjustment phase”. ON Semiconductor shares do look undervalued and I do like the company’s long-term position in markets like auto and industrial and parts of communications and computing, but the risk of near-term turbulence is something to consider if you’re the type of investor who hates short-term pain in the pursuit of long-term gain.

          Adjunct Instructor, Adult Basic Education-English as a Second Language (ESL) - Laramie County Community College - Laramie, WY
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products... $23.19 an hour
From Laramie County Community College - Thu, 02 Aug 2018 00:37:52 GMT - View all Laramie, WY jobs
          Adjunct Instructor, Chemistry - Laramie County Community College - Cheyenne, WY
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Sat, 14 Jul 2018 06:37:29 GMT - View all Cheyenne, WY jobs
          Adjunct Instructional Faculty, Mathematics - Laramie County Community College - Cheyenne, WY
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Sat, 14 Jul 2018 06:37:24 GMT - View all Cheyenne, WY jobs
          Radiography Adjunct Instructor - Laramie County Community College - Cheyenne, WY
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Wed, 11 Jul 2018 06:37:25 GMT - View all Cheyenne, WY jobs
          Adjunct Instructor Pool, Communication - Laramie County Community College - Cheyenne, WY
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Thu, 05 Jul 2018 06:38:22 GMT - View all Cheyenne, WY jobs
          SkyScale: GPU Cloud Computing with a Difference

In this guest post, Tim Miller, president of SkyScale, covers how GPU cloud computing is on the fast track to crossing the chasm to widespread adoption for HPC applications. "Two good examples of very different markets adopting GPU computing and where cloud usage makes sense are artificial intelligence and high quality rendering."

The post SkyScale: GPU Cloud Computing with a Difference appeared first on insideHPC.


          Amazon partners with L.A. community colleges for cloud computing program

Starting as early as this fall, students around Los Angeles will have the opportunity to learn to code in one of the biggest software growth areas: cloud computing, an increasingly popular online-based technology that is used for data analytics and file storage.

That’s due to a partnership announced...


          Field Application Engineer (GPU) - Seattle - 56925 - Advanced Micro Devices, Inc. - Bellevue, WA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Thu, 28 Jun 2018 07:32:28 GMT - View all Bellevue, WA jobs
          Field Application Engineer ( Data Center) - Seattle -56141 - Advanced Micro Devices, Inc. - Bellevue, WA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Fri, 22 Jun 2018 07:32:56 GMT - View all Bellevue, WA jobs
          Field Applications Engineer - 56928 - Advanced Micro Devices, Inc. - Morrisville, PA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Fri, 06 Jul 2018 07:33:38 GMT - View all Morrisville, PA jobs
          Field Applications Engineer (GPU) - 67945 - Advanced Micro Devices, Inc. - Santa Clara, CA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Mon, 23 Jul 2018 19:32:41 GMT - View all Santa Clara, CA jobs
          Field Application Engineer (Data Center) - Santa Clara - 56924 - Advanced Micro Devices, Inc. - Santa Clara, CA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Thu, 26 Apr 2018 01:39:11 GMT - View all Santa Clara, CA jobs
          Security Leftovers
  • Voting By Cell Phone Is A Terrible Idea, And West Virginia Is Probably The Last State That Should Try It Anyway

    So we've kind of been over this. For more than two decades now, we've pointed out that electronic voting is neither private nor secure. We've also noted that despite this decades-long conversation, many of the vendors pushing this solution are still astonishingly bad not only at securing their products, but at acknowledging that nearly every reputable security analyst and expert has warned that it's impossible to build a secure, fully electronic voting system, and that if you're going to do so anyway, you at the very least need to include a paper-trail system that's not accessible via the internet.

  • Dell EMC Data Protection Advisor Versions 6.2 – 6.5 found Vulnerable to XML External Entity (XEE) Injection & DoS Crash

    An XML External Entity (XEE) injection vulnerability has been discovered in Dell EMC Data Protection Advisor versions 6.4 through 6.5. The vulnerability is found in the REST API, and it could allow an authenticated remote attacker to compromise affected systems by reading server files or causing a Denial of Service (DoS) crash via maliciously crafted Document Type Definitions (DTDs) in an XML request.
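The pattern described above can be illustrated with a generic sketch (not Dell EMC's code): a malicious DTD declares an entity pointing at a server-side file, and a parser that resolves it would leak that file's contents into the parsed document. Python's stdlib ElementTree refuses to expand external entities, so the attack fails here:

```python
# Generic XXE/XEE illustration. The file path in the SYSTEM entity is the
# classic example payload; a vulnerable parser would inline its contents.
import xml.etree.ElementTree as ET

malicious = """<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<data>&xxe;</data>"""

try:
    leaked = ET.fromstring(malicious).text
except ET.ParseError as exc:
    leaked = None  # the stdlib parser rejects the external entity reference
    print("parser rejected external entity:", exc)
```

The general mitigation is the same in any language: configure the XML parser to disallow DTDs or external entity resolution before parsing untrusted input.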

  • DeepLocker: Here’s How AI Could ‘Help’ Malware To Attack Stealthily

    By this time, we have realized how artificial intelligence is a boon and a bane at the same time. Computers have become capable of performing things that human beings cannot. It is not tough to imagine a world where AI could program human beings, a scenario that recent sci-fi television series have made easy to picture.

  • DeepLocker: How AI Can Power a Stealthy New Breed of Malware

    Cybersecurity is an arms race, where attackers and defenders play a constantly evolving cat-and-mouse game. Every new era of computing has served attackers with new capabilities and vulnerabilities to execute their nefarious actions.

  • DevSecOps: 3 ways to bring developers, security together

    Applications are the heart of digital business, with code central to the infrastructure that powers it. In order to stay ahead of the digital curve, organizations must move fast and deploy code quickly, which unfortunately is often at odds with stability and security.

    With this in mind, where and how can security fit into the DevOps toolchain? And, in doing so, how can we create a path for successfully deterring threats?

  • Top 5 New Open Source Security Vulnerabilities in July 2018 [Ed: Here is Microsoft's partner WhiteSource attacking FOSS today by promoting the perception that "Open Source" = bugs]
  • DarkHydrus Relies on Open-Source Tools for Phishing Attacks [Ed: I never saw a headline blaming "proprietary tools" or "proprietary back door" for security problems, so surely this author is just eager to smear FOSS]
  • If for some reason you're still using TKIP crypto on your Wi-Fi, ditch it – Linux, Android world bug collides with it [Ed: Secret 'standards' of WPA* -- managed by a corporate consortium -- not secure, still...]

    It’s been a mildly rough week for Wi-Fi security: hard on the heels of a WPA2 weakness comes a programming cockup in the wpa_supplicant configuration tool used on Linux, Android, and other operating systems.

    The flaw can potentially be exploited by nearby eavesdroppers to recover a crucial cryptographic key exchanged between a vulnerable device and its wireless access point – and decrypt and snoop on data sent over the air without having to know the Wi-Fi password. wpa_supplicant is used by Linux distributions and Android, and a few others, to configure the Wi-Fi for computers, gadgets, and handhelds.

  • Linux vulnerability could lead to DDoS attacks



          3 Reasons HCI Adoption Is on the Rise for Small and Medium Businesses
juliet.vanwage… Thu, 08/09/2018 - 10:16

For larger businesses with dedicated IT teams, the wonders of hyperconverged infrastructure are well known. But now, small and medium businesses are starting to embrace the ways HCI can work to offer SMBs an operational edge. In fact, a new study by Techaisle Research shows that HCI adoption is poised to double over the next year and a half as the benefits become apparent, particularly as it pertains to digital transformation.

A blog post from Techaisle Research about the study notes that HCI adoption allows SMBs to grow “in lockstep with new business or customer demands,” and is viewed as an important element of business growth.

“The SMBs that are fully committed to digital transformation are on the fastest path to adoption, as HCI is an important element of a future-ready, resource-sensitive IT approach,” the blog post notes.

HCI platforms, much like converged infrastructure platforms, combine computing, storage, networking and virtualization capabilities into a single appliance. With HCI, however, all of the components are pre-integrated and controlled by one software-management layer. Moreover, as all components are provided by a single vendor, it offers IT managers a level of control and visibility they can’t normally achieve with piecemeal infrastructure.

As SMBs begin gravitating toward solutions from vendors such as Cisco Systems, Dell EMC, Hewlett Packard Enterprise, Nutanix and VMware, among others, these businesses are coming to realize several important benefits that can give them the operational edge and simplicity they crave.


1. HCI Cuts Costs and Resource Needs

While adopting HCI won’t immediately translate to cost savings, implementing it effectively in line with business needs can save businesses bundles in the long run.

At Midwest Acoust-A-Fiber, for example, adopting a Scale Computing HCI solution saved the company about 50 percent over the cost of a traditional architecture. Moreover, it saved the company from needing to hire additional personnel.

“People just keep getting more expensive, and hardware gets cheaper. So, now if we need more performance we can just throw more hardware at the problem,” Systems Administration Manager Daniel Penrod told BizTech in a previous interview.

2. Hyperconvergence Eases IT Management

For companies with limited staff and resources, HCI can deliver a simpler way to manage all IT assets.

“HCI’s integrated, software-defined architecture provides SMB IT staff with an ability to deliver sophisticated capabilities without needing to maintain an elaborate web of resource connections,” the Techaisle blog post notes.

In a prime example, PreCheck, a healthcare background-check company, tapped HCI to ease migration and ongoing management, Robert Wilcox, PreCheck's infrastructure manager, told BizTech.

"It's really simplified the environment and reduced the overhead of the hardware. We've gone from a three-tiered model to one tier," he said. "It's a complete ecosystem, and if I need to scale up, I can just buy another node and slide it in. It gets picked up by the system, and everything is done automatically."

3. HCI Improves and Simplifies Small Business Scalability

Moreover, since HCI is an integrated, modular solution, IT can scale capacity as needed without investing in new resources or buying capacity in advance, Techaisle notes.

Dave Wiley, IT manager at Mayfran International, a Cleveland-based manufacturer of machine tool products, material-handling equipment and filtration systems, noticed an improvement in scalability immediately within the first year of investing in a Cisco HyperFlex HX hyperconverged system.

“We needed more disk capacity, so we expanded from three servers to four. Scalability proved to be smooth with HCI,” Wiley told BizTech in a previous interview.

And the IT manager wasn’t the only one who noticed the change.

“End users could tell immediately that we installed faster servers,” he said.


          Best Buy Sales Consultant – Computing and DI - Best Buy - Macon, GA
Use innovative training tools to stay current, confident and complete, driving profitable growth and achieving individual and department goals....
From Best Buy - Sat, 04 Aug 2018 04:27:05 GMT - View all Macon, GA jobs
          Service Desk Analyst - Compugen Inc - Montréal, QC
Ability to conduct research into a wide range of computing issues as required. Service Desk Analyst....
From Compugen Inc - Sat, 16 Jun 2018 02:12:03 GMT - View all Montréal, QC jobs
          Multiscale Methods in Computational Mechanics: Progress and Accomplishments
Multiscale Methods in Computational Mechanics: Progress and Accomplishments by René de Borst
English | PDF | 2011 | 451 Pages | ISBN : 9048198089 | 26.69 MB


Many features in the behaviour of structures, materials and flows are caused by phenomena that occur at one to several scales below common levels of observation. Multiscale methods account for this scale dependence: they either derive properties at the level of observation by repeated numerical homogenization of more fundamental physical properties defined several scales below (upscaling), or they devise concurrent schemes where those parts of the domain that are of interest are computed with a higher resolution than parts that are of less interest or where the solution is varying only slowly. This work is the result of a sustained German-Dutch cooperation. Written by internationally leading experts in the field, it gives a modern, up-to-date account of recent developments in computational multiscale mechanics. Both upscaling and concurrent computing methodologies are addressed for a range of application areas in computational solid and fluid mechanics: scale transitions in materials, turbulence in fluid-structure interaction problems, multiscale/multilevel optimization, and multiscale poromechanics.
          Principal Consultant, End-User Computing - Compugen Inc - Winnipeg, MB
In addition, your day-to-day activities will involve collaborating with other solution specialists and practices while providing technical quality assurance and...
From Compugen Inc - Sat, 16 Jun 2018 02:10:57 GMT - View all Winnipeg, MB jobs
          Magic Leap Makes Shutterstock Images & Video Available for Gallery & Screens Apps

The narrative that Magic Leap has woven for the Magic Leap One has focused on the freedom of spatial computing versus dated modes of 2D screens. So a partnership with Shutterstock, a company that licenses stock photos and videos to creators, is a bit unexpected. But a closer look at the details makes the partnership clearer. Magic Leap is integrating Shutterstock's photo and video library into the Gallery and Screens apps in the Lumin OS. In other words, these are the two apps that bring 2D content into the mixed reality environment. Users can also stock their photo libraries in the Gallery...


          Microsoft threatens to pull Gab services over anti-Semitic posts

The action shows how the tech industry’s efforts to tackle hate speech online are extending beyond big social-media services to cloud-computing companies that provide web-hosting services to smaller sites.
          Latest ClearFog SBC offers four GbE ports and a 10GbE SFP+ port

SolidRun’s “ClearFog GT 8K” networking SBC runs Ubuntu on a network-virtualization-enabled quad-core Cortex-A72 Armada A8040 SoC and offers up to 16GB DDR4, 4x GbE ports, a WAN port, a 10GbE SFP+ port, and 3x mini-PCIe slots.

SolidRun has updated its ClearFog line of Linux-driven router SBCs with a ClearFog GT 8K model designed for high-end edge computing, virtual customer premise equipment (vCPE), network functional virtualization (NFV), network security, and general networking duty. The SBC runs Linux kernel 4.4.x, Ubuntu 16.04, and Google IoT Core on Marvell’s quad-core, up to 2GHz Cortex-A72 Armada A8040 SoC. Models are available with 8GB eMMC ($209), 128GB eMMC ($304), 8GB eMMC with 16GB RAM ($526), and 128GB eMMC with 16GB RAM ($621).



          Amazon partners with L.A. community colleges for cloud computing program

Starting as early as this fall, students around Los Angeles will have the opportunity to learn to code in one of the biggest software growth areas: cloud computing, an increasingly popular online-based technology that is used for data analytics and file storage.

That’s due to a partnership announced...


          Computing’s Hippocratic oath is here

Computing professionals are on the front lines of almost every aspect of the modern world. They’re involved in the response when hackers steal the personal information of hundreds of thousands of people from a large corporation. Their work can protect–or jeopardize–critical infrastructures, such as electrical grids and transportation lines. And the algorithms they write may determine who gets a job, who is approved for a bank loan, or who gets released on bail.

Technological professionals are the first, and last, lines of defense against the misuse of technology. Nobody else understands the systems as well, and nobody else is in a position to protect specific data elements or ensure that the connections between one component and another are appropriate, safe, and reliable. As the role of computing continues its decades-long expansion in society, computer scientists are central to what happens next.

That’s why the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery, of which I am president, has issued a new code of ethics for computing professionals. And it’s why ACM is taking other steps to help technologists engage with ethical questions.


Serving the public interest

A code of ethics is more than just a document on paper. There are hundreds of examples of the core values and standards to which every member of a field is held–including for organist guilds and outdoor-advertising associations. The world’s oldest code of ethics is also its most famous: The Hippocratic oath that medical doctors take, promising to care responsibly for their patients.

I suspect that one reason for the Hippocratic oath’s fame is how personal medical treatment can be, with people’s lives hanging in the balance. It’s important for patients to feel confident their medical caregivers have their interests firmly in mind.

Technology is, in many ways, similarly personal. In modern society, computers, software, and digital data are everywhere. They’re visible in laptops and smartphones, social media and video conferencing, but they’re also hidden inside the devices that help manage people’s daily lives, from thermostats to timers on coffeemakers. New developments in autonomous vehicles, sensor networks, and machine learning mean computing will play an even more central role in everyday life in coming years.


A changing profession

As the creators of these technologies, computing professionals have helped usher in the new and richly vibrant rhythms of modern life. But as computers become increasingly interwoven into the fabric of life, we in the profession must personally recommit to serving society through ethical conduct.

ACM’s last code of ethics was adopted in 1992, when many people saw computing work as purely technical. The internet was in its infancy and people were just beginning to understand the value of being able to aggregate and distribute information widely. It would still be years before artificial intelligence and machine learning had applications outside research labs.

Today, technologists’ work can affect the lives and livelihoods of people in ways that may be unintended, even unpredictable. I’m not an ethicist by training, but it’s clear to me that anyone in today’s computing field can benefit from guidance on ethical thinking and behavior.


Updates to the code

ACM’s new ethics code has several important differences from the 1992 version. One has to do with unintended consequences. In the 1970s and 1980s, technologists built software or systems whose effects were limited to specific locations or circumstances. But over the past two decades, it has become clear that as technologies evolve, they can be applied in contexts very different from the original intent.

For example, computer vision research has led to ways of creating 3D models of objects–and people–based on 2D images, but it was never intended to be used in conjunction with machine learning in surveillance or drone applications. The old ethics code asked software developers to be sure a program would actually do what they said it would. The new version also exhorts developers to explicitly evaluate their work to identify potentially harmful side effects or potential for misuse.

Another example has to do with human interaction. In 1992, most software was being developed by trained programmers to run operating systems, databases, and other basic computing functions. Today, many applications rely on user interfaces to interact directly with a potentially vast number of people. The updated code of ethics includes more detailed considerations about the needs and sensitivities of very diverse potential users–including discussing discrimination, exclusion, and harassment.

More and more software is being developed to run with little or no input or human understanding, producing analytical results to guide decision making, such as when to approve bank loans. The outputs can have completely unintended social effects, skewed against whole classes of people–as in recent cases where data-mining predictions of who would default on a loan showed biases against people who seek longer-term loans or live in particular areas. There are also the dangers of what are called “false positives,” when a computer links two things that shouldn’t be connected–as when facial-recognition software recently matched members of Congress to criminals’ mug shots. The revised code exhorts technologists to take special care to avoid creating systems with the potential to oppress or disenfranchise whole groups of people.


Living ethics in technology

The code was revised over the course of more than two years, with input from ACM members, people outside the organization, and even people outside the computing and technological professions. All of these perspectives made the code better. For example, a government-employed weapons designer asked whether that job inherently required violating the code; the wording was changed to clarify that systems must be “consistent with the public good.”

Now that the code is out, there’s more to do. ACM has created a repository for case studies, showing how ethical thinking and the guidelines can be applied in a variety of real-world situations. The group’s “Ask An Ethicist” blog and video series invites the public to submit scenarios or quandaries as they arise in practice. Work is also underway to develop teaching modules so that concepts can be integrated into computing education from primary school through university.

Feedback has been overwhelmingly positive. My personal favorite was the comment from a young programmer after reading the code: “Now I know what to tell my boss if he asks me to do something like that again.”

The ACM Code of Ethics and Professional Conduct begins with the statement “Computing professionals’ actions change the world.” We don’t know if our code will last as long as the Hippocratic oath. But it highlights how important it is that the global computing community understands the impact our work has–and takes seriously our obligation to the public good.

Cherri M. Pancake is Professor Emeritus of Electrical Engineering & Computer Science at Oregon State University. This post originally appeared on The Conversation.

          GGW #160: Jon’s Nook
Yep, it’s true, Jon entered the world of tablet computing with the Nook!  AND he talks about it!  We also cover:  Portal 2, Nintendo, Steve Jobs, Google, Apple, Comcast, DRM, Console Gaming, President Obama, Cisco, Flip Cameras, T-Mobile and much much more! Remember Audio posts first, then Video! 🙂 Special thanks to: Jon Kessler,  Joseph […]
          Principal Consultant, End-User Computing - Compugen Inc - Montréal, QC
Principal Consultant, End-User Computing Overview This role is a bilingual senior pre-sales consultant within Compugen’s Advanced Solutions award winning End...
From Compugen Inc - Sat, 16 Jun 2018 02:10:57 GMT - View all Montréal, QC jobs
          Principal Consultant, End-User Computing - Compugen Inc - Calgary, AB
Overview This role is a senior pre-sales consultant within Compugen’s Advanced Solutions award winning End-User Computing Practice. The Principal Consultant...
From Compugen Inc - Sat, 16 Jun 2018 02:10:57 GMT - View all Calgary, AB jobs
          Why Red Hat Invested $250M in CoreOS to Advance Kubernetes

For the last three years or so, Red Hat has been on a collision course with CoreOS, with both firms aiming to grow their respective Kubernetes platform. On Jan. 30, the competition between the two firms ended, with CoreOS agreeing to be acquired by Red Hat in a $250 million deal.

CoreOS didn't start out as a Kubernetes platform vendor, but then again neither did Red Hat. CoreOS' original innovations were the etcd distributed key-value store, a purpose-built container Linux operating system (originally known as CoreOS Linux), and the company's Fleet platform that enabled Docker containers to easily be run as a cluster. In a 2017 video interview with ServerWatch, CoreOS co-founder and CTO Brandon Philips explained why his company moved on from Fleet and embraced Kubernetes with its Tectonic platform.

Red Hat's OpenShift platform was originally built on technology acquired from Platform-as-a-Service vendor Makara in 2010. Red Hat entirely re-worked the platform for its 3.0 release in 2015, re-basing it on Docker and Kubernetes.

While Red Hat OpenShift and CoreOS Tectonic are both based on Kubernetes, they were highly competitive with each other. Though that's not how Red Hat sees it.

"CoreOS' existing commercial products are complementary to existing Red Hat solutions," Matt Hicks, senior vice president, engineering, Red Hat, told ServerWatch . "Our specific plans and timeline around integrating products and migrating customers to any combined offerings will be determined over the coming months."

Hicks said it is Red Hat's belief that CoreOS customers will benefit from industry-leading container and Kubernetes solutions, a broad portfolio of enterprise open source software, world-class support and an extended partner network.

CoreOS had been leading the development of the rkt container runtime, a rival to the Docker-backed containerd runtime. Red Hat has its own effort known as CRI-O, an implementation of the Kubernetes Container Runtime Interface that works with OCI-compatible runtimes. CRI-O 1.0 was released in October 2017.

"rkt has a sustaining community within the Cloud Native Computing Foundation (CNCF) and that won't change," Hicks said. " Red Hat and CoreOS are both committed to furthering the standardization of key container standards to further enterprise adoption, as evidenced by our leadership positions within OCI. Specific product-level decisions will come in the following weeks around future investments."

Why Now?

Red Hat and CoreOS were actively competing against each other in the market. Red Hat CEO Jim Whitehurst has made multiple comments in recent months about the financial success OpenShift has had.

"If you believe containerized applications will be kind of how applications are developed in the future, it will be a substantial opportunity," Whitehurstsaidin September 2017. "There is a lot of value in it [OpenShift], because it includes RHEL, it includes a fully supported life cycle Kubernetes and a whole set of management tools, and then, obviously, above that a whole developer tool chain."

Now with CoreOS as part of Red Hat, the value in OpenShift can potentially be expanded even further. Hicks said that CoreOS can expand Red Hat’s technology leadership in containers and Kubernetes and enhance core platform capabilities in OpenShift, Red Hat Enterprise Linux and Red Hat’s integrated container portfolio.

"Bringing CoreOS’s technologies to the Red Hat portfolio can help us further automate and extend operational management capabilities for OpenShift administrators and drive greater ease of use for end users building and managing applications on our platform," Hicks said.

Hicks added that CoreOS’s offerings complement Red Hat’s container solutions in a number of ways:

Tectonic and its investment in the Kubernetes project that it is based on are complementary to Red Hat OpenShift and Red Hat’s own investments in Kubernetes. CoreOS can further extend Red Hat’s leadership and influence in the Kubernetes upstream community and also bring new enhancements to Red Hat OpenShift around automated operations and management.

Container Linux and its investment in container-optimized Linux and automated “over the air” software updates are complementary to Red Hat Enterprise Linux, Red Hat Enterprise Linux Atomic Host and Red Hat’s integrated container runtime and platform management capabilities. Red Hat Enterprise Linux’s content, the foundation of our application ecosystem, will remain our only Linux offering. Meanwhile, some of the delivery mechanisms pioneered by Container Linux will be reviewed by a joint integration team and reconciled with Atomic.

Quay brings expanded registry capabilities that can both enhance OpenShift’s integrated registry component and the Red Hat Container Catalog and be used as a standalone component.

In the final analysis, with Red Hat's acquisition of CoreOS, the big shift is that there is one less competitor in the Kubernetes landscape, and the biggest player just got bigger.

Sean Michael Kerner is a senior editor at ServerWatch and InternetNews.com. Follow him on Twitter @TechJournalist.


          This week in Usability & Productivity, part 26

This was quite a bugfixy week in KDE’s Usability and Productivity initiative, but we managed to squeeze in a cool new feature! See for yourself:

New Features

Dolphin now has a “Share” menu just like the one in Spectacle and Okular (Nicolas Fella, KDE Applications 18.08.0):


Bugfixes

- Folder view widgets that have been placed in a panel once again behave correctly when clicked (Eike Hein, KDE Plasma 5.13.3)
- With compositing off, the default panel now covers the entire bottom of the screen properly (Vlad Zagorodniy, KDE Frameworks 5.48)
- GTK windows without client-side decorations (i.e. those that use the standard KWin-drawn server-side decorations, like the titlebar and shadow) once again have window decorations on Wayland (David Edmundson, GTK 3.24)
- Devices whose size cannot be determined no longer show up as “0 B Hard Drive” in the Places panel (Kai Uwe Broulik, KDE Frameworks 5.48)
- Discover no longer shows packages removed via the terminal as still being installed (Aleix Pol, KDE Plasma 5.12.6)
- When viewing the Properties dialog for a file on the desktop, the “Share” tab is now visible (Kai Uwe Broulik, 18.08.0)
- search:/ and timeline:/ entries now work again from the Kickoff Application Launcher menu (Kai Uwe Broulik, KDE Plasma 5.12.6)

UI Polish & Improvement

- When KRunner finds and displays an entry from your Places panel, the name of its category appears to the right of it (Kai Uwe Broulik, KDE Plasma 5.12.6)
- Modernized the layout of some pages in Dolphin’s settings window (me: Nate Graham, KDE Applications 18.08.0)

See all the names of people who worked hard to make the computing world a better place? That could be you next week! Getting involved isn’t all that tough, and there’s lots of support available. Give it a try today! It’s easy and fun and important.

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.


          Product Manager - Myant - Etobicoke, ON
Discovery, design, development and success. At Myant, we are creating the world’s first textile computing platform, integrating technology directly into the...
From Myant - Fri, 20 Jul 2018 23:56:05 GMT - View all Etobicoke, ON jobs
          Low-complexity 8-point DCT Approximation Based on Angle Similarity for Image and Video Coding. (arXiv:1808.02950v1 [eess.IV])

Authors: R. S. Oliveira, R. J. Cintra, F. M. Bayer, T. L. T. da Silveira, A. Madanayake, A. Leite

The principal component analysis (PCA) is widely used for data decorrelation and dimensionality reduction. However, the use of PCA may be impractical in real-time applications, or in situations where energy and computing constraints are severe. In this context, the discrete cosine transform (DCT) becomes a low-cost alternative to data decorrelation. This paper presents a method to derive computationally efficient approximations to the DCT. The proposed method aims at the minimization of the angle between the rows of the exact DCT matrix and the rows of the approximated transformation matrix. The resulting transformation matrices are orthogonal and have extremely low arithmetic complexity. Considering popular performance measures, one of the proposed transformation matrices outperforms the best competitors in both matrix error and coding capabilities. Practical applications in image and video coding demonstrate the relevance of the proposed transformation. In fact, we show that the proposed approximate DCT can outperform the exact DCT for image encoding under certain compression ratios. The proposed transform and its direct competitors are also physically realized as digital prototype circuits using FPGA technology.
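To make the angle criterion concrete, here is a plain-Python sketch (my own illustration, not the authors' code): it builds the exact orthonormal 8-point DCT-II matrix and measures the angle between each of its rows and the corresponding row of a crude sign-based approximation. The sign-based matrix is just an illustrative stand-in for the low-complexity approximations the paper actually derives.

```python
import math

def dct_matrix(N=8):
    # Exact orthonormal DCT-II matrix: C[k][n] = s_k * cos(pi*(2n+1)*k / (2N)),
    # with s_0 = sqrt(1/N) and s_k = sqrt(2/N) for k > 0.
    C = []
    for k in range(N):
        s = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        C.append([s * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)])
    return C

def row_angle(u, v):
    # Angle in radians between two row vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

C = dct_matrix()
# Illustrative multiplierless approximation: keep only the signs of the
# exact entries, so every entry is -1, 0, or +1.
T = [[math.copysign(1.0, x) if abs(x) > 1e-12 else 0.0 for x in row] for row in C]
angles = [row_angle(C[k], T[k]) for k in range(8)]
```

Every row of this particular approximation stays within roughly 25 degrees (about 0.44 rad) of the exact DCT row; the paper's method searches for matrices that minimize such angles while remaining orthogonal.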


          Feature Dimensionality Reduction for Video Affect Classification: A Comparative Study. (arXiv:1808.02956v1 [cs.LG])

Authors: Chenfeng Guo, Dongrui Wu

Affective computing has become a very important research area in human-machine interaction. However, affects are subjective, subtle, and uncertain. So, it is very difficult to obtain a large number of labeled training samples, compared with the number of possible features we could extract. Thus, dimensionality reduction is critical in affective computing. This paper presents our preliminary study on dimensionality reduction for affect classification. Five popular dimensionality reduction approaches are introduced and compared. Experiments on the DEAP dataset showed that no approach can universally outperform others, and performing classification using the raw features directly may not always be a bad choice.
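For readers who want to see the most common baseline in action, below is a minimal plain-Python sketch of PCA restricted to two input features, where the 2x2 covariance matrix has a closed-form leading eigenvector. The toy data is invented and has nothing to do with the DEAP recordings; it just shows how correlated features collapse onto one component with little variance lost.

```python
import math

def pca_1d(points):
    # Project 2-D feature vectors onto the leading principal component of the
    # sample covariance matrix (closed form for the 2x2 symmetric case).
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    c = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Largest eigenvalue of [[a, b], [b, c]] and its eigenvector (b, lam - a).
    lam = 0.5 * (a + c) + math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    vx, vy = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    return [(p[0] - mx) * vx + (p[1] - my) * vy for p in points]

# Strongly correlated toy "features": the 1-D projection retains nearly
# all of the original variance.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]
scores = pca_1d(data)
```

Because the two toy features are almost perfectly correlated, the single retained component carries over 99% of the total variance, which is exactly the property that makes PCA attractive when labeled affect samples are scarce.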


          Auto-Scaling Network Resources using Machine Learning to Improve QoS and Reduce Cost. (arXiv:1808.02975v1 [cs.NI])

Authors: Sabidur Rahman, Tanjila Ahmed, Minh Huynh, Massimo Tornatore, Biswanath Mukherjee

Virtualization of network functions (as virtual routers, virtual firewalls, etc.) enables network owners to efficiently respond to the increasing dynamicity of network services. Virtual Network Functions (VNFs) are easy to deploy, update, monitor, and manage. The number of VNF instances, similar to generic computing resources in cloud, can be easily scaled based on load. Hence, auto-scaling (of resources without human intervention) has been receiving attention. Prior studies on auto-scaling use measured network traffic load to dynamically react to traffic changes. In this study, we propose a proactive Machine Learning (ML) based approach to perform auto-scaling of VNFs in response to dynamic traffic changes. Our proposed ML classifier learns from past VNF scaling decisions and seasonal/spatial behavior of network traffic load to generate scaling decisions ahead of time. Compared to existing approaches for ML-based auto-scaling, our study explores how the properties (e.g., start-up time) of underlying virtualization technology impacts Quality of Service (QoS) and cost savings. We consider four different virtualization technologies: Xen and KVM, based on hypervisor virtualization, and Docker and LXC, based on container virtualization. Our results show promising accuracy of the ML classifier using real data collected from a private ISP. We report in-depth analysis of the learning process (learning-curve analysis), feature ranking (feature selection, Principal Component Analysis (PCA), etc.), impact of different sets of features, training time, and testing time. Our results show how the proposed methods improve QoS and reduce operational cost for network owners. We also demonstrate a practical use-case example (Software-Defined Wide Area Network (SD-WAN) with VNFs and backbone network) to show that our ML methods save significant cost for network service leasers.
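As a deliberately simplified illustration of proactive scaling from seasonal load, the sketch below predicts the next hour's load as the average of the same hour on previous days, then maps the prediction to a scaling decision. The function names, the 24-hour period, and the utilization thresholds are all illustrative assumptions, not the paper's trained ML classifier.

```python
def predict_next_load(history, period=24):
    # Seasonal predictor: mean of all past samples at the same phase of the
    # cycle as the next time step (a stand-in for a learned model).
    t = len(history)
    same_phase = [history[i] for i in range(t % period, t, period)]
    return sum(same_phase) / len(same_phase)

def scaling_decision(predicted_load, capacity_per_vnf, active_vnfs,
                     scale_up_at=0.8, scale_down_at=0.3):
    # Map predicted utilization to a scale-up / scale-down / no-op decision.
    utilization = predicted_load / (capacity_per_vnf * active_vnfs)
    if utilization > scale_up_at:
        return "scale-up"
    if utilization < scale_down_at and active_vnfs > 1:
        return "scale-down"
    return "no-op"

# Almost two days of hourly load with a peak at hours 20-21; the next time
# step (hour 20 of day 2) is predicted from hour 20 of day 1.
history = [10] * 20 + [90] * 2 + [10] * 2 + [12] * 20
decision = scaling_decision(predict_next_load(history),
                            capacity_per_vnf=50, active_vnfs=1)
```

Deciding one step ahead of the peak is the point of the proactive approach: new VNF instances can finish booting before the load arrives, which is where the start-up time of the underlying virtualization technology (hypervisor vs. container) matters for QoS.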


          Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. (arXiv:1709.10205v3 [cs.NE] UPDATED)

Authors: Georgios Detorakis, Sadique Sheik, Charles Augustine, Somnath Paul, Bruno U. Pedroni, Nikil Dutt, Jeffrey Krichmar, Gert Cauwenberghs, Emre Neftci

Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by a lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs and transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements and neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms including deep learning. We demonstrate the NSAT in a wide range of tasks, including the simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
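To give a flavor of the event-driven neural dynamics such frameworks execute, here is a minimal leaky integrate-and-fire neuron in plain Python. The parameters and the constant input are illustrative toys, not NSAT's Mihalas-Niebur model; the point is only that state decays, integrates input, and emits discrete spike events on threshold crossings.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    # Leaky integrate-and-fire: membrane potential decays by `leak` each step,
    # integrates the input, and emits a spike event when it crosses threshold.
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t          # leaky integration
        if v >= threshold:          # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset             # reset after the spike
    return spikes

# A constant drive produces a regular spike train.
spikes = simulate_lif([0.3] * 20)
```

With this drive the neuron settles into a regular rhythm, spiking every fourth step; event-driven hardware exploits exactly this sparsity by doing work only when spikes occur.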


          Inapproximability of Matrix $p\rightarrow q$ Norms. (arXiv:1802.07425v2 [cs.CC] UPDATED)

Authors: Vijay Bhattiprolu, Mrinalkanti Ghosh, Venkatesan Guruswami, Euiwoong Lee, Madhur Tulsiani

We study the problem of computing the $p\rightarrow q$ norm of a matrix $A \in R^{m \times n}$, defined as \[ \|A\|_{p\rightarrow q} ~:=~ \max_{x \,\in\, R^n \setminus \{0\}} \frac{\|Ax\|_q}{\|x\|_p} \] This problem generalizes the spectral norm of a matrix ($p=q=2$) and the Grothendieck problem ($p=\infty$, $q=1$), and has been widely studied in various regimes. When $p \geq q$, the problem exhibits a dichotomy: constant factor approximation algorithms are known if $2 \in [q,p]$, and the problem is hard to approximate within almost polynomial factors when $2 \notin [q,p]$.

The regime when $p < q$, known as \emph{hypercontractive norms}, is particularly significant for various applications but much less well understood. The case with $p = 2$ and $q > 2$ was studied by [Barak et al, STOC'12] who gave sub-exponential algorithms for a promise version of the problem (which captures small-set expansion) and also proved hardness of approximation results based on the Exponential Time Hypothesis. However, no NP-hardness of approximation is known for these problems for any $p < q$.

We study the hardness of approximating matrix norms in both the above cases and prove the following results:

- We show that for any $1< p < q < \infty$ with $2 \notin [p,q]$, $\|A\|_{p\rightarrow q}$ is hard to approximate within $2^{O(\log^{1-\epsilon}\!n)}$ assuming $NP \not\subseteq BPTIME(2^{\log^{O(1)}\!n})$. This suggests that, similar to the case of $p \geq q$, the hypercontractive setting may be qualitatively different when $2$ does not lie between $p$ and $q$.

- For all $p \geq q$ with $2 \in [q,p]$, we show $\|A\|_{p\rightarrow q}$ is hard to approximate within any factor better than $1/(\gamma_{p^*} \cdot \gamma_q)$, where for any $r$, $\gamma_r$ denotes the $r^{th}$ norm of a gaussian, and $p^*$ is the dual norm of $p$.
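To see why $2 \in [q,p]$ is special, it helps to recall the one genuinely easy case: for $p = q = 2$, $\|A\|_{2\rightarrow 2}$ is the largest singular value, which plain power iteration on $A^T A$ computes. The sketch below covers only this tractable special case; no comparably simple procedure is known for the hypercontractive regimes whose hardness the paper establishes.

```python
import math

def spectral_norm(A, iters=200):
    # Power iteration on A^T A: x converges to the top right-singular vector,
    # and ||A x|| converges to the largest singular value of A.
    n = len(A[0])
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(row[j] * x[j] for j in range(n)) for row in A]           # y = A x
        x = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]  # x = A^T y
        nrm = math.sqrt(sum(v * v for v in x))
        x = [v / nrm for v in x]
    y = [sum(row[j] * x[j] for j in range(n)) for row in A]
    return math.sqrt(sum(v * v for v in y))

# Diagonal example: the spectral norm equals the largest diagonal entry.
print(spectral_norm([[3.0, 0.0], [0.0, 1.0]]))  # ≈ 3.0
```

For a non-symmetric example such as [[0, 2], [0, 0]] the same routine returns its largest singular value, 2.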


          Multi-user Multi-task Offloading and Resource Allocation in Mobile Cloud Systems. (arXiv:1803.06577v2 [cs.IT] UPDATED)

Authors: Meng-Hsi Chen, Ben Liang, Min Dong

We consider a general multi-user Mobile Cloud Computing (MCC) system where each mobile user has multiple independent tasks. These mobile users share the computation and communication resources while offloading tasks to the cloud. We study both the conventional MCC where tasks are offloaded to the cloud through a wireless access point, and MCC with a computing access point (CAP), where the CAP serves both as the network access gateway and a computation service provider to the mobile users. We aim to jointly optimize the offloading decisions of all users as well as the allocation of computation and communication resources, to minimize the overall cost of energy, computation, and delay for all users. The optimization problem is formulated as a non-convex quadratically constrained quadratic program, which is NP-hard in general. For the case without a CAP, an efficient approximate solution named MUMTO is proposed by using separable semidefinite relaxation (SDR), followed by recovery of the binary offloading decision and optimal allocation of the communication resource. To solve the more complicated problem with a CAP, we further propose an efficient three-step algorithm named MUMTO-C comprising generalized MUMTO SDR with CAP, alternating optimization, and sequential tuning, which always computes a locally optimal solution. For performance benchmarking, we further present numerical lower bounds of the minimum system cost with and without the CAP. By comparison with this lower bound, our simulation results show that the proposed solutions for both scenarios give nearly optimal performance under various parameter settings, and the resultant efficient utilization of a CAP can bring substantial cost benefit.
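For intuition only, here is a toy single-user version of the offloading decision (this is not the MUMTO algorithm, and all numbers are invented): each task independently picks the cheaper of local execution vs. offloading, with cost modelled as a weighted sum of energy and delay. The real problem couples users through shared communication and computation resources, which is what makes it a non-convex QCQP rather than this per-task comparison.

```python
# Invented per-task costs: energy and delay for local vs. offloaded execution.
tasks = [
    {"local": {"energy": 4.0, "delay": 2.0}, "offload": {"energy": 1.0, "delay": 3.5}},
    {"local": {"energy": 1.5, "delay": 0.5}, "offload": {"energy": 1.0, "delay": 2.0}},
]
w_energy, w_delay = 1.0, 1.0   # illustrative weights on energy vs. delay

def cost(c):
    """Weighted cost of executing a task one way (energy + delay tradeoff)."""
    return w_energy * c["energy"] + w_delay * c["delay"]

decisions = ["offload" if cost(t["offload"]) < cost(t["local"]) else "local"
             for t in tasks]
print(decisions)   # ['offload', 'local']
```

Task one is cheaper in the cloud (4.5 vs. 6.0), task two locally (2.0 vs. 3.0); with shared resources the offloading choices would no longer decompose this way.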


Subgame Perfect Equilibria of Sequential Matching Games. (arXiv:1804.10353v2 [cs.GT] UPDATED)

Authors: Yasushi Kawase, Yutaro Yamaguchi, Yu Yokoi

We study a decentralized matching market in which firms sequentially make offers to potential workers. For each offer, the worker can choose "accept" or "reject," but the decision is irrevocable. The acceptance of an offer guarantees her job at the firm, but it may also eliminate chances of better offers from other firms in the future. We formulate this market as a perfect-information extensive-form game played by the workers. Each instance of this game has a unique subgame perfect equilibrium (SPE), which does not necessarily lead to a stable matching and has some perplexing properties.

We show a dichotomy result that characterizes the complexity of computing the SPE. The computation is tractable if each firm makes offers to at most two workers or each worker receives offers from at most two firms. In contrast, it is PSPACE-hard even if both firms and workers are related to at most three offers. We also study engineering aspects of this matching market. It is shown that, for any preference profile, we can design an offering schedule of firms so that the worker-optimal stable matching is realized in the SPE.
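On small instances, the SPE of such a sequential offer game can be computed by straightforward backward induction. The sketch below (the offer schedule and preferences are made up, not from the paper) shows one of the perplexing properties mentioned above: the unique SPE can leave a worker unmatched.

```python
# Offers are made in a fixed order; each worker's accept/reject is irrevocable.
# prefs[w] ranks firms from best to worst; being unmatched is worst of all.
offers = [("f1", "w1"), ("f2", "w1"), ("f2", "w2")]   # hypothetical schedule
prefs = {"w1": ["f2", "f1"], "w2": ["f2", "f1"]}

def better(w, a, b):
    """True if worker w strictly prefers outcome a to outcome b (None = unmatched)."""
    rank = lambda f: prefs[w].index(f) if f in prefs[w] else len(prefs[w])
    return rank(a) < rank(b)

def spe(i, matched):
    """Backward induction: matching reached in equilibrium from offer i onward."""
    if i == len(offers):
        return dict(matched)
    f, w = offers[i]
    if w in matched or f in matched.values():
        return spe(i + 1, matched)          # offer is void, skip it
    accept = spe(i + 1, {**matched, w: f})  # outcome if w accepts now
    reject = spe(i + 1, matched)            # outcome if w waits
    return accept if better(w, accept.get(w), reject.get(w)) else reject

print(spe(0, {}))   # {'w1': 'f2'}: w1 rejects f1 to wait for f2, leaving w2 unmatched
```

Here w1's strategic rejection of f1 captures f2's later offer, so w2 never receives one: the SPE is not a stable matching in the usual sense.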


Blockchain firm Soluna to build 900MW wind farm in Morocco - CEO
Blockchain company Soluna plans to build a 900-megawatt wind farm to power a computing center in Dakhla in the Morocco-administered Western Sahara, its chief executive John Belizaire said in an interview.
SLATE Update: Making Math Libraries Exascale-ready

Practically speaking, achieving exascale computing requires enabling HPC software to use accelerators effectively – mostly GPUs at present – and that remains something of a challenge. Consider Summit, the U.S. supercomputer at ORNL which captured the top spot on the Top500 list in June; Summit has 4,356 nodes, each with two IBM 22-core Power9 CPUs and six Nvidia Tesla […]

The post SLATE Update: Making Math Libraries Exascale-ready appeared first on HPCwire.


European Commission Highlights Use of HPC for Better Agriculture

Aug. 9, 2018 — Using high performance computing resources, a team of researchers from Laboratoire de Chimie et Physique Quantiques de Toulouse aimed to understand the interactions between pesticides and a soil component. The results help users track pesticide levels in watercourses. Agriculture is the principal source of livelihood in many regions of the developing world, […]

The post European Commission Highlights Use of HPC for Better Agriculture appeared first on HPCwire.


Quality Assurance Manager - Myant - Etobicoke, ON
About us: At Myant, we are creating the world’s first textile computing platform, integrating technology directly into the only thing we’ve been wearing our...
From Myant - Tue, 15 May 2018 17:12:08 GMT - View all Etobicoke, ON jobs
Product Manager - Myant - Etobicoke, ON
About us: At Myant, we are creating the world’s first textile computing platform, integrating technology directly into the only thing we’ve been wearing our...
From Myant - Fri, 20 Jul 2018 23:56:05 GMT - View all Etobicoke, ON jobs
Samsung Unveils Galaxy Watch: Wireless Charging, LTE Connectivity, Week-long Battery Life

Samsung has unveiled the new Galaxy Watch with a range of improvements: built-in LTE connectivity usable in 15 countries, longer battery life lasting several days per charge, and support for wireless charging, with Samsung selling a Wireless Charge Duo stand that charges the watch alongside a phone.

Beyond heart-rate monitoring, the Galaxy Watch adds stress tracking that reminds wearers to breathe deeply, improved sleep tracking, and 39 workout types. Notably, it supports offline Spotify, so users can download music to the watch and listen without a phone.

The chip inside is an Exynos 9110; the LTE model has 1.5GB of RAM, the Bluetooth model 768MB, and both have 4GB of flash storage. The software is Tizen, not Google's Wear OS.

The watch comes in two sizes: 46 millimetres with a 472mAh battery (80 hours of normal use, up to a week with light use) and 42 millimetres with a 270mAh battery (45 hours of normal use, 5 days with light use).

It goes on sale in the US on 24 August; pricing has not yet been announced.

Source: Samsung



Amazon partners with L.A. community colleges for cloud computing program

Starting as early as this fall, students around Los Angeles will have the opportunity to learn to code in one of the biggest software growth areas: cloud computing, an increasingly popular online-based technology that is used for data analytics and file storage.

That’s due to a partnership announced...


This Week In Rust: This Week in Rust 246

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is warp, a fast, composable web framework. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

165 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

We put in a lot of work to make upgrades painless; for example, we run a tool (called “crater”) before each Rust release that downloads every package on crates.io and attempts to build their code and run their tests.

Rust Blog: What is Rust 2018.

Thanks to azriel91 for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.


How Mature Are Your Cyber Defender Strategies?

Our latest research examines real-world vulnerability assessment practices at 2,100 organizations to understand how defenders are approaching this crucial step in cyber hygiene.

For our latest research study, "Cyber Defender Strategies: What Your Vulnerability Assessment Practices Reveal," we explore how organizations are practicing vulnerability assessment (VA), and what these practices teach us about cyber maturity.

Our curiosity was piqued by our previous study, “Quantifying the Attacker's First-Mover Advantage,” which found it takes attackers a median of five days to gain access to a functioning exploit. In contrast, we learned, defenders take a median of 12 days to assess for a vulnerability. The difference between the two results is a median seven-day window of opportunity for an attacker to strike, during which a defender isn't even aware they're vulnerable. This led us to consider how defenders are performing in the all-important Discover and Assess phases of the Cyber Exposure Lifecycle.

Our Cyber Defender Strategies Report specifically focuses on key performance indicators (KPIs) associated with the Discover and Assess stages of the five-phase Cyber Exposure Lifecycle. During the first phase – Discover – assets are identified and mapped for visibility across any computing environment. The second phase – Assess – involves understanding the state of all assets, including vulnerabilities, misconfigurations, and other health indicators. While these are only two phases of a longer process, together they decisively determine the scope and pace of subsequent phases, such as prioritization and remediation.

We wanted to learn more about how end users are conducting vulnerability assessment in the real world, what this tells us about their overall maturity level, and how this varies based on demographics.

Cyber Defender Strategies: Understanding Vulnerability Assessment KPIs

For our Cyber Defender Strategies Report, we analyzed five key performance indicators (KPIs) based on real-world end user vulnerability assessment behavior. These KPIs correlate to four VA maturity styles: Diligent, Investigative, Surveying and Minimalist.

We discovered about half (48%) of the enterprises included in the data set are practicing very mature (exhibiting a Diligent or Investigative style) vulnerability assessment strategies. However, just over half (52%) exhibit moderate- to low-level VA maturity (exhibiting a Surveying or Minimalist style). We’ll tell you more about what all this means in a moment. First, let’s take a quick look at the methodology we applied to arrive at these results.

To identify our four VA Styles, we trained a machine learning algorithm called archetypal analysis (AA) with anonymized scan telemetry data from more than 2,100 individual organizations in 66 countries. We analyzed just over 300,000 scans during a three-month period from March to May 2018. We identified a number of idealized VA behaviors within this data set and assigned organizations to groups defined by the archetype to which they most closely relate. The vulnerability assessment characteristics for each defender style are described in the table below.
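The final assignment step — mapping each organization to the archetype it most closely relates to — can be illustrated with a toy nearest-archetype sketch. All KPI numbers below are invented, and the study itself used archetypal analysis rather than this simple Euclidean matching:

```python
import numpy as np

# Hypothetical KPI vectors (scan cadence, asset coverage, authentication,
# template customization), each scaled 0-1; rows are idealized archetypes.
archetypes = {
    "Diligent":      np.array([0.9, 0.9, 0.5, 0.9]),
    "Investigative": np.array([0.8, 0.4, 0.8, 0.8]),
    "Surveying":     np.array([0.7, 0.8, 0.2, 0.2]),
    "Minimalist":    np.array([0.2, 0.3, 0.1, 0.1]),
}

def assign_style(kpis):
    """Assign an organization to its nearest archetype (Euclidean distance)."""
    return min(archetypes, key=lambda a: np.linalg.norm(kpis - archetypes[a]))

# An org that scans often and broadly but rarely authenticates or customizes:
org = np.array([0.75, 0.85, 0.25, 0.3])
print(assign_style(org))   # Surveying
```

A frequent, broad-scope but shallow scanner lands on the Surveying profile, matching the characteristics in the table below.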

Four Vulnerability Assessment Styles: What They Reveal

VA Style | VA Maturity Level | Characteristics

Diligent | High | The Diligent conducts comprehensive vulnerability assessment, tailoring scans as required by use case, but only authenticates selectively.

Investigative | Medium to High | The Investigator executes vulnerability assessments with a high level of maturity, but only assesses selective assets.

Surveying | Low to Medium | The Surveyor conducts frequent broad-scope vulnerability assessments, but focuses primarily on remote and network-facing vulnerabilities.

Minimalist | Low | The Minimalist executes bare minimum vulnerability assessments, typically as required by compliance mandates.

Source: Tenable Cyber Defender Strategies Report, August 2018.

Here’s what we learned about each vulnerability assessment style:

  • Only five percent of enterprises follow the Diligent style, displaying a high level of maturity across the majority of KPIs. Diligent followers conduct frequent vulnerability assessments with comprehensive asset coverage, as well as targeted, customized assessments for different asset groups and business units.
  • Forty-three percent follow the Investigative style, indicating a medium to high level of maturity. These organizations display a good scan cadence, leverage targeted scan templates, and authenticate most of their assets.
  • Nineteen percent of enterprises follow the Surveying style, placing them at a low to medium maturity level. Surveyors conduct broad-scope assessments, but with little authentication and little customization of scan templates.
  • Thirty-three percent of enterprises are at a low maturity, following the Minimalist style and conducting only limited assessments of selected assets.

Tenable Cyber Defender Strategies Report: Key Findings

[Infographic: Tenable Cyber Defender Strategies Report Key Findings, August 2018]

Source: Tenable, Cyber Defender Strategies Report, August 2018.

Vulnerability Assessment Matters at Every Maturity Level

By now, you’re probably forming an opinion about how your vulnerability assessment strategies stack up. If your organization seems to be leaning toward the lower-maturity Surveying or Minimalist styles, fear not. There is nothing wrong with being at a low maturity. What is wrong is choosing to remain there.

If you’re a later adopter, it means you have more work to do to catch up. It also means you can learn from the mistakes and experiences of early adopters. Rather than having your organization serve as a testing bed for untried, novel and immature solutions, you’ll benefit from the availability of tried-and-tested offerings. There’s also an existing pool of expertise you can tap into, rather than trying to develop your strategies from scratch. Skipping the experimentation phase, you are poised to jump right into optimization and innovation.

And, if you identify with the most mature vulnerability assessment strategies highlighted here, it doesn’t mean you can take a lengthy sabbatical. Even the most sophisticated defenders know their work is never done.

The ultimate objective – regardless of which style most closely aligns to your own – is to always keep evolving toward a higher level of maturity. We know it isn’t easy. Cybersecurity professionals are hauling a lot of historical baggage. You’re dealing with legacy technology and dependencies alongside the complexities of managing a growing portfolio of continuously evolving and emerging technologies. Meanwhile, the threat environment has escalated noticeably over the past few years. And all of this is happening against a backdrop of competitive business pressures.

When it comes to cybersecurity, we have hit escape velocity, and most organizations now get it.

Our Cyber Defender Strategies Report provides recommendations for each VA style, to help you advance to the next maturity level. We also explore how these four VA styles are distributed across major industry verticals and by organization size, so you can compare yourself with your peers. Click to download the full report.

Learn More:


Principal Physical Designer - Microsoft - Redmond, WA
Microsoft Research has been studying quantum computing for several years and has become the world's center of expertise on topological quantum computing. The...
From Microsoft - Thu, 03 May 2018 22:51:00 GMT - View all Redmond, WA jobs

The Chinese typewriter: a history
Nominally a book that covers the rough century between the invention of the telegraph in the 1840s and that of computing in the 1950s, The Chinese Typewriter is secretly a history of translation and empire, written language and modernity, misguided struggle and brutal intellectual defeat. The Chinese typewriter is 'one of the most important and illustrative domains of Chinese techno-linguistic innovation in the 19th and 20th centuries ... one of the most significant and misunderstood inventions in the history of modern information technology', and 'a historical lens of remarkable clarity through which to examine the social construction of technology, the technological construction of the social, and the fraught relationship between Chinese writing and global modernity'. It was where empires met.

Sustainable Computing Ad
Advertisement, IEEE.
IT Technician - Camberley
Job Details

£25,314 per annum for 36 hours per week, 52 weeks per year. The role will be based with the IT and Funding Team at Camberley Adult Learning Centre, but will require regular business travel around the county. The post holder will be a contractual car user and so will be required to maintain a driving licence and provide a vehicle with insurance for business use. Where possible, we offer flexible working, mobile phones and laptops. 24 days annual leave for full time staff, or pro rata for part time staff. Local government salary-related pension offered, discounted child care vouchers, as well as the option to join the car lease scheme. For more information, please visit MyBenefits for Surrey County Council staff.

Surrey County Council is committed to safeguarding and promoting the welfare of children, young people and vulnerable adults and expects all staff and volunteers to share this commitment. We want to be an inclusive and diverse employer of first choice reflecting the community we serve and particularly welcome applications from all under-represented groups.

About the Role

Surrey Adult Learning (SAL) has seven dedicated Adult Learning Centres in north and south west Surrey. With funding from the Education and Skills Funding Agency (ESFA), SAL delivers around 2,800 courses each year in its learning centres and at over 100 other community-based venues around the county. The use of Information Learning Technology (ILT) is mandated by Ofsted and is a vital tool to enhance learning not just in computing classes, but across the curriculum from fine art to English and maths. We have IT teaching suites at four of our sites, interactive whiteboards and multimedia projectors in many of our classrooms, and mobile devices such as iPads and laptop computers. As a member of the IT and Funding Team, the IT Technician will deploy and maintain SAL's IT equipment and operate its service desk.

This is an exciting and challenging role for an individual who, working as part of a small team, can plan and implement works to a timetable, but also respond quickly and work under pressure to resolve business-critical faults as they arise. The IT Technician will work with all SAL's staff and stakeholders and so must project a positive, professional approach and be able to build and maintain excellent customer relations.

The successful candidate will be able to demonstrate knowledge and prior experience of the following:

  • Microsoft client and server operating systems, domain administration with AD DS, O365 administration
  • Apple iMac and iPad devices with macOS and iOS operating systems
  • Switched networks and associated protocols such as TCP/IP, network printers and scanners, wireless network deployment and associated security principles, enterprise security including anti-virus, firewalls and web filtering
  • Client PC build and imaging processes via a centralised deployment tool
  • Typical school/college classroom ILT equipment, e.g. interactive whiteboards and projectors
  • Virtual Learning Environment (VLE) administration, e.g. Moodle
  • Service delivery frameworks such as ITIL, and hands-on service desk operation including problem analysis, troubleshooting and fault resolution

On occasion, the IT Technician will work flexibly in terms of working hours and location. They will be required to travel around the county to deploy mobile equipment, to attend scheduled meetings, to provide occasional staff training and to resolve faults that cannot be addressed remotely (e.g. hardware failures and network connectivity). For more information, please find attached to the bottom of this advert a full job description and person specification. We look forward to receiving your application.

Additional Information

The closing date for applications is 26th August 2018. Interviews will take place in Camberley in week commencing 3rd September 2018.

Values and Behaviours

Our values are what support our vision, shape the culture and are crucial in delivering our corporate strategy. For more information about our values and behaviours please follow this link.
Sr Build & Release Engineer - Secure Computing - General Electric - Evendale, OH
Works independently and contributes to the immediate team and to other teams across business. GE is the world's Digital Industrial Company, transforming...
From GE Careers - Sat, 26 May 2018 10:20:53 GMT - View all Evendale, OH jobs
Salesforce Senior Engagement Manager, Director - Silverline Jobs - Casper, WY
Silverline is a Salesforce Platinum Cloud Alliance Partner focused on delivering robust business solutions leveraging cloud-computing platforms....
From Silverline Jobs - Wed, 30 May 2018 06:22:53 GMT - View all Casper, WY jobs
Microsoft threatens to pull Gab services over anti-Semitic posts

The action shows how the tech industry’s efforts to tackle hate speech online are extending beyond big social-media services to cloud-computing companies that provide web-hosting services to smaller sites.
Byte Into IT - 8 August 2018

Simon and Warren jump in the studio this Wednesday to bring us all the recent news in technology, computing, the internet and startups.

Fi Slaven, program director of Go Girl, Go for IT, comes in to speak about their program and all of the vocational avenues into IT that are available to Victorian school girls.

We then get a call from UNSW's professor of artificial intelligence Toby Walsh, who sparks a conversation about the future of AI and tells us about his new book: 2062: The World That AI Made.


Samsung beefs up Bixby with more conversational capabilities - CNET
The digital assistant remains behind Alexa and Google Assistant in the voice-computing race.
Prometheus monitoring tool joins Kubernetes as CNCF's latest 'graduated' project
The Cloud Native Computing Foundation (CNCF) may not be a household name, but it houses some important open source projects including Kubernetes, the fast-growing container orchestration tool. Today, CNCF announced that the Prometheus monitoring and alerting tool had joined Kubernetes as the second “graduated” project in the organization’s history. The announcement was made at PromCon, the […]
Salesforce Solution Architect - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast-paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Mon, 07 May 2018 06:20:29 GMT - View all Casper, WY jobs
Comment on USA Temperature: can I sucker you? by jiminy
It's an interesting result. Apparently, using the same logic and computing the mean location of the US from all of the stations in use in each year, the US has drifted about 3 degrees East. It's cooler over there.
Plant Assistant - Pete Lien & Sons, Inc - Frannie, WY
Complex Computing and Cognitive Thinking. An hourly employee who, at the direction of the Plant Operator, provides plant operational support duties as requested...
From Pete Lien & Sons, Inc - Wed, 08 Aug 2018 23:21:23 GMT - View all Frannie, WY jobs
Digital Writing Instruments Market Size, Competitive Landscape, Key Country Analysis, Opportunity Assessment, and Forecasts to 2027

The digital pens and pencils segment is expected to reach a market size of over US$ 1,400 Mn by the end of 2027, growing at a CAGR of 9.7% during the forecast period.

Albany, NY -- (SBWIRE) -- 08/09/2018 -- The digital writing instruments market is growing with the expansion of this industry sector worldwide. Market Research Report Search Engine (MRRSE) has added a new report titled "Digital Writing Instruments Market: Global Industry Analysis (2012 – 2016) and Opportunity Assessment (2017 – 2027)", which offers details about current trends as well as the scope for the near future. The research study also covers information about production and market share across different active regions. Furthermore, the report highlights anticipated growth at a double-digit CAGR for the digital writing instruments market, which indicates a prosperous future.

Get Exclusive Sample Copy of Research Report on Digital Writing Instruments Market for More Technical & Industry Insights @ https://www.mrrse.com/sample/4910

As technological advancements are taking place, there is a visible increase in the demand for digital writing instruments. People use digital writing instruments to operate several gadgets, be it a PC, laptop, or smartphone. There are mainly two types of instruments available: digital pens and digital styluses. A digital pen is barely bigger than a stylus, supplies more performance and features than the latter, and includes a digital eraser, digital camera, internal memory, and programmable buttons. Most digital pens, often simply called pens, are pressure-sensitive. A digital pen can be integrated with a smart writing system to recognize individual pages, different paper tablets, and specific times and dates. In some cases, a digital pen can also be used as a multifunctional scanning pen by text readers.

A digital stylus is generally smaller and much thinner than a digital pen, because it contains no internal electronics. A stylus is generally used to tap, write, and draw on touchscreen devices, with features including precision accuracy, pressure sensing, and palm rejection. Physically, a digital stylus is a small metal or plastic device that looks like a tiny ink pen but relies on pressure against input screens such as tablets, smartphones, notebooks, and PDA (personal digital assistant) devices, where it is primarily used to input and manipulate information. Of the two, digital pens and pencils are ahead of digital styluses in terms of both market size and growth rate. The digital pens and pencils segment is expected to reach a market size of over US$ 1,400 Mn by the end of 2027, growing at a CAGR of 9.7% during the forecast period.
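A back-of-envelope check of that headline figure (our arithmetic, not the report's): compounding at 9.7% a year over the ten-year 2017–2027 forecast period, an end value of about US$ 1,400 Mn implies a 2017 base of roughly US$ 555 Mn.

```python
# CAGR arithmetic: end_value = base * (1 + cagr) ** years, solved for base.
end_value = 1400.0          # US$ Mn, forecast 2027 value
cagr = 0.097                # 9.7% compound annual growth rate
years = 10                  # 2017 -> 2027

base = end_value / (1 + cagr) ** years
print(round(base, 1))       # roughly US$ 555 Mn implied 2017 market size
```

The same two-line calculation works for any CAGR claim, which makes it easy to sanity-check forecasts like this one.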

Digital pens and pencils to be in high demand mostly for smartphones

The smartphone market is growing at a significant rate. Advanced smartphones are available with new and innovative features, which is expected to increase the demand for digital pens and pencils. Advanced electronic devices, including smartphones and tablets, are gradually gaining traction among the young population, who use digital writing instruments for sketching, learning, scanning, etc., on smartphones and tablets.

Outlook Complete Research Reports on Digital Writing Instruments Market with Industry Key Players and Complete List of Tables & Figures @ https://www.mrrse.com/digital-writing-instruments-market

This trend is expected to have a positive effect on the growth of the digital writing instruments market. Imports of electronic goods are also rising, due to growing demand. Companies are focusing on research and development of computing and electronic devices, with the objective of improving product life cycles and gaining a competitive edge in the market. But as the production cost of these products is not affordable for many companies, manufacturers and distributors are focusing on manufacturing and importing smartphones, tablets, laptops/notebooks, computer peripherals, and other electronic accessories from other regions, or assembling components that are manufactured worldwide. Along with the import of these products, the demand for digital writing instruments is expected to rise in the coming years.

The demand for digital writing instruments can be hindered by high import taxes

As sellers import new and advanced digital products like writing instruments to meet rising demand, they also face restrictions like high import taxes. Due to high import taxes on consumer electronic devices such as digital pens, styli, e-Book readers, smartphones, tablets, notebooks, etc., the prices of these products rise, and price-conscious consumers find it difficult to purchase them. High prices have always been a major concern for consumers, and this factor is expected to adversely affect the growth of the digital writing instruments market over the forecast period.

Inquire more or share questions if any on this report @ https://www.mrrse.com/enquiry/4910

About Market Research Reports Search Engine
Market Research Reports Search Engine (MRRSE) is an industry-leading database of market intelligence reports. MRRSE is driven by a stellar team of research experts and advisors trained to offer objective advice. Our sophisticated search algorithm returns results based on the report title, geographical region, publisher, or other keywords.

MRRSE partners exclusively with leading global publishers to provide clients single-point access to top-of-the-line market research. MRRSE's repository is updated every day to keep its clients ahead of the next new trend in market research, be it competitive intelligence, product or service trends or strategic consulting.

Contact
State Tower
90, State Street
Suite 700
Albany, NY - 12207
United States
Telephone: +1-518-730-0559
Email: sales@mrrse.com
Website: https://www.mrrse.com/

For more information on this press release visit: http://www.sbwire.com/press-releases/digital-writing-instruments-market-size-competitive-landscape-key-country-analysis-opportunity-assessment-and-forecasts-to-2027-1024787.htm

Media Relations Contact

Nivedita
Manager
MRRSE
Telephone: 1-518-621-2074
Email: Click to Email Nivedita
Web: https://www.mrrse.com/



          Solutions Architect - Amazon Web Services - Amazon.com - San Francisco, CA      Cache   Translate Page   Web Page Cache   
DevOps, Big Data, Machine Learning, Serverless computing etc. High level of comfort communicating effectively across internal and external organizations....
From Amazon.com - Thu, 26 Jul 2018 08:17:05 GMT - View all San Francisco, CA jobs
          API Integration with Power Bi - 09/08/2018 23:16 EDT      Cache   Translate Page   Web Page Cache   
What we need is basically an integration of data flow into Power Bi for an automated process of data updates for visualization purposes. We 3 different sources of data coming in; - Xero (open API),... (Budget: $750 - $1500 AUD, Jobs: .NET, Cloud Computing, Microsoft SQL Server)
          Fujitsu Powers the Data-Enriched World with Four New Mobile and Desktop CELSIUS Workstations      Cache   Translate Page   Web Page Cache   
Fujitsu introduces four new models to its CELSIUS workstation portfolio, combining powerful computing capabilities with extra strong security features. Fujitsu CELSIUS workstations are the ideal platform for accelerating innovation and data-rich design for immersive visualization experiences such as virtual reality (VR).
          Senior Information Security Consultant - Network Computing Architects, Inc. - Bellevue, WA      Cache   Translate Page   Web Page Cache   
With the ability to interface and communicate at the executive level, i.e. CIO's, CTO's and Chief Architects....
From Network Computing Architects, Inc. - Mon, 11 Jun 2018 23:15:53 GMT - View all Bellevue, WA jobs
          Principal Physical Designer - Microsoft - Redmond, WA      Cache   Translate Page   Web Page Cache   
Microsoft Research has been studying quantum computing for several years and has become the world's center of expertise on topological quantum computing. The...
From Microsoft - Thu, 03 May 2018 22:51:00 GMT - View all Redmond, WA jobs
          Assistant/Associate Web Development Professor - Computing and New Media Technologies (CNMT) - UW Stevens Point - Stevens Point, WI      Cache   Translate Page   Web Page Cache   
Tim Krause, Computing New Media Technology Chair. Position Title Web Development Professor....
From University of Wisconsin System - Thu, 31 May 2018 19:32:31 GMT - View all Stevens Point, WI jobs
          Comment on Cryptocurrency insecurity: IOTA, BCash and too many more by Dave South      Cache   Translate Page   Web Page Cache   
With the advancement of technology, security is nowadays a very important concern and issue for every business house. Cyber security has become a necessity that protects data, computers and networks from theft, damage and unauthorized access. The major trends in cyber security in 2018 are blockchain security, ransomware, cyber warfare, quantum computing and hacking therapy. For more mail rootgatehacks at tutanota dot com
          (USA-TX-Austin) Order Review Specialist      Cache   Translate Page   Web Page Cache   
**Order Review Specialist in Austin, TX at Volt** # Date Posted: _8/9/2018_ # Job Snapshot + **Employee Type:** Contingent + **Location:** 5505 West Parmer Lane Austin, TX + **Job Type:** Call Centers + **Duration:** 36 weeks + **Date Posted:** 8/9/2018 + **Job ID:** 131183 + **Pay Rate** $16.0 - $16.0/Hour + **Contact Name** Volt Branch + **Phone** 512-362-5245 # Job Description **Volt is hiring Order Review Specialists to work with us at our client, a leader in mobile devices and personal computers, in North Austin!** **Pay is $16/hour! Training starts 8/20, so APPLY NOW!** # Job Summary Volt is hiring a team of Order Review Specialists to work onsite with our client, a leader in consumer mobile communication and computing devices. The Order Review Specialists will promptly resolve payment processing issues using excellent analytical skills, as well as contribute to the customer experience by preventing and minimizing unnecessary customer contacts. # Key Qualifications + 1-2 years customer service experience + Bachelor’s degree preferred or equivalent work experience + Excellent problem solving and analytical skills + Able to exercise objective judgment and make decisions with minimal supervision; comfortable working with ambiguity + Strong written and verbal communications skills + Meticulous and able to work within multiple windows/screens in a fast-paced work environment + Has excellent work ethic and a positive attitude even in stressful situations + Team player + Happy to accept feedback from managers/coaches/mentors + Must be able to work a schedule from 7:00 am to 11:00pm (CT), including at least one weekend day and possibly holidays, with additional flexibility needed during high volume times of the year # Responsibilities **The Order Review Specialist:** + Reviews and Analyzes complex orders that have been blocked + Detects any suspicious activity; determines appropriate next steps + If necessary, cancels orders due to suspicious activity and serves as a 
first point of contact between customers, banks, and shipping carriers + Identifies and analyses trends in reoccurring incidents + Researches and resolves credit card charge back disputes + Identifies innovative ideas to improve the customer experience **If this looks like the job for you, apply to this job posting immediately or contact us at 512-362-5245 for more information. We look forward to speaking with you!** # Volt is an Equal Opportunity Employer. In order to promote this harmony in the workplace and to obey the laws related to employment, Volt maintains a strong commitment to equal employment opportunity without unlawful regard to race, color, national origin, citizenship status, ancestry, religion (including religious dress and grooming practices), creed, sex (including pregnancy, childbirth, breastfeeding and related medical conditions), sexual orientation, gender identity, gender expression, marital or parental status, age, mental or physical disability, medical condition, genetic information, military or veteran status or any other category protected by applicable law.
          (USA-TX-Plano) Advertising & Analytics - Principal Data Scientist (AdCo)      Cache   Translate Page   Web Page Cache   
The Data Scientist will be responsible for designing and implementing processes and layouts for complex, large-scale data sets used for modeling, data mining, and research purposes. The purpose of this role is to conceptualize, prototype, design, develop and implement large scale big data science solutions in the cloud and on premises, in close collaboration with product development teams, data engineers and cloud enterprise teams. Competencies in implementing common and new machine learning, text mining and other data science driven solutions on cloud based technologies such as AWS are required. The data scientist will be knowledgeable and skilled in the emerging data science trends and must be able to provide technical guidance to the other data scientists in implementing emerging and advanced techniques. The data scientist must also be able to work closely with the product and business teams to conceptualize appropriate data science models and methods that meet the requirements. Key Roles and Responsibilities + Uses known and emerging techniques and methods in data science (including statistical, machine learning, deep learning, text and language analytics and visualization) in big data and cloud based technologies to conceptualize, prototype, design, code, test, validate and tune data science centric solutions to address business and product requirements + Conceptualizes data science enablers required for supporting future product features based on business and product roadmaps, and guides cross functional teams in prototyping and validating these enablers + Mentors and guides other data scientists + Uses a wide range of existing and new data science and machine learning tools and methods as required to solve the problem on hand. Skilled in frameworks and libraries including but not limited to R, Python, Spark, Scala, Pig, Hive, MLlib, MXNet, TensorFlow, Keras, Theano, etc.
+ Is aware of industry trends and collaborates with the platform and engineering teams to update the data science development stack for competitive advantage + Collaborates with third party data science capability vendors and provides appropriate recommendations to the product development teams + Works in a highly agile environment **Experience** Typically requires 10 or more years of experience or a PhD in an approved field with a minimum of 6 years of relevant experience. **Education** Preferred: Master of Science in Computer Science, Math or Scientific Computing; Data Analytics, Machine Learning or Business Analyst nanodegree; or equivalent experience.
          (USA-AL) Territory Account Manager-Law Enforcement/Public Safety      Cache   Translate Page   Web Page Cache   
Every moment of every day, people all over the world turn to Panasonic to make their lives simpler, more enjoyable, more productive and more secure. Since our founding almost a century ago, we’ve been committed to improving peoples’ lives and making the world a better place–one customer, one business, one innovative leap at a time. Come join our journey. **What You’ll Get to Do:** The TAM is responsible for designing, developing and executing a sales strategy for increasing sales and profits through several small to medium accounts within a territory. + Communicates the PSCNA value proposition "Why Panasonic" through engaging presentations and conversational meetings vs. leading with hardware while developing/facilitating customer relationships and creates opportunities with territory accounts + Establishes, communicates and holds direct and matrix team accountability for contract terms, program budgets, schedules, operational activities and performance metrics to drive profitability and customer satisfaction + Provides timely, accurate & complete sales reports and forecasts along with updating Sales Opportunities in PSCNA SalesLogix system for forecasting. + Engages and leverages effective relationships with internal resources and industry partners to create opportunities and further drive solution based sales. + Manages programs from initiation through delivery, interfacing with the customer and company resources on technical and business issues. **What You’ll Bring:** **Education & Experience:** + Bachelor’s degree in business management, marketing or a related discipline. + Fully qualified sales account manager. 2-5 years sales experience in similar industry. + Has working knowledge of electronic products and services. + Understands company cultures and networks for resolving a variety of issues that are aligned with + Panasonic’s Basic Business Principles. + Ability to articulate product portfolios and their connectivity for a solution. 
**Competencies** : + Resource Manager - Understands and uses informal structures. Makes work related contacts and collaborates with internal and external resources to accomplish company goals. Asks direct questions of immediately available people and consults available resources before taking action. + Strategic Thinking - Takes a systematic approach to solve problems by understanding the end to end value. Accurately determines the length and difficulty of tasks and projects. Sets clear, realistic, and time-bound objectives and goals that align with long term business growth. Breaks down work into the process steps. Sets priorities and time parameters to accomplish tasks and projects. + Executes with Passion - Makes personal sacrifices or expends extraordinary effort to complete a job and represents the company in a professional manner. Self-directed and takes initiative without direction. + Leadership - Ability to work cooperatively with others, interpersonal relations and communication skills, ability to work effectively with a wide range of individuals, and possess leadership skills. Makes decisions independently and works without constant supervision. + Communications: Key communications contacts (internal/external) and level of persuasion required + Uses direct persuasion in a discussion or presentation. Listens to others' opinions and ideas prior to making decisions. Observes, questions, analyzes and cooperates with others to foster an environment of collaboration. Informs others of changes, ideas and outcomes. Speaks clearly and concisely. Internal and external presentations are informative, persuasive and tailored to the audience. + Provides information on our services and the standards which our customers can expect. Promotes company solutions and services that meet our customer needs. Establish yourself as a credible source and respond in a timely and effective manner.
**Other Requirements** : + 75% travel **What We Offer:** Family like environment with an entrepreneurial spirit Collaborative culture that thrives on innovation and new ideas Rewards and recognition for great achievements Growth opportunities for career development Flexible work arrangements to help balance life and work Competitive benefits and compensation package Through its broad range of integrated business technology solutions, Panasonic empowers professionals to do their best work. Customers in government, healthcare, production, education and a wide variety of commercial enterprises, large and small, depend on integrated solutions from Panasonic to reach their full potential, achieve competitive advantage and improve outcomes. The complete suite of Panasonic solutions addresses unified business communications, mobile computing, security and surveillance systems, retail information systems, office productivity solutions, high definition visual conferencing, projectors, professional displays and HD and 3D video 4K Cinema production. As a result of its commitment to R&D, manufacturing and quality control, Panasonic engineers reliable and long-lasting solutions as a partner for continuous improvement. Panasonic solutions for business are delivered by Panasonic System Solutions Company of North America (PSSNA), Division of Panasonic Corporation of North America (PNA), the principal North American subsidiary of Panasonic Corporation. Panasonic is proud to be an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity, sex, sexual orientation, national origin, disability status, protected veteran status, and any other characteristic protected by law or company policy. All qualified individuals are required to perform the essential functions of the job with or without reasonable accommodation. 
Pre-employment drug testing is required for safety sensitive positions or as may otherwise be required by contract or law. Due to the high volume of responses, we will only be able to respond to candidates of interest. All candidates must have valid authorization to work in the U.S. Thank you for your interest in Panasonic Corporation of North America.
          ADC Market in Transition: A Sign of the Times      Cache   Translate Page   Web Page Cache   
The modern data center is evolving with the uptake of cloud computing, containers and virtual machines. So is the application delivery controller market. Partners need to know what new challenges their customers face and understand the product landscape in order to provide the solutions that they need.
          Pick a serverless fight: A comparative research of AWS Lambda, Azure Functions ...      Cache   Translate Page   Web Page Cache   

The saturation point is nowhere to be seen in the serverless discussion, with tons of news coming online every day and numerous reports trying to take the pulse of one of the hottest topics out there.

This time, however, we are not going to discuss any of the above. This article is going to be a bit more…academic!

During the last USENIX Annual Technical Conference ’18, which took place in Boston, USA in mid-July, an amazingly interesting piece of academic research was presented.

The paper “Peeking Behind the Curtains of Serverless Platforms” is a comparative research and analysis of the three big serverless providers AWS Lambda, Azure Functions and Google Cloud Functions. The authors (Liang Wang, Mengyuan Li, Yinqian Zhang, Thomas Ristenpart, Michael Swift) conducted the most in-depth (so far) study of resource management and performance isolation in these three providers.

SEE ALSO: The state of serverless computing: Current trends and future prospects

The study systematically examines a series of issues related to resource management, including how quickly function instances can be launched, function instance placement strategies, and function instance reuse. What’s more, the authors examine the allocation of CPU, I/O and network bandwidth among functions and the ensuing performance implications, as well as a couple of exploitable resource accounting bugs.

Did I get your attention now?

In this article, we have an overview of the most interesting results presented in the original paper.

Let’s get started!

Methodology

First things first. Let’s have a quick introduction to the methodology of this study.

The authors conducted this research by integrating all the necessary functionalities and subroutines into a single function that they call a measurement function.

According to the definition found in the paper, this function performs two tasks:

1. Collect invocation timing and function instance runtime information
2. Run specified subroutines (e.g., measuring local disk I/O throughput, network throughput) based on received messages
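The paper does not reproduce this helper verbatim, so the following is only a rough Python sketch of the idea (all names here are hypothetical): module-level state survives warm invocations of the same instance, which lets the handler report whether it hit a cold or reused instance and time an on-demand subroutine.

```python
import time

# Module-level state persists across warm invocations of the same
# function instance, so it can be used to detect instance reuse.
_instance_started_at = None

def handler(event, context=None):
    """Minimal sketch of a 'measurement function' (hypothetical, not
    the authors' code): reports cold vs. warm start and times a
    subroutine chosen by the incoming message."""
    global _instance_started_at
    cold = _instance_started_at is None
    if cold:
        _instance_started_at = time.time()

    start = time.time()
    # Run a specified subroutine based on the received message;
    # here just a placeholder CPU loop.
    if event.get("subroutine") == "cpu":
        sum(i * i for i in range(10_000))
    elapsed = time.time() - start

    return {
        "cold_start": cold,
        "instance_uptime": time.time() - _instance_started_at,
        "subroutine_ms": elapsed * 1000,
    }
```

In the study, results like these are shipped back in the response so the client can correlate instances, VMs and timings across thousands of invocations.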

In order to have a clear overview of the specifications for each provider, the following table provides a comparison of function configuration and billing in the three services.


[Table: function configuration and billing comparison across AWS Lambda, Azure Functions and Google Cloud Functions (image)]

The authors examined how instances and VMs are scheduled on the three serverless platforms in terms of instance coldstart latency, lifetime, scalability, and idle recycling, and the results are extremely interesting.

Scalability and instance placement

One of the most intriguing findings, in my opinion, concerns the scalability and instance placement of each provider. There is a significant discrepancy among the three big services, with AWS being the best regarding support for concurrent execution:

AWS: “3,328 MB was the maximum aggregate memory that can be allocated across all function instances on any VM in AWS Lambda. AWS Lambda appears to treat instance placement as a bin-packing problem, and tries to place a new function instance on an existing active VM to maximize VM memory utilization rates.”

Azure: Despite the fact that the Azure documentation states that it will automatically scale up to at most 200 instances for a single Nodejs-based function, and that at most one new function instance can be launched every 10 seconds, the authors' tests of Nodejs-based functions showed “at most 10 function instances running concurrently for a single function”, no matter how the interval between invocations was changed.

Google: Contrary to Google's claims that HTTP-triggered functions will scale to the desired invocation rate quickly, the service failed to provide the desired scalability for the study. “In general, only about half of the expected number of instances, even for a low concurrency level (e.g., 10), could be launched at the same time, while the remainder of the requests were queued.”

Interesting fact: More than 89% of VMs tested achieved 100% memory utilization.
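Measurements like these come down to firing many requests concurrently and counting how many distinct instances serve them. Below is a purely local illustration of that counting logic (not the authors' harness; a thread id stands in for the per-instance identifier a real measurement function would report):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Pretend each worker thread is a separate function instance: the
# thread id stands in for the instance identifier a deployed
# measurement function would report back.
def invoke_function(_request_id):
    return threading.get_ident()

def measure_concurrency(n_requests):
    """Invoke the function n_requests times in parallel and count
    how many distinct 'instances' handled them."""
    with ThreadPoolExecutor(max_workers=n_requests) as pool:
        instance_ids = list(pool.map(invoke_function, range(n_requests)))
    return len(set(instance_ids))

if __name__ == "__main__":
    print(measure_concurrency(10))
```

Against a real provider, invoke_function would be an HTTPS call to the deployed measurement function, and the returned instance identifiers would be compared the same way.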

Coldstart and VM provisioning

Concerning coldstart (the process of launching a new function instance) and VM provisioning, AWS Lambda appears to be at the top of its game:

AWS: Two types of coldstart events were examined: “a function instance is launched (1) on a new VM that we have never seen before and (2) on an existing VM. Intuitively, case (1) should have significantly longer coldstart latency than (2) because case (1) may involve starting a new VM.” However, the study shows that “case (1) was only slightly longer than (2) in general. The median coldstart latency in case (1) was only 39 ms longer than (2) (across all settings). Plus, the smallest VM kernel uptime (from /proc/uptime) that was found was 132 seconds, indicating that the VM has been launched before the invocation.” Therefore, these results show that AWS has a pool of ready VMs! What’s more, concerning the extra delays in case (1), the authors argue that they are “more likely introduced by scheduling rather than launching a VM.”

Azure: According to the findings, it took much longer to launch a function instance in Azure, despite the fact that its instances are always assigned 1.5 GB of memory. The median coldstart latency in Azure was 3,640 ms.

Google: “The median coldstart latency in Google ranged from 110 ms to 493 ms. Google also allocates CPU proportionally to memory, but in Google memory size has a greater impact on coldstart latency than in AWS.”

SEE ALSO: What do developer trends in the cloud look like?

Additional to the tests described above, the research team “collected the coldstart latencies of 128 MB, python 2.7 (AWS) or Nodejs 6.* (Google and Azure) based functions every 10 seconds for over 168 hours (7 days), and calculated the median of the coldstart latencies collected in a given hour.” According to the results, “the coldstart latencies in AWS were relatively stable, as were those in Google (except for a few spikes). Azure had the highest network variation over time, ranging from about 1.5 seconds up to 16 seconds.” Take a look at the figure below:



Source: “Peeking Behind the Curtains of Serverless Platforms”, Figure 8, p. 139
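Producing an hourly-median series like the one in Figure 8 is mostly a bucketing exercise; here is a small standard-library sketch with synthetic latencies (not the paper's data):

```python
from collections import defaultdict
from statistics import median

# Synthetic (timestamp_seconds, coldstart_ms) samples, one every 10 s;
# the real study collected these for 168 hours per provider.
samples = [(t, 250 + (t % 7) * 10) for t in range(0, 3 * 3600, 10)]

# Bucket samples by the hour they fall in, then take each bucket's median.
by_hour = defaultdict(list)
for ts, latency_ms in samples:
    by_hour[ts // 3600].append(latency_ms)

hourly_medians = {hour: median(vals) for hour, vals in sorted(by_hour.items())}
print(hourly_medians)
```

Plotting hourly_medians over the full 168-hour window is what reveals the stability differences between the three providers.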

Instance lifetime

The research team defines instance lifetime as “the longest time a function instance stays active.”

Keeping in mind that users prefer longer lifetimes, the results show Azure winning this one, since Azure Functions provide significantly longer lifetimes than AWS and Google, as you can see in the figure below:



Source: “Peeking Behind the Curtains of Serverless Platforms”, Figure 9, p.140

Idle instance recycling

Instance maximum idle time is defined by the authors as “the longest time an instance can stay idle before getting shut down.” Specifically for each service provider, the results show:

AWS:An instance could usually stay inac
          Offer - Wonderful Offer HP Z840 Workstation Rental and sale Mumbai - INDIA      Cache   Translate Page   Web Page Cache   
Built for high-end computing and visualization, the HP Z840 delivers outstanding performance in one of the industry’s most expandable chassis Product Highlights Processor: Intel® Xeon® E5-2683 v4 Memory : 40GB DDR4-2400 Graphics: NVIDIA® NVS™ 310 Hard Disk: 300 GB up to 1.2 TB SAS (10000 rpm) Contact Rental India Name – Chackravarthy (8754542653) Name – Anushree (8971423090) Visit Us: https://shop.rental-india.com/product/hp-z840-workstation-available-on-rental-sale/ Mail Us: enquiry@rental-india.com Mandaveli, Chennai-28.
          VLOOKUP and SUMIF: Replicate in Python      Cache   Translate Page   Web Page Cache   

Often times, a new user to python will wish to replicate analysis previously done in Excel. Two major instances of this are the VLOOKUP and SUMIF commands.

VLOOKUP: Combining data through a common index SUMIF: Summing up values by category

Let’s take a look at how we can replicate these commands in Python. You will see that the pandas library offers quite a degree of flexibility when it comes to summarizing and wrangling data in Python in this way.



Firstly, we will import our libraries as standard, set our file path, and import the relevant datasets:

import pandas as pd
import numpy as np
import os

os.getcwd()
path = '/home/michaeljgrogan/Documents/a_documents/computing/data science/datasets'
os.chdir(path)

sales = pd.read_csv('sales.csv')
sales

customers = pd.read_csv('customers.csv')
customers

Note that the datasets “customers.csv” and “sales.csv” are available on the “Datasets” page.

VLOOKUP with merge

In the sales dataset, you will notice that we have Date, ID and Sales figures. In the customers dataset, we also have a corresponding ID variable, but we do not have a date column.

Suppose that we wish to combine the “Date” variable in the sales dataset to the rest of the data in the customers dataset. Ordinarily, we would use VLOOKUP or INDEX-MATCH in Excel to do this. However, let us see how this can be done in Python:

#VLOOKUP sales.merge(customers, on='ID', how='right')

Once we have done this, you will see that we have the two datasets merged, with the Date variable on the left and the rest of the data on the right:

>>> sales.merge(customers, on='ID', how='right')
          Date  ID   Sales  Age                       Country
0   2014-02-12  49  113769   23           Trinidad and Tobago
1   2014-02-14  57  122965   46                     Singapore
2   2014-03-18   2  164556   28                       Lao PDR
..         ...  ..     ...  ...                           ...
98  2016-12-06  32  126092   33  Saint Vincent and Grenadines
99  2016-12-27  45  117126   47     Iran, Islamic Republic of

SUMIF with groupby

Often times, an Excel user will use the SUMIF function to sum up different values by category. This can also be replicated in Python.

Let’s firstly create a new dataframe in pandas:

df1 = pd.DataFrame({'names': ['John', 'Elizabeth', 'Michael', 'John', 'Elizabeth', 'Michael'],
'webvisits': [24, 32, 40, 71, 65, 63],
'minutesspent': [20, 41, 5, 6, 48, 97]},
index=[0, 1, 2, 3, 4, 5])

Essentially, what we wish to do here is group by name, and get the total “minutesspent” for each person. Note that the numeric columns are defined as integers; if they were stored as strings, sum() would concatenate the digits as text instead of adding the values.

#SUMIF
df1.groupby("names")["minutesspent"].sum()

Now, we have our results:

names
Elizabeth     89
John          26
Michael      102
Name: minutesspent, dtype: int64

Further Reading

Dataquest: Pandas Cheat Sheet
Python for Data Science
Data Cleaning and Wrangling in R
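As a closing aside, the same groupby pattern also covers Excel's SUMIFS (summing on multiple criteria) by grouping on several columns, and pivot_table lays the result out as a two-way table. A quick sketch with a small hypothetical frame (not one of the datasets above):

```python
import pandas as pd

# Hypothetical data: minutes spent per person per channel.
df2 = pd.DataFrame({
    'names': ['John', 'John', 'Elizabeth', 'Elizabeth'],
    'channel': ['web', 'mobile', 'web', 'web'],
    'minutesspent': [20, 6, 41, 48],
})

# SUMIFS equivalent: sum minutes by name AND channel.
sumifs = df2.groupby(['names', 'channel'])['minutesspent'].sum()
print(sumifs)

# The same aggregation laid out as a table, via pivot_table.
print(df2.pivot_table(index='names', columns='channel',
                      values='minutesspent', aggfunc='sum', fill_value=0))
```

pivot_table with aggfunc='sum' is effectively a whole grid of SUMIFS formulas computed in one call.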

Thanks for viewing this short tutorial, and please leave any questions in the comments below!


          Blockchain firm Soluna to build 900MW wind farm in Morocco: CEO      Cache   Translate Page   Web Page Cache   
Blockchain company Soluna plans to build a 900-megawatt wind farm to power a computing center in Dakhla in the Morocco-administered Western Sahara, its chief executive John Belizaire said in an interview.

          Salesforce Senior Engagement Manager, Director - Silverline Jobs - Casper, WY      Cache   Translate Page   Web Page Cache   
Silverline is a Salesforce Platinum Cloud Alliance Partner focused on delivering robust business solutions leveraging cloud-computing platforms....
From Silverline Jobs - Wed, 30 May 2018 06:22:53 GMT - View all Casper, WY jobs
          Senior Bios Engineer - ZT Systems - Austin, TX      Cache   Translate Page   Web Page Cache   
Join us at this critical growth inflection point as we engineer the hardware infrastructure powering a world of cloud computing, cloud storage, artificial...
From ZT Systems - Thu, 31 May 2018 00:20:51 GMT - View all Austin, TX jobs
          Comment on Oxford: “Conservatives are right to be skeptical of scientific establishments” by sycomputing      Cache   Translate Page   Web Page Cache   
<em>Social science isn’t really science, as we know the science that has evolved over the centuries. Not quite sure what it should be called...</em> "Catastrophic Anthropogenic Global Warming" might be considered a valid label in some circles...
          Technical Support Analyst      Cache   Translate Page   Web Page Cache   
GA-Alpharetta, Premier Atlanta-based organization is seeking an experienced Desktop Technical Support consultant for a long-term contract position. In this role, you will provide on-site end-user computing support including: investigating, troubleshooting, and resolving hardware, software, network, and instructional technology incidents. Responsibilities: Provide assistance to schools in troubleshooting IT techn
          Under half of firms use vulnerability assessments      Cache   Translate Page   Web Page Cache   

A study of 2,100 organisations reveals a global divide in how organisations assess cyber risk, with fewer than half using strategic vulnerability assessments.

Only 48% of organisations polled use mature or moderately mature programs that include targeted and tailored scanning of computer resources based on business criticality as a foundational element of their cyber defence and risk reduction strategies, according to Tenable’s Cyber defender strategies report.

The report uses data science and real-world telemetry data to analyse how organisations are assessing their exposure to vulnerabilities to improve their cyber security posture.

Of those organisations using strategic vulnerability assessments, the study found that only 5% display the highest degree of maturity, with comprehensive asset coverage a cornerstone of their programs.

The “diligent” approach represents the highest level of maturity, achieving near-continuous visibility into where an asset is secure or exposed and to what extent through high assessment frequency.

On the other end of the spectrum, 33% of organisations take a “minimalist” approach to vulnerability assessments, doing the “bare minimum” as required by compliance mandates, thereby increasing the risk of a business-impacting cyber event, the report said. This group represents a lot of enterprises which are exposed to risk and still have some work to do, with critical decisions to make on which key performance indicators to improve first.

A previous study by Tenable revealed that cyber attackers generally have a median seven-day window of opportunity to exploit a known vulnerability, before defenders have even determined they are vulnerable.

The real-world gap is directly related to how enterprises are conducting vulnerability assessments, with smaller gaps and lower risk associated with more strategic and mature approaches, the latest report said.

“In the not too distant future, organisations will either fall into the category of those that rise to the challenge of reducing cyber risk or the category of those who fail to adapt to a constantly evolving and accelerating threat landscape in modern computing environments,” said Tom Parsons, senior director of product management, Tenable.

“This research is a call to action for our industry to get serious about giving the advantage back to cyber defenders, starting with the rigorous and disciplined assessment of vulnerabilities as the basis for mature vulnerability management and ultimately, cyber exposure.”

The research analysed telemetry data for over three months from organisations in more than 60 different countries to identify distinct security maturity styles and strategic insights which can help organisations.

Other strategies of vulnerability assessment

In addition to the “diligent” and “minimalist” approaches, the study identified two other strategies of vulnerability assessment.

The “surveyor” approach is characterised by frequent broad-scope vulnerability assessments, but with little authentication and customisation of scan templates. Of those organisations reviewed, 19% follow this approach, placing them at a low to medium maturity.

The “investigator” approach is characterised by the execution of vulnerability assessments with a high maturity, but assesses only selective assets. Of the organisations reviewed, 45% follow this approach, indicating a solid strategy based on a good scan cadence, targeted scan templates, broad asset authentication and prioritisation.

“Considering the challenges involved in managing vulnerabilities, securing buy-in from management, cooperating with disparate business units such as IT operations, maintaining staff and skills, and the complexities of scale, this is a great achievement and provides a solid foundation upon which to mature further,” the report said.

Across all levels of maturity, the report said organisations benefit from avoiding a “scattershot approach” to vulnerability assessment and instead making strategic decisions and employing more mature tactics such as frequent, authenticated scans to improve the efficacy of vulnerability assessment programs.


          Salesforce Senior Engagement Manager, Director - Silverline Jobs - Casper, WY
Silverline is a Salesforce Platinum Cloud Alliance Partner focused on delivering robust business solutions leveraging cloud-computing platforms....
From Silverline Jobs - Wed, 30 May 2018 06:22:53 GMT - View all Casper, WY jobs
          Amazon partners with L.A. community colleges for cloud computing program

Starting as early as this fall, students around Los Angeles will have the opportunity to learn to code in one of the biggest software growth areas: cloud computing, an increasingly popular online-based technology that is used for data analytics and file storage.

That’s due to a partnership announced...


          New staff guide to IT resources at UW
Find computing tools and resources you need for your work with the new Staff Quick-Start Guide. Learn how to connect to the secure, encrypted wireless on campus, and access voicemail, email and files when you’re on the road. Find out how to get software, and decide which productivity platform – UW G Suite or UW...
          Inside Sales Representative - Penguin Computing, Inc. - Fremont, CA
Make outbound lead follow-up calls to potential and existing customers by telephone and email to qualify leads and sell products and services....
From Penguin Computing, Inc. - Fri, 04 May 2018 10:00:45 GMT - View all Fremont, CA jobs
          hadoop (3.1.1)
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.

          microsoft-r-open (3.5.1)
World’s most powerful programming language for statistical computing, machine learning and graphics as well as a thriving global community of users, developers and contributors.

          3 Chip Stocks to Buy for the Next 3 Years

Chip stocks have been among the market’s biggest winners over the past several years. Huge demand tailwinds from the data center, cloud computing, connected device and AI markets have converged on a limited-supply base, creating a “high demand, low supply” dynamic in the chip market wherein chip prices are soaring. Such improvements have driven huge gains in chip stocks. Chip supply won’t remain subdued forever — it is already ramping to meet demand, and there are some reports out there that chip prices may have already peaked.



          Principal Physical Designer - Microsoft - Redmond, WA
Microsoft Research has been studying quantum computing for several years and has become the world's center of expertise on topological quantum computing. The...
From Microsoft - Thu, 03 May 2018 22:51:00 GMT - View all Redmond, WA jobs
          Growing digital demands – how SMEs can gain the competitive edge with cloud
Companies can thrive in the face of organisational transformation by deploying a cloud computing solution.
          Senior Bios Engineer - ZT Systems - Austin, TX
Join us at this critical growth inflection point as we engineer the hardware infrastructure powering a world of cloud computing, cloud storage, artificial...
From ZT Systems - Thu, 31 May 2018 00:20:51 GMT - View all Austin, TX jobs
          Precomputation-based radix-4 CORDIC for approximate rotations and Hough transform
Vector rotation is an important component of algorithms in digital signal processing and robotics. Often, the rotation does not require very high accuracy. This study presents a low-overhead sign-precomputation-based architecture for approximate rotation using the coordinate rotation digital computer (CORDIC) algorithm. The proposed architecture is independent of the Z-datapath, and involves precomputation of the direction of rotation for each micro-rotation angle. The approach involves selecting the optimal micro-rotation angles from a set of elementary angles at run time. Careful selection and elimination of the redundant micro-rotation angles leads to a maximum of three iterations for a majority of the input angles while simultaneously reaching within 0.45° of the desired rotation angle. A field-programmable gate array (FPGA) implementation of the proposed rotation-mode CORDIC on XC7K70T-3FBG676 Kintex-7 using Xilinx ISE 13.2 achieves roughly a 50% reduction in slice-delay product and power-delay product compared to recent designs. An application of approximate rotation to Hough transform-based lane detection is presented. An efficient algorithm for generation of vote addresses in the parameter space is proposed. It is shown that accurate lane detection is possible along with resource savings using the proposed CORDIC. The proposed architecture reduces the number of additions roughly by a factor of 20 compared with the conventional method of computing a parameter for each feature point.
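The paper's contribution is a radix-4, sign-precomputed variant that needs at most three iterations; the baseline rotation-mode CORDIC it improves on can be sketched in plain Python. This is a generic radix-2 textbook version for illustration, not the authors' architecture:

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate vector (x, y) by `angle` radians with rotation-mode CORDIC:
    each iteration applies a shift-and-add micro-rotation by atan(2^-i),
    choosing its direction from the sign of the residual angle z."""
    # Elementary micro-rotation angles and the constant scale factor K.
    alphas = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i, alpha in enumerate(alphas):
        d = 1.0 if z >= 0 else -1.0   # direction of this micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * alpha                # residual angle still to rotate
    return x * K, y * K

# Rotate (1, 0) by 30 degrees; the result approaches (cos 30°, sin 30°).
x, y = cordic_rotate(1.0, 0.0, math.radians(30))
```

In hardware the multiplications by 2^-i are wire shifts, so each iteration costs only additions; the paper's design additionally precomputes the direction bits d so the z-datapath above can be removed entirely.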
          Design and analysis of a logic model for ultra-low power near threshold adiabatic computing
The behaviour of the adiabatic logic in the near threshold regime has been analysed in depth in this study. Near threshold adiabatic logic (NTAL) style can perform efficiently using a single sinusoidal power supply which reduces the clock tree management and enhances the energy saving capability. Power dissipation, voltage swing, effect of load, temperature, frequency etc. of NTAL circuits have been detailed here. Extensive CADENCE simulations have been done in 22 nm technology node to verify the efficacy of the proposed model. A power clock has been generated based on a switched capacitor regulator to drive the complex NTAL circuits. Analytical and simulated data match with high accuracy which validates the proposed adiabatic logic style in the near threshold regime. A significant amount of energy can be saved by the adiabatic logic with or without considering the power dissipation of the clock generator.
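The energy saving in adiabatic logic comes from ramping the power clock slowly relative to the RC time constant of the charging path. A rough first-order sketch, using the standard textbook estimates and hypothetical component values rather than figures from the paper:

```python
# First-order model of adiabatic vs. conventional charging losses
# (textbook estimates, not figures from the paper): charging a node
# capacitance C to voltage V through resistance R with ramp time T.

def conventional_loss(C, V):
    # Abrupt (step) charging dissipates half the delivered energy: C*V^2/2.
    return 0.5 * C * V ** 2

def adiabatic_loss(C, V, R, T):
    # A slow ramp dissipates approximately (R*C/T) * C * V^2 for T >> R*C.
    return (R * C / T) * C * V ** 2

# Hypothetical near-threshold numbers: 10 fF node, 0.5 V swing,
# 1 kOhm charging path, 10 ns power-clock ramp.
C, V, R, T = 10e-15, 0.5, 1e3, 10e-9
ratio = adiabatic_loss(C, V, R, T) / conventional_loss(C, V)
```

With these assumed numbers the adiabatic loss is a fraction of a percent of the conventional CV²/2 loss, which is the effect the single sinusoidal power supply exploits; the model ignores the threshold and leakage effects that a real near-threshold design, and the paper's simulations, must account for.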
          ADC Market in Transition: A Sign of the Times
The modern data center is evolving with the uptake of cloud computing, containers and virtual machines. So is the application delivery controller market. Partners need to know what new challenges their customers face and understand the product landscape in order to provide the solutions that they need.
          EPISODE85 - Financial Services and Cloud
Podcast with Michael Williams, Cloud Engagement Executive, HP Helion, on the impact of cloud computing on financial services.
          EPISODE88 - Driving Value in Telcos
In this podcast, Irene Cortes, Cloud Program Manager, HP Helion talks with Shisher Wahie, Cloud Engagement Executive, HP Helion about how cloud computing is impacting telecommunication providers.
          UK Partner OCSL Talks Cloud in Europe
Iain Mobberley, Chief Technologist, OCSL talks with Stephen Spector, Social Strategist, HPE Helion about the state of cloud computing in Europe and how OCSL is leveraging the HPE Helion family of cloud solutions for their customers.
          State of Hybrid Cloud with Wikibon Analysts
Bobby Patrick, CMO HPE Helion talks with Wikibon analysts (Dave Vellante and Brian Gracely) about hybrid cloud computing and what trends will emerge in 2016 for cloud computing.
          Latest Cloud Trends with Christian Verstraete
Joining me for the 100th Podcast of the HPE Helion Podcast is the first guest, HPE Chief Technology Officer Christian Verstraete. We discuss the current trends in cloud computing and how much has changed since our first podcast in June 2013.
          Managing Multiple Clouds in a Hybrid Environment
Deborah Martin, HPE Helion Marketing talks about use cases for enterprise customers looking at cloud computing solutions.
          Podcast: Will SMB Market Adopt Cloud Computing
Todd Lyle, Founder and CEO of Duncan, LLC speaks with Stephen Spector, Cloud Evangelist, HPE about the “old” concept of Utility Computing and how Cloud Computing is still not achieving that desired state. Extra emphasis in this podcast is placed on the adoption of Cloud Computing in the SMB Market.
          Pushing AI to the Edge: Be Part of the Revolution

This article is featured in the new DZone Guide to Artificial Intelligence: Automating Decision-Making. Get your free copy for more insightful articles, industry statistics, and more!

Artificial intelligence (AI) surrounds us. It unlocks our phones, creates our shopping list, navigates our commute, and cleans spam from our email. It’s making customers’ lives easier and more convenient. Once you’ve experienced AI in action, it’s difficult to go back. With edge computing becoming a thing, AI on-the-edge is following suit.


          Offer - BEST B.TECH COLLEGE IN UTTARAKHAND, RIT ROORKEE - INDIA
About the Department

The Department of Computer Science and Engineering at RIT Roorkee is well known for imparting state-of-the-art undergraduate computer education and preparing students for real-world challenges. The undergraduate curriculum provides a strong foundation in all areas of Computer Science and Engineering. The mission of the Department is to impart high-quality technical education and cultivate leading-edge professionalism in the discipline of Computer Science and Engineering. The Department is fully equipped to provide state-of-the-art computing facilities to students. It also promotes active industry-institute collaboration by identifying areas of interest and exposing students to active projects through industrial visits and short-term job training programmes.
          Senior Software Development Engineer - Distributed Computing Services (Hex) - Amazon.com - Seattle, WA
Knowledge and experience with machine learning technologies. We enable Amazon’s internal developers to improve time-to-market by allowing them to simply launch...
From Amazon.com - Thu, 26 Jul 2018 19:20:25 GMT - View all Seattle, WA jobs
          Offer - BEST MCA COLLEGE IN UTTARAKHAND, RIT ROORKEE - INDIA
About the Department

The Department of Master of Computer Application at RIT Roorkee is well known for imparting state-of-the-art computer education and preparing students for real-world challenges. The curriculum provides a strong foundation in all areas of Computer Science and Engineering. The mission of the Department is to impart high-quality technical education and cultivate leading-edge professionalism in the discipline of Computer Science and Engineering. The Department is fully equipped to provide state-of-the-art computing facilities to students. It also promotes active industry-institute collaboration by identifying areas of interest and exposing students to active projects through industrial visits and short-term job training programmes.
          NETLAB+ Content Additions for Palo Alto Networks Cybersecurity Academy
NDG is pleased to announce the following additions to our supported curriculum content options for NETLAB+ through our partnership with Palo Alto Networks Cybersecurity Academy. Cybersecurity Gateway labs enable students to build an understanding of the fundamental principles of networking. Through hands-on practice, students will explore the general concepts involved in maintaining a secure network computing […]
          Resources, Bugs, Breaches, and Learning Tools - Application Security Weekly #27

Hardware-based Root of Trust, Small Trusted Computing Base, React v16.4.2, GitHub shows best practices for account security and recoverability, the cost of JavaScript, and Food for Thought!

Full Show Notes: https://wiki.securityweekly.com/ASW_Episode27

Follow us on Twitter: https://www.twitter.com/securityweekly


          (USA-NY-Rochester) Analyst/Programmer, RDIA Group CTSI      Cache   Translate Page   Web Page Cache   
Opening Full Time 40 hours Grade 053 Clin & Trans Science Institute Schedule 8 AM-5 PM

Responsibilities

**Organization:** Research Data Integration and Analytics (RDIA) group, Informatics Division of the Clinical and Translational Science Institute, School of Medicine and Dentistry, University of Rochester. The RDIA group provides state-of-the-art integrative biomedical research data management and analytic services for research programs within the University of Rochester Medical Center.

**Position Summary:** This position involves the development, evaluation, and testing of web-based database applications, data integrations, and programming to manage workflows for clinical and experimental data, and documenting the procedures used. This includes integration with clinical and specimen metadata for several large research centers, as well as performing data quality assurance. Tasks also include formatting data for submission to public genomics data repositories. The candidate will work under general supervision of the RDIA Technical Lead, with some latitude for independent judgment, working with a team of other developers and data managers.

**Job duties:**

Collects and analyzes user requirements and system capabilities:
* Meets with principal investigators, lab personnel, and statisticians involved in research studies to understand volume, frequency, and format of data to be collected, data workflows to be supported, and analytic tools to implement.

Adapts existing data management applications to meet project requirements:
* Where possible, uses features of the LabKey Server system to implement, design, test, and track data collection, workflows, and analysis. This includes use of the wikis, file content, lab assay, study, issues, and query modules of LabKey Server.

Custom, project-specific programming (Java, JavaScript, SQL):
* Primarily focused on custom programming for assay and experimental data management and integration.
* Builds, evaluates, tests, and maintains custom web-based data collection forms using JavaScript, HTML, LabKey APIs, and in-house-developed libraries.
* Builds custom reports using LabKey Server SQL.
* Builds, evaluates, tests, and maintains custom LabKey Server modules using Java, LabKey Server APIs, and in-house-developed libraries.

Data management tasks; data QC/QA and scripting:
* Builds custom external programs using Java, Python, LabKey Client APIs, and SQL to automate data cleaning, data transformation, data QC and reporting.

Attends project meetings, meets with supervisors, makes recommendations and attends educational seminars.

**Requirements:**
* Bachelor’s degree in Software Engineering, computer science or related field (Master’s preferred) and 2-3 years of related experience; or an equivalent combination of education and experience.
* Experience programming in Java and developing J2EE web applications required.
* Experience programming in web technologies (HTML, CSS and JavaScript) required.
* Experience with a command line Linux environment; basics include directory and file management, file permissions, rsync, grep, awk, sed.
* Experience with scripting data transformation in R and/or Python a plus.
* Experience using relational databases. Working knowledge of SQL required. DBA experience (especially PostgreSQL) a plus.
* Experience with a High Performance Computing environment (SGE, PBS, SLURM) a plus.
* Experience working with next generation sequencing data and related data formats a plus.
* Experience with LabKey Server software a plus.
* Excellent attention to detail and the ability to work and communicate well with a multi-disciplinary team are required.
* Must be able to work on-site.

The University of Rochester is committed to fostering, cultivating and preserving a culture of diversity and inclusion. The University believes that a diverse workforce and inclusive workplace culture enhances the performance of our organization and our ability to fulfill our important missions. The University is committed to fostering and supporting a workplace culture inclusive of people regardless of their race, ethnicity, national origin, gender, sexual orientation, socio-economic status, marital status, age, physical abilities, political affiliation, religious beliefs or any other non-merit fact, so that all employees feel included, equally valued and supported.

*EOE Minorities/Females/Protected Veterans/Disabled*
*Job Title:* Analyst/Programmer, RDIA Group CTSI
*Location:* School of Medicine & Dentistry
*Job ID:* 210190
*Regular/Temporary:* Regular
*Full/Part Time:* Full-Time
          (USA-CO-Schriever AFB) Systems Administrator 2/3      Cache   Translate Page   Web Page Cache   
At Northrop Grumman we develop cutting-edge technology that preserves freedom and advances human discovery. Our pioneering and inventive spirit has enabled us to be at the forefront of many technological advancements in our nation's history - from the first flight across the Atlantic Ocean, to stealth bombers, to landing on the moon. We continue to innovate with developments from launching the first commercial flight to space, to discovering the early beginnings of the universe. Our employees are not only part of history, they're making history.

Northrop Grumman Mission Systems is seeking a System Administrator for a hands-on position in support of an operational customer. This position will be located on-site in a government customer facility at Schriever AFB, in Colorado Springs, CO. The position requires the ability to work effectively alongside software developers, testers, and operations specialists throughout full lifecycle development. This individual will be expected to install and configure entire operating environments from square one to fully functional, possibly including installing and configuring hardware, OS's, applications, databases, and data files, applying Security Technical Implementation Guides (STIGs), and connecting and configuring networks.

Responsibilities include:
+ Provide as-needed engineering design solutions to address deficiencies, discrepancies or new requirements related to hardware and communication components
+ Develop, propose, configure, maintain and troubleshoot network systems
+ Lead and participate in projects involving the application of network and computer communication technology for both software applications and provide network technical design and support to allow distributed learning lesson/course completion synchronization
+ Support architectural trade studies and designing of engineering solutions
+ Timely installation of IT systems, upgrades, and software; prepares and maintains project plans associated with assigned technical efforts
+ Provide real-time troubleshooting support to baseline issues
+ Troubleshooting and administration/maintenance of mission computing systems
+ Highly experienced installing and deploying production servers
+ Perform system preventative maintenance services
+ Provide on-call (24 hour) Ops support
+ Provide hardware installation of procured COTS HW and SW
+ Excellent problem solving, customer service and communication skills

***Position contingent on final adjudicated TS/SCI with CI Polygraph***

_This requisition may be filled at a higher grade based on qualifications listed below_

MSCOMSTR

**Basic Qualifications:**
+ For a level 2, a Bachelor of Science degree and 3+ years Windows system administration experience, or 1 year with a Masters, or 7 years in lieu of a degree
+ For a level 3, a Bachelor of Science degree and 6 years Windows system administration experience, or 3 years with a Masters, or 10 years in lieu of a degree
+ Hands-on knowledge of Windows Servers, Windows Failover Clusters, Active Directory, TCP/IP, DNS, RAID, FTP and Load Balancers
+ Hands-on knowledge of virtualization technologies in a production environment
+ Strong knowledge of switches
+ Strong knowledge of firewalls
+ Strong knowledge of STIGs and Security Patch Management
+ Active Security+ Certification
+ Active DoD Top Secret security clearance with SCI eligibility
+ Ability to obtain and maintain a Counter Intelligence Polygraph
+ Ability to obtain and maintain SCI clearance

**Preferred Qualifications:**
+ Active TS/SCI clearance
+ Prior experience installing, configuring, troubleshooting and maintaining:
+ Knowledge of Compute Cloud Services, Cloud and Managed Hosting such as Amazon Web Services (AWS)
+ Knowledge of Splunk or SolarWinds
+ Knowledge of Pure Storage
+ Knowledge of RedHat
+ Knowledge of Switches, Routers, Firewalls
+ Knowledge of Hardware Wall Cross Domain Solution
+ Knowledge of Scripting language
+ Knowledge of Oracle

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO . U.S. Citizenship is required for most positions.
          (USA-International) Software Engineer 4 (Harrogate, UK)      Cache   Translate Page   Web Page Cache   
At Northrop Grumman, our employees have incredible opportunities to work on revolutionary systems in air and space that impact people's lives around the world today, and for generations to come. Our work preserves freedom and democracy, and advances human discovery and our understanding of the universe. We look for people who have bold new ideas, courage and a pioneering spirit to join forces to invent the future, and have a lot of fun along the way. Our culture thrives on intellectual curiosity, cognitive diversity and bringing your whole self to work - and we have an insatiable drive to do what others think is impossible. Our employees are not only part of history, they're making history.

The Rushmore program, located in North Yorkshire, England, requires an experienced individual with combined software and system administration experience who can work in a structured, configuration-controlled environment. The Software Engineer will provide support and maintenance for Linux-based servers and workstations and Storage Area Networks that host a variety of real-time software applications.

The specific duties associated with this position are:
* Maintain and administer operational computing environments to include RHEL enterprise, ESXi/Virtual Center, Windows AD, and NetApp storage systems
* Operate management systems to monitor the performance of computer systems and networks
* Support the installation, integration, test, and documentation of new computer and storage systems into the operational baseline
* Implement security policies and requirements to protect data, software, and hardware
* Work with Factory and Site personnel to develop enhancements to deployed systems
* Develop tools, utilities, and procedures to support continued operations
* Diagnose computer and software anomalies, document discrepancies, and provide immediate fixes or workaround strategies to restore operations
* Provide on-call 24/7 emergency anomaly response to collect diagnostic data and support restoration of operations

A comprehensive overseas compensation package is offered with this position. An initial 2-year tour commitment is required.

**Basic Qualifications:**
* Bachelor's degree in a STEM (Science/Technology/Engineering/Math) field plus a minimum of 9 years of applicable experience in a related field, or a Master's degree with a minimum of 7 years
* Ability to obtain/retain a TS SCI clearance with current SSBI and Counter Intelligence polygraph

**Preferred Qualifications:**
* Strong technical and communications skills
* Experience integrating and managing Linux-based computers and Storage Area Networks
* Current CISSP or Security+ certification
* Experience integrating and managing VMS-based systems is a plus

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO . U.S. Citizenship is required for most positions.
          (USA-MD-Linthicum) IT Auditor 3      Cache   Translate Page   Web Page Cache   
Northrop Grumman Corporation's Internal Audit organization is seeking an Information Technology (IT) Auditor to join our team of qualified, diverse audit staff. This individual will perform specialized reviews focused on technology areas requiring advanced professional knowledge, experience and techniques, and is responsible for performing internal audit projects in support of the annual Internal Audit plan. Specifically, work performed includes coverage of enterprise functional and operating units, with a focus on technology-related processes (may also include auditing activities under Sarbanes-Oxley). Position can be located in Linthicum, MD or Falls Church, VA.

In addition, the IT Auditor 3 is responsible for performing audits in accordance with professional standards, including:
+ Performing internal audit procedures, including preparation of work papers, internal audit reports, management response evaluation, and recommendation (issue) follow-up. Audit areas often include but are not limited to network security, application implementations and upgrades, database reviews, logical access reviews, physical access reviews, firewall and other network device reviews, as well as virtual or cloud computing.
+ Assist in formulating audit objectives and scope of work in the form of an audit program.
+ Assist in design, development and implementation of manual and automated audit methods and techniques to improve auditor productivity.
+ Assist in audit oversight, planning and training, including providing technical support and advice to other less experienced auditors.
+ Detect and characterize significant company issues; identify root cause and provide recommendations for corrective and preventative actions.
+ Provide timely feedback on project performance to team members.
+ Organize work to deliver high quality results on time.
+ Conduct planning research using internal and external sources in framing an audit and identify leading practices.
+ Assist Internal Audit management with periodic internal status reporting, development of the quarterly rolling internal audit plan, and promotion of internal control and corporate governance concepts throughout the enterprise.

**Basic Qualifications:**
+ Bachelor's degree in Manufacturing, Engineering, Technology or related discipline and 6 years of related experience, or master's degree with 4 years of related experience.
+ Understanding of internal control concepts and experience in applying them to plan, perform, manage, and report on the evaluation of various technology processes, areas, and functions.
+ Ability to plan, perform, and report on audits utilizing a risk-based approach.
+ Ability to effectively communicate in both oral and written forms.
+ Working knowledge of analytical tools to analyze, interpret and organize a wide variety of data, information, and ideas to identify trends and form valid conclusions.
+ Exhibit the leadership skills needed to persuade management at all levels to accept constructive change.
+ Ability to obtain U.S. Government SECRET Clearance.
+ Ability to travel approximately 25% of the time.

**Preferred Qualifications:**
+ Master's degree in relevant technical, business or related field.
+ CISA, CISSP or equivalent professional designation.
+ Advanced understanding of ERP systems (i.e. PeopleSoft, SAP), distributed system controls, general computing controls, and policy and procedure reviews.
+ Understanding of internal auditing standards, COBIT, NIST or other framework requirements, and risk assessment practices.
+ Familiarity with the Aerospace and Defense industry.

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO . U.S. Citizenship is required for most positions.
          (USA-TN-Memphis) Manager - Systems Operations (Storage and Virtualization)      Cache   Translate Page   Web Page Cache   
Reporting to the Director-Operations and Research Computing, this position is responsible for the management of technical teams within Information Services (IS) related to enterprise storage, data protection, and virtualization infrastructure. This position oversees system performance and availability, 24 x 7 operational support, systems management and upgrades, data recovery, physical system security, and customer service. The position is responsible for planning, architecture, contracting, policy and procedures, monitoring, auditing, reporting, and budgetary support and oversight of these technical infrastructure services. The position acts as a senior technical advisor for storage and virtualization technology, and provides leadership to IS staff for priority setting, resource utilization, continuous performance improvement, overall technical guidance, and strategic planning. This professional will have experience with enterprise production operations environments and leverage industry best practices in technology and processes to ensure St. Jude Children's Research Hospital systems are architected and supported to peak efficiency. + Bachelor's Degree is required + Master's Degree preferred + Eight (8) years of experience in job specific skills that includes six (6) years in systems, network, or operations management and two (2) years of supervisory or leadership experience required + Experience in team development and project management is preferred + Experience in a multi-vendor information technology environment is preferred + Experience in a health care, biomedical research, or academic environment is preferred + Experience with current enterprise storage solutions, including SAN, NAS, and large-scale parallel filesystems is preferred. + Experience with current enterprise data protection and replication technologies is preferred. + Experience with current enterprise virtualization services using VMware vSphere is preferred. St. 
Jude is an Equal Opportunity Employer. No Search Firms: St. Jude Children's Research Hospital does not accept unsolicited assistance from search firms for employment opportunities. Please do not call or email. All resumes submitted by search firms to any employee or other representative at St. Jude via email, the internet or in any form and/or method without a valid written search agreement in place and approved by HR will result in no fee being paid in the event the candidate is hired by St. Jude. Posted Job Title: Manager - Systems Operations (Storage and Virtualization) WITS Req ID: 39020 Street: 262 Danny Thomas Place
          Efficient Analysis of Multiple Microstrip Transmission Lines With Anisotropic Substrates
This letter presents an extension of the discrete mode-matching method to the analysis of multilayered microstrip transmission lines with anisotropic dielectric layers. The mathematical formulation handles multilayered structures with metallizations at the interfaces. The application is demonstrated by computing the propagation constant and characteristic impedance of a multilayered microstrip line and a two-layer coplanar waveguide with uniaxial anisotropic dielectrics. The results have been validated against ANSYS HFSS simulations and, in some cases, against the open literature, with very good agreement observed.
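The letter obtains the propagation constant and characteristic impedance numerically via mode matching; for intuition, the same two quantities for an ideal lossless line follow in closed form from its per-unit-length inductance L and capacitance C. The sketch below is illustrative only — the L and C values are assumptions chosen to give a 50-ohm line, not values taken from the letter.

```python
import math

def lossless_line_params(L, C, f):
    """Characteristic impedance, phase velocity, and propagation
    constant (beta) of an ideal lossless transmission line.

    L: series inductance per unit length (H/m)
    C: shunt capacitance per unit length (F/m)
    f: frequency (Hz)
    """
    Z0 = math.sqrt(L / C)          # characteristic impedance (ohms)
    v = 1.0 / math.sqrt(L * C)     # phase velocity (m/s)
    beta = 2.0 * math.pi * f / v   # propagation constant (rad/m)
    return Z0, v, beta

# Assumed per-unit-length values typical of a 50-ohm microstrip-like line:
Z0, v, beta = lossless_line_params(L=2.5e-7, C=1.0e-10, f=1e9)
```

With these assumed values the line comes out at Z0 = 50 ohms, v = 2e8 m/s, and beta of about 31.4 rad/m at 1 GHz. An anisotropic substrate changes the effective L and C (and hence Z0 and beta), which is precisely what the discrete mode-matching analysis captures rigorously.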
          (USA) Senior Software Engineer - C++/Java
Senior Software Engineer - C++/Java Job Summary Apply Now + Job:19042-MKAI + Location:US-MA-Natick + Department:Product Development Would you like to enable technical professionals and researchers to spend more time on research and development and less time writing code? In particular, help develop and advance MATLAB’s ability to interface with external languages and object systems such as Java, C++, and Python. Leverage your experience in C++, Java and system level programming to enable seamless integration between MATLAB and commonly used libraries. Responsibilities + Contributing to all activities of software development including requirements analysis, design, implementation, integration, and testing. + Partner with technical marketing and cross functional teams to gather user requirements and assess opportunities. + Develop new product features and improve existing features as part of a strong development team. + Conduct design reviews with peers and advisors. + Work closely with Quality Engineering to develop testing strategies for new features + Use build and debug tools in Windows, Linux and OS X. Minimum Qualifications + A bachelor's degree and 7 years of professional work experience (or a master's degree and 5 years of professional work experience, or a PhD degree) is required. + Experience with C++ + Experience with Java Native Interface + Experience with Java Technologies Additional Qualifications + Core Java + A firm grasp of the Software Development Life Cycle: iterative development, high-quality maintainable code, unit tests. Nice to have + Experience with MATLAB Why MathWorks? It’s the chance to collaborate with bright, passionate people. It’s contributing to software products that make a difference in the world. And it’s being part of a company with an incredible commitment to doing the right thing – for each individual, our customers, and the local community. 
MathWorks develops MATLAB and Simulink, the leading technical computing software used by engineers and scientists. The company employs 4000 people in 16 countries, with headquarters in Natick, Massachusetts, U.S.A. MathWorks is privately held and has been profitable every year since its founding in 1984. Contact us if you need reasonable accommodation because of a disability in order to apply for a position. The MathWorks, Inc. is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. View the EEO is the Law poster and its supplement. The pay transparency policy is available here. MathWorks participates in E-Verify. View the E-Verify posters here.
          (USA) Revenue, Accounting, and Controls Associate
Revenue, Accounting, and Controls Associate Job Summary Apply Now + Job:19115-FMCG + Location:US-MA-Natick + Department:Finance and Operations This person will be an integral part of the company’s Finance and Operations Team that has worldwide responsibilities for accurately recording the revenue and deferred revenue transactions in compliance with current software and multi-element revenue recognition accounting guidance (SOP 97-2, 98-9, 81-1). The candidate will be responsible for the “entire” revenue/order cycle beginning with the sales quote and concluding with the acknowledgement of payment. Responsibilities + Maintain cross-functional relationships with Sales, Sales Support, Customer Service, Accounting, Operations, etc. to ensure the accuracy of each business transaction. + Validate each sales quote with the corresponding customer order (P.O.) including license configuration, pricing, discounts, terms and conditions in accordance with The MathWorks licensing standards. Maintain the integrity of all customer, contact, and installed product databases. + Follow all established export control policies regarding end use and end user controls. File shipping export declarations as required meeting government regulatory compliance standards. + Confirm revenue is properly recorded by ensuring monthly cutoffs are adhered to, products are shipped, invoiced and passcodes are generated. + Document the beginning and ending software maintenance dates for future recognition of deferred revenue. + Maintain all pertinent documentation needed to properly invoice the customer including P.O.’s, sales tax exempt certificates, freight requirements, ship to and bill to information, etc. + Ascertain that revenue is booked to the appropriate Sales Territory and work collaboratively with Accounting to ensure proper payments are made to the Sales organization. Administer “commission splits” using proper management channels within Sales and Sales Operations. 
+ Perform required Accounts Receivable functions including credit, collections, cash receipts, credit card processing, electronic transfers, etc. Manage a customer portfolio by actively contacting customers in order to maintain a DSO of < 45 days. + Generate monthly operational metrics and revenue reporting. Minimum Qualifications + A bachelor's degree is required. + Candidates for this position must be authorized to work in the United States on a full-time basis for any employer without restriction. + Visa sponsorship will not be provided for this position. Additional Qualifications + B.S. required with an accounting, finance, or business concentration + Must be detail oriented, accurate, and able to provide solid audit trails + Must possess excellent communication, organization, and follow-up skills + Must be self-motivated, able to prioritize, and multitask to meet daily deadlines + Proficient in a computerized environment: MS Excel, MS Word, e-mail, internet, etc. + Credit and collection experience a plus + High tech company and transaction database experience a plus + Additional skills include Oracle experience, export compliance knowledge or foreign language a strong plus Why MathWorks? It’s the chance to collaborate with bright, passionate people. It’s contributing to software products that make a difference in the world. And it’s being part of a company with an incredible commitment to doing the right thing – for each individual, our customers, and the local community. MathWorks develops MATLAB and Simulink, the leading technical computing software used by engineers and scientists. The company employs 4000 people in 16 countries, with headquarters in Natick, Massachusetts, U.S.A. MathWorks is privately held and has been profitable every year since its founding in 1984. Contact us if you need reasonable accommodation because of a disability in order to apply for a position. The MathWorks, Inc. is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. View the EEO is the Law poster and its supplement. The pay transparency policy is available here. MathWorks participates in E-Verify. View the E-Verify posters here.
          (USA) Software Engineer - Infrastructure
Software Engineer - Infrastructure Job Summary Apply Now + Job:18968-MKAI + Location:US-MA-Natick + Department:Product Development We are looking for a versatile, enthusiastic computer scientist or engineer capable of multi-tasking to join the Control & Identification team. You will help develop software tools to facilitate the application of reinforcement learning to practical industrial applications in areas such as robotics and other autonomous systems. You will need skills that cross traditional domain boundaries in areas such as machine learning, optimization, object-oriented programming, and graphical user interface design. Responsibilities + Develop and implement new software tools to help our customers apply reinforcement learning to their applications. + Work on improving the integration and deployment of reinforcement learning tools with workflows utilizing GPUs, parallel computing and cloud computing. + Contribute to all aspects of the product development process from writing functional specifications to designing software architecture to implementing software features. + Work with quality engineering, documentation, and usability teams to develop state-of-the-art software tools. Minimum Qualifications + A bachelor's degree and 3 years of professional work experience (or a master's degree) is required. Additional Qualifications In addition, a combination of some of the following skills is important: + Knowledge of numerical algorithms. + Experience with MATLAB or Simulink. + Experience with machine learning. + Experience with neural networks. + Experience with object-oriented design and programming. + Experience with GPUs and parallel computing. + Experience with IoT and cloud computing is a plus. + Experience with software development lifecycle is a plus. + Experience with other programming languages is nice to have. Why MathWorks? It’s the chance to collaborate with bright, passionate people.
It’s contributing to software products that make a difference in the world. And it’s being part of a company with an incredible commitment to doing the right thing – for each individual, our customers, and the local community. MathWorks develops MATLAB and Simulink, the leading technical computing software used by engineers and scientists. The company employs 4000 people in 16 countries, with headquarters in Natick, Massachusetts, U.S.A. MathWorks is privately held and has been profitable every year since its founding in 1984. Contact us if you need reasonable accommodation because of a disability in order to apply for a position. The MathWorks, Inc. is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. View the EEO is the Law poster and its supplement. The pay transparency policy is available here. MathWorks participates in E-Verify. View the E-Verify posters here.
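The posting above names reinforcement learning as the application domain without describing the algorithms themselves. As a frame of reference, a minimal tabular Q-learning loop — one of the simplest reinforcement-learning methods, and not code from any MathWorks product — looks like this; the toy chain environment, reward values, and hyperparameters are all assumptions chosen for illustration.

```python
import random

def train_chain_walker(n_states=5, episodes=500, alpha=0.5,
                       gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning on a toy chain: start at state 0,
    actions are 0 (step left) and 1 (step right), and reaching
    the rightmost state yields reward 1.0 and ends the episode."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection (explore vs. exploit).
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the best next action.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_chain_walker()
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(4)]
```

After training, the greedy policy steps right from every non-terminal state. Tools of the kind described in the posting replace the toy chain with simulated robots or vehicles and the lookup table with neural networks trained on GPUs, but the update rule above is the conceptual core.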
          (USA) Senior C++ Software Engineer
Senior C++ Software Engineer Job Summary Apply Now + Job:18952-MCAR + Location:US-MA-Natick + Department:Product Development Come join a core development team and enhance the experience of every Simulink customer. Simulink & Stateflow are the products of choice for engineers doing Model-Based Design. Our customers use our products to: + Model and simulate dynamic systems (e.g., automobiles, airplanes, spacecraft) + Design the algorithms needed to control these systems + Automatically convert these algorithms into code that is used to control the real system Simulink is also an integration platform to support multi-method authoring, multi-domain modeling, multi-vendor integration with scalability. This position involves working on the infrastructure to help customers seamlessly integrate external C/C++ code into Simulink. Responsibilities As part of our core C++ software development team, you'll enhance the capabilities of our customers to model in Simulink. • You will apply your knowledge of C++ programming, data structures, object-oriented design, and user workflows to enhance Simulink's core infrastructure and modeling capabilities. • You will be personally responsible for designing and implementing new product features and working collaboratively with a cross functional team for the release of these features to our customers. Minimum Qualifications + Experience with C++ + A bachelor's degree and 7 years of professional work experience (or a master's degree and 5 years of professional work experience, or a PhD degree) is required. Additional Qualifications + Experience with object oriented design. + Experience with algorithm development. + Experience with BOOST, STL, Design Patterns. + Experience with MATLAB, Simulink a plus. Why MathWorks? It’s the chance to collaborate with bright, passionate people. It’s contributing to software products that make a difference in the world. 
And it’s being part of a company with an incredible commitment to doing the right thing – for each individual, our customers, and the local community. MathWorks develops MATLAB and Simulink, the leading technical computing software used by engineers and scientists. The company employs 4000 people in 16 countries, with headquarters in Natick, Massachusetts, U.S.A. MathWorks is privately held and has been profitable every year since its founding in 1984. Contact us if you need reasonable accommodation because of a disability in order to apply for a position. The MathWorks, Inc. is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. View the EEO is the Law poster and its supplement. The pay transparency policy is available here. MathWorks participates in E-Verify. View the E-Verify posters here.
          (USA-VA-Richmond) 2019 Summer Internship Program - BDPA 2018
*2019 Summer Internship Program* *BDPA 2018* Under direct supervision, assists in the technical design, development, implementation, and support of the organization's computing environment. Participates in server-related projects as required, or provides focus and direction on one or more moderately complex customer projects to ensure goals and objectives are accomplished within the prescribed time frame and within the budgeted expenses allocated. The role may represent a cross section of functional and support areas. Under direct supervision, provides ongoing support for software applications. * Assists in the installation, upgrade, testing, and maintenance of software and hardware in multiple-server and multiple-network operating system environments spanning multiple computer sites. * Follows detailed work plans for both installation and continuing maintenance. * Conducts tests and monitors new installations. * Assists with server software monitoring and tuning on a regular basis using applicable monitoring tools. * Assists with producing statistics on performance and is responsible for identifying and resolving system performance problems. * Assists with problem resolution for application software as needed. * Assists with research and testing to determine cause and makes necessary corrections. * Performs installation of software or security patches across the server environment while adhering to change control policies. * Performs other incidental duties as required. Qualifications/Requirements: * Entry-level experience in at least one of the following IT disciplines, e.g., database management systems administration, network administration, systems integration, software development, IT service management, data analytics, IT hardware configuration, or web technologies. * Working knowledge of computing hardware, software, and Windows operating systems for business support assignments; understanding of project analytics and foundational disciplines in the assigned functional area.
* Intermediate working knowledge of Microsoft Office software. Skills required: * Desire and ability to learn the business of IT (overall view of IT) and project management. * Strong ability to solve problems by considering available information, prioritizing, making timely decisions, and drawing logical conclusions. * Entry skill level in areas of analysis, statistics, organization, project management, innovation, and creativity. * Entry-level oral and written communication skills to gather and translate system requirements, interpret and illustrate data, present reports, and interact with all levels of System management, technical staff, and vendors. * Responds to change in a positive manner. Education/Experience: * Currently pursuing a Bachelor's Degree in Computer Science, Information Systems, or Engineering strongly desired. * A GPA of 3.0 or higher is strongly desired. **Organization:** *Federal Reserve Bank of Richmond* **Title:** *2019 Summer Internship Program - BDPA 2018* **Location:** *VA-Richmond* **Requisition ID:** *256610*
          (USA-TX-College Station) Lead Software Applications Developer
Job Title Lead Software Applications Developer Agency Texas A&M University Department Division Of Information Technology Proposed Minimum Salary Commensurate Job Location College Station, Texas Job Type Staff Job Description The Software Applications Developer III (Lead Software Applications Developer) is responsible for serving as the technical lead for a specific software development project or service. Required Education and Experience: + Bachelor’s degree or equivalent combination of education and experience. + Five years of software applications developer experience. Required Special Knowledge, Skills and Abilities: + Must be able to work in a collaborative team environment. + Ability to multi-task and work cooperatively with others. + Must have strong interpersonal skills. Other Requirements or Other Factors: + Provide on-call support on nights and weekends as needed. + Hiring restrictions in compliance with System Policy 15.02 Export Controls: Must be a United States citizen, permanent resident, or a person granted asylum or refugee status in accordance with 15 CFR, Part 762; 22 CFR §§122.5, 123.22 and 123.26; and 31 CFR § 501.601. Preferred Education and Experience: + Experience with Test-Driven Development and Continuous Integration. + Experience with ServiceNow development and implementation. Responsibilities: + Approves coding designs. Coordinates the integration of multiple code designs to ensure compatibility. + Answers questions and coordinates the technical guidance and/or training provided to application users. Oversees consistency with design documentation. Participates in the development of system and programming standards. + May coordinate the technical activities of a project team. Completes reports and summaries for management and/or users including status reports, problem reports, progress summaries, and system utilization reports. Serves as a senior member of an information resource team responsible for setting technical direction. 
Performs all or some of the duties of a Senior Software Applications Developer. + Collaborates with the project leader to develop work plans and time schedules for projects including outlining phases and identifying personnel and computing equipment requirements. + Oversees the process used to review and analyze software documentation and production results to facilitate problem resolution. + Participates in data architecture design. + Creates, evaluates, and approves test plans. + Coordinates the evaluation of software products and programming languages. Makes recommendations based on the evaluation of software products and programming languages for their applicability to the system and/or project. + Participates in training and professional development sessions. Performs other duties as assigned. Instructions to Applicants: Applications received by Texas A&M University must either have all job application data entered or a resume attached. Failure to provide all job application data or a complete resume could result in an invalid submission and a rejected application. We encourage all applicants to upload a resume or use a LinkedIn profile to pre-populate the online application. All positions are security-sensitive. Applicants are subject to a criminal history investigation, and employment is contingent upon the institution’s verification of credentials and/or other information required by the institution’s procedures, including the completion of the criminal history check. Equal Opportunity/Affirmative Action/Veterans/Disability Employer committed to diversity. Howdy and thank you for your interest in a career with Texas A&M University. As the flagship campus of The Texas A&M University System, we are located in College Station, Texas with a student population of more than 64,000 and nearly 9,000 faculty and staff. The Spirit of Aggieland is unmistakable. 
We are a unique American institution, fostering a culture of friendliness, diversity, compassion and respect for one another. Our unique history and rich traditions make Texas A&M special. From our benefits package and professional development opportunities to our retirement programs, Texas A&M is a great place to work. Your path to a great career starts here! Equal Opportunity/Affirmative Action/Veterans/Disability Employer committed to diversity. If you need assistance in applying for this job, please contact (979) 845-5154. Useful Links: + Benefit Programs + Retirement + Cost of Living Calculator + Dependent Care Programs + Employee Discount Program + Flexible Spending Accounts + University Holidays + Legal Statements + New Employee Orientation + Prospective Employees + Safety & Security Notices + Training & Development + USERRA + Nondiscrimination Notice
          (USA-CA-Herlong Sierra Ordnance Depot) Information Technology Specialist (Customer Service)
## Duties ### Summary Sierra Army Depot is located in Herlong CA, a rural town in Northern California. Herlong is approximately 60 miles north of Reno, NV, and 35 miles south of Susanville, CA. Employees work a 4/10 work week, Mon-Thurs, 0630-1700 hrs. Youth Facilities and daily child care available. The depot serves as an Army Expeditionary Logistical Support Center. (Mass transportation is available at minimal cost.) Surrounding area boasts a full range of outdoor sports/recreation activities. ### Responsibilities * Diagnose and resolve problems in response to customer incidents. * Install, configure, and troubleshoot hardware and software. * Respond to Help Desk calls to assist customers with computer hardware or software problems. * Document Help Desk actions into a tracking database. ### Travel Required Occasional travel - You may be expected to travel for this position. ##### Supervisory status No ##### Promotion Potential 9 #### Job family (Series) 2210 Information Technology Management ## Requirements ### Conditions of Employment * Appointment may be subject to a suitability or fitness determination, as determined by a completed background investigation. * Must obtain/maintain certifications per DOD directives 8570.01-M, 8140.01, AR 25-2. Position is IT II Level, IAT II Level. Must obtain computing environment (CE) certification IAW DoD 8570.01-M, DOD directive 8140.01 within 6 months of employment. * Position is subject to recall to duty. The employee may be required to work other than normal duty hours, which may include evenings, weekends, and/or holidays. * Position requires the employee to occasionally travel up to 10% of the time (in the most expedient manner) away from the normal duty station. * Incumbent must obtain and maintain a Secret clearance. * Must possess a valid motor vehicle drivers license.
### Qualifications **Who May Apply: US Citizens** In order to qualify, you must meet the education and/or experience requirements described below. Experience refers to paid and unpaid experience, including volunteer work done through National Service programs (e.g., Peace Corps, AmeriCorps) and other organizations (e.g., professional; philanthropic; religious; spiritual; community; student; social). You will receive credit for all qualifying experience, including volunteer experience. Your resume must clearly describe your relevant experience; if qualifying based on education, your transcripts will be required as part of your application. Additional information about transcripts is in this document. **Specialized Experience:** I have one year of specialized experience equivalent to the GS-07 level in the Federal service which includes 1) Installation, configuration, and troubleshooting of hardware and software in response to customer reported issues; 2) Providing guidance to customers with various levels of computer skills on the use of new or existing hardware or software; 3) Performing inventories of equipment to ensure accountability; 4) Performing research to identify new trends in training or IT systems. In addition, I possess IT-related experience demonstrating each of the four competencies as defined: 1) Knowledge of Current Technical Skills - install, configure, upgrade and troubleshoot information technology (IT) systems. 2) Customer Service-Guide customers calling the help desk on how to perform troubleshooting on minor system issues. Provide customers an estimate of when their systems can be repaired. Provide customer with a timeline of performance of task; 3) Oral Communication-Explain to customers what the hardware or software issues are and how to prevent them. Answer customers' questions regarding system issues to minimize delays in work productivity.
Interact with customers by personal conversation, emails and phone calls for results; and 4) Problem Solving-Resolve software issues by process of elimination or researching the type of error. Resolve customer hardware issues by checking all connections, checking software and testing hardware. OR I have a Master's or equivalent graduate degree or 2 full years of progressively higher level graduate education leading to such a degree in a field which demonstrates the knowledge, skills, and abilities necessary to do the work of the position, such as: computer science, engineering, information science, information systems management, mathematics, operations research, statistics, or technology management or degree that provided a minimum of 24 semester hours in one or more of the fields identified above and required the development or adaptation of applications, systems or networks. OR I have the experience as described in A AND the education described in B. (Note: You must attach a copy of your transcripts.) You will be evaluated on the basis of your level of competency in the following areas: * Customer Service ### Education **FOREIGN EDUCATION:** If you are using education completed in foreign colleges or universities to meet the qualification requirements, you must show the education credentials have been evaluated by a private organization that specializes in interpretation of foreign education programs and such education has been deemed equivalent to that gained in an accredited U.S. education program; or full credit has been given for the courses at a U.S. accredited college or university. For further information, visit: http://www.ed.gov/about/offices/list/ous/international/usnei/us/edlite-visitus-forrecog.html ### Additional information * Male applicants born after December 31, 1959, must complete a Pre-Employment Certification Statement for Selective Service Registration. * You will be required to provide proof of U.S. Citizenship.
* Two year trial/probationary period may be required. * Direct Deposit of Pay is required. * Selection is subject to restrictions resulting from Department of Defense referral system for displaced employees. * If you have retired from federal service and you are interested in employment as a reemployed annuitant, see the information in the Reemployed Annuitant information sheet. * This is a Career Program (CP) 34 position. * You may claim military spouse preference. * Multiple positions may be filled from this announcement. * Salary includes applicable locality pay or Local Market Supplement. * Interagency Career Transition Assistance Program (ICTAP). If you are a Federal employee in the competitive service and your agency has notified you in writing that you are a displaced employee eligible for ICTAP consideration, you may receive selection priority. To receive selection priority for this position, you must: (1) meet ICTAP eligibility criteria; (2) be rated well-qualified for the position with a score of 90 or above; and, (3) submit the appropriate documentation to support your ICTAP eligibility. Additional information about the program is on OPM's Career Transition Resources website. * Further certification may occur up to 90 days after original certification. * **THIS IS A TERM POSITION NOT TO EXCEED ONE YEAR AND ONE DAY. THIS APPOINTMENT MAY BE EXTENDED IN INCREMENTS OF ONE YEAR AND ONE DAY TO A MAXIMUM OF 6 YEARS. EXTENSIONS ARE NOT GUARANTEED AND ARE CONTINGENT UPON BUDGET AND WORKLOAD REQUIREMENTS.** ### How You Will Be Evaluated You will be evaluated for this job based on how well you meet the qualifications above. Once the announcement has closed, a review of your application package (resume, supporting documents, and responses to the questionnaire) will be used to determine whether you meet the qualification requirements listed on this announcement.
If you are minimally qualified, your résumé and supporting documentation will be compared against your responses to the assessment questionnaire to determine your level of experience. If, after reviewing your résumé and/or supporting documentation, a determination is made that you have inflated your qualifications and/or experience, you may lose consideration for this position. Please follow all instructions carefully when applying; errors or omissions may affect your eligibility. You should list any relevant performance appraisals and incentive awards in your resume as that information may be taken into consideration during the selection process. If selected, you may be required to provide supporting documentation. ### Background checks and security clearance ##### Security clearance Secret ##### Drug test required No ## Required Documents The documents you are required to submit vary based on whether or not you are eligible for preference in federal employment. A complete description of preference categories and the associated required documents is in the Applicant Checklist (External). As described above, your complete application includes your resume, your responses to the online questionnaire, and documents which prove your eligibility to apply. **If you fail to provide these documents, you will be marked as having an incomplete application package and you will not be considered any further.** **1. Your resume:** * Your resume may be submitted in any format and must support the specialized experience described in this announcement. * If your resume includes a photograph or other inappropriate material or content, it will not be used to make eligibility and qualification determinations and you may not be considered for this vacancy. * For qualifications determinations your resume must contain hours worked per week and the dates of employment (i.e., HRS per week and month/year to month/year or month/year to present).
If your resume does not contain this information, your application may be marked as incomplete and you will not receive consideration for this position. * For additional information see: What to include in your resume. **2. Other supporting documents:** * Cover Letter, optional * Most recent Performance Appraisal, if applicable * This position has an individual occupational requirement and/or allows for substitution of education for experience. If you meet this requirement based on education you MUST submit a copy of your transcript with your application package or you will be rated ineligible. See: Transcripts and Licenses * This position requires a job-related license or certification. You MUST submit a copy of your license or certification with your application package or you will be rated ineligible. See: Transcripts and Licenses NOTE: Documents submitted as part of the application package, to include supplemental documents, may be shared beyond the Human Resources Office. Some supplemental documents such as military orders and marriage certificates may contain personal information for someone other than you. You may sanitize these documents to remove another person's personal information before you submit your application. You may be asked to provide an un-sanitized version of the documents if you are selected to confirm your eligibility. #### If you are relying on your education to meet qualification requirements: Education must be accredited by an accrediting institution recognized by the U.S. Department of Education in order for it to be credited towards qualifications. Therefore, provide only the attendance and/or degrees from schools accredited by accrediting institutions recognized by the U.S. Department of Education. Failure to provide all of the required information as stated in this vacancy announcement may result in an ineligible rating or may affect the overall rating. * Benefits Help ## Benefits A career with the U.S. 
Government provides employees with a comprehensive benefits package. As a federal employee, you and your family will have access to a range of benefits that are designed to make your federal career very rewarding. Learn more about federal benefits. Review our benefits Eligibility for benefits depends on the type of position you hold and whether your position is full-time, part-time, or intermittent. Contact the hiring agency for more information on the specific benefits offered. * How to Apply Help ## How to Apply To apply for this position, you must complete the online questionnaire and submit the documentation specified in the **Required Documents** section above. The complete application package must be submitted by 11:59 PM (EST) on 08/21/2018 to receive consideration * To begin, click **Apply**to access the online application. You will need to be logged into your USAJOBS account to apply. If you do not have a USAJOBS account, you will need to create one before beginning the application (https://apply.usastaffing.gov/ViewQuestionnaire/10279599). * Follow the prompts to **select your résumé and/or other supporting documents** to be included with your application package. You will have the opportunity to upload additional documents to include in your application before it is submitted. Your uploaded documents may take several hours to clear the virus scan process. * After acknowledging you have reviewed your application package, complete the Include Personal Information section as you deem appropriate and **click to continue with the application process.** * You will be taken to the online application which you must complete in order to apply for the position. Complete the online application, verify the required documentation is included with your application package, and submit the application. 
**You must re-select your resume and/or other documents from your USAJOBS account or your application will be incomplete.** * It is your responsibility to verify that your application package (resume, supporting documents, and responses to the questionnaire) is complete, accurate, and submitted by the closing date. Uploaded documents may take up to one hour to clear the virus scan. * Additional information on how to complete the online application process and submit your online application may be found on the USA Staffing Applicant Resource Center. To verify the status of your application, log into your USAJOBS account (https://my.usajobs.gov/Account/Login), all of your applications will appear on the Welcome screen. The Application Status will appear along with the date your application was last updated. For information on what each Application Status means, visit: https://www.usajobs.gov/Help/how-to/application/status/. If you are unable to apply online or need to fax a document you do not have in electronic form, view the following link for information regarding an Alternate Application. **If you submit an inquiry to the e-mail address listed in the Agency Contact Information below please identify the announcement number of the position in the subject line of the e-mail. This will expedite a response to your inquiry.** Read more ### Agency contact information ### Army Applicant Help Desk ##### Phone (000)000-0000 ##### Email USARMY.APG.CHRA-NE.MBX.APPLICANTHELP@MAIL.MIL ##### Address DS-APF-W0MJAA US ARMY DEPOT SIERRA DO NOT MAIL Herlong, CA 96113 US Learn more about this agency ### Next steps If you provided an email address, you will receive an email message acknowledging receipt of your application. Your application package will be used to determine your eligibility, qualifications, and quality ranking for this position. If you are determined to be ineligible or not qualified, your application will receive no further consideration. 
## Fair & Transparent
The Federal hiring process is set up to be fair and transparent. Please read the following guidance.

### Equal Employment Opportunity Policy
The United States Government does not discriminate in employment on the basis of race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, or other non-merit factor.
* Equal Employment Opportunity (EEO) for federal employees & job applicants

### Reasonable Accommodation Policy
Federal agencies must provide reasonable accommodation to applicants with disabilities where appropriate. Applicants requiring reasonable accommodation for any part of the application process should follow the instructions in the job opportunity announcement. For any part of the remaining hiring process, applicants should contact the hiring agency directly. Determinations on requests for reasonable accommodation will be made on a case-by-case basis. A reasonable accommodation is any change to a job, the work environment, or the way things are usually done that enables an individual with a disability to apply for a job, perform job duties or receive equal access to job benefits. Under the Rehabilitation Act of 1973, federal agencies must provide reasonable accommodations when:
* An applicant with a disability needs an accommodation to have an equal opportunity to apply for a job.
* An employee with a disability needs an accommodation to perform the essential job duties or to gain access to the workplace.
* An employee with a disability needs an accommodation to receive equal access to benefits, such as details, training, and office-sponsored events.

You can request a reasonable accommodation at any time during the application or hiring process or while on the job.
Requests are considered on a case-by-case basis. Learn more about disability employment and reasonable accommodations or how to contact an agency. Read more #### Legal and regulatory guidance * Financial suitability * Social security number request * Privacy Act * Signature and false statements * Selective Service * New employee probationary period This job originated on www.usajobs.gov. For the full announcement and to apply, visit www.usajobs.gov/GetJob/ViewDetails/507307900. Only resumes submitted according to the instructions on the job announcement listed at www.usajobs.gov will be considered. *Open & closing dates:* 08/08/2018 to 08/21/2018 *Service:* Competitive *Pay scale & grade:* GS 9 *Salary:* $57,014 to $74,120 per year *Appointment type:* Term - 1 Year 1 Day *Work schedule:* Full-Time
          2008-06-02
Internet realty, free WiFi, cell phone TV, micro-payment scam, micro-sub-notebook cloud computing, space alien video, NmG electric car
          Salesforce Solution Architect - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast-paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Mon, 07 May 2018 06:20:29 GMT - View all Casper, WY jobs
          Global Embedded AI Computing Platforms Market insights by Growth, Size, Supply, Demand, Comparative Analysis, Competitive market share forecast
The global embedded AI computing platforms market report takes an overview of existing efforts by media, research firms, and others who have attempted to move from an eagle's-eye view of the AI industry to categorizing technologies under one grand umbrella, covering opportunity, development, pricing, trade and more. The AI marketplace is positioned to transform the entire embedded system ecosystem with a multitude of AI capabilities such as deep machine learning, image detection and many others. Global Embedded AI...
          Approximate Computing, Intelligent Computing
Approximate computing could be considered intelligent computing, because it spends energy to perform exact computation only when needed and approximates whenever possible.
          Approximate Computing
This special issue of IEEE Micro explores exciting, new ideas in the vast design space of approximate computing. We present articles that range from programming languages to circuits and cover important application domains such as machine learning and the Internet of Things.
          IAA: Incidental Approximate Architectures for Extremely Energy-Constrained Energy Harvesting Scenarios using IoT Nonvolatile Processors
Battery-less IoT devices powered through energy harvesting face a fundamental imbalance between the potential volume of collected data and the amount of energy available for processing that data locally. We explore a combination of approximate computing and intermittent computing, an incidental approximate architecture suited to nonvolatile processors (NVPs).
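The interplay between checkpointing and intermittent power described above can be illustrated with a toy model. This is not the paper's NVP architecture; the unit-cost energy model and all names are invented for illustration. Work survives power loss only if it has been committed to nonvolatile state:

```python
class IntermittentProcessor:
    """Toy model of intermittent computing on a nonvolatile processor.

    Volatile state (locals) is lost whenever harvested energy runs out;
    progress survives only because each step is checkpointed to `nv`,
    which stands in for nonvolatile memory (e.g., FRAM).
    """

    def __init__(self, items):
        self.items = items
        self.nv = {"index": 0, "acc": 0}   # nonvolatile checkpoint

    def run(self, energy_budget):
        """One powered burst; returns True once all items are processed."""
        i, acc = self.nv["index"], self.nv["acc"]   # restore from checkpoint
        while i < len(self.items) and energy_budget > 0:
            acc += self.items[i]
            i += 1
            energy_budget -= 1                          # each step costs one unit
            self.nv["index"], self.nv["acc"] = i, acc   # commit progress
        return i >= len(self.items)
```

Calling `run()` repeatedly with small budgets models repeated power failures; the accumulated result is still correct at the end because no un-checkpointed work is ever counted.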
          Walking through the Energy-Error Pareto Frontier of Approximate Multipliers
In this article, we target approximate computing for arithmetic circuits, focusing on the most complex and power-hungry units: hardware multipliers. Driven by the lack of a clear solution on the energy-error efficiency of existing approximate multiplication techniques, we present a new, efficient, and easily applied approximation design, as well as explore the current state-of-the-art design space. We show that the proposed approximation scheme can be equally applied at design time, to enable synthesis of customized approximate multiplier circuits, and at runtime, to support dynamic approximation tuning scenarios. We achieve significant gains (up to 69-percent energy and 64-percent area savings with respect to accurate designs) by proposing hybrid approximation performed by two independent techniques that reduce both the depth (through perforation) and the width (through rounding) of the partial-product accumulation tree. The corresponding runtime approximation solution delivers energy gains of up to 47 percent while introducing negligible area. More importantly, we show that design solutions configured through the proposed approach form the Pareto frontier of the energy-error space when considering direct quantitative comparisons with the existing state of the art.
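As a rough illustration of partial-product perforation, a shift-and-add multiplier can simply skip its least-significant partial-product rows. This is a behavioral Python sketch under my own assumptions, not the article's circuit design:

```python
def mult(a, b, n=8, perforated=0):
    """Shift-and-add multiplication of n-bit unsigned operands.

    `perforated` skips that many least-significant partial-product rows,
    shrinking the accumulation tree's depth at the cost of a small,
    bounded underestimate of the true product.
    """
    assert 0 <= a < (1 << n) and 0 <= b < (1 << n)
    # each set bit of b contributes one shifted copy of a (a partial product)
    return sum(a << i for i in range(perforated, n) if (b >> i) & 1)
```

For example, `mult(173, 91, perforated=2)` drops the two lowest rows and underestimates the exact product 15743 by 173 * 3 = 519, an error of about 3 percent, in exchange for a shallower accumulation tree.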
          SiMul: An Algorithm-Driven Approximate Multiplier Design for Machine Learning
The need to support various machine learning (ML) algorithms on energy-constrained computing devices has steadily grown. In this article, we propose an approximate multiplier, which is a key hardware component in various ML accelerators. Dubbed SiMul, our approximate multiplier features user-controlled precision that exploits the common characteristics of ML algorithms. SiMul supports a tradeoff between compute precision and energy consumption at runtime, reducing the energy consumption of the accelerator while satisfying a desired inference accuracy requirement. Compared with a precise multiplier, SiMul improves the energy efficiency of multiplication by 11.6x to 3.2x while achieving 81.7-percent to 98.5-percent precision for individual multiplication operations (96.0-, 97.8-, and 97.7-percent inference accuracy for three distinct applications, respectively, compared to the baseline inference accuracy of 98.3, 99.0, and 97.7 percent using precise multipliers). A neural accelerator implemented with our multiplier can provide 1.7x (up to 2.1x) higher energy efficiency over one implemented with the precise multiplier with a negligible impact on the accuracy of the output for various applications.
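The precision/energy tradeoff can be mimicked in software by truncating operands to fewer significant bits before multiplying. This is a hypothetical stand-in for the idea of user-controlled precision, not SiMul's actual circuit:

```python
def truncate(x, keep, width=8):
    """Zero out all but the `keep` most-significant bits of a width-bit value."""
    drop = width - keep
    return (x >> drop) << drop

def approx_dot(xs, ws, keep=8, width=8):
    """Dot product with both operands truncated to `keep` significant bits.

    `keep` acts as a runtime precision knob: lower values model cheaper,
    less precise multiplies, as in precision-scalable ML accelerators.
    """
    return sum(truncate(x, keep, width) * truncate(w, keep, width)
               for x, w in zip(xs, ws))
```

An accelerator could lower `keep` at runtime until the inference accuracy requirement is just met, spending energy on full-precision multiplies only when the model actually needs them.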
          BenQ ZOWIE ANNOUNCES INDIA QUALIFIERS FOR eXTREMESLAND 2018 - CSGO TOURNAMENT
BenQ showcases ZOWIE’s XL2546 as the official gaming monitor for eXTREMESLAND 2018
by Shrutee K/DNS
New Delhi - BenQ ZOWIE officially announces the Indian qualifier for the ZOWIE Asia eXTREMESLAND 2018 CS:GO tournament. ZOWIE Asia eXTREMESLAND is an annual eSports event which aims to take the competitive gaming scene in Asia Pacific to a whole new level. Registrations for the India leg will commence on 6th August, with the tournament kicking off on 13th August; the qualifying team from India will face off against the best teams in the Asia Pacific region, with the tournament concluding in Shanghai, China, for $100,000. As an open qualifier, teams from India will be eligible to compete over a 4-week period of qualifying rounds, followed by an on-ground finale event for their chance to qualify for the ZOWIE Asia eXTREMESLAND tournament in China.
The Indian qualifiers will include DIRECT INVITE QUALIFIERS (DIQ) for the very first time in the Indian eSports history where top 8 teams will battle against each other for the top 2 seeding at the Indian Finals. Moreover, there will also be One Online Qualifier and 4 Offline qualifiers which will take place in Gaming Zones at Mumbai, Hyderabad, Bhubaneswar, and Jaipur.
The dates for the Indian qualifiers will be:
Direct invite Qualifiers: 13th & 14th August
Online Qualifiers: 18th & 19th August
Mumbai Qualifier: 25th & 26th August
Hyderabad Qualifier: 25th & 26th August
Bhubaneswar Qualifier: 25th & 26th August
Jaipur Qualifier: 25th & 26th August
Indian Finals: 6th to 9th September 
With ZOWIE Asia eXTREMESLAND 2018 tournament, ZOWIE aims to provide the opportunity for new and lesser known teams a chance to prove their talents in an easily accessible, open format.
ZOWIE focuses all of its efforts on providing the best performance for pro-gamers while considering each player's preference and comfort, as ZOWIE understands the importance of personal preference in professional eSports products. All the matches of ZOWIE Asia eXTREMESLAND 2018 will be played on the award-winning BenQ ZOWIE XL2546 monitor, which packs a native 240Hz refresh rate and 1 ms response time.
BenQ ZOWIE XL2546 Gaming Monitor comes equipped with BenQ ZOWIE’s exclusive technology - “Dynamic Accuracy” maintains remarkable clarity during in-game movements allowing for a smoother experience. It also features Color Vibrance and Black eQualizer technology, fine-tuned to optimize the color performance and increase the visibility in dark scenes respectively.
Key features include:
Exclusive DyAc™ technology achieves remarkable clarity while providing a different spray feeling
Native 240Hz refresh rate delivers smoothest ever gameplay experience
Built-in Black eQualizer technology brightens dark scenes without over-exposing the already bright areas
To optimize gaming precision, the XL2546 is equipped with Color Vibrance, which gives you the flexibility to tune color performance to your preference.
Shield helps you focus on the game by blocking out distractions
The effortless one-finger height adjustable stand provides personalized viewing angles
S Switch allows you to easily access settings and transfer 3 profiles on the go
Specially designed frame minimizes reflective screen glare
ZOWIE product experience will be available throughout the tournament. For registrations and more details stay tuned to http://zowieEL2018.lxgindia.com. Get ready for a whole lot of intense gaming and excitement!
About BenQ Corporation: Founded on the corporate vision of “Bringing Enjoyment ‘N’ Quality to Life”, BenQ Corporation is a world-leading human technology and solutions provider aiming to elevate and enrich every aspect of consumers’ lives. To realize this vision, the company focuses on the aspects that matter most to people today – lifestyle, business, healthcare and education – with the hope of providing people with the means to live better, increase efficiency, feel healthier and enhance learning. Such means include a delightful broad portfolio of people-driven products and embedded technologies spanning digital projectors, monitors, interactive large-format displays digital cameras and camcorders, mobile computing devices, and lighting solutions. Because it matters.
About ZOWIE: Introduced in late 2008, ZOWIE is a brand dedicated to the development of the best competitive gaming gears available that compliment eSports athletes’ combating performance. From 2015 on, ZOWIE brand was acquired by BenQ Corp to represent the company’s eSports product line that delivers truly competitive experience and enjoyment.

          Magic Leap One AR Goggles
Be ready to explore a whole new dimension in computing with the Magic Leap One AR Goggles. These goggles unveil an augmented reality system you’ve never seen before. The Creation Edition includes the goggles and an external computer called Lightpack as well as a handheld controller. Together, they help you discover a completely new form of virtual experience. You get to enjoy a smooth and intuitive performance with maximum precision and accuracy. Additionally, there’s a digital light-field, visual perception, high powered... Continue Reading
          Software Engineer - Java/SQL
WA-Kirkland, We are a leading, global enterprise systems management company! Our breakthrough peer-to-peer distributed computing technology is trusted by hundreds of large enterprises around the world. Our environment and culture incorporate our four core values: Integrity, Excellence, Work Ethic, and Dignity of Labor. As an employer, we offer above-market compensation structures, industry-leading benefits pac
          Cloud Computing Co. To Pay SEC $1.9M Over Revenue Flubs

          Back to the future: Hewlett Packard's Dr Fabio Fontana
Hewlett Packard Enterprise is sticking to the ideals of its nearly 80-year legacy while using new technology to revolutionise computing and bring Silicon Valley to the Middle East. Vice President Dr Fabio Fontana is playing a key role in these plans
          Microsoft, AT&T and Nike on AI

We’re on the threshold of a new era, where rapid advances in artificial intelligence, the internet of things, cloud computing, and automation will transform how we live and work.

Ethical Corporation has just published a new 40-page briefing that delves into the impact of AI on business and society and I wanted to send this across to you - you can access the report here.

There’s 40-pages of expert response and analysis from the likes of Danone, Nike, Flex, AT&T, PwC, Infosys, Microsoft, Sodexo and many more on:

  • All change: How AI is disrupting business

  • The reskilling challenge: Who will mind the robots?

  • Apocalypse soon? Fears rise of AI arms race

  • AI for good: How tech could transform sustainability

  • Machine learning: Automation case studies

Click here to receive the 40-page briefing


          Serverless Computing Services Market: Popular Trends & Technological Advancements to Watch Out for in the Near Future (2018-2025)
Qyresearchreports includes the new market research report "Global Serverless Computing Services Market Size, Status and Forecast 2018-2025" in its huge collection of research reports. The key players covered in this study: AWS, Google, Alibaba, Huawei, Dell Boomi, IBM Cloud, Microsoft, Joyent, Salesforce. Market segment by application, split into: Personal, Small Enterprises, Middle Enterprises, Large Enterprises.

          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY
Company Overview: Do you want to be part of a fast paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Sun, 29 Jul 2018 06:18:46 GMT - View all Cheyenne, WY jobs
          Oracle challenges Pentagon’s multibillion-dollar cloud computing contract before bids are even submitted -- The Washington Post
Yes, because everyone in the IT industry knows Oracle is all about innovation and lower prices...
"Oracle took the unusual step of bringing its protest long before contractors have even submitted bids, alleging that the procurement of what is called the Joint Enterprise Defense Infrastructure (JEDI) has been problematic from the outset. In the bid protest document dated Aug. 6, the company accused the Pentagon of failing to adhere to procurement regulations and pursuing a strategy that will hurt the U.S. military’s technological prowess.

“The technology industry is innovating around next generation cloud at an unprecedented pace and JEDI virtually assures DoD will be locked into legacy cloud for a decade or more,” an Oracle spokeswoman said in a statement Tuesday. “The single-award approach is contrary to [the commercial technology] industry’s multi-cloud strategy, which promotes constant competition, fosters innovation and lowers prices.”"
Oracle challenges Pentagon’s multibillion-dollar cloud computing contract before bids are even submitted -- The Washington Post

          Senior Information Security Consultant - Network Computing Architects, Inc. - Bellevue, WA
With the ability to interface and communicate at the executive level, i.e., CIOs, CTOs and Chief Architects....
From Network Computing Architects, Inc. - Mon, 11 Jun 2018 23:15:53 GMT - View all Bellevue, WA jobs
          Cisco buys Duo Security to address a ‘new’ security perimeter

Last week, Cisco jumped head first into the identity and access management (IAM) market with its acquisition of Duo Security for over $2.3 billion. Now, I’ve been chatting with Cisco about identity management for many years. Cisco always understood the importance of identity management in the security stack but remained reluctant to jump into this area. 

Why the change of heart? Because cloud and mobile computing have all but erased the network perimeter. These days, mobile users access SaaS and cloud-based applications and never touch internal networks at all. As one CISO told me years ago, “Because of cloud and mobile computing, I’m losing control of my IT infrastructure. To address this change, I’m really forced into gaining more control in two areas: Identity and data security. Like it or not, these two areas are the ‘new’ security perimeters.”

To read this article in full, please click here


          Too Much Information

Originally posted on: http://brustblog.com/tmurphy/archive/2007/04/12/TooMuchInformation.aspx

I was listening to .NET Rocks last week when Dan Appleman and Richard Campbell decided to start talking about the early days in computers.  I decided to have a little fun and send the show an email with what I remember of the early days of computing.

SURPRISE!

They read it during the show.  Of course it sounds funnier with Richard reading it than when I wrote it.


          Part-time Teaching Opportunities, Applied Computing - Sheridan College - Greater Toronto Area, ON
Sheridan professors are responsible for developing an effective learning environment for students while respecting their diverse cultural and educational...
From Sheridan College - Wed, 18 Jul 2018 07:46:41 GMT - View all Greater Toronto Area, ON jobs
          Integration Platform as a Service Market Drivers and Challenges New Market to 2025

New York, NY -- (SBWIRE) -- 08/10/2018 -- With the growing usage of cloud-based solutions, cloud integration has become a challenge for enterprises. Many enterprises are demanding secure and reliable cloud integration, which is being offered by solution providers as iPaaS. It allows the creation, execution and management of integration workflows among cloud-based and on-premises applications and data protocols.

iPaaS is an emerging platform technology that manages integration data flows with the help of dedicated tools. The platform provides integration between multiple clouds and other business applications.

iPaaS is growing significantly due to features such as serverless architecture. Many platforms took a few years to take advantage of multi-tier or two-tier architectures; in iPaaS, however, the tier is represented through API management and supporting tier services. Moreover, the continuous growth in the usage of OData and other APIs supported by iPaaS is helping this market to grow.

Request For Report Sample @ https://www.persistencemarketresearch.com/samples/13141

iPaaS Market: Drivers and Challenges

The major factor driving adoption of iPaaS is its increasing uptake among small and medium enterprises for integration. iPaaS solutions help reduce the total cost of ownership, which enables enterprises to adopt them easily. Moreover, these platforms enable IT workers or consultants to write custom connectors and operate packaged solutions available with the platform or in its marketplaces, utilizing off-the-shelf integration with popular services such as Salesforce, Oracle, Akamai and others.
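To make the "custom connector" idea concrete, here is a minimal sketch of what such an interface might look like. This is purely illustrative; the class and method names are invented and do not correspond to any specific iPaaS product's API:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Hypothetical connector contract: the platform pulls records from a
    source system and pushes transformed records into a target system."""

    @abstractmethod
    def extract(self):
        """Return a list of records from the underlying system."""

    @abstractmethod
    def load(self, records):
        """Write records into the underlying system."""

class InMemorySource(Connector):
    def __init__(self, rows):
        self.rows = rows

    def extract(self):
        return list(self.rows)

    def load(self, records):
        raise NotImplementedError("source is read-only")

class InMemoryTarget(Connector):
    def __init__(self):
        self.stored = []

    def extract(self):
        return list(self.stored)

    def load(self, records):
        self.stored.extend(records)

def run_flow(source, target, transform=lambda r: r):
    """One integration workflow: extract, transform per record, load."""
    records = [transform(r) for r in source.extract()]
    target.load(records)
    return len(records)
```

In a real platform, the source and target would wrap SaaS APIs (e.g., a CRM and a data warehouse), and the platform would schedule, retry and monitor `run_flow` on the user's behalf.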

The key challenge in the iPaaS market is the lack of data security solutions. Cloud security is still a challenge for many enterprises, given the few security solutions available and the rise in organized threats. Many providers offer cloud security solutions, but these do not provide complete security, so there is always a risk of a data breach.

iPaaS Market: Segmentation

Segmentation on the basis of Platforms:

Infrastructure platforms

Segmentation on the basis of Services:

Consulting
Implementation and training
Integration service

Key Contracts:

In June 2016, SnapLogic launched a partner program to increase the adoption of iPaaS. Within this program, SnapLogic entered into partnerships with Verizon, HCL, Tech Mahindra and others to drive digital transformation among enterprises.

In June 2015, AgilityWorks entered into a partnership with Dell Boomi AtomSphere to launch a managed service for iPaaS in Europe. This partnership is helping AgilityWorks to grow its client base by redefining customer engagement models with the help of iPaaS.

In July 2014, Scribe Software launched an iPaaS to improve its offering for partners and to distribute integration services tailored to their businesses. This has helped Scribe Software partners execute projects on time and deliver work efficiently.

There are many vendors in the iPaaS market, among them MuleSoft, Microsoft, Dell Boomi, Fujitsu, IBM, i2Factory, NTT Data, Oracle, Red Hat, Akana, SnapLogic and others.

Download Table Of Content @ https://www.persistencemarketresearch.com/toc/13141

Regional Overview

Presently, North America holds the largest market share for iPaaS due to high adoption of cloud computing technologies among enterprises. The adoption of other service models such as SaaS and IaaS is also impacting the iPaaS market positively. Companies such as Oracle and IBM are also working on the development of iPaaS platforms in this market to enhance market opportunities.

In Europe region, the market for iPaaS is witnessing high growth rate due to the increasing demand for cloud integration solutions without increasing hardware components.

The Asia Pacific region is following the Europe region in this iPaaS market. This market is expected to have the highest growth rate in coming years due to the adoption of managed services and the growing adoption of integration services.

The report covers exhaustive analysis on:

iPaaS Market Segments

Market Dynamics
Historical Actual Market Size, 2013 - 2015
Market Size & Forecast 2016 to 2026
Value Chain
Market Current Trends/Issues/Challenges
Competition & Companies involved
Market Drivers and Restraints

Regional analysis for iPaaS Market includes development of these systems in the following regions:

North America
By US
By Canada
Latin America
By Brazil
By Mexico
By Others
Europe
By U.K.
By France
By Germany
By Poland
By Russia
Asia Pacific
By Australia and New Zealand (ANZ)
By Greater China
By India
By ASEAN
By Rest of Asia Pacific
Japan
Middle East and Africa
By GCC Countries
By Other Middle East
By North Africa
By South Africa
By Other Africa

The report is a compilation of first-hand information, qualitative and quantitative assessment by industry analysts, inputs from industry experts and industry participants across the value chain. The report provides in-depth analysis of parent market trends, macro-economic indicators and governing factors along with market attractiveness as per segments. The report also maps the qualitative impact of various market factors on market segments and geographies.

Report Highlights:

Detailed overview of parent market
Changing market dynamics of the industry
In-depth market segmentation
Historical, current and projected market size in terms of value
Recent industry trends and developments
Competitive landscape
Strategies of key players and product offerings
Potential and niche segments/regions exhibiting promising growth
A neutral perspective towards market performance
Must-have information for market players to sustain and enhance their market footprint

For more information on this press release visit: http://www.sbwire.com/press-releases/integration-platform-as-a-service-market-drivers-and-challenges-new-market-to-2025-1026810.htm

Media Relations Contact

Abhishek Budholiya
Marketing Head
Telephone: 1-800-961-0353
Email: Click to Email Abhishek Budholiya
Web: https://www.persistencemarketresearch.com/market-research/microbial-identification-market.asp



          Quantum Computing Market Positive Outlook for Industry Opportunities & Trends 2025

New York, NY -- (SBWIRE) -- 08/10/2018 -- Optimizing and managing information harvested within an organization, or from different parts of the world, entails the use of efficient computing processes to handle ever-expanding data workloads. To achieve this, enterprises and government-backed organizations are adopting quantum computing and processing data in qubits. Through quantum computing, companies can search files across large databases instantaneously, assessing data quicker than conventional computing.
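The speed-up the release gestures at comes from superposition: a qubit holds weighted amplitudes for 0 and 1 simultaneously. A purely illustrative, plain-Python sketch (my own toy, not any vendor's SDK and nothing like real quantum hardware) of a one-qubit state and a Hadamard gate:

```python
import math

# A qubit state is a pair of amplitudes (a, b) for |0> and |1>,
# normalized so that |a|^2 + |b|^2 = 1.
def hadamard(state):
    """Apply the Hadamard gate, which turns a basis state into
    an equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)            # the classical bit 0, written as a qubit
superposed = hadamard(zero)  # now an even blend of 0 and 1
print(probabilities(superposed))  # roughly (0.5, 0.5)
```

Applying `hadamard` twice undoes itself; this kind of reversible, amplitude-level arithmetic is what quantum search algorithms exploit to examine many database entries in superposition.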

Persistence Market Research's new study on global market for quantum computing emphasizes the monumental impact of rising need for high-level computing on the market's growth. In 2017, the global quantum computing market is anticipated to be valued at US$ 2.7 Bn. Governments of developed and developing countries are investing in quantum computing to accelerate their research operations, while commercialization of quantum computing is also gaining traction. On the grounds of such drivers, more than US$ 23 Bn revenues are anticipated to be reaped through adoption of quantum computing across the globe by the end of 2025. During this decadal forecast period, the global market for quantum computing is expected to expand exponentially at a stellar CAGR of 30.9%.

Request For Report Sample @ https://www.persistencemarketresearch.com/samples/14758

Global Quantum Computing Market – Forecast Highlights

The study on the global quantum computing market has analyzed the market on the basis of components – hardware and software. With the development of cloud-enabled quantum computing platforms, new software is expected to transform the market's expansion even further. Demand for effective quantum computing hardware is also expected to witness a lucrative boost, although manufacturers will be put to the test while developing hardware with flexible computing abilities.

While sales of quantum computing hardware presently dominate the market with a more than 90% share, their global revenue share is expected to drop to 84% by the end of 2025
The decline in quantum computing hardware's share will be balanced by surging adoption of quantum computing software, revenues from which are anticipated to reflect the fastest CAGR of 42.3%

Top Players in Global Quantum Computing Market

According to the study, the competitive landscape of global quantum computing market is divided into two tiers of computing vendors. With more than 90% share, tier 1 quantum computing vendors are companies developing hardware, while the remaining 10% of the market is governed by developers of quantum computing software applications. The top five players in the global quantum computing market are profiled below:

Intel Corporation,
Microsoft Corporation,
Google Inc. (Alphabet Inc.),
D-Wave Systems Inc., and
IBM Corporation

Companies such as 1QB Information Technologies Inc., QC Ware Corp., and QbitLogic are observed to be leading developers of quantum computing software applications. Other key players in the global quantum computing market include Rigetti Computing, Anyon Systems Inc., Cambridge Quantum Computing Ltd., IDQ, IonQ Inc., Quantum Circuits, Inc., Alibaba Quantum Computing Laboratory, Nokia Bell Labs, Hewlett Packard, Booz Allen Hamilton Inc., Toshiba Research Europe Ltd., USC Lockheed Martin Quantum Computation Center, QuantumCTek Co., Ltd, SeeQC, Sparrow Quantum A/S, QxBranch, Qubitekk, and Tokyo Quantum Computing.

Download Table Of Content @ https://www.persistencemarketresearch.com/market-research/quantum-computing-market/toc

For more information on this press release visit: http://www.sbwire.com/press-releases/quantum-computing-market-positive-outlook-for-industry-opportunities-trends-2025-1026805.htm

Media Relations Contact

Abhishek Budholiya
Marketing Head
Telephone: 1-800-961-0353
Email: Click to Email Abhishek Budholiya
Web: https://www.persistencemarketresearch.com/market-research/microbial-identification-market.asp



          Associate - Mid Level - CommercialTMT
Job description - Role: Associate (Mid Level) - Commercial/TMT (Dubai). Location: Dubai. Area of Expertise: Private Practice. International law firm - Dubai - PQE 2-4. The main responsibilities will include carrying out research on TMT legal issues, including telecoms and media regulation, cloud computing, software development and licensing, and content.
          Associate Director of Housing Operations Housing and Residential Education - The New School - New York, NY
Advise the Director of upgrades and new features of the Starrez, Banner and Touchnet software programs. Excellent computing skills including use of all major...
From The New School - Thu, 19 Jul 2018 00:02:15 GMT - View all New York, NY jobs
          Online Shopping India - Buy mobiles, laptops, cameras, books, watches, apparel, shoes and e-Gift Cards. Free S - Rs. 231
Amazon.com, Inc., doing business as Amazon, is an American electronic commerce and cloud computing company based in Seattle, Washington, that was founded by Jef...
          Senior Bios Engineer - ZT Systems - Austin, TX
Join us at this critical growth inflection point as we engineer the hardware infrastructure powering a world of cloud computing, cloud storage, artificial...
From ZT Systems - Thu, 31 May 2018 00:20:51 GMT - View all Austin, TX jobs
          Salesforce Solution Architect - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast-paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Mon, 07 May 2018 06:20:29 GMT - View all Casper, WY jobs
          Event - August 22, 2018 - MediaCentral | Cloud UX - NYC

August 22, 2018 - MediaCentral | Cloud UX - NYC

Join Avid and T2 Computing for this exciting panel discussing MediaCentral | Cloud UX to see how cloud-based solutions can help teams search, browse, access, edit, collaborate, and publish content from everywhere to anyone, any time.

When:  August 22, 2018

Time:  5:30pm — 9:00pm

Where:  Penthouse 45, 432 W 45th Street (between 9th and 10th), New York, NY 10036

Panel Speakers:  

-  Richard Duke, Cloud Solutions Architect (Avid)

-  Justin Karpowich, Territory Account Manager (Avid)

-  Peter Price, Director of Technology (T2 Computing)

Moderated by Charlie McCormick, Director of Enterprise Sales (T2 Computing)

Panel to be followed by a demonstration of MediaCentral | Cloud UX that will include Editorial On Demand, Shared Library On Demand and Microsoft Cognitive Services. 

Join us for a great evening of food, drinks and networking, and hear how this open platform can help address challenges that you are facing.

Register now!

Questions? Let me know.

Marianna


          Comment on 0330 CRS Class LR by Kunst Heidelberg
Thanks for your post. I also think that laptop computers are becoming more and more popular nowadays and now tend to be the only type of computer used in a household. This is because, at the same time that they're becoming more and more affordable, their computing power is growing to the point where they are as powerful as desktop computers from just a few years ago.
          Apple patent: The iPhone is set to replace the ID card
The wallet may soon be able to stay at home, because before long we may use the smartphone not only to pay but also to identify ourselves. Apple is apparently already working on an electronic ID card for iOS. The iPhone could then also grant admission to the cinema or a nightclub. This article is filed under Apple iPhone, Mobile Computing, Apple, Apple iOS 12.

          Communication & Computing Specialist - Qalipu First Nation - Corner Brook, NL
The position also develops, maintains and improves the information technology capacity of the Band by monitoring on-going needs and addressing capacity issues....
From Career Beacon - Tue, 31 Jul 2018 18:38:59 GMT - View all Corner Brook, NL jobs
          Episode 143: Serverless now just means “programming”
After some rumination, Coté thinks that the people backing “serverless” are just wangling to make it mean “doing programming with containers on clouds.” That is, just programming. At some point, it meant an event-based system hosted in public clouds (AWS Lambda). Also, we discuss Cisco buying Duo, potential EBITDA problems from Broadcom buying CA, and robot pizza. Of course, with Coté having just moved to Amsterdam, there’s some Amsterdam talk. Sponsored by Datadog This episode is sponsored by Datadog, and this week Datadog wants you to know about Watchdog. Watchdog automatically detects performance problems in your applications without any manual setup or configuration. By continuously examining application performance data, it identifies anomalies, like a sudden spike in hit rate, that could otherwise have remained invisible. Once an anomaly is detected, Watchdog provides you with all the relevant information you need to get to the root cause faster, such as stack traces, error messages, and related issues from the same timeframe. Sign up for a free trial (https://www.datadog.com/softwaredefinedtalk) today at https://www.datadog.com/softwaredefinedtalk. Relevant to your interests Everyone’s favorite Outlook feature, now in G Suite (https://techcrunch.com/2018/07/30/google-calendar-makes-rescheduling-meetings-easier/). Do we know what “serverless” is yet? Someone named that got some funding (https://techcrunch.com/2018/07/30/serverless-inc-lands-10-m-series-a-to-build-serverless-developers-platform/). Related, Istio 1.0 (https://www.theregister.co.uk/2018/07/31/istio_sets_sail_as_red_hat_renovates_openshift_container_ship/): “It is aiming to be a control plane, similar to the Kubernetes control plane, for configuring a series of proxy servers that get injected between application components. 
It will actually look at HTTP response codes and if an app component starts throwing more than a number of 500 errors, it can redirect the traffic.” MUST BE THIS HIGH TO RIDE (https://k1k1chan.com/post/590832918/do-not-want)! Follow-up: Brenon at 451 says (https://blogs.the451group.com/techdeals/ma/broadcom-cant-get-there-from-here/) Broadcom is gonna have to sell off some stuff to make it’s margin targets. The mainframe profits are too high, while distributed is low enough to throw the margins out of whack. So, sell off distributed to Micro Focus? To PE BMC? Or a bad analysis. Austin Regional Clinic is in Apple Health records. Pretty nifty that it sucks them all in...sort of. Robots make your pizza (https://www.barrons.com/articles/softbank-may-invest-up-to-750-million-in-robotic-pizza-startup-zume-1533753812). Featured in that OKR book. For real. AWS: still makes lots of money, market-leader by revenue (https://www.geekwire.com/2018/state-cloud-amazon-web-services-bigger-four-major-competitors-combined/). See also Gartner on the topic (https://www.gartner.com/newsroom/id/3884500): “The worldwide infrastructure as a service (IaaS) market grew 29.5 percent in 2017 to total $23.5 billion, up from $18.2 billion in 2016, according to Gartner, Inc. Amazon was the No. 1 vendor in the IaaS market in 2017, followed by Microsoft, Alibaba, Google and IBM.” Gartner estimates that AWS is ~4 times as big as the next, in 2017. Tibco might be sold off (https://www.bloomberg.com/news/articles/2018-08-03/vista-equity-is-said-to-weigh-sale-of-software-maker-tibco): “Vista took Tibco private in 2014 in a deal valued at about $4.3 billion including debt. The company, based in Palo Alto, California, makes software that clients use to collect and analyze data in industries from banking to transportation. 
It currently has about $2.9 billion of debt, according to data compiled by Bloomberg.” Cisco Announces Intent to Acquire Duo Security, $2.35bn (https://duo.com/about/press/releases/cisco-announces-intent-to-acquire-duo-security). What’s this ABN e.dentifier thing (https://nl.wikipedia.org/wiki/E.dentifier)? Apprenda shuts down (https://www.timesunion.com/business/article/Troy-based-Apprenda-stopping-operations-investor-13111235.php). SASSY (https://www.networkworld.com/article/2848762/cloud-computing/hitting-them-where-they-work.html)! Conferences, et. al. Sep 24th to 27th - SpringOne Platform (https://springoneplatform.io/), in DC/Maryland (crabs!) get $200 off registration with the code S1P200_Cote. Also, check out the Spring One Tour - coming to a city near you (https://springonetour.io/)! (http://devopstalks.com/devops.html)- DevOps Talks Sydney August 27-28 - John Willis, Nathen Harvey! (http://devopstalks.com/devops.html) Cloud Expo Asia October 10-11 (https://www.cloudexpoasia.com/cloud-asia-2018) DevOps Days Singapore October 11-12 (https://www.devopsdays.org/events/2018-singapore/) DevOps Days Newcastle October 24-25 (https://devopsdaysnewy.org/) DevOps Days Wellington November 5-6 (https://www.devopsdays.org/events/2018-wellington/) Listener Feedback Lindsay from London got a sticker an tell us: “Really enjoy the podcast, just the right level of humour, sarcasm and facts for a cynical Brit like me.” SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Buy some t-shirts (https://fsgprints.myshopify.com/collections/software-defined-talk)! DISCOUNT CODE: SDTFSG (40% off) Send your name and address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. 
Recommendations Brandon: Masters of Doom (https://www.audible.com/pd/Bios-Memoirs/Masters-of-Doom-Audiobook/B008K8BQG6?qid=1533849060&sr=sr_1_3&ref=a_search_c3_lProduct_1_3&pf_rd_p=e81b7c27-6880-467a-b5a7-13cef5d729fe&pf_rd_r=754XS4GQGN71K8XCBNWW&). Matt: Deadpool 2 (https://www.imdb.com/title/tt5463162/). If you liked the first, you’ll like the second. Coté: 1980’s Action Figure tumblr (https://1980sactionfigures.tumblr.com/) - now that I have fast Internet, tumblr is workable. Mask, Cops, sweet Dune figures (https://www.networkworld.com/article/2848762/cloud-computing/hitting-them-where-they-work.html), generic GI Joe figures. Dutch Internet (https://www.ziggo.nl/alles-in-1/max/), son! SHIT DOG!
          Sr. Hardware Design Engineer - Microsoft - Redmond, WA
RTL optimization skills. Microsoft Research has been studying quantum computing for several years and has become the world's center of expertise on topological...
From Microsoft - Thu, 03 May 2018 22:51:07 GMT - View all Redmond, WA jobs
          Senior Product Marketing Manager - Redmond, WA 98073
Are you passionate about how cloud computing is creating new opportunities for customers in healthcare, life sciences, and education? Do you want to help frame the cloud...
          Online Shopping India - Buy mobiles, laptops, cameras, books, watches, apparel, shoes and e-Gift Cards. Free S -
Amazon.com, Inc., doing business as Amazon, is an American electronic commerce and cloud computing company based in Seattle, Washington, that was founded by Jef...
          Online Shopping India - Buy mobiles, laptops, cameras, books, watches, apparel, shoes and e-Gift Cards. Free S - Rs. 55
Amazon.com, Inc., doing business as Amazon, is an American electronic commerce and cloud computing company based in Seattle, Washington, that was founded by Jef...
          Solutions Architect - NVIDIA - Washington State
Assist field business development in through the enablement process for GPU Computing products, technical relationship and assisting machine learning/deep...
From NVIDIA - Fri, 20 Apr 2018 08:02:03 GMT - View all Washington State jobs
          Embedded ML Developer - Erwin Hymer Group North America - Virginia Beach, VA
NVIDIA VisionWorks, OpenCV. Game Development, Accelerated Computing, Machine Learning/Deep Learning, Virtual Reality, Professional Visualization, Autonomous...
From Indeed - Fri, 22 Jun 2018 17:57:58 GMT - View all Virginia Beach, VA jobs
          Solutions Architect, Accelerated Computing - NVIDIA - Santa Clara, CA
Assist field business development in through the enablement process for GPU Computing products, technical relationship and assisting machine learning/deep...
From NVIDIA - Tue, 24 Jul 2018 07:56:24 GMT - View all Santa Clara, CA jobs
          sales, affiliates and resellers
Scalahosting is recruiting affiliates and resellers for an innovative cloud computing product we've just launched, which lets website owners afford their own managed SSD cloud server in the price range of shared hosting (more info at: https://www.scalahosting.com/cloud-servers.html)... (Budget: $10 - $30 USD, Jobs: Web Hosting, Website Design)
          Cognitive Computing - ICCC 2018



Cognitive Computing - ICCC 2018: Second International Conference, Held as Part of the Services Conference Federation, SCF 2018, Seattle, WA, USA, June 2...

          HPE Research

An infographic project with HPE on technology, computing science, and data.
          Senior Information Security Consultant - Network Computing Architects, Inc. - Bellevue, WA
With the ability to interface and communicate at the executive level, i.e. CIO's, CTO's and Chief Architects....
From Network Computing Architects, Inc. - Mon, 11 Jun 2018 23:15:53 GMT - View all Bellevue, WA jobs
          10 things you need to know in markets today


Good morning! Here's what you need to know in markets on Friday.

1. Asian stock markets fell on Friday amid heightened global trade tensions, while currency markets were whipsawed by a searing sell-off in Russia's rouble and as economic worries sent the Turkish lira tumbling. Washington said it would impose fresh sanctions because it had determined that Moscow had used a nerve agent against a former Russian agent and his daughter in Britain, which the Kremlin denies.

2. The pound's extended slump continued on Friday as it fell to a fresh low against the dollar. Sterling is down 0.27% against the dollar to $1.2792 at 7.30 a.m. BST (2.30 a.m. ET), marking a fresh 11-month low.

3. Russia would consider it an economic war if the United States imposed a ban on banks or a particular currency, Prime Minister Dmitry Medvedev said on Friday, the TASS state news agency said. "I would not like to comment on talks about future sanctions, but I can say one thing: If some ban on banks' operations or on the use of one or another currency follows, it would be possible to clearly call it a declaration of economic war," Medvedev said.

4. Investors are pulling billions of dollars out of Europe. Investors have pulled $35 billion from European equities this year and $51 billion from European funds, according to Barclays data.

5. Mobile chipmaker Qualcomm is settling an antitrust case brought against it by Taiwan regulators by paying T$2.73 billion ($89 million), the island's Fair Trade Commission said on Friday. The commission said Qualcomm also agreed to bargain in good faith with other chip and phone makers in patent-licensing deals.

6. A divided federal appeals court on Thursday ordered the US Environmental Protection Agency to ban a widely-used pesticide that critics say can harm children and farmers. The 2-1 decision by the 9th US Circuit Court of Appeals in Seattle overturned former EPA commissioner Scott Pruitt's March 2017 denial of a petition by environmental groups to ban the use of chlorpyrifos on food crops such as fruits, vegetables, and nuts.

7. Blockchain company Soluna plans to build a 900-megawatt wind farm to power a computing center in Dakhla in the Morocco-administered Western Sahara, its chief executive John Belizaire said in an interview. Work on the initial off-grid phase will start in 2019 and complete a year later, with the possibility of connecting the site to the national grid, Belizaire told Reuters.

8. Ryanair is bracing for its biggest-ever one-day strike on Friday with pilots based in five European countries set to walk out, forcing the cancellation of about one in six of its daily flights at the height of the holiday season. Ryanair, which averted widespread strikes before Christmas by agreeing to recognize unions for the first time in its 30-year history, has been unable to quell rising protests since over slow progress in negotiating collective labor agreements.

9. Mercedes-Benz sports utility vehicles built in Tuscaloosa, Alabama, are being checked for potential problems by Shanghai customs authorities, Daimler confirmed on Thursday. Mercedes-Benz GLE and GLS models, built in the United States between May 4 and June 12, 2018, have a brake issue which poses a "safety risk," according to a Chinese customs document circulating on Chinese social media.

10. Dropbox reported its second-ever earnings as a public company on Thursday. The company beat Wall Street's expectations on revenue and earnings per share but the stock slid on news of Chief Operating Officer Dennis Woodside's impending departure.



          Thermodiffusion In Multicomponent Mixtures Thermodynamic Algebraic And Neuro Computing Models
Document of Thermodiffusion in Multicomponent Mixtures: Thermodynamic, Algebraic and Neuro-Computing Models
          Teaching the programmers of tomorrow
Teaching the programmers of tomorrow

August 9, 2018

By Madeleine O'Keefe

Many middle school students spend their summers swimming at the beach, playing sports outside or hanging out with their friends. But this past June, 25 seventh and eighth grade girls spent two days at the U.S. Department of Energy’s (DOE) Argonne National Laboratory learning how to code.

This was the second year of the laboratory’s CodeGirls@Argonne camp, designed to immerse young girls in computer science before they enter high school and introduce them to potential career paths in science, technology, engineering and mathematics (STEM). Researchers from the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility, helped the camp bring computer science to a population that’s often underrepresented in the field.

“It is important for girls — especially those in the very influenceable middle school years — to see role models who look like them, who talk like them and who they can easily relate to,” said Jini Ramprakash, a CodeGirls volunteer and Deputy Director of the ALCF. “They can visualize what success looks like for them in this profession.”

The camp is completely free — though a waitlist is started after the first 25 applications have been received — and there is no coding experience required. “Some of them have never done any coding before, some of them are high-flyers in coding,” said Kelly Sturner, Argonne Learning Center Instructor in the Department of Educational Programs, who designed and organized the camp. “But there is something for everybody here.”

“Sometimes, when you think of coding, you think of just some guys sitting in their basement,” said Paige Brehm, a Master’s co-op student in Educational Programs. “But it's way more than just that.” Having studied chemistry as an undergraduate, Brehm had her own doubts about being a female in a STEM field, and that experience informed her work when she and Sturner organized CodeGirls for the first time last year.

Throughout the two days of the camp, the middle schoolers learned about the many facets of computer science and working in STEM. The girls were shown videos that taught them about the history of women in computing, and they discussed the challenges that come with entering a male-dominated field. They participated in a variety of programming-related activities, including using block code to navigate a Lego EV3 Mindstorm robot through a complicated maze. “The goal is really to get the kids to think like computer scientists,” said Sturner.
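The maze activity is, at heart, sequencing: a program is just an ordered list of move blocks. A hypothetical Python sketch of the same idea (the camp used LEGO's drag-and-drop blocks, not Python; the names here are my own):

```python
# Each "block" is a named move; a program is just the ordered list of blocks.
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def run_program(start, program):
    """Execute a block-style program by applying each move in order."""
    x, y = start
    for block in program:
        dx, dy = MOVES[block]
        x, y = x + dx, y + dy
    return (x, y)

# Guide the robot from the entrance (0, 0) around a corner to the exit (2, 2).
program = ["right", "right", "up", "up"]
print(run_program((0, 0), program))  # -> (2, 2)
```

Rearranging, adding, or removing blocks and predicting where the robot ends up is exactly the kind of computational thinking the exercise targets.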

One popular activity used “Turtle,” a drawing package that introduced the girls to the basics of the Python programming language. Sturner and Brehm said the campers expressed interest in learning text-based coding languages in addition to the drag-and-drop block style that the LEGO robot requires. Python is the perfect introduction since it’s easy to learn and many scientists at Argonne use it in their everyday work.

Sturner said Turtle taught the girls how to make lines, squares, spirals, change colors and more. Once they had grasped the basics, the campers were set loose with the code. One girl said, “I liked learning many new things and being in control of what was happening, to make new things in front of my eyes.”
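To give a flavor of a first Turtle program, the sketch below uses a hypothetical headless stand-in (`MiniTurtle` is my own name, not the real `turtle` module) so it runs without a display, but the square-drawing loop is the same shape the campers would write:

```python
import math

class MiniTurtle:
    """A tiny headless stand-in for Python's turtle: it records the
    points it visits instead of drawing them on screen."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0           # degrees, 0 = facing right
        self.path = [(0.0, 0.0)]

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def left(self, angle):
        self.heading = (self.heading + angle) % 360

# The classic first program: draw a square.
t = MiniTurtle()
for _ in range(4):
    t.forward(100)
    t.left(90)
print(t.path)  # visits the four corners and returns to the start
```

Changing the repeat count and turn angle turns the square into the spirals and stars the campers experimented with.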

“It was really nice to give them opportunities to see that there's different ways to code,” said Brehm.

Another highlight of the two days was when the campers got to meet and interview five women scientists who work at Argonne, including some from the ALCF.

The campers split up into small groups so that each girl could get to know their assigned researcher on a more personal level. The scientists answered their questions, which ranged from biographical (“Where did you go to college?”) to practical (“How many hours do you work every day?”).

For example, the five girls interviewing Ramprakash asked about her favorite part of being a scientist. Ramprakash responded by telling them about the joy and satisfaction of finding the solution to a problem that has taken a long time to solve.

Liza Booker answered her group’s questions about how she uses Python in her job as a User Experience Analyst with the ALCF. “It was very enlightening to talk with the girls about computer science, their current studies and college,” she said. “I think opportunities like CodeGirls are valuable so that girls can see and engage with visual representations of women in male-dominated fields.”

The campers were also treated to a tour of the ALCF’s machine room and visualization lab in Argonne’s Theory and Computing Sciences Building. Ramprakash and ALCF User Experience Specialist Haritha Som taught the group about Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q system, and then took them onto the floor of the machine room to see the supercomputer itself. The girls wanted to know things like, “How is a supercomputer made?” and “How do you store the data?” and “Why is it so cold in here?”

“I'm always amazed at how smart, intelligent and curious girls in elementary school can be,” Ramprakash said. “I agreed to help out [with CodeGirls] because I thoroughly enjoy interacting with these kids and learn so much through their questions and conversations every time.”

Next, ALCF computer scientists Joseph Insley and Silvio Rizzi showed the campers around the Visualization Lab. Insley talked about how using high-quality scientific visualizations is often the best way to gain insight into the massive datasets that are produced by ALCF supercomputers. He showed high-resolution images on the lab’s enormous tiled display, explaining the value of seeing all the fine details and the ability to use the screens as a collaborative and interactive space. Rizzi then passed out 3-D glasses and showed visualizations using a “passive stereo” screen — essentially the same technology as in a 3-D movie theater. Seeing data rendered in 3-D provides additional depth and perspective that can lead to better insight and understanding of the data, according to Insley.

“The visualizations, the large display and cool tech often have a bit of a ‘Wow!’ factor,” said Insley. “With the CodeGirls, and other groups like them, we explain that underlying all of the cool stuff they saw is a bunch of computer code. Someone had to program that. Why not them?”

Sturner hopes to see CodeGirls alumnae participating in the high school-level summer coding camp, a program that Educational Programs runs in partnership with ALCF, when they are old enough. But the main goal of these two days is to inspire and empower the girls to give computer science a try — and perhaps lead them back to Argonne one day.

It may very well have the desired effect. At the end of the camp, one girl reported, “I think I actually am going to consider becoming a scientist when I grow up.”

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Contact Us

For more information, contact Brian Grabowski at media@anl.gov or (630) 252-1232.

          Cambridge International As And A Level Computing Coursebook 1st Edition
Document of Cambridge International AS and A Level Computing Coursebook, 1st Edition
          Sr Build & Release Engineer - Secure Computing - General Electric - Evendale, OH
Works independently and contributes to the immediate team and to other teams across business. GE is the world's Digital Industrial Company, transforming...
From GE Careers - Sat, 26 May 2018 10:20:53 GMT - View all Evendale, OH jobs
          Online Shopping India - Buy mobiles, laptops, cameras, books, watches, apparel, shoes and e-Gift Cards. Free S -
Amazon.com, Inc., doing business as Amazon, is an American electronic commerce and cloud computing company based in Seattle, Washington, that was founded by Jef...
          Cron as a file system
I read The Styx Architecture for Distributed Systems over a decade ago. The central idea of the paper is that "representing a computing resource as a form of file system, [makes] many of the difficulties of making that resource available across the network disappear". By resource they mean any resource. For example, in the Plan 9 window system 8½, windows and even the mouse are implemented as files; similarly, in Inferno and Plan 9 the interface to the TCP/IP network is presented as a file system hierarchy.
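The idea survives outside Plan 9 and Inferno, too. As a loose, stdlib-only illustration (this is not Styx/9P itself), Python can expose a network connection as an ordinary file object via `socket.makefile()`, so reading from the network looks exactly like reading a file:

```python
import socket

# A connected socket pair stands in for a TCP connection.
a, b = socket.socketpair()

# makefile() wraps each endpoint as an ordinary file object, so the
# network "resource" is driven with plain file calls: write/flush/readline.
writer = a.makefile("w", encoding="utf-8")
reader = b.makefile("r", encoding="utf-8")

writer.write("hello over a file-like network\n")
writer.flush()

line = reader.readline()
print(line.strip())
```

Once the resource behaves like a file, everything that already works on files (buffered readers, line iteration, redirection) works on the network connection for free, which is exactly the leverage the Styx paper describes.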
          Senior Software Development Engineer - Distributed Computing Services (Hex) - Amazon.com - Seattle, WA
Knowledge and experience with machine learning technologies. We enable Amazon’s internal developers to improve time-to-market by allowing them to simply launch...
From Amazon.com - Thu, 26 Jul 2018 19:20:25 GMT - View all Seattle, WA jobs
          Cobra: A Modern & Refined CLI Commander
Go is the perfect language to develop command line applications. Go has a few advantages that really set it apart from other languages:

- Single binary
- Very fast execution time, no interpreter needed
- Go is awesome!
- Cross platform support

Command line based applications are nearly as old as computing itself, but this doesn’t mean that they haven’t evolved. Traditional CLI applications used flags to manage the different behaviors an application could perform.
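Cobra itself is a Go library, but the "commander" pattern it popularized (a root command, named subcommands, and per-subcommand flags, rather than one flat pile of flags) is language-neutral. A hedged sketch of that shape using Python's stdlib argparse, with a made-up `greet` tool for illustration:

```python
import argparse

# Hypothetical "greet" tool: root command + subcommands + per-subcommand
# flags, the commander-style structure Cobra brought to Go CLIs.
parser = argparse.ArgumentParser(prog="greet")
sub = parser.add_subparsers(dest="command", required=True)

hello = sub.add_parser("hello", help="say hello")
hello.add_argument("--name", default="world")

sub.add_parser("version", help="print the version")

# Equivalent to running: greet hello --name gopher
args = parser.parse_args(["hello", "--name", "gopher"])
if args.command == "hello":
    print(f"hello, {args.name}")
elif args.command == "version":
    print("greet v0.1.0")
```

The key difference from flag-only CLIs is that each subcommand owns its flag set, so `greet hello --name gopher` and `greet version` can evolve independently.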
          Go at CoreOS
Go at CoreOS When we launched the CoreOS project we knew from the very beginning that everything we built would be written in Go. This was not to make a fashion statement, but rather Go happened to be the perfect platform for reaching our goals – to build products that make distributed computing as easy as installing a Linux distro. It’s been almost 10 years since a new Linux distro hit the scene, and during that time Python was the language of choice.
          Magic Leap One Creator Edition mixed reality headset goes on sale at $2,295
The Magic Leap One has a headset, a computing device connected to the headset and a hand-held controller.
          Plant Assistant - Pete Lien & Sons, Inc - Frannie, WY
Complex Computing and Cognitive Thinking Y. An hourly employee who at the direction of the Plant Operator provides plant operational support duties as requested...
From Pete Lien & Sons, Inc - Wed, 08 Aug 2018 23:21:23 GMT - View all Frannie, WY jobs
          Just when you thought spam was dead, it’s back and worse than ever

Spam emails might seem like an outdated way to spread malware, but in 2018 they are proving to be the most effective attack vector thanks to new techniques and tricks.

The post Just when you thought spam was dead, it’s back and worse than ever appeared first on Digital Trends.


          What’s new in Julia: Version 1.0 is here
After nearly a decade in development, Julia, an open source, dynamic language geared to numerical computing, reached its Version 1.0 production release status on August 8, 2018. The previous version was the 0.6 beta. Julia, which vies with Python for sc...
          Soluna to build 900 MW wind power plant in Morocco
Morocco has attracted investment in solar and wind power as part of a goal to generate 52 percent of its electricity from renewable energies. Blockchain company Soluna plans to build a 900-megawatt wind farm to power a computing center in Dakhla in the Morocco-administered Western Sahara, its chief executive John Belizaire said in an interview.
          Salesforce Solution Architect - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast-paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Mon, 07 May 2018 06:20:29 GMT - View all Casper, WY jobs
          Field Application Engineer (GPU) - Seattle - 56925 - Advanced Micro Devices, Inc. - Bellevue, WA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Thu, 28 Jun 2018 07:32:28 GMT - View all Bellevue, WA jobs
          Field Application Engineer (Data Center) - Seattle - 56141 - Advanced Micro Devices, Inc. - Bellevue, WA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Fri, 22 Jun 2018 07:32:56 GMT - View all Bellevue, WA jobs
          Field Applications Engineer - 56928 - Advanced Micro Devices, Inc. - Morrisville, PA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Fri, 06 Jul 2018 07:33:38 GMT - View all Morrisville, PA jobs
          Field Applications Engineer (GPU) - 67945 - Advanced Micro Devices, Inc. - Santa Clara, CA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Mon, 23 Jul 2018 19:32:41 GMT - View all Santa Clara, CA jobs
          Field Application Engineer (Data Center) - Santa Clara - 56924 - Advanced Micro Devices, Inc. - Santa Clara, CA
Solutions Architects help AMD drive a new era of computing into the datacenter by engaging with key end users and independent software partners to demonstrate...
From Advanced Micro Devices, Inc. - Thu, 26 Apr 2018 01:39:11 GMT - View all Santa Clara, CA jobs
          ANCIENT GREEK HISTORY AND CULTURE
https://www.aneddoticamagazine.com/wp-content/uploads/Athènes_Parthénon.jpg

Athènes_Parthénon


Greek history begins with a catastrophe (the collapse of the Mycenaean world and its system), spans a luminous arc of several centuries during which it invented Western civilization, and ends with another catastrophe (the Peloponnesian War between Sparta and Athens). Not that there was nothing before, nor that there would be nothing after: but the most original and creative phase lies within these limits. Before it came the Cretan civilization, a synthesis of the Eastern civilizations of the great rivers, the Tigris, the Euphrates and the Nile; after it would come the sumptuous Hellenistic civilization, which mediates between the classical world and us.

Many are the “gifts” the Greeks gave Western civilization, gifts so vital that they are still at work in our own time, and without which much of what characterizes our age would never have existed. Not only that: they tend to spread further still. DEMOCRACY, that is, power to the majority, is an entirely original Greek invention, from its name down to its constitutive principles. Law, the norm, is no longer the product of a gracious concession from a god (as it had been in Greece too up to the Mycenaean age, around 1200 BC, and as it still was in the world surrounding Greece), manifested through the acts of a monarch who was himself a god, or a god’s descendant or favorite; the norm is the outcome of confrontation and debate AMONG EQUALS, at first (in the archaic age, roughly 900-500 BC) restricted to a narrow group of the best (àristoi, hence aristocracy), then extended to the masses (demos, hence democracy): the equal possibility and right for all to say everything and to speak about everything, and then the right to vote and to stand for office (though slaves and women were excluded). The reality of democracy in history oscillates between the Platonic ideal (“they call it democracy, but it is an aristocracy – of mind and spirit – with the approval of the masses”) and the rule of the demagogues, first and greatest of them Pericles, ever ready to steer the decisions of the assembly (the supreme organ) toward decisions useful to themselves, through the unscrupulous use of the means of shaping opinion: oratory, newspapers, TV…

ALPHABETIC WRITING. They did not invent it; the Phoenicians did. The Greeks had a particular gift for appropriating the cultural products of others, reworking them in wholly original ways, and then passing them on to humanity. The Greek alphabet, taken up by the Etruscans and the Romans, today tends toward universal diffusion, because it is the fundamental instrument of English and of the internet. A simple tool (one learns it at five), extremely supple and easy to use. Democracy demanded it: participation had a vital need of knowledge, which for a long time was transmitted orally as far as literary works were concerned, but which then needed writing for decrees, laws, proclamations – in short, for the law. Verba volant, scripta manent. Nor was alphabetic writing the only thing they took from others, but a great quantity of other cultural and material products besides: in this sense democracy proved a formidable multiplier of effects. The possibility for ALL to take part in EVERYTHING multiplied the probability of further developments, since in theory every single individual had the right and the possibility to work out and propose his own solution, not just a restricted number of minds: the more thinkers there are, the more an excellent solution becomes possible.

MONEY, too, the Greeks copied, from the Lydians to be exact, and, by way of the activity of money-changers and usurers (banks), it gave rise to that peculiar commodity – money and its economy – which we call finance, and which is doing our lives such a world of good.

From the Egyptians they learned STATUARY and PAINTING, but then developed them in wholly original ways, giving rise to an interminable season of Greek artistic primacy and to an idea of figurative art, transmitted to Europe and the world through the Romans, in a line of direct descent down to our own day.

Entirely Greek is the invention of PHILOSOPHY as the pursuit of truth through logic and rationality. Revealed and religious truths are almost never denied, but even these are interpreted in the light of logical and rational analysis. One name among many and above them all: Plato, an extraordinary thinker unequaled even in our day, as well as an extraordinary poet. The thirst for knowledge that produces philosophy also set in motion an unimaginable process of development in the physical, medical and mathematical sciences.

Fundamental, too, is the Greeks’ contribution to the history of theater, and one does not stray far from the truth in calling it their invention. Not that other cultures lacked it, for acting is, so to speak, a characteristic of the human race; but the Greek structure and manner marked not only antiquity, they became an inescapable reference for all Western theater, whether to conform to it or to depart from it – with Greek theater one has always had to measure oneself. Not to mention certain plays that still grip us today, such as Sophocles’ Oedipus Rex, Antigone and Electra, Euripides’ Medea, Aeschylus’ Oresteia trilogy, or Aristophanes’ Clouds.

And can one fail to mention here the great authors of LITERATURE? The first and most decisive of all, Homer, shaped the whole of Greek – which is to say Western – culture. The poet is the inventor of cinema: he knows how to find the right words so that the images created in his mind cross the ether, strike the listeners’ ears, and recompose themselves in their minds, the very same images that were in the poet’s head. Characters, acts, deeds conceived by the poet, transmitted through words, and re-formed in the mind of the hearer, who, seated somewhere, surrenders to the singer’s narration, leaves his body where it is, but with his mind sails through space and time, returning at the tale’s end, happy for the spiritual experience he has lived. The inventor of HISTORIOGRAPHY, Herodotus, aimed at the same goal with his monumental inquiry into the history, customs and culture of the world beyond the Greeks, Italy included. A most enjoyable work, for its great mass of information and curiosities, which I permit myself to suggest as reading under the beach umbrella.

Then came Pericles, a man of great stature, yet the undertaker of democracy, the sublime incarnation of the Man of Providence who so fascinates, deludes and disappoints the masses, who will then have ample time to lick their wounds. The man of providence as people dream him – the one who (who knows why?) will devote himself to the good of the Nation – has never existed: it is wiser to roll up one’s sleeves and for each to give his own contribution, and not to behave like “idiots”, as the Greek language calls those who abstain from political participation, handing themselves, their own lives and their children’s over to the charlatan and huckster of the moment. If it does not end in tragedy (as it did for us with the war), it is already a miracle.

Our roots lie there, in that Greek world which emerged renewed and vital from the catastrophe of the Mycenaean world (earthquakes? drought? invasions?), and which created a culture that we moderns too are steeped in, often without realizing it. It wounds the soul, what the fanatical, bloodthirsty murderers of ISIS are doing with the demolition of the archaeological and artistic traces of the ancient world – those statues profaned with dynamite and jackhammers, those columns toppled. They, the new barbarians, have understood, or at least intuited, the importance of those historical testimonies, and want to abolish them. It falls to us to set ourselves apart from them, to recover the roots of our history, strengthen them, give them new life and visibility, and go carefully with the frenzies of “change”. For there is an equation devoid of logic according to which change is constitutionally positive. Ladies and gentlemen! If the change is for the worse, is it really worth changing?


 


by Fulvio Marino


Fulvio Marino


“It happens, in life both personal and communal, that we feel we have run too fast, gone too far ahead, to the point of no longer perceiving where we are. Then it is wise to stop, sit down, and talk with ourselves. And in this the voices of the greats of the past help us, and here among us they have been truly great and truly many. Thus we recover the roots of our being and regain momentum, after a little meditation and reflection. And this is what I would like to do: to offer those who appreciate them food for thought, corners of abstraction from the present, which is not exactly gratifying. I hope it will be welcome.”


Fulvio Marino


il pifferaio tragico fulvio marino


Buy the book




 


Aneddotica Magazine - Collaborative Blog since 2012 https://www.aneddoticamagazine.com/it/storia-e-cultura-greca-antica/
          “To Him Who Has, More Will Be Given…”: A Realist Review of the OHSAS18001 Standard of OHS Management

“To Him Who Has, More Will Be Given…”: A Realist Review of the OHSAS18001 Standard of OHS Management

Madsen, C. U., Kirkegaard, M. L., Hasle, P. & Dyreborg, J., Aug. 2018. Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). Springer (Advances in Intelligent Systems and Computing, Vol. 821).

Publication: Contribution to book/anthology/report/conference proceedings; conference article in proceedings; research; peer-reviewed.

The OHSAS18001 standard is now the most widely adopted management system for occupational health and safety worldwide. The standard is intended to support companies in attaining a higher health and safety standard. However, there is limited knowledge of how this standard actually works in practice and thus how it can improve health and safety at work.

In order to investigate how the OHSAS18001 standard is working in practice, we identified the main mechanisms assumed to be actively involved in the successful implementation and management of the standard, by using a framework inspired by a realist methodology. In line with this methodology, we assessed how the context of the adopting organizations impinges on the identified mechanisms and synthesized the findings into useful knowledge for practitioners and fellow researchers alike.

The starting point for the analytical process is the program theories that we identified in the standard and supplementary materials from key stakeholders. Thus we analyze how key stakeholders and policymakers expect the standard or program theory to work when it is implemented in an organizational setting. The three program theories (PT) we identified are: An ‘operational’ PT, a ‘compliance’ PT, and an ‘institutional’ PT.

Then we compared these ‘assumed’ program theories to how the OHSAS18001 actually worked in real-life settings. We identified four so-called context-mechanism-outcome configurations by reviewing available empirical studies and by extracting knowledge from them. These CMO-configurations are: ‘Integration’, ‘learning’, ‘motivation’ and ‘translation’. This analytical approach means that our paper provides both in-depth understanding of the assumed program theories behind the OHSAS18001 standard and understanding of the actual mechanisms of certified management systems in occupational health and safety management in the various contexts presented by the included implementation studies.
Original language: English
Title: Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018)
Publisher: Springer
Publication date: Aug. 2018
Status: Published - Aug. 2018
Publication type: Research
Peer review: Yes
Series: Advances in Intelligent Systems and Computing
Volume: 821
ISSN: 2194-5357

          Systems Developer (Servers Storage) - Internal Applicants Only
Role Description We have a challenging and rewarding role that has become available within the Computing Services department. This is an additional role within the Computing Services Servers and Storage team. The existing team of 5 is responsible for the development and maintenance of a wide range of IT services..
          Public cloud – myth vs reality

Public cloud is now the most identifiable example of cloud computing, in which a service provider such as AWS or Microsoft Azure makes resources such as virtual machines, applications or storage on a shared platform available to a public audience via the internet.

The post Public cloud – myth vs reality appeared first on Legal Futures.


          Pearson MyLab Programming with Pearson eText Standalone Access Card for The Practice of Computing using Python 3rd Edition

Pearson MyLab Programming with Pearson eText Standalone Access Card for The Practice of Computing using Python 3rd Edition

The post Pearson MyLab Programming with Pearson eText Standalone Access Card for The Practice of Computing using Python 3rd Edition appeared first on VIP Outlet.


          NASTRAN training at SGSITS by industry experts
Nastran is a multidisciplinary structural analysis application used by engineers to perform static, dynamic, and thermal analysis across the linear and nonlinear domains, complemented with automated structural optimization and award-winning embedded fatigue analysis technologies, all enabled by high-performance computing. SGSITS, a leading engineering institute of M.P., and Dauto engineering Pvt....
          Facilities Systems Administrator - Newgistics, Inc. - Grapevine, TX
Achieve functional expertise with computing systems in use at Newgistics while acquiring and maintaining current knowledge of relevant product offerings and...
From Newgistics, Inc. - Fri, 13 Jul 2018 15:58:28 GMT - View all Grapevine, TX jobs
          Communication & Computing Specialist - Qalipu First Nation - Corner Brook, NL
The position also develops, maintains and improves the information technology capacity of the Band by monitoring on-going needs and addressing capacity issues....
From Career Beacon - Tue, 31 Jul 2018 18:38:59 GMT - View all Corner Brook, NL jobs
          Researchers help close security hole in popular encryption software

Researchers help close security hole in popular encryption software
Analysis of the AM-modulated signal showing the portion relevant to the security of the encryption software. Credit: Georgia Tech

Cybersecurity researchers at the Georgia Institute of Technology have helped close a security vulnerability that could have allowed hackers to steal encryption keys from a popular security package by briefly listening in on unintended "side channel" signals from smartphones.

The attack, which was reported to software developers before it was publicized, took advantage of programming that was, ironically, designed to provide better security. The attack used intercepted electromagnetic signals from the phones that could have been analyzed using a small portable device costing less than a thousand dollars. Unlike earlier intercept attempts that required analyzing many logins, the "One & Done" attack was carried out by eavesdropping on just one decryption cycle.

"This is something that could be done at an airport to steal people's information without arousing suspicion and makes the so-called 'coffee shop attack' much more realistic," said Milos Prvulovic, associate chair of Georgia Tech's School of Computer Science. "The designers of encryption software now have another issue that they need to take into account because continuous snooping over long periods of time would no longer be required to steal this information."

The side channel attack is believed to be the first to retrieve the secret exponent of an encryption key in a modern version of OpenSSL without relying on the cache organization and/or timing. OpenSSL is a popular encryption program used for secure interactions on websites and for signature authentication. The attack showed that a single recording of a cryptography key trace was sufficient to break 2048 bits of a private RSA key.

Results of the research, which was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the Air Force Research Laboratory (AFRL) will be presented at the 27th USENIX Security Symposium August 16th in Baltimore.

After successfully attacking the phones and an embedded system board―which all used ARM processors―the researchers proposed a fix for the vulnerability, which was adopted in versions of the software made available in May.

Side channel attacks extract sensitive information from signals created by electronic activity within computing devices during normal operation. The signals include electromagnetic emanations created by current flows within the device's computational and power-delivery circuitry, variation in power consumption, and also sound, temperature and chassis potential variation. These emanations are very different from the communications signals the devices are designed to produce.


Researchers help close security hole in popular encryption software
Milos Prvulovic and Alenka Zajic use a tiny probe near the phone to capture the signal, which is digitized by a radio receiver to accomplish the side channel attack. Credit: Allison Carter, Georgia Tech

In their demonstration, Prvulovic and collaborator Alenka Zajic listened in on two different Android phones using probes located near, but not touching, the devices. In a real attack, signals could be received from phones or other mobile devices by antennas located beneath tables or hidden in nearby furniture.

The "One & Done" attack analyzed signals in a relatively narrow (40 MHz wide) band around the phones' processor clock frequencies, which are close to 1 GHz (1,000 MHz). The researchers took advantage of a uniformity in programming that had been designed to overcome earlier vulnerabilities involving variations in how the programs operate.

"Any variation is essentially leaking information about what the program is doing, but the constancy allowed us to pinpoint where we needed to look," said Prvulovic. "Once we got the attack to work, we were able to suggest a fix for it fairly quickly. Programmers need to understand that portions of the code that are working on secret bits need to be written in a very particular way to avoid having them leak."

The researchers are now looking at other software that may have similar vulnerabilities, and expect to develop a program that would allow automated analysis of security vulnerabilities.

"Our goal is to automate this process so it can be used on any code," said Zajic, an associate professor in Georgia Tech's School of Electrical and Computer Engineering. "We'd like to be able to identify portions of code that could be leaky and require a fix. Right now, finding these portions requires considerable expertise and manual examination."

Side channel attacks are still relatively rare, but Prvulovic says the success of "One & Done" demonstrates an unexpected vulnerability. The availability of low-cost signal processing devices small enough to use in coffee shops or airports could make the attacks more practical.

"We now have relatively cheap and compact devices―smaller than a USB drive―that are capable of analyzing these signals," said Prvulovic. "Ten years ago, the analysis of this signal would have taken days. Now it takes just seconds, and can be done anywhere―not just in a lab setting."

Producers of mobile devices are becoming more aware of the need to protect electromagnetic signals of phones, tablets and laptops from interception by shielding their side channel emissions. Improving the software running on the devices is also important, but Prvulovic suggests that users of mobile devices must also play a security role.

"This is something that needs to be addressed at all levels," he said. "A combination of factors―better hardware, better software and cautious computer hygiene―make you safer. You should not be paranoid about using your devices in public locations, but you should be cautious about accessing banking systems or plugging yourdevice into unprotected USB chargers."

In addition to those already mentioned, the research involved Monjur M. Alam, Haider A. Khan, Moutmita Dey, Nishith Sinha and Robert Callen, all of Georgia Tech.


          Technology Sales Supervisor - Staples - Burbank, CA
This individual also acts as the team lead within the department – driving computing sales, training and coaching team members on selling techniques, and...
From Staples - Tue, 22 May 2018 10:12:48 GMT - View all Burbank, CA jobs
          Senior Information Security Consultant - Network Computing Architects, Inc. - Bellevue, WA
With the ability to interface and communicate at the executive level, i.e. CIO's, CTO's and Chief Architects....
From Network Computing Architects, Inc. - Mon, 11 Jun 2018 23:15:53 GMT - View all Bellevue, WA jobs
          Zebra: Retailers compelled to go digital
Retailers and manufacturers are being compelled to embrace hand-held mobile computing devices and real-time data analytics to bring the online-to-offline experience to digital users, says a US-based tracking and printing technologies firm.
          Innovation Without Limits - Your Guide to High Performance Computing in the Cloud

Accelerate your research and development with Intel Xeon–powered compute instances from AWS that provide scalability and agility not attainable on-premises, so you can innovate without limits. With flexible, unhindered access to infrastructure capacity, along with easy access to numerous, Intel-optimized software libraries via the AWS Marketplace, your researchers, scientists, engineers, and creative professionals can rapidly produce high-value answers to complex questions.

Get your results to market faster, simplify operations, and save money with flexible, configurable AWS HPC solutions that are proven to drive results for companies large and small in nearly every industry. Download the eBook to learn more.


          High Performance Computing with AWS and Intel

AWS and Intel allow your engineers and scientists to innovate and accelerate results on virtually unlimited cloud resources without the cost of procuring, deploying, and managing HPC infrastructure. With a flexible, scalable platform, global availability, and a large catalog of cloud-optimized software libraries, AWS and Intel help your team collaborate securely so they can unleash their creativity and productivity.

By moving HPC workloads to AWS, you’ll enjoy the flexibility of pay-as-you-go pricing options, procuring only the capacity you need for the duration that it’s needed, so your IT solution parallels both the compute demands of your team and the financial team’s budget.

Get the brief which provides use cases in genomics, life sciences, energy and engineering. Download now.


          Adjunct Instructor, Adult Basic Education-English as a Second Language (ESL) - Laramie County Community College - Laramie, WY
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products... $23.19 an hour
From Laramie County Community College - Thu, 02 Aug 2018 00:37:52 GMT - View all Laramie, WY jobs
          IC Resources Ltd: Engineering Manager (FPGA/High Speed Background)
£80000 - £100000 per annum: IC Resources Ltd: Engineering Manager (FPGA/High Speed Background), Competitive Salary, Essex, £80k-£100k. Based in Essex, our client is developing computing equipment for a wide range of applications. The position would suit an Engineering Manager with previous experience in
Essex
          IC Resources Ltd: Graduate Verification Engineer
Competitive salary: IC Resources Ltd: Graduate Verification Engineer, Bristol. Salary level: very competitive. Superb opportunities exist for talented Electronics/Computing graduates to join this fast-expanding technical leader based in Bristol. You will be working as a Graduate Verification Engineer. Location: Bristol
          The Insight Partners Added “Micro Data Center Market to 2025 – Global Analysis and Forecasts by Type, Solution and Application” To Its Research Database.
Micro data centers have emerged with the development of edge computing applications in retail, industrial and various other industries. With the IoT market expanding continuously at a breathtaking pace, it is set to boost the adoption of micro data centers across industry verticals, and the major customers are expected to be small and medium […]
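Edge deployments of this kind typically act on time-critical readings locally and forward everything else to the cloud; a minimal sketch of that split (the threshold and field names are invented for illustration):

```python
def route_reading(reading: dict, threshold: float = 80.0) -> str:
    """Handle urgent sensor values at the edge; batch the rest for the cloud."""
    if reading["value"] >= threshold:
        return "handled-at-edge"   # e.g. trip a local actuator with no round trip
    return "queued-for-cloud"      # non-urgent: upload later for batch analytics
```

The design choice is latency: only data that must be acted on immediately is processed in the micro data center itself.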
          How did the TimeHop data breach happen?

In July 2018, TimeHop, in a very transparent manner, discussed the breach of their service which affected approximately 21 million records, some of which included personal identifying information (PII) such as name, email, phone number, and date of birth, while others contained variants.

Reviewing the sequence of events, we see that a trusted insider placed the company’s data at risk when their employee credentials were used by a third party to log into TimeHop’s cloud computing environment.

How the intruder obtained the employee’s log-in credentials is unknown.
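The article does not say how the credentials were stolen, but a common safeguard against exactly this scenario is challenging logins from sources never seen for that account; a minimal sketch (the account name and IP addresses are invented):

```python
# Known source IPs per account (invented example data).
known_sources = {"ops-admin": {"203.0.113.7", "203.0.113.8"}}

def login_requires_mfa(account: str, source_ip: str) -> bool:
    """Require a second factor when the source was never seen for this account."""
    return source_ip not in known_sources.get(account, set())
```

With a rule like this in front of the cloud environment, stolen credentials alone would not have been enough to log in.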

To read this article in full, please click here


          Wearable Electronics Market Anticipated to Grow at a Significant Pace by 2020
Wearable electronics are miniature electronic devices worn by the consumer that enable wireless networking and mobile computing. The term “wearable technology” refers to any electronic device or product that can be worn by a person to add computing capability to everyday activities.

          Microsoft threatens to kick off Gab.ai from Azure over hate speech
Microsoft has threatened to pull its web-hosting services from social network Gab after complaints over anti-Semitic posts that incited genocide and torture of Jews. The tech giant’s cloud-computing division, Azure, said it would act within 48 hours if Gab does not remove two “malicious” posts that led to the complaint, a Gab.ai post says. […]
          IS&T RCS Tutorial - Introduction to BU’s Shared Computing Cluster


This tutorial will introduce Boston University’s Shared Computing Cluster (SCC) in Holyoke, MA. This Linux cluster has more than 16,000 processors and over 4.2 petabytes of storage available for Research Computing by students and faculty on the Charles River and BUMC campuses. A very large number of software packages for programming, mathematics, data analysis, plotting, statistics, visualization, and domain-specific disciplines are available as well on the SCC. You will get a general overview of the SCC and the facility that houses it and then a hands-on introduction covering connecting to and using the SCC for new users. This tutorial will cover a few basic Linux commands but we strongly encourage people to also take our more extensive “Introduction to Linux” tutorial. There will also be ample time for questions of all types about the SCC. Those who wish can bring their own laptops and we will help you with installing the software you need to effectively connect to and use the SCC. Others will use the Windows machines in the room.

12:15pm on Thursday, September 6th 2018

2 Cummington Mall; Room 107

http://www.bu.edu/phpbin/training/register/index.php?admingroup_id=43
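After connecting to the cluster's login node with `ssh`, work is typically submitted through a batch script; the sketch below is a config fragment whose scheduler directives, module name, and script name are illustrative assumptions, not official SCC settings covered in the tutorial:

```shell
#!/bin/bash -l
#$ -N my_first_job      # job name (directive syntax assumed)
#$ -pe omp 4            # request 4 processor cores
module load python3     # pick a package from the software catalog
python3 my_analysis.py  # the actual computation
```

The tutorial's hands-on portion walks through the real connection details and available software modules.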


          Top 5 Ways Artificial Intelligence Will Affect the Ecommerce Industry - Finoit
By siyacarla    In Science & Hi-Tech    58 minutes ago
Among many emerging technologies, one of the most interesting is AI. Its potential exceeds anything anyone could have expected, made possible by numerous technological advances and by the increase in computing power now accessible to a large number of IT businesses. Some people, like Elon Musk, see AI as a potential danger to humankind, but those predictions are decades in the future. For now, AI development is going to help improve business processes in various industries, one of them being ecommerce. Let’s take a look at several ways in which AI is going to affect the ecommerce industry:
1. Personalization
2. Increased sales
3. Image search with outstanding precision
4. Better merchandise management
5. Voice shopping assistants
Tags: artificial, intelligence, ecommerce, marketing, mistakes, failures, ideas
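Personalization of the kind listed above often boils down to similarity between shoppers' behavior vectors; a minimal cosine-similarity sketch (the profiles are toy data, not from the article):

```python
from math import sqrt

# Toy user-product interaction vectors (illustrative data only).
profiles = {
    "alice": [5, 0, 3, 0],
    "bob":   [4, 0, 4, 1],
    "carol": [0, 5, 0, 4],
}

def cosine(u, v):
    """Cosine similarity between two interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def most_similar(user):
    """Return the other shopper whose taste vector is closest."""
    others = (name for name in profiles if name != user)
    return max(others, key=lambda n: cosine(profiles[user], profiles[n]))
```

Products liked by the most similar shopper can then be recommended, which is the simplest form of the personalization the article describes.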

          Salesforce Senior Engagement Manager, Director - Silverline Jobs - Casper, WY
Silverline is a Salesforce Platinum Cloud Alliance Partner focused on delivering robust business solutions leveraging cloud-computing platforms....
From Silverline Jobs - Wed, 30 May 2018 06:22:53 GMT - View all Casper, WY jobs
          Remote Senior Professional Services Project Manager
A managed cloud computing company needs applicants for an opening for a Remote Senior Professional Services Project Manager.

Must be able to:
- Manage the project to produce a result capable of achieving the benefits defined in the Statement of Work
- Deliver defined concurrent work packages and medium-sized projects
- Provide the expertise needed to address the end-to-end customer journey

Qualifications for this position include:
- Ability to troubleshoot issues and brainstorm solutions
- Bachelor's degree from an accredited institution
- Solid understanding of project management processes and frameworks
- Demonstrated application of project management principles/theories/frameworks
          New Details Leak on Intel ‘Whiskey Lake’ 14nm Mobile CPUs
8th Gen Intel Core S-series Die

Intel's new Whiskey Lake mobile CPUs will have much higher boost clocks than previous mobile chips.

The post New Details Leak on Intel ‘Whiskey Lake’ 14nm Mobile CPUs appeared first on ExtremeTech.


          The Culture File Weekly: Slow Computing Special
Do you use your phone or does it use you? Among those exploring a new life/tech balance in this special edition are UCSF's Robert Lustig, a neuroendocrinologist and expert in addiction, and Lindsay Ems, who studies the Amish's often highly nuanced attitudes toward technology.
          Julia 1.0 Programming Language Released
Julia, the LLVM-based, speed-focused, dynamically typed (with optional type annotations), full-featured programming language focused on numerical computing, has reached the version 1.0 milestone...
          Corporate and Financial Risk Management MSc
Understand the main aspects of risk management in businesses, including quantitative analysis, regulation, implementation and management structure.

We emphasise the application of mathematics to financial risk quantification, analysis and description. Our MSc is taught by established researchers and specialists in their fields.

You’ll benefit from training in computing and explore a range of topics from programming to statistics to management.

Why choose this course?

This course builds on Sussex’s strong foundation of interdisciplinary study: it’s taught by the Department of Mathematics and the Department of Business and Management.

Our courses are designed to meet employer demands: you’ll gain the knowledge and skills to compete effectively in the fast-paced world of work.

You have a choice between practice- and research-oriented study, working with experts in their fields.
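Risk quantification of the kind such a course covers can be illustrated with a historical value-at-risk calculation; the daily returns below are invented, and the index rule is one simple empirical-quantile convention among several:

```python
def historical_var(returns, confidence=0.95):
    """Loss threshold exceeded on only (1 - confidence) of historical days."""
    losses = sorted(-r for r in returns)      # losses as positive numbers, ascending
    index = int(confidence * len(losses))     # empirical-quantile position
    index = min(index, len(losses) - 1)       # clamp for small samples
    return losses[index]

daily_returns = [0.01, -0.02, 0.003, -0.015, 0.007, -0.03,
                 0.012, -0.001, 0.004, -0.008]
var_95 = historical_var(daily_returns)
```

Here `var_95` is the loss level the portfolio historically exceeded on only the worst 5% of days.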

          The Next One to Burn Will Be You

The screenshot you see is from the official "Special Emergency Plan for Forest Fires" of the Decentralized Administration of Attica (here). It is a text in atrocious Greek, full of misspellings, broken syntax and gaps in meaning. Signed: Spyridon Kokkinakis, Coordinator of the Decentralized Administration of Attica.

From: capital.gr
By Thanos Tzimeros

And this screenshot is from the CV of Regional Councillor Gianna Tsoupra, responsible for the civil protection of Attica, as presented on the official website of the candidates of the Dourou ticket (here).

As you can see, she holds a... "BAGELOR"(!) "with"(!) computing (from a vocational institute that styled itself a College and awards degrees of a university that has not operated for years, but that is another story...) and has intense "politiki" activity (the CV misspells "political" as the word for Constantinople-style cooking, presumably with plenty of garlic and cumin). I will not comment on the text's wretched appearance; such details are lost on SYRIZA people.
But, you may ask, what does this have to do with the tragedy at Mati? Of course it does. Enormously! It proves that people who are uneducated, indifferent, or both (but firmly attached to the "vision" of their seat; see both CVs) have been put in charge of protecting you. Naturally, I am not suggesting these two officials bear the whole responsibility; I picked them as a sample. Because if someone cannot judge whether what they write and sign holds up, to begin with as Greek, with what qualifications will they examine its substance? It is no accident that the substance is equally dismal. For six years now, in articles and from the floor of the Attica Regional Council (here is a "prophetic" speech of mine a few days before the fires), I have said until I am hoarse that THIS public administration, with its current structure and procedures, is utterly incapable of carrying out the slightest task successfully, even if it were staffed by clones of Einstein. Tellingly, it took the Region of Attica three and a half years to send 5 million euros of emergency aid to the earthquake victims of Cephalonia, after a unanimous decision of the Regional Council taken immediately after the 2014 earthquakes!

When we speak of inadequacy and incompetence, we usually mean people. That suits us, because it implies that if you dismiss or vote out the unfit person and the right one arrives, everything will be fixed. Unfortunately it is not so. The state's problem is one of institutions, structures, laws and procedures. We are alive by luck. We will burn and we will drown again; that is absolutely certain. You, me, or both of us. Here is why.

Suppose you are a mayor. Mitsos, who was passing Ano Rachoula in his pickup truck, calls you: "Mayor, some brambles caught fire next to Thanasis's field, and the wind is pushing it toward the pine forest." What do you do? That is, what do the emergency plans and the dozens of circulars of the "responsible" public bodies say you must do?

Let us follow it together, so we can be horrified together. First, you must classify the fire into one of the following five categories: local of low intensity, local of high intensity, regional of low intensity, regional of high intensity, general. Yes, this is no joke. That is what "Xenokratis" provides (here), using the word "disaster", a retrospective term unsuitable for the occasion, to characterize a phenomenon still in progress. Why? Because for each category of disaster, different "institutional organs" are to be activated. A question from anyone with elementary reasoning: how can I know how the fire will evolve? They all start as small local ones! How they evolve we learn only at the end. So what is the point of the classification? NO "responsible" official has asked this question in the 15 years since this utterly inadequate, purely theoretical text DESCRIBING disasters, without half a line of operational conclusions, called "Xenokratis", came into being; yet everyone invokes it as a handbook for CONFRONTING disasters.

That "Xenokratis" is a descriptive framework and nothing more is acknowledged even by the General Secretariat for Civil Protection, on whose website, TODAY, (here) we read the following chilling passage:

"Provided for:
• The creation of a system of communication and information flow among all the services and actors involved in crisis management.
• The plan in question constitutes a basic planning framework, on the basis of which the drafting of hazard-specific plans is assigned to the competent ministries. A process of forming working groups in the ministries is already under way, at the initiative of the General Secretariat for Civil Protection, in order to upgrade the hazard-specific plans.
Through the special plans to be drawn up by the working groups, more specific instructions or planning requirements may be issued to the Regions and the Prefectural Administrations for drafting their own plans."


"Xenokratis" is a law of 2003. It is August 2018, and the Greek state, which bleeds you dry with taxes that supposedly fund its functions, has not created, as it was obliged to, a system of communication and information flow. It is still... "under way", together with the special plans that WILL be drawn up! Everything "has been set in motion", as the timeless PASOK dialect puts it.

Now to the mayor's next steps. To determine, poor man, the type of impending disaster, and therefore the response mechanism, he must convene the "Local Coordinating Body for Civil Protection". Does one exist? Run a test: call your municipality and ask. In the unlikely event they answer yes, ask who its members are and what the action diagram is in case of fire, earthquake or flood, to stick to the most common threats. You will laugh. And you will weep bitterly. (Fortunately you still can; the dead of that black Monday no longer can.) "Xenokratis" assigns responsibility for setting up this "body" to the municipalities and regions, without providing for any oversight or any penalty if they fail to do so. Thus, from the outset, your life depends on each local authority's sense of duty, with the central government guaranteeing nothing at all. But let us say the mayor is a diligent sort and has indeed created the body in question. According to the instructions of the General Secretariat for Civil Protection (here), it must consist of:

1. The Mayor
2. Two members of the municipal council (one from the minority)
3. Specialized civil-protection officers of the Decentralized Administration and the Regional Unit
4. A representative of the military commander of the area (N.B.: the official text, in the public administration's perennially atrocious Greek, says: "Representative of the military commander of the area or... his representative!")
5. The Police Commander
6. The head of the Municipal Police
7. A representative of the Port Authority
8. The Fire Service Commander
9. The head of the municipality's Technical Services
10. A representative of the Forestry Office
11. Representatives of the municipality's volunteer organizations
12. A secretary to keep minutes

In 15 years, not a single public servant or elected official (or journalist...) has asked how much time it will take (a) to locate all these people, (b) for them to meet in person, and (c) for them to reach a conclusion, while the fire, driven by 10-Beaufort winds, is striding over the mountains.

So pray that the mayor has NOT created the relevant body and will try to make decisions on the spot (illegally) while talking to Mitsos on his mobile. Because if he goes by the book, by the time the "body" convenes you will be charcoal.

But let us continue the perverse scenario. The body convenes, deliberates (after one hour? two?) and establishes the expected: the fire has escaped the municipality's boundaries and become regional. Once the Region is (officially) notified, it must assess the situation to decide whether the fire is regional of low intensity, meaning the Region's own means suffice, or of high intensity, meaning other bodies must also assist. To establish that, the Region's Coordinating Body for Civil Protection (S.O.P.P.) must convene, with the following membership:

1. The Deputy Regional Governor of the Regional Unit, as chair
2. Two members of the Regional Council (one from the minority)
3. The President, or an appointed representative, of the Regional Union of Municipalities (P.E.D.)
4. The head of the Civil Protection Directorate of the Decentralized Administration
5. The head of the Civil Protection Directorate of the Region
6. The head of the Civil Protection Department of the Regional Unit
7. The military commander of the area or his representative
8. The director of the Police Directorate of the Regional Unit
9. The Harbormaster, in Regional Units where a Port Authority exists
10. The Fire Service Commander of the Regional Unit's seat
11. The head of the Directorate for Coordination and Inspection of Forests of the relevant Decentralized Administration
12. The head of Technical Works of the Regional Unit
13. The head of the Directorate of Public Health and Social Welfare of the Regional Unit
14. An appointed representative of the regional National Health System
15. Representatives of volunteer civil-protection organizations
16. A secretary to keep minutes

These are all the people you saw at the black Monday meeting. A meeting that began at 20:30, when the fire had started at 16:50. I need write no more for you to grasp that by the time all these people are merely notified, let alone convened, the fire will have reached the sea, gone out on its own, and we will be counting corpses. Or we will be counted among the corpses, because, I repeat, it was pure coincidence that we were not on Marathonos Avenue that afternoon, and pure coincidence that those who were there were there.

But I will continue, dear reader, so that you have the full picture of what Greek public administration means. Suppose, then, that everyone met, conferred, settled on a plan, and the disaster is not yet over. As the phenomenon unfolds, new facts and possibly surprises will keep arising, requiring the plan to be adapted or even revised. Who holds overall command? Who, that is, gives orders? And how are they conveyed to their final recipients? Over service radios? Over mobile phones? Neither "Xenokratis" nor any of the dozens of relevant circulars specifies. In the past, state services used portable VHF radios. For the Olympic Games we acquired the advanced TETRA system. The Games ended, we stopped paying for TETRA, and we returned to the antiquated VHF, which anyone can listen in on. And "anyone" includes criminals, who equipped themselves with radios and monitored police communications, often cutting in... for fun! Last October we acquired the British SEPURA system, supposedly secure and advanced (commanders see each patrol car's position in real time), at an annual operating cost of 1,600,000 euros. Yet during Monday's fires it did not work! And police officers were talking on their personal mobiles, wherever there was signal. Why did SEPURA fail? Let the prosecutor look into it.

Suppose, though, that it had worked. Only the police have SEPURA. Not the Fire Service, nor the officials of the municipalities and the Region. They would have to communicate with each other over classic VHF. How would the two groups coordinate? Were radios available? Who keeps them, who maintains them, who hands them out to whom? Had the Tsoupras of this world ever operated anything of the kind? No! That is why they used their mobiles. Did they have the others' mobile numbers saved in advance? I very much doubt it.

I continue this hypothetical tale of Greek administrative insanity for two reasons. The first is to show what "state readiness" means and how many thousands of details it involves, details that depend on the professional competence of hundreds of participants, something that in Greece is science fiction. And we all know a chain is only as strong as its weakest link: if one thing goes wrong, everything else hangs. The second is so that we finally understand that managing citizens' everyday affairs, at least at the level of local government, is 100% practical and technical in nature. The supposed political differences are the pretext the political system as a whole has invented to throw dust in your eyes and shift the discussion to fields of sham ideological confrontation, so as to avoid the scorecard on which they all get zero. How many fire-response readiness exercises did you hold, Sgouros? How many did you hold, Dourou? None? Bottom of the class, both of you! Black marks for both! That is what they want to avoid. So they waste themselves in memorandum-versus-anti-memorandum cockfights.

Say that problem, too, did not exist. The most essential one remains. Who orders what? Who gives the evacuation order? At what moment? The official answer: everyone and no one. Yes, exactly that. Because the socialist-leaning conception of the state does not accept the administrative pyramid of superior and subordinate. That would be capitalist. "Kallikratis", for example (another monument of idiocy that we treat as strong medicine), states in Article 4, on "Relations between municipalities and regions": "Between the two tiers of local government there are no relations of control and hierarchy, but of cooperation and collegiality, which develop on the basis of the law, joint agreements, and the coordination of joint actions." So if a mayor has not cleared the deadwood for four years and has turned the forest (the grove, the playground...) into a powder keg, no organ of the Region can tell him "take a fine where it hurts, so you learn to do your job properly", because they stand in relations of... collegiality and cooperation! And if they do not? And if they do not cooperate? And if the one cannot stand the sight of the other? But say we get past that too and they cooperate. How will Dourou compel Psinakis to clear the forest? Who will compel Dourou to clear the Pedion tou Areos park? And that is under normal conditions. In a state of emergency, with property and lives at stake, who will take the hard decision? It is like asking who will attempt the three-pointer at the final buzzer. If the ball feels heavy then, imagine how heavy the telephone is when the decision is, literally, a matter of life and death. Especially when the person in charge has not the slightest ability, knowledge or experience to take it; not a Diamantidis of politics, but a blowhard clown who has never taken a single shot yet persuaded the fans to... install him in the starting five!

So he will wait for the Holy Spirit to descend, though we do not know by what criteria even that operates, since it let a hundred or so people burn yet miraculously stopped the fire just short of one house's icon stand, as we learned from the rapturous front pages of several newspapers.

What odds do you give yourself, dear reader, of being in the wrong place and surviving with this state "mechanism"?

What should be done, then? The solutions are offered by technology. Immediate, practical and, above all, effective. Are we at danger level 4 or 5? Then the following apply: satellites, or at least drones, monitor the whole country around the clock. The moment a fire source is detected, a message is sent automatically, with no human involvement, to the Fire Service Operations Center with the coordinates of the spot. Within moments, firefighting aircraft take off. In a peninsular country whose most inland point is only minutes' flight from the sea, the whole process through final extinguishing can take at most 15 minutes. If something goes wrong and the fire escapes, an SMS is sent to the mobile phone of every resident of the threatened area with brief but CLEAR instructions: "Fire! Danger of death! Evacuate the area. Head THAT way."
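The automated chain the author describes, detection, dispatch, and a mass-SMS fallback, can be sketched in a few lines (the message texts and the idea of a coordinate tuple are illustrative assumptions):

```python
def handle_detection(coords, fire_contained):
    """Automated chain: aircraft dispatch on detection, mass SMS if containment fails."""
    actions = [f"dispatch aircraft to {coords}"]  # sent with no human in the loop
    if not fire_contained:
        # The fallback the author calls for: brief, CLEAR instructions to every phone nearby.
        actions.append(f"SMS all phones near {coords}: 'Fire! Danger of death! Evacuate.'")
    return actions
```

The point of the sketch is the control flow: every step fires automatically from the detection event, with no committee in the loop.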

Η τεχνολογία υπάρχει, χρησιμοποιείται εδώ και χρόνια από πολλά κράτη και μπορεί να σώσει και το περιβάλλον και τις περιουσίες και τις ζωές. Χρειάζεται όμως μια προϋπόθεση: να είναι σοβαρό το ίδιο το κράτος. Δηλαδή, όχι μόνο να μην φοβάται την τεχνολογία αλλά συνεχώς να ψάχνεται για το τι νεότερο μπορεί να προσφέρει αυτή προς όφελος των πολιτών. Αν χρειάζεται σύσκεψη "υπευθύνων" αυτή να γίνεται σε διαδικτυακή πλατφόρμα, χωρίς φυσική παρουσία, (με διαδικασία προσδιορισμένη εκ των προτέρων και σχετικές ασκήσεις ετοιμότητας που θα πρέπει να διεξάγονται συχνά), για κέρδος χρόνου. Είναι αυτονόητο ότι δεν νοείται δημόσιος λειτουργός (υπηρεσιακός ή αιρετός) χωρίς εξοικείωση με την πληροφορική. Είναι αυτονόητο ότι οι αρμόδιοι θα βρίσκονται σε επιφυλακή ανάλογα με το επίπεδο κινδύνου και είναι αυτονόητο ότι τα υλικά μέσα: αεροσκάφη, πυροσβεστικά οχήματα, δίκτυα, ασύρματοι κ.λπ. θα είναι καλοσυντηρημένα και σε πλήρη επιχειρησιακή ετοιμότητα. Είναι αυτονόητο ότι σε θέσεις ευθύνης θα βρίσκονται σοβαροί άνθρωποι με επαρκή νοημοσύνη για να μην στέλνουν τους οδηγούς από τη Μαραθώνος στα στενά του Ματιού και να τους καίνε σαν τα ποντίκια. Είναι αυτονόητο ότι ως αιρετοί δεν θα ψηφίζονται οι κάτοχοι... μπάγκελορ. Είναι αυτονόητο, πάνω απ’ όλα, ότι τα σχετικά σχέδια Πολιτικής Προστασίας δεν θα είναι αυτά τα βλακώδη πολυάνθρωπα, δαιδαλώδη, χρονοβόρα, άρα απολύτως αναποτελεσματικά που έχουμε μέχρι σήμερα. Κι είναι επίσης αυτονόητο ότι οι σχετικές συζητήσεις για την επάρκεια όλων των προηγούμενων θα γίνονται ΠΡΙΝ τις καταστροφές, σε υπηρεσιακό επίπεδο και όχι στα τηλεοπτικά πάνελς. 

Ποια από αυτά τα αυτονόητα υπάρχουν στη χώρα της φαιδράς πορτοκαλέας; Και τι αυτονόητο περιμένεις να δημιουργηθεί όταν ΟΛΟ το πολιτικοκομματικό σύστημα, ευθυνόφοβο, γραφειοκρατικό και πελατειακής αντίληψης, ουδέποτε ασχολήθηκε με τον μεγάλο ασθενή της χώρας, το Δημόσιο και τις διαδικασίες του; Ξαναγυρνάμε λοιπόν στο θεμελιώδες ερώτημα με το αυγό και την κότα. Ποιος θα διαπιστώσει ότι ο "Ξενοκράτης" και όλες οι κανονιστικές διατάξεις της Πολιτικής Προστασίας είναι για τα σκουπίδια; Η Τσούπρα; Ο Κοκκινάκης; Η Δούρου; Ο Σκουρλέτης; Ο Τόσκας; Ο Τσίπρας; Με ποια διανοητικά εργαλεία; Ή μήπως οι προηγούμενοι, που φιλοδοξούν να γίνουν επόμενοι, μολονότι χαμπάρι δεν πήραν από το 2003 μέχρι το 2015 (παρά την τραγωδία του 2007) για την ουσία του προβλήματος; Και ποιος θα ορίσει νοήμονες αιρετούς; Ο λαός που έστειλε τον Τσίπρα στο Μαξίμου και τον Γλέζο (και τη Σακοράφα, την Κούνεβα κ.λπ.) στο Ευρωκοινοβούλιο; Ή μήπως ο άλλος... λαός που έστειλε τον Ζαγοράκη; Φαντάζομαι να έχεις αντιληφθεί ότι η ψήφος σου – όποτε και όπως ζητιέται από μια ανάπηρη και στρεβλή δημοκρατία – μπορεί να σημάνει ζωή ή θάνατο. Δικό σου και των παιδιών σου.

Στις πυρκαγιές του 2007, ανάμεσα στα 63 θύματα, υπήρχε μια μάνα που κάηκε μαζί με τα 4 παιδιά της. Όλοι κλάψαμε με λυγμούς τότε, κι όλοι σκεφτήκαμε ότι δεν υπάρχει χειρότερη μοίρα σε έναν άνθρωπο από το να βλέπει τα παιδιά του να γίνονται κάρβουνο μαζί του. Ανάμεσα σ΄ αυτούς που το σκέφτηκαν ήταν και οι καμένοι της μαύρης Δευτέρας. Και οι παππούδες των διδύμων. Κι ανάμεσα σ΄ αυτούς που σκεφτόμαστε το ίδιο για τα θύματα της μαύρης Δευτέρας είμαστε εγώ κι εσύ, αγαπητέ αναγνώστη, ζωντανοί κατά σύμπτωση. Ελπίζω να αντιλαμβάνεσαι τι εννοώ.

Υ.Γ. 1 Γιατί "Ξενοκράτης"; Ποτέ δεν κατάλαβα πώς ένας φιλόσοφος, διευθυντής της Ακαδημίας του Πλάτωνα, σχετίζεται με σχέδιο έκτακτης ανάγκης. Μήπως υπονοεί ο ποιητής πως θα πρέπει να φύγουμε σε... ξένο κράτος για να αισθανθούμε ασφαλείς;

Υ.Γ. 2 Το αντικείμενο αυτού του άρθρου είναι η εγκληματική ανοησία των προβλεπόμενων διαδικασιών. Γι’ αυτό δεν αναφέρομαι καθόλου στο θέμα της αυθαίρετης δόμησης, που είναι ένα σχετιζόμενο μεν αλλά εντελώς διαφορετικό κεφάλαιο και επίσης στο θέμα των εκατομμυρίων ευρώ που έχουν δοθεί κατά καιρούς σε συστήματα ηλεκτρονικής προστασίας από τις φωτιές αλλά κανένα δεν λειτουργεί – άλλο τεράστιο κεφάλαιο διαχρονικής διακομματικής "ρεμούλας". Τα σχολιάζω και τα δύο εδώ

Υ.Γ. 3 Έχει ενδιαφέρον η συμπαιγνία του πολιτικού συστήματος, σε επίπεδο Περιφέρειας Αττικής μετά τις φωτιές. Η Δούρου όρισε να συζητηθεί το θέμα την Πέμπτη 26/7, στο τελευταίο Περιφερειακό Συμβούλιο της σεζόν. Διαπράττοντας, βέβαια, μέγιστη απρέπεια, καθώς το τοποθέτησε τελευταίο (14ο) στην ημερήσια διάταξη. Ο Σγουρός εγγράφως αλλά και η παράταξη της ΝΔ (προφορικώς) ζήτησαν, σε σύσκεψη των επικεφαλής των παρατάξεων του Περιφερειακού Συμβουλίου, που συγκάλεσε εκτάκτως η Δούρου, ΝΑ ΜΗΝ συζητηθεί το θέμα την Πέμπτη, ΧΩΡΙΣ να επιμείνουν στο να συζητηθεί ας πούμε την Τρίτη 31/7. Γιατί; Διότι ήξεραν ότι ούτε οι ίδιοι θα περνούσαν καλά, καθώς ούτε ο Σγουρός ούτε η ΝΔ είχαν σχέδιο Πολιτικής Προστασίας και συμμετείχαν όλοι στο όργιο των νομιμοποιήσεων. Έτσι έδωσαν "πάσα" στη Δούρου να αναβάλει το Περιφερειακό Συμβούλιο επ' αόριστον, εν όψει Αυγούστου. Άλλο που δεν ήθελε! Μετά από μια βδομάδα, για να σώσουν τα προσχήματα, ζήτησαν να συζητηθεί το θέμα σχεδόν Δεκαπενταύγουστο, ξέροντας ότι η Δούρου θα το απορρίψει καθώς ΟΛΟΙ οι Σύμβουλοι είναι σε διακοπές. Όταν μάλιστα και ο Ιούλιος περπάτησε με ελάχιστους Συμβούλους παρόντες στις συνεδριάσεις - με έναν ή και κανέναν σύμβουλο να εκπροσωπεί την παράταξη της ΝΔ. Σικέ είναι όλα, για το θεαθήναι και για τα προσχήματα. Στην εν λόγω σύσκεψη ο γράφων δεν προσεκλήθη μολονότι επικεφαλής παράταξης με δύο συμβούλους που εκπροσωπούν το 3% των πολιτών της Αττικής. Γιατί; Γιατί έτσι! Η επίσημη εξήγηση, από πλευράς Δούρου, ήταν ότι η σύσκεψη ήταν... άτυπη, οπότε δεν ήταν υποχρεωμένη να καλέσει όλους τους επικεφαλής. Ξύπνησε δηλαδή το Ρενάκι ένα πρωί και είπε "Ρε συ, αντί να πιω μόνη μου καφέ, δεν φωνάζω καλύτερα τους επικεφαλής των παρατάξεων, να δοκιμάσουμε και τα κουλουράκια με κανέλλα που έφτιαξα χθες με τα χεράκια μου; By the way θα πούμε και δυο κουβέντες για τις φωτιές. 
Ας μην φωνάξω τον Τζήμερο - είναι μυστήριος αυτός, επιμένει σε θεσμούς και νομικά πλαίσια και διαδικασίες και πληροφορική και τεχνολογίες, θα μας χαλάσει την ατμόσφαιρα. Ασ’ τον καλύτερα απ’ έξω.” Έτσι λειτουργεί η δημοκρατία μας, αγαπητέ αναγνώστη. Θα συνεχίσει να λειτουργεί έτσι; Από σένα εξαρτάται. 

*Mr. Thanos Tzimeros is the president of the party "Δημιουργία, ξανά!" ("Creation, Again!") and a regional councillor of Attica

          Silicon Valley Pulls Plug On InfoWars
Twitter refuses to follow Apple, Facebook, YouTube and Spotify in ditching the conspiracy theory channel run by Alex Jones, saying he has not broken its rules. Plus, we visit security conferences Def Con and Black Hat in Las Vegas, and we meet Vector, a new home robot that aims to capture people's imagination in a way other devices have failed to. Presented by Rory Cellan-Jones, with BBC tech reporter Chris Foxx, and special guest Kate Bevan, editor of Which? Computing. (Image: InfoWars founder Alex Jones speaking during a rally in support of presidential candidate Donald Trump near the Republican National Convention in Cleveland, Ohio, in 2016, Credit:REUTERS/ Lucas Jackson).
          Technical Architect (Salesforce experience required) Casper - Silverline Jobs - Casper, WY
Company Overview: Do you want to be part of a fast paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Sat, 23 Jun 2018 06:15:28 GMT - View all Casper, WY jobs
          Technical Architect (Salesforce experience required) Cheyenne - Silverline Jobs - Cheyenne, WY
Company Overview: Do you want to be part of a fast paced environment, supporting the growth of cutting edge technology in cloud computing? Silverline...
From Silverline Jobs - Sun, 29 Jul 2018 06:18:46 GMT - View all Cheyenne, WY jobs
          Nokia phones brings Rapid Renewal of Smartphone Portfolio
Three new Nokia smartphones running Android Oreo™ further build on hallmark design and deliver quality you can rely on
by Shrutee K/DNS
New Delhi, 9 August 2018 - HMD Global, the home of Nokia phones, has continued to rapidly renew its portfolio of Nokia Android smartphones announcing Nokia 5.1, Nokia 3.1 (variant: storage/RAM as 3GB/32GB) and Nokia 2.1 in India. Offering access to the latest Google services, such as the Google Assistant, the trio of smartphones continues to deliver a pure, secure and up-to-date Android experience with Android One and Android Go, combined with the premium craftsmanship and design expected from a Nokia smartphone and the performance to match. Nokia 5.1, Nokia 3.1 and Nokia 2.1 will be available 12th August onwards across top mobile retailers and online on Paytm Mall and Nokia.com/phones.
Ajey Mehta, Vice President and Country Head, India, HMD Global, says: “We are encouraged by the response that we are getting to our products. Our consumers tell us they love their Nokia smartphones on Android. It is our constant endeavour to enhance the experience to better suit the everyday needs of our fans.  Every single detail on a Nokia smartphone has been designed with consumers in mind, which is why we are delighted to introduce these refined smartphones that deliver a dramatic step up on performance, continue to drive the most premium design elements to price points accessible to everyone and deliver the class-leading quality that you expect from us.
“With this range, we deliver larger screens, enhanced performance across our range with processor upgrades offering up to 50% higher performance while maintaining the perfect balance with power consumption and stunning designs – all in a segment where consumers often need to compromise. With our renewed portfolio, you can now enjoy a premium smartphone experience without paying a premium on the price. Each phone radiates effortless style with meticulous attention to detail and no matter your budget, you will be able to find a Nokia smartphone that is right for you.”
Nokia 5.1: A timeless classic refined
Continuing with the classic design of the previous generation, the new Nokia 5.1 is understated, compact and effortlessly stylish. It gets its structural integrity from a single block of 6000 series aluminium, refined through a rigorous 33 stage process of machining, anodising and polishing to give an exquisite satin finish and feel in the hand. The new Nokia 5.1 packs a 0.3-inch bigger display in a 2mm narrower body and precise attention to the finest details like harmonising the rounded edges on the screen bezel with the corners of the phone to offer a compact, pocketable experience. Nokia 5.1 comes with a higher resolution 5.5-inch Full HD+ display in 18:9 aspect ratio, making watching your favourite content – be it browsing the web, watching your favourite shows, sharing funny memes or gaming - a delightful experience.
Powered by a 2.0 GHz MediaTek Helio P18 octa-core processor, Nokia 5.1 delivers a smoother all-round performance that is 40% faster and more powerful than the previous generation so you can create, edit and multitask effortlessly. You can capture more detail of what matters in your life with its upgraded 16MP rear camera with phase detection auto-focus and wide-angle front camera. Nokia 5.1’s fingerprint sensor has been relocated to the back of the phone so you can unlock it with your index finger or leave your wallet at home.
The new Nokia 5.1 will come in three classic colours: Copper, Tempered Blue and Black (available a few weeks later) and will be available in the storage/RAM of 3GB/32GB starting 12th August at a recommended best buy price of INR 14,499.
Nokia 3.1: The perfect harmony of materials and performance
Nokia 3 has been the most successful model in the line-up of Nokia smartphones and our biggest franchise to date. Today, we’re announcing the variant with storage/RAM option of 3GB/32GB and a few weeks back we had announced the launch of variant with storage/RAM of 2GB/16GB.
The new Nokia 3.1 now forges a rich connection between materials with a stunning design and delivers the performance to match, making it more attractive than ever before. The beautifully curved screen melts into the slim CNC’d aluminium sides with a dual diamond cut to deliver a perfect harmony of materials. Just the right size for single hand use, our most affordable 18:9 smartphone with 5.2-inch HD+ display gives you more content at one glance, while the 2.5D curved display is protected by damage resistant Corning® Gorilla® Glass to keep it beautiful for longer.
The Nokia 3.1 runs MediaTek 6750, an octa-core chipset, giving you twice the processor cores and a 50% performance boost on the previous generation so your phone can keep up with you. Featuring an upgraded 13MP main camera with auto focus, Nokia 3.1 captures the memories that you’ll want to relive over and over. Thanks to its full set of sensors usually only found on premium phones, the Nokia 3.1 lets you make the most out of popular AR apps like Pokémon Go and capture the whole scene with panoramic imaging.
The new Nokia 3.1 storage/RAM variant of 3/32 will come in three colours: Blue/Copper, Black/Chrome and White/Iron; and will be available starting 12th August at a recommended best buy price of INR 11,999.
Nokia 2.1: The 2-day battery life smartphone gets even better
Serving long-lasting entertainment needs for consumers who are always on the go, Nokia 2.1 comes with a 2-day battery life, a large 5.5-inch HD screen and dual front-facing stereo speakers. The Nokia 2.1’s huge 4,000mAh battery now charges even faster so you can get back up and running even more quickly than before. With its HD display almost 20% bigger than the original, you can enjoy high-definition videos on the go while the dual speakers with bespoke 3D formed stainless steel detail give you an amazing stereo sound.
Offering the quality and style you expect from a Nokia phone, Nokia 2.1’s Nordic design and metallic accents guarantee that you will stand out from the crowd. Its sleek, rounded and ergonomically designed, inherently coloured polycarbonate back keeps your phone safe, vibrant and robust against scratches. The upgraded Qualcomm™ Snapdragon® 425, 64-bit Mobile Platform gives fans the 50% faster and smoother performance they asked for with fast switching between apps. You can capture the action wherever you are with the Nokia 2.1’s 5MP front-facing and 8MP rear camera with auto focus.
The new Nokia 2.1 will come in three metallic colours: Blue/Copper, Blue/Silver and Grey/Silver and will be available starting 12th August at a recommended best buy price of INR 6,999.
Pure, secure and up-to-date Android experiences across the range
Together with Nokia 8 Sirocco, Nokia 7 plus and Nokia 6.1, Nokia 5.1 and Nokia 3.1 also join the Android One family, delivering an experience designed by Google that is smart, secure and simply amazing. Nokia smartphones with Android One offer more storage and battery life out of the box, as well as the latest AI-powered innovations from Google to help you stay ahead of the game every day. Nokia 5.1 and Nokia 3.1 will receive three years of monthly security patches and two years of OS updates, as guaranteed in the Android One programme. This puts them among the most secure phones out there, always up to date with the latest Google services like the Google Assistant and Google Photos with free unlimited high-quality photo storage. Meanwhile, Nokia 2.1 comes with Android Oreo™ (Go edition), designed for smartphones with 1GB RAM or less, giving you a smooth Android experience, more storage out of the box and consuming less data. All three phones are ready for Android P.
Anne Laurenson, Director, Android Partnerships at Google, says: ''People all over the world look for smartphones that fit their needs and Android’s mission has always been to bring the power of computing to everyone. Part of that is ensuring a great experience across the broadest range of devices. It's great to see HMD Global taking a leading role in that mission by launching the Nokia 2.1 running on Android Oreo™ (Go edition), as well as having two phones joining the Android One family. We have worked closely to combine Google’s latest software innovations with HMD Global’s expertise in quality hardware, so the Nokia 5.1 and Nokia 3.1 can bring the smart, secure, and simply amazing Android One experience to everyone."
Availability
Nokia 5.1 will be available 12th August onwards at a recommended best buy price of INR 14,499.
Nokia 3.1 will be available 12th August onwards at a recommended best buy price of INR 11,999.
Nokia 2.1 will be available 12th August onwards at a recommended best buy price of INR 6,999.
All the devices will be available across top mobile retailers. They will also be available online on Nokia.com/phones and Paytm Mall. Consumers can avail themselves of the offers below.
Offers
Consumers can buy Nokia 5.1, Nokia 3.1 and Nokia 2.1 with the following offers from our partners:
Consumers buying any of these products from a retail outlet by scanning the Paytm Mall QR code will get 10% cashback on recharges and bill payments on Paytm.
Consumers using ICICI Bank Credit or Debit Card will get a 5% cashback on buying a Nokia 3.1 or Nokia 5.1.
Idea and Vodafone consumers will get two exciting offers. On a recharge of INR 149 pack, they will get 1GB data/day, unlimited calls and 100 SMS/day for 28 days; additionally, consumers upgrading from 2G or 3G phone to the new Nokia 3.1 will get 1GB data/day for 28 days. On a recharge of INR 595, consumers will get unlimited calls, 100 SMS/day, 18GB data for 6 months.
About HMD Global
Headquartered in Espoo, Finland, HMD Global Oy is the home of Nokia phones. HMD designs and markets a range of smartphones and feature phones targeted at a range of consumers and price points. With a commitment to innovation and quality, HMD is the proud exclusive licensee of the Nokia brand for phones and tablets. For further information, see www.hmdglobal.com. 
Nokia is a registered trademark of Nokia Corporation. Android, Android One, Google and Google Photos are trademarks of Google LLC; Oreo is a trademark of Mondelez International, Inc. group. Qualcomm and Snapdragon are trademarks of Qualcomm Incorporated, registered in the United States and other countries. Qualcomm Snapdragon is a product of Qualcomm Technologies, Inc. and/or its subsidiaries. All product names, tradenames and registered trademarks are property of their respective owners.

          Sr Firmware Engineer
MN-Brooklyn Park, Designs, develops, tests, documents, operates and maintains software and firmware components and computing systems software to be applied to and integrated with mechanical and electrical systems. Delivers and/or manages projects assigned and works with other stakeholders to achieve desired results. May act as a mentor to colleagues or may direct the work of other lower level professionals. The maj
          Global Network Slicing Market 2018, Gearing Remarkable Growth by 2026
Global Network Slicing Market is accounted for $120.34 million in 2017 and is expected to reach $726.27 million by 2026 growing at a CAGR of 22.1% during the forecast period. Some of the key factors that drive the growth of the market include growing requirement for high speed internet and large network coverage, rising demand for broadband services over mobile network, high growth rate in mobile data traffic volumes and virtualization of networks. However, lack of edge computing resources...
           Embedded Computing Needs Hardware-Based Security
Embedded systems are in a profound transition: from physically isolated, autonomous devices to Internet-connected, accessible devices. Designers are learning—often to their dismay—that the mutation requires far more than just gluing a network interface onto the bus and adding an Internet Protocol stack. In many ways, these Internet-aware designs are coming to look less like traditional embedded systems and more like miniaturized enterprise data centers.

Much data-center technology—multitasking, multiprocessing, and fast private networks, for example—is already familiar to designers of large embedded systems, albeit on a far smaller scale. But one data-center technology—system security—may prove novel. Yet the same needs that shape data-center security architectures magically appear in embedded systems–once you connect them to the Internet. Unlike compute, storage, or connectivity requirements though, the demands of security don’t diminish much when you scale the system down from a warehouse-sized data center to a connected embedded device.

Data Center Security

So what is it that data centers—and connected embedded systems–need in the way of security? First, they need to protect themselves from external attacks and internal subversion by their own applications. This means providing a protected envelope in which any attempt to read or write code or data will be authenticated before it is performed. It also means that all system code and data—for operating systems, hypervisors, management, or maintenance—must be strongly encrypted when it is in storage or in transit outside that trusted envelope.

Second, data centers must support the security needs of their applications. Apps may provide transport-layer security (TLS, once known as secure socket layer, or SSL) for their clients, or they may use public-key authentication and encryption. They may also require authenticated and encrypted inter-process communication and storage, often using symmetric-key cryptography. They will look to the data center for key management and, often, crypto algorithm acceleration.
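One way to picture this delegation of key management is a handle-based interface: the application never touches raw key bytes, only an opaque handle, while the module performs the symmetric-key operation internally. The sketch below is a simplified illustration in Python, not any real HSM API; the function names and the in-memory key store are assumptions made for the example.

```python
import hashlib
import hmac
import secrets

_KEYS = {}  # stands in for the module's protected key storage

def create_key() -> str:
    # The caller receives only an opaque handle, never the key itself.
    handle = secrets.token_hex(8)
    _KEYS[handle] = secrets.token_bytes(32)
    return handle

def sign(handle: str, message: bytes) -> bytes:
    # Symmetric-key authentication (HMAC-SHA256) performed "inside" the module.
    return hmac.new(_KEYS[handle], message, hashlib.sha256).digest()

def verify(handle: str, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking where the tags differ.
    return hmac.compare_digest(sign(handle, message), tag)
```

An application would call `create_key()` once, then pass the handle with each `sign`/`verify` request, exactly the kind of narrow request interface described above.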

There are three common elements to all of these needs. They all require a secure, accelerated environment (Figure 1) in which to execute cryptographic algorithms. They need a safe way to create, store, send, and receive cryptographic keys. And, in order to create strong keys, they need a true random number generator based on a physical source of entropy.

Figure 1. Extreme measures are necessary to protect encryption keys and codes.

Crypto algorithms need a special environment for two reasons. First, they must be kept secure from corruption and monitoring. They are the ideal point of attack in the data center. Second, they can place an unsupportable computing burden on application CPUs, driving up latency in just the places where apps are most latency-sensitive. Both of these arguments suggest a physically secure proprietary hardware accelerator.

The problem of cryptographic key management presents similar issues. Secret keys must of course be kept secret. Less obviously, public keys must be protected from tampering. If a hacker can substitute a key she created for a public key you obtained from a certificate authority, you will authenticate messages from the hacker instead of genuine messages. These concerns preclude allowing unencrypted keys to ever be in server memory or storage. In fact some experts argue that they preclude allowing even encrypted keys into shared memory.

The random number problem is more mathematical. In order to generate a new key, you start with a random number. If the number is not truly random, but follows a statistical pattern, you have just narrowed the space in which an attacker must search to discover the key. Software random-number generators, though, can only approximate a genuinely random distribution. The poorer the approximation, the easier it will be for an attacker to find the key through directed trial and error. So ideally, you would get your random number by sampling a truly random physical process, such as delay-line jitter, RF noise or semiconductor junction noise. There is strong motivation to have a hardware-based random-number generator.
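The difference between a statistical approximation and a true entropy source can be seen in miniature below: a software PRNG seeded from a small value is fully determined by that seed, so an attacker's search space collapses to the seed space, while keys drawn from the operating system's entropy pool (which mixes physical noise sources) have no such shortcut. This is an illustrative Python sketch, not production key-generation code.

```python
import random
import secrets

def weak_key(seed: int, nbytes: int = 16) -> bytes:
    # Mersenne Twister seeded from one integer: anyone who guesses the
    # seed regenerates every "random" key byte exactly.
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(nbytes))

def strong_key(nbytes: int = 16) -> bytes:
    # secrets draws from the OS entropy pool, which is fed by physical
    # noise sources (interrupt jitter, hardware RNGs where available).
    return secrets.token_bytes(nbytes)

# The weak generator is a pure function of its seed, so a brute-force
# search over the seed space recovers the key.
assert weak_key(1234) == weak_key(1234)
```

A 32-bit seed means at most 2^32 candidate keys, a trivial search compared to the 2^128 space a truly random 16-byte key should occupy.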

The HSM

These considerations led vendors to develop, and most data centers to install, a specialized appliance called a hardware security module (HSM). In either board or box form factor, the HSM meets the requirements outlined above, with several distinctive features.

First, the HSM is physically tamper-resistant, in much the same manner as a smartcard. The package may be designed to resist penetration, voltage manipulation, thermal attacks, and even examination by x-rays or ion beams. Such events should trigger the module to delete internal memory. Ideally, the module should also block side-channel attacks such as differential power analysis.

Second, the HSM should provide proprietary hardware for crypto algorithm acceleration, key storage, and random-number generation.

Third, the HSM must have a highly restrictive, bullet-proof firewall. The device should only respond to authenticated requests for a small number of pre-defined actions, such as to encrypt or decrypt a string or to create, read, write, or apply a key. Private or secret keys should only be readable under rigorous conditions, and only in encrypted form. Two special functions, key back-up and restore (usually to a smartcard) and firmware update, must be very carefully controlled, ideally by multi-party authentication involving at least one trusted human.
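The shape of such a firewall can be sketched as a dispatcher that authenticates first, then permits only a fixed whitelist of operations. The operation names and token scheme below are invented for illustration; a real HSM firewall would sit in hardware or firmware, but the control flow is the same.

```python
from hmac import compare_digest

# The only actions the module will ever perform; everything else is
# rejected before it can touch key material.
ALLOWED = {"encrypt", "decrypt", "create_key", "apply_key"}

def dispatch(request: dict, session_token: str, expected_token: str) -> str:
    # Authenticate in constant time before even parsing the command.
    if not compare_digest(session_token, expected_token):
        raise PermissionError("unauthenticated request")
    op = request.get("op")
    if op not in ALLOWED:
        raise PermissionError(f"operation not permitted: {op!r}")
    return op  # a real module would route to the accelerator here
```

Because the permitted set is small and fixed, the same logic maps naturally onto the hardware state machine discussed later, with no general-purpose command parser to attack.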

By providing multiple levels of security, from external tamper protection to strong encryption of internal data and code, the HSM becomes so hard to hack that for most attackers it just isn’t worth the bother (Figure 2). Sadly, in practice it usually isn’t worth the bother because some other part of the data center is much more vulnerable. In any case, the HSM establishes the foundation on which the rest of the data-center security architecture is constructed.

Figure 2. Full security requires multiple layers of defenses.

Understandably, HSM vendors are uninterested in describing the architectures of their modules. But it is possible to make some generalizations about just what is in a typical box-level HSM (Figure 3).

Figure 3. A typical HSM has a relatively simple structure.

The tamper resistance functions require hardware support, including motion, capacitive, radiation, voltage, and temperature sensors. There will be a secure microcontroller, ideally with in-line encryption/decryption on the memory and I/O interfaces. It will be the job of this MCU to monitor the sensors and supervise the other functions of the HSM. It will also read some sort of analog device to get a seed for random number generation. The MCU should of course also be secure against side-channel attacks.
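The MCU's supervision duty reduces to a simple contract: poll the sensors, and on any out-of-range reading zeroize key storage before doing anything else. The following Python sketch illustrates that contract; the sensor names and threshold values are assumptions chosen for the example, not taken from any real HSM.

```python
# Hypothetical in-range windows for two tamper sensors.
THRESHOLDS = {"voltage": (2.97, 3.63), "temp_c": (-20, 85)}

def check_sensors(readings: dict) -> bool:
    # All monitored readings must fall inside their allowed window.
    return all(lo <= readings[name] <= hi
               for name, (lo, hi) in THRESHOLDS.items())

def supervise(readings: dict, key_store: bytearray) -> bool:
    if not check_sensors(readings):
        # Zeroize in place so no key bits survive the tamper event.
        for i in range(len(key_store)):
            key_store[i] = 0
        return False
    return True
```

In real silicon the zeroization would be a hardware erase of the secure memory, triggered even if the MCU itself is being glitched, but the ordering (detect, then erase, then report) is the essential point.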

In addition, there should be secure memory for key storage. Ideally this should be a custom device resistant to scanning from outside and instantly erasable when intrusion is detected. But the very large amount of memory that may be necessary for key storage and for buffers for encryption and decryption tasks in a data center may make DRAM the only practical solution, and the security features will have to be incorporated into the DIMMs.

Since the firewall is so restrictive, it can probably be implemented in a hardware state machine, relieving the MCU of some overhead and reducing the risk of a successful attack on the MCU software. And last but not least, our HSM will include a crypto algorithm accelerator. This would usually be a hardware data path optimized for the necessary encryption and authentication algorithms.

But there is a problem in that last statement. There are dozens of key-exchange, authentication, and encryption algorithms in wide use. Murphy’s Law dictates that a data center will have to support a large subset of them, plus some proprietary algorithms dreamed up by apps developers. Covering all these needs with a fixed hardware accelerator might mean either accelerating only very primitive operations, as in a large bank of multiply-accumulators, and shifting a lot of the work back onto the MCU, or else building a very complex—and very hard to verify—reprogrammable state machine. If the latter approach is taken, there will immediately be pressure from data center managers to make the accelerator more general and more accessible to users for application acceleration. HSM vendors must balance these desires against the absolute need to keep the accelerator verifiable during the design process and secure during operation. Some security experts, though, argue that user programmability and security are fundamentally incompatible. If you want the accelerator to be incorruptible, you must define and verify its functions at design time.

The custom hardware—primarily the crypto datapath—could be done in an easy ASIC, but it would require special attention to ensure that differential power attacks could glean no information from the ASIC’s supply rails, and that the circuitry was protected against voltage and temperature exploits—not unlike the precautions you would take designing a smartcard chip. With these provisions, a secure MCU core could be included in the ASIC as well, if the design team had the necessary expertise or access to appropriate intellectual property (IP). ARM, for example, is now offering a tamper-resistant line of processor IP cores based on the Cortex*-M architecture and called SecureCore. These might prove adequate if the heavy lifting of the crypto algorithms stays in the accelerator.

This custom design could also be done in an FPGA. But use of an FPGA raises some new issues. Most FPGAs are volatile and configured at power-up from an external memory. This boot process can be protected by encryption, and vendors provide for that. Also, most FPGAs have limited or no mixed-signal capabilities, so it might be impossible to integrate the range of sensor inputs required for tamper detection without external analog-to-digital converters (ADCs), which would themselves add to the attack surface and have to be protected. There are exceptions to both the need for external configuration ROM and lack of mixed-signal circuits, but the exceptions tend to be smaller devices, such as the Intel® MAX® 10 device family.

FPGAs also introduce some new opportunities. Because the accelerator datapath would be run-time reconfigurable, the crypto accelerator could be reconfigured for each algorithm family as needed, bypassing the dilemma of flexibility versus security. Additionally, there has been some work in creating entropy sources in FPGAs for use by true random number generators.

All of these implementation options raise another important question. With so many ways to implement the HSM, how can a user know how secure a particular device actually is? The answer is independent certification. The main standard used for HSMs, Federal Information Processing Standard (FIPS) 140-2, was created by the US National Institute of Standards and Technology (NIST). FIPS 140-2 defines four levels of security, ranging from just an unprotected crypto engine on the weak end to an engine and storage subsystem fully enclosed by intrusion and tamper resistant or detecting hardware on the strong end. Each individual design must be certified by a third-party lab recognized under a certification program jointly operated by NIST and Canada’s Communications Security Establishment.

HSMs may also be evaluated at the product level under the international Common Criteria for Information Technology Security Evaluation (glibly known as CC), ISO 15408. This certification process is also done by recognized third-party labs. But unlike FIPS 140-2, which evaluates the overall actual security level of the HSM, CC evaluation in effect only checks that claims submitted by the vendor are supportable. This approach, which may or may not involve actual testing of the product, has been used to, for example, get various versions of Microsoft Windows certified under the CC. So it is pretty much up to the user to determine what was actually certified, at what level, and what the implications are for their own use case.

An Embedded HSM?

The challenges that brought HSMs to the data center are now present in edge computing, with a few important differences. Embedded systems are likely to use just a few crypto algorithms compared to the plethora a data center would face. And similarly, connected embedded systems probably would need to manage far fewer keys than a data center. Both of these differences could simplify the one big problem with bringing HSMs to embedded systems.

That problem is scale. Depending on capabilities and level of security, data-center HSMs cost from hundreds to thousands of dollars. They range in size from PCIe* cards to pizza-sized boxes. For an edge-computing rack full of servers that is not a serious problem. But for a more typical embedded system, supposed to fit into a small box or onto a circuit board inside a mechanical assembly, it is a non-starter. There is a clear need for HSM technology to scale down from the pizza box to chip level without compromising functionality or security.

But is this feasible? Technically, the answer appears to be yes. As we have seen, all the functions of an HSM could in principle be absorbed into an ASIC or FPGA, with the exception of some sensors and the more mechanical elements of physical intrusion detection. And MCU vendors have already offered pieces of a full solution, including secure software-execution modes, on-chip private memories, and limited crypto accelerators. As one report observed, even ordinary smartcard hardware could be used as a reasonably secure but very limited HSM. So an embedded design team with the requisite skills and motivation should be able to produce a chip-level HSM.

But such a project would face several serious challenges. The requisite skills include secure processor and memory design, a good grasp of cryptography, and experience with physical tamper protection. That’s not a common skill set in embedded design teams. The design should get FIPS 140-2 certification. But that can be an expensive and time-consuming process, as can ISO 15408, running into the hundreds of thousands of dollars and months of delays. And all this work could only be amortized across the relatively tiny volumes of the embedded system under design.

Most serious, perhaps, would be a less tangible challenge: convincing management to take system security seriously enough to ignore halfway measures and undertake an HSM chip design. Unfortunately, there is still a great deal of magical thinking in management about the threats facing connected embedded systems, even in applications like power generation and transportation where the potential for damage is vast.

But there is another way. Perhaps it is time for a semiconductor vendor, with its far broader market and greater access to specialized expertise, to undertake a FIPS 140-2 certified HSM chip. At some point, after a few more high-profile attacks on too-important and too-vulnerable physical plants, further progress in edge computing may require it.


        Copyright © 1995-2016 Altera Corporation, 101 Innovation Drive, San Jose, California 95134, USA

           Embedded Computing on the Edge
Embedded computing has passed—more or less unscathed—through many technology shifts and marketing fashions. But the most recent—the rise of edge computing—could mean important new possibilities and challenges.

So what is edge computing (Figure 1)? The cynic might say it is just a grab for market share by giant cloud companies that have in the past struggled in the fragmented embedded market, but now see their chance. That theory goes something like this.

Figure 1. Computing at the network edge puts embedded systems in a whole new world.

With the concept of the Internet of Things came a rather naïve new notion of embedded architecture: all the embedded system’s sensors and actuators would be connected directly to the Internet—think smart wall switch and smart lightbulb—and all the computing would be done in the cloud. Naturally, this proved wildly impractical for a number of reasons, so the gurus of the IoT retreated to a more tenable position: some computing had to be local, even though the embedded system was still very much connected to the Internet.

Since the local processing would be done at the extreme periphery of the Internet, where IP connectivity ended and private industrial networks or dedicated connections began, the cloud- and network-centric folks called it edge computing. They saw the opportunity to lever their command of the cloud and network resources to redefine embedded computing as a networking application, with edge computing as its natural extension.

A less cynical and more useful view looks at edge computing as one facet of a new partitioning problem created by the concurrence of cloud computing, widespread broadband access, and some innovations in LTE cellular networks. Today, embedded systems designers must remember, from requirements definition on through the design process, that there are several very different processing sites available to them (Figure 2). There is the cloud. There is the so-called fog. And there is the edge. Partitioning tasks and data among these sites has become a skill necessary to the success of an embedded design project. If you don’t use the new computing resources wisely, you will be vulnerable to a competitor who does—not only in terms of features, performance, and cost advantages to be gained, but in consideration of the growing value of data that can be collected from embedded systems in operation.

Figure 2. Edge computing offers the choice of three different kinds of processing sites.

The Joy of Partitioning

Unfortunately, partitioning is not often a skill embedded-system designers cultivate. Traditional embedded designs employ a single processor, or at worst a multi-core SoC with an obvious division of labor amongst the cores.

But edge computing creates a new scale of difficulty. There are several different kinds of processing sites, each with quite distinct characteristics. And the connections between processors are far more complicated than the nearly transparent inter-task communications of shared-memory multicore systems. So, doing edge computing well requires a rather formal partitioning process. It begins with defining the tasks and identifying their computing, storage, bandwidth, and latency requirements. Then the process continues by characterizing the compute resources you have available, and the links between them. Finally, partitioning must map tasks onto processors and inter-task communications onto links so that the system requirements are met. This is often an iterative process that at best refines the architecture and at worst turns into a protracted, multi-party game of Whack-a-Mole. It is helpful, perhaps, to look at each of these issues: tasks, processing and storage sites, and communications links, in more detail.
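To make the mapping step concrete, here is a minimal greedy placement sketch in Python. All task names, capacities, and latency numbers are hypothetical, and a real partitioning effort would iterate over far richer cost models than this:

```python
# Greedy sketch: place each task at the first site that can hold it and
# that meets its end-to-end latency requirement (execution time plus
# link round trip). Every name and number here is illustrative.

TASKS = {                       # name: (compute demand in MIPS, max latency in ms)
    "motor_control":  (50, 2),
    "vision":         (4_000, 30),
    "wear_analytics": (20_000, 60_000),
}

SITES = {                       # name: (available MIPS, link round trip in ms)
    "edge":  (500, 0),
    "fog":   (10_000, 10),
    "cloud": (1_000_000, 80),
}

def place(tasks, sites):
    placement = {}
    for name, (mips, max_ms) in tasks.items():
        for site, (capacity, rtt_ms) in sites.items():
            exec_ms = mips / capacity   # crude execution-time estimate
            if mips <= capacity and exec_ms + rtt_ms <= max_ms:
                placement[name] = site
                break
        else:
            placement[name] = "unmappable"  # time to iterate on the architecture
    return placement
```

With these invented numbers, `place(TASKS, SITES)` yields the unsurprising split: the control loop stays at the edge, vision lands in the fog, and the analytics workload goes to the cloud.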

The Tasks

There are several categories of tasks in a traditional embedded system, and a couple of categories that have recently become important for many designs. Each category has its own characteristic needs in computing, storage, I/O bandwidth, and task latency.

In any embedded design there are supervisory and housekeeping tasks that are necessary, but are not particularly compute- or I/O- intensive, and that have no hard deadlines. This category includes most operating-system services, user interfaces, utilities, system maintenance and update, and data logging.

A second category of tasks with very different characteristics is present in most embedded designs. These tasks directly influence the physical behavior of the system, and they do have hard real-time deadlines, often because they are implementing algorithms within feedback control loops responsible for motion control or dynamic process control. Or they may be signal-processing or signal interpretation tasks that lie on a critical path to a system response, such as object recognition routines behind a camera input.

Often these tasks don’t have complex I/O needs: just a stream or two of data in and one or two out. But today these data rates can be extremely high, as in the case of multiple HD cameras on a robot or digitized radar signals coming off a target-acquisition and tracking radar. Algorithm complexity has traditionally been low, held down by the history of budget-constrained embedded designs in which a microcontroller had to implement the digital transfer function in a control loop. But as control systems adopt more modern techniques, including stochastic state estimation, model-based control, and, recently, insertion of artificial intelligence into control loops, in some designs the complexity of algorithms inside time-critical loops has exploded. As we will see, this explosion scatters shrapnel over a wide area.

The most important issue for all these time-critical tasks is that the overall delay from sensor or control input to actuator response stay below a set maximum latency, and often that it lie within a narrow jitter window. That makes partitioning of these tasks particularly interesting, because it forces designers to consider both execution time—fully laden with indeterminacies from memory-access and storage-access delays—and communications latencies together. The fastest place to execute a complex algorithm may be unacceptably far from the system.
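One way to make that budgeting concrete is to sum per-stage best- and worst-case delays along the sensor-to-actuator path and compare them against the deadline and the jitter window. The sketch below is illustrative only; the stage delays are invented numbers:

```python
# Worst-case latency and jitter for a sensor-to-actuator path, given a
# list of (min_ms, max_ms) delay bounds for each stage on the path.
def loop_budget(stages):
    best = sum(lo for lo, hi in stages)
    worst = sum(hi for lo, hi in stages)
    return worst, worst - best      # worst-case latency, jitter

def acceptable(stages, deadline_ms, jitter_window_ms):
    worst, jitter = loop_budget(stages)
    return worst <= deadline_ms and jitter <= jitter_window_ms
```

For example, stages of (0.1, 0.3), (1.0, 4.0), and (0.2, 0.2) milliseconds give a worst case of 4.5 ms and a jitter of 3.2 ms, which meets a 5 ms deadline with a 4 ms jitter window but fails a 2 ms window.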

We also need to recognize a third category of tasks. These have appeared fairly recently for many designers, and differ from both supervisory and real-time tasks. They arise from the intrusion of three new areas of concern: machine learning, functional safety, and cyber security. The distinguishing characteristic of these tasks is that, while each can be performed in miniature with very modest demands on the system, each can quickly develop an enormous appetite for computing and memory resources. And, most unfortunately, each can end up inside delay-sensitive control loops, posing very tricky challenges for the design team.

Machine learning is a good case in point. Relatively simple deep-learning programs are already being used as supervisory tasks to, for instance, examine sensor data to detect progressive wear on machinery or signs of impending failure. Such tasks normally run in the cloud without any real-time constraints, which is just as well, as they do best with access to huge volumes of data. At the other extreme, trained networks can be ported to quite compact blocks of code, especially with the use of small hardware accelerators, making it possible to use a neural network inside a smart phone. But a deep-learning inference engine trained to detect, say, excessive vibration in a cutting tool during a cut or the intrusion of an unidentified object into a robot’s planned trajectory—either of which could require immediate intervention—could end up being both computationally intensive and on a time-critical path.

Similarly, for functional safety and system security, simple rule-based safety checks or authentication and encryption tasks may present few problems for the system design. But in these areas, simple often means weak. Systems that must operate in an unfamiliar environment or must actively repel novel intrusion attempts may require very complex algorithms, including machine learning, with very fast response times. Intrusion detection, for instance, is far more valuable as a means of prevention than as a forensic tool.

Resources

Traditionally, the computing and storage resources available to an embedded system designer were easy to list. There were microcontroller chips, single-board computers based on commercial microprocessors, and in some cases boards or boxes using digital signal processing hardware of one sort or another. Any of these could have external memory, and most could attach, with the aid of an operating system, mass storage ranging from a thumb drive to a RAID disk array. And these resources were all in one place: they were physically part of the system, directly connected to sensors, actuators, and maybe to an industrial network.

But add Internet connectivity, and this simple picture snaps out of focus. The original system is now just the network edge. And in addition to edge computing, there are two new locations where there may be important computing resources: the cloud, and what Cisco and some others are calling the fog.

The edge remains much as it has been, except of course that everything is growing in power. In the shadow of the massive market for smart-phone SoCs, microcontrollers have morphed into low-cost SoCs too, often with multiple 32-bit CPU cores, extensive caches, and dedicated functional IP suited to a particular range of applications. Board-level computers have exploited the monotonically growing power of personal computer CPU chips and the growth in solid-state storage. And the commoditization of servers for the world’s data centers has put even racks of data-center-class servers within the reach of well-funded edge computing sites, if the sites can provide the necessary space, power, and cooling.

Recently, with the advent of more demanding algorithms, hardware accelerators have become important options for edge computing as well. FPGAs have long been used to accelerate signal-processing and numerically intensive transfer functions. Today, with effective high-level design tools they have broadened their use beyond these applications into just about anything that can benefit from massively parallel or, more importantly, deeply pipelined execution. GPUs have applications in massively data-parallel tasks such as vision processing and neural network training. And as soon as an algorithm becomes stable and widely used enough to have good library support—machine vision, location and mapping, security, and deep learning are examples—someone will start work on an ASIC to accelerate it.

The cloud, of course, is a profoundly different environment: a world of essentially infinite numbers of big x86 servers and storage resources. Recently, hardware accelerators from all three families—FPGAs, GPUs, and ASICs—have begun appearing in the cloud as well. All these resources are available for the embedded system end-user to rent on an as-used basis.

The important questions in the cloud are not about how many resources are available—there are more than you need—but about terms and conditions. Will your workload run continuously, and if not, what is the activation latency? What guarantees of performance and availability are there? What will this cost the end user? And what happens if the cloud platform provider—who in specialized application areas is often not a giant data-center owner, but a small company that itself leases or rents the cloud resources—suffers a change in situation? These sorts of questions are generally not familiar to embedded-system developers, nor to their customers.

Recently there has been discussion of yet another possible processing site: the so-called fog. The fog is located somewhere between the edge and the cloud, both physically and in terms of its characteristics.

As network operators and wireless service providers turn from old dedicated switching hardware to software on servers, increasingly, Internet connections from the edge will run not through racks of networking hardware, but through data centers. For edge systems relying on cloud computing, this raises an important question: why send your inter-task communications through one data center just to get it to another one? It may be that the networking data center can provide all the resources your task needs without having to go all the way to a cloud service provider (CSP). Or it may be that a service provider can offer hardware or software packages to allow some processing in your edge-computing system, or in an aggregation node near your system, before having to make the jump to a central facility. At the very least you would have one less vendor to deal with. And you might also have less latency and uncertainty introduced by Internet connections. Thus, you can think of fog computing as a cloud computing service spread across the network and into the edge, with all the advantages and questions we have just discussed.

Connections

When all embedded computing is local, inter-task communications can almost be neglected. There are situations where multiple tasks share a critical resource, like a message-passing utility in an operating system, and on extremely critical timing paths you must be aware of the uncertainty in the delay in getting a message between tasks. But for most situations, how long it takes to trigger a task and get data to it is a secondary concern. Most designs confine real-time tasks to a subset of the system where they have a nearly deterministic environment, and focus their timing analyses there.

But when you partition a system between edge, fog, and cloud resources, the kinds of connections between those three environments, their delay characteristics, and their reliability all become important system issues. They may limit where you can place particular tasks. And they may require—by imposing timing uncertainty and the possibility of non-delivery on inter-task messages—the use of more complex control algorithms that can tolerate such surprises.

So what are the connections? We have to look at two different situations: when the edge hardware is connected to an internet service provider (ISP) through copper or fiber-optics (or a blend of the two), and when the connection is wireless (Figure 3).

Figure 3. Tasks can be categorized by computational complexity and latency needs.

The two situations have one thing in common. Unless your system will have a dedicated leased virtual channel to a cloud or fog service provider, part of the connection will be over the public Internet. That part could be from your ISP’s switch plant to the CSP’s data center, or it could be from a wireless operator’s central office to the CSP’s data center.

That Internet connection has two unfortunate characteristics, from this point of view. First, it is a packet-switching network in which different packets may take very different routes, with very different latencies. So, it is impossible to predict more than statistically what the transmission delay between two points will be. Second, Internet Protocol by itself offers only best-effort, not guaranteed, delivery. So, a system that relies on cloud tasks must tolerate some packets simply vanishing.
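A system that relies on cloud tasks therefore needs an explicit policy for late and lost packets. The sketch below, with invented names and numbers, shows one simple policy: apply a cloud-computed value only if it arrives within its deadline, and otherwise hold the last good value:

```python
# Apply a remotely computed setpoint only when it arrives in time;
# otherwise fall back to the last value that did. Purely illustrative.
def run_loop(updates, deadline_ms=20, fallback=0.0):
    applied, last_good = [], fallback
    for latency_ms, value in updates:     # latency_ms is None if the packet was lost
        if latency_ms is not None and latency_ms <= deadline_ms:
            last_good = value
        applied.append(last_good)
    return applied
```

Given updates of (5 ms, 1.0), (lost, 2.0), (50 ms, 3.0), and (10 ms, 4.0), the loop applies 1.0 for the first three steps and 4.0 for the last, riding out both the lost packet and the late one.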

An additional point worth considering is that so-called data locality laws—which limit or prohibit transmission of data outside the country of origin—are spreading around the world. Inside the European Union, for instance, it is currently illegal to transmit data containing personal information across the borders of a number of member countries, even to other EU members. And in China, which uses locality rules for both privacy and industrial policy purposes, it is illegal to transmit virtually any sort of data to any destination outside the country. So, designers must ask whether their edge system will be able to exchange data with the cloud legally, given the rapidly evolving country-by-country legislation.

Avoiding these limitations is one of the potential advantages of the fog-computing concept. By not traversing the public network, systems relying on ISP or wireless-carrier computing resources or local edge resources can exploit additional provisions to reduce the uncertainty in connection delays.

But messages still have to get from your edge system to the service provider’s aggregation hardware or data center. For ISPs, that will mean a physical connection, typically using Internet Protocol over fiber or hybrid copper/fiber connections, often arranged in a tree structure. Such connections allow for provisioning of fog computing nodes at points where branches intersect. But as any cable TV viewer can attest, they also allow for congestion at nodes or on branches to create great uncertainties in available bandwidth and latency. Suspension of net neutrality in the US has added a further uncertainty, allowing carriers to offer different levels of service to traffic from different sources, and to charge for quality-of-service guarantees.

If the connection is wireless, as we are assured many will be once 5G is deployed, the uncertainties multiply. A 5G link will connect your edge system through multiple parallel RF channels and multiple antennas to one or more base stations. The base stations may be anything from a small cell with minimal hardware to a large local processing site with, again, the ability to offer fog-computing resources, to a remote radio transceiver that relies on a central data center for all its processing. In at least the first two cases, there will be a separate backhaul network, usually either fiber or microwave, connecting the base station to the service provider’s central data center.

The challenges include, first, that latency will depend on what kind of base stations you are working with—something often completely beyond your control. Second, changes in RF transmission characteristics along the mostly line-of-sight paths can be caused by obstacles, multipath shifts, vegetation, and even weather. If the channel deteriorates, retry rates will go up, and at some point the base station and your edge system will negotiate a new data rate, or roll the connection over to a different base station. So even for a fixed client system, the characteristics of the connection may change significantly over time, sometimes quite rapidly.

Partitioning

Connectivity opens a new world for the embedded-system designer, offering amounts of computing power and storage inconceivable in local platforms. But it creates a partitioning problem: an iterative process of locating tasks where they have the resources they need, but with the latencies, predictability, and reliability they require.

For many tasks location is obvious. Big-data analyses that comb terabytes of data to predict maintenance needs or extract valuable conclusions about the user can go in the cloud. So can compute-intensive real-time tasks when acceptable latency is long, and the occasional lost message is survivable or handled in a higher-level networking protocol. A smart speaker in your kitchen can always reply “Let me think on that a moment,” or “Sorry, what?”

Critical, high-frequency control loops must stay at or very near the edge. Conventional control algorithms can’t tolerate the delay and uncertainty of any other choice.

But what if there is a conflict: a task too big for the edge resources, but too time-sensitive to be located across the Internet? Fog computing may solve some of these dilemmas. Others may require you to place more resources in your system.

Just how far today’s technology has enriched the choices was illustrated recently by a series of Microsoft announcements. Primarily involved in edge computing as a CSP, Microsoft has for some time offered the Azure Stack—essentially, an instance of their Azure cloud platform—to run on servers on the customer premises. Just recently, the company enriched this offering with two new options: FPGA acceleration, including Microsoft’s Project Brainwave machine-learning acceleration, for Azure Stack installations, and Azure Sphere, a way of encapsulating Azure’s security provisions in an approved microcontroller, secure operating system, and coordinated cloud service for use at the edge. Similarly, Intel recently announced the OpenVINO™ toolkit, a platform for implementing vision-processing and machine intelligence algorithms at the edge, relying on CPUs with optional support from FPGAs or vision-processing ASICs. Such fog-oriented provisions could allow embedded-system designers to simply incorporate cloud-oriented tasks into hardware within the confines of their own systems, eliminating the communications considerations and making ideas like deep-learning networks within control loops far more feasible.

In other cases, designers may simply have to refactor critical tasks into time-critical and time-tolerant portions. Or they may have to replace tried and true control algorithms with far more complex approaches that can tolerate the delay and uncertainty of communications links. For example, a complex model-based control algorithm could be moved to the cloud, and used to monitor and adjust a much simpler control loop that is running locally at the edge.

Life at the edge, then, is full of opportunities and complexities. It offers a range of computing and storage resources, and hence of algorithms, never before available to most embedded systems. But it demands a new level of analysis and partitioning, and it beckons the system designer into realms of advanced system control that go far beyond traditional PID control loops. Competitive pressures will force many embedded systems into this new territory, so it is best to get ahead of the curve.


Copyright © 1995-2016 Altera Corporation, 101 Innovation Drive, San Jose, California 95134, USA
Understanding Neuromorphic Computing

The phrase neuromorphic computing has a long history, dating back at least to the 1980s, when legendary Caltech researcher Carver Mead proposed designing ICs to mimic the organization of living neuron cells. But recently the term has taken on a much more specific meaning, to denote a branch of neural network research that has diverged significantly from the orthodoxy of convolutional deep-learning networks. So, what exactly is neuromorphic computing now? And does it have a future of important applications, or is it just another fertile ground for sowing thesis projects?

A Matter of Definition

As the name implies—if you read Greek, anyway—neuromorphic networks model themselves closely on biological nerve cells, or neurons. This is quite unlike modern deep-learning networks, so it is worthwhile to take a quick look at biological neurons.

Living nerve cells have four major components (Figure 1). Electrochemical pulses enter the cell through tiny interface points called synapses. The synapses are scattered over the surfaces of tree-root-like fibers called dendrites, which reach out into the surrounding nerve tissue, gather pulses from their synapses, and conduct the pulses back to the heart of the neuron, the cell body.

Figure 1. A schematic diagram shows synapses, dendrites, the cell body, and an axon.

In the cell body are structures that transform the many pulse trains arriving over the dendrites into an output pulse train. At least 20 different transform types have been identified in nature, ranging from simple logic-like functions to some rather sophisticated transforms. One of the most interesting for researchers—and the most widely used in neuromorphic computing—is the leaky integrator: a function that adds up pulses as they arrive, while constantly decrementing the sum at a fixed rate. If the sum exceeds a threshold, the cell body outputs a pulse.

Synapses, dendrites, and cell bodies are three of the four components. The fourth one is the axon: the tree-like fiber that conducts output pulses from the cell body into the nervous tissue, ending at synapses on other cells’ dendrites or on muscle or organ synapses.

So neuromorphic computers use architectural structures modeled on neurons. But there are many different implementation approaches, ranging from pure software simulations to dedicated ICs. The best way to define the field as it exists today may be to contrast it against traditional neural networks. Both are networks in which relatively simple computations occur at the nodes. But beyond that generalization there are many important differences.

Perhaps the most fundamental difference is in signaling. The nodes in traditional neural networks communicate by sending numbers across the network, usually represented as either floating-point or integer digital quantities. Neuromorphic nodes send pulses, or sometimes strings of pulses, in which timing and frequency carry the information—in other words, forms of pulse code modulation. This is similar to what we observe in biological nervous systems.

A second important difference is in the function performed in each node. Conventional network nodes do arithmetic: they multiply the numbers arriving on each of their inputs by predetermined weights and add up the products. Mathematicians see this as a simple dot product of the input vector and the weight vector. The resulting sum may then be subjected to some non-linear function such as normalization, min or max setting, or whatever other creative impulse moves the network designer. The number is then sent on to the next layer in the network.
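In code, a conventional node is nearly a one-liner: a dot product followed by a nonlinearity. The sketch below uses ReLU as the nonlinearity, which is a common but by no means universal choice:

```python
# Conventional neural-network node: dot product of the input vector and
# the weight vector, passed through a nonlinearity (here ReLU).
def node(inputs, weights, f=lambda x: max(0.0, x)):
    return f(sum(i * w for i, w in zip(inputs, weights)))
```

For example, `node([1.0, 2.0], [0.5, -1.0])` computes the dot product -1.5, which ReLU clamps to 0.0, while `node([1.0, 2.0], [1.0, 1.0])` returns 3.0.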

In contrast, neuromorphic nodes, like neuron cell bodies, can perform a large array of pulse-oriented functions. Most commonly used, as we have mentioned, is the leaky integrate and spike function, but various designers have implemented many others. Like real neurons, neuromorphic nodes usually have many input connections feeding in, but usually only one output. In reference to living cells, neuromorphic inputs are often called synapses or dendrites, the node may be called a neuron, and the output tree an axon.
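The leaky integrate-and-spike function is easy to sketch in discrete time. The leak rate, threshold, and reset-on-fire convention below are illustrative choices, not a model of any particular chip:

```python
# Leaky integrate-and-spike: accumulate incoming pulses, leak a fixed
# amount each time step, and emit a pulse when the sum crosses threshold.
def lif(pulses, leak=0.5, threshold=2.0):
    total, out = 0.0, []
    for p in pulses:                  # p = number of pulses arriving this step
        total = max(0.0, total - leak) + p
        if total >= threshold:
            out.append(1)
            total = 0.0               # reset after firing (one common convention)
        else:
            out.append(0)
    return out
```

Feeding it the input train [1, 1, 1, 0, 0, 2] produces output spikes at the third and sixth steps: [0, 0, 1, 0, 0, 1]. Note that timing carries the information; no multi-bit numbers cross the connection.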

The topologies of conventional and neuromorphic networks also differ significantly. Conventional deep-learning networks comprise strictly cascaded layers of computing nodes. The outputs from one layer of nodes go only into selected inputs of the next layer (Figure 2). In inference mode—when the network is already trained and is in use—signals flow only in one direction. (During training, signals flow in both directions, as we will discuss in a moment.)

Figure 2. The conventional deep-learning network is a cascaded series of computing nodes.

There are no such restrictions on the topology of neuromorphic networks. As in real nervous tissue, a neuromorphic node may get inputs from any other node, and its axon may extend to anywhere (Figure 3). Thus, configurations such as feedback loops and delay-line memories, anathema in conventional neural networks, are in principle quite acceptable in the neuromorphic field. This allows the topologies of neuromorphic networks to extend well beyond what can be done in conventional networks, into areas of research such as long-short term memory networks and other recurrent networks.

Figure 3. Connections between living neurons can be complex and three-dimensional.

 

Implementation

Carver Mead may have dreamt of implementing the structure of a neuron in silicon, but developers of today’s deep-learning networks have abandoned that idea for a much simpler approach. Modern, conventional neural networks are in effect software simulations—computer programs that perform the matrix arithmetic defined by the neural network architecture. The network is just a graphic representation of a large linear algebra computation.

Given the inefficiencies of simulation, developers have been quick to adopt optimizations to reduce the computing load, and hardware accelerators to speed execution. Data compression, use of shorter number formats for the weights and outputs, and use of sparse-matrix algorithms have all been applied. GPUs, clever arrangements of multiply-accumulator arrays, and FPGAs have been used as accelerators. An interesting recent trend has been to explore FPGAs or ASICs organized as data-flow engines with embedded RAM, in an effort to reduce the massive memory traffic loads that can form around the accelerators—in effect, extracting a data-flow graph from the network and encoding it in silicon.

In contrast, silicon implementations of neuromorphic processors tend to resemble architecturally the biological neurons they consciously mimic, with identifiable hardware blocks corresponding to synapses, dendrites, cell bodies, and axons. The implementations are usually, but not always, digital, allowing them to run much faster than organic neurons or analog emulations, but they retain the pulsed operation of the biological cells and are often event-driven, offering the opportunity for huge energy savings compared to software or to synchronous arithmetic circuits.

Some Examples

The grandfather of neuromorphic chips is IBM’s TrueNorth, a 2014 spin-off from the US DARPA research program Systems of Neuromorphic Adaptive Plastic Scalable Electronics. (Now that is really working for an acronym.) The heart of TrueNorth is a digital core that is replicated within a network-on-chip interconnect grid. The core contains five key blocks:

  1. The neuron: a time-multiplexed pulse-train engine that implements the cell-body functions for a group of 256 virtual neurons.
  2. A local 256 x 410-bit SRAM which serves as a crossbar connecting synapses to neurons and axons to synapses, and which stores neuron state and parameters.
  3. A scheduler that manages sequencing and processing of pulse packets.
  4. A router that manages transmission of pulse packets between cores.
  5. A controller that sequences operations within the core.

The TrueNorth chip includes 4,096 such cores.

The components in the core cooperate to perform a hardware emulation of neuron activity. Pulses move through the crossbar switch from axons to synapses to the neuron processor, and are transformed for each virtual neuron. Pulse trains pass through the routers to and from other cores as encoded packets. Since transforms like leaky integration depend on arrival time, the supervisory hardware in the cores maintains a time-stamping mechanism so that a core can determine the intended arrival time of each packet.

Like many other neuromorphic implementations, TrueNorth’s main neuron function is a leaky pulse integrator, but designers have added a number of other functions, selectable via control bits in the local SRAM. As an exercise, IBM designers showed that their neuron was sufficiently flexible to mimic 20 different functions that have been observed in living neurons.

Learning

So far we have discussed mostly behavior of conventional and neuromorphic networks that have already been fully trained. But of course that is only part of the story. How the networks learn defines another important distinction between conventional and neuromorphic networks. And that subject will introduce another IC example.

Let’s start with networks of living neurons. Learning in these living organisms is not well understood, but a few of the things we do know are relevant here. First, there are two separate aspects to learning: real nerve cells are able to reach out and establish new connections, in effect rewiring the network as they learn. And they also have a wide variety of functions available in cell bodies. So, learning can involve both changing connections and changing functions. Second, real nervous systems learn very quickly. Humans can learn to recognize a new face or a new abstract symbol from just one or two instances. Conventional convolutional deep-learning networks might require tens of thousands of training examples to master the new item.

This observation suggests, correctly, that training of deep-learning networks is profoundly different from biological learning. To begin with, the two aspects of learning are separated. Designers specify a topology before training, and it does not change unless the network requires redesign. Only the weights applied to the inputs at each node are altered during training.

The process itself is also different. The implementation of the network that gets trained is generally a software simulation running on server CPUs, often with graphics processing unit (GPU) acceleration. Trainers must assemble huge numbers—often tens or hundreds of thousands—of input data sets, and label each one with the correct classification values. Then one by one, trainers feed an input data set into the simulation’s inputs, and simultaneously input the labels. The software compares the output of the network to the correct classification and adjusts the weights of the final stage to bring the output closer to the right answers, generally using a gradient descent algorithm. Then the software moves back one stage, and repeats the process, and so on, until all the weights in the network have been adjusted to be a bit closer to yielding the correct classification for this example. Then on to the next example. Obviously this is time- and compute-intensive.
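At the heart of that procedure is a simple gradient-descent weight update. The toy version below trains a single linear node against a squared-error loss; real training repeats this across every layer (that is, backpropagation) and over many thousands of labeled examples:

```python
# One gradient-descent step for a single linear node with squared error.
# Illustrative only: deep-learning frameworks apply this layer by layer.
def train_step(weights, inputs, label, lr=0.1):
    y = sum(w * x for w, x in zip(weights, inputs))
    err = y - label                     # d(loss)/dy for loss = 0.5 * err**2
    return [w - lr * err * x for w, x in zip(weights, inputs)]
```

Iterating this step on even one labeled example drives the node's output steadily toward the label; the full algorithm does the same while sweeping backward through the network's layers.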

Once the network has been trained and tested—there is no guarantee that training on a given network and set of examples will be successful—designers extract the weights from the trained network, optimize the computations, and port the topology and weights to an entirely different piece of software with a quite different sort of hardware acceleration, this time optimized for inference. This is how a convolutional network that required days of training in a GPU-accelerated cloud can end up running in a smart phone.

Neuromorphic Learning

Learning in TrueNorth is quite a different matter. The system includes its own programming language that allows users to set up the parameters in each core’s local SRAM, defining synapses within the core, selecting weights to apply to them, and choosing the functions for the virtual neurons, as well as setting up the routing table for connections with other cores. There is no learning mode per se, but apparently the programming environment can be set up so that TrueNorth cores can modify their own SRAMs, allowing for experiments with a wide variety of learning models.

That brings us to one more example, the Loihi chip described this year by Intel. Superficially, Loihi resembles TrueNorth rather closely. The chip is built as an orthogonal array of cores that contain digital emulations of cell-body functions and SRAM-based synaptic connection tables. Both use digital pulses to carry information. But that is about the end of the similarity.

Instead of one time-multiplexed neuron processor in each core, each Loihi core contains 1,024 simple pulse processors, preconnected in what Intel describes as tree-like groups. Communications between these little pulse processors are said to be entirely asynchronous. The processors themselves perform leaky integration via a digital state machine. Synapse weights vary the influence of each synapse on the neuron body. Connectivity is hierarchical, with direct tree connections within a group, links between groups within a core, and a mesh packet network connecting the 128 cores on the die.
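The leaky-integration behavior of these pulse processors can be illustrated with a small simulation. This is a generic leaky integrate-and-fire model, not Loihi's actual state machine; the leak factor, threshold, and synapse weights below are invented for the example.

```python
class LIFNeuron:
    """Digital leaky integrate-and-fire emulation (illustrative parameters)."""
    def __init__(self, leak=0.9, threshold=4.0):
        self.potential = 0.0
        self.leak = leak            # fraction of potential retained each tick
        self.threshold = threshold  # firing threshold

    def tick(self, weighted_spikes):
        # Leak the stored potential, then integrate weighted synaptic pulses.
        self.potential = self.potential * self.leak + sum(weighted_spikes)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # emit an output pulse
        return 0

neuron = LIFNeuron()
# Three synapses with weights 2.0, 1.5, 1.0; drive them over several ticks.
weights = [2.0, 1.5, 1.0]
spike_train = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]]
out = [neuron.tick([w * s for w, s in zip(weights, spikes)])
       for spikes in spike_train]
print(out)  # → [0, 1, 0, 1]
```

The synapse weights vary each input's influence on the cell body, exactly as described above; in hardware the same arithmetic is done by a compact digital state machine rather than floating-point code.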

The largest difference between Loihi and TrueNorth is in learning. Each Loihi core includes a microcoded Learning Engine that captures trace data from each neuron’s synaptic inputs and axon outputs and can modify the synaptic weights during operation. The fact that the engine is programmable allows users to explore different kinds of learning, including unsupervised approaches, where the network learns without requiring tagged examples.
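Because the Learning Engine is microcoded, the actual update rule is up to the user. One classic candidate is spike-timing-dependent plasticity (STDP), which strengthens a synapse when the pre-synaptic spike precedes the post-synaptic one and weakens it otherwise. The sketch below shows the general shape of such a rule; the constants are invented, and this is not Intel's microcode.

```python
import math

def stdp_update(weight, pre_time, post_time, rate=0.1, tau=20.0):
    """Generic STDP rule (illustration only): potentiate causal pairings,
    depress anti-causal ones, with exponential decay in the time gap."""
    dt = post_time - pre_time
    if dt > 0:      # pre fired before post: strengthen the synapse
        return weight + rate * math.exp(-dt / tau)
    else:           # post fired before pre: weaken the synapse
        return weight - rate * math.exp(dt / tau)

w = 0.5
w_pot = stdp_update(w, pre_time=10, post_time=15)  # causal pairing
w_dep = stdp_update(w, pre_time=15, post_time=10)  # anti-causal pairing
print(w_pot > 0.5, w_dep < 0.5)  # → True True
```

A rule like this needs only the spike-time traces the Learning Engine already captures, and it requires no tagged examples, which is what makes it a candidate for unsupervised learning.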

Where are the Apps?

We have only described two digital implementations of neuromorphic networks. There are many more examples, both digital and mixed-signal, as well as some rather speculative projects such as an MIT analog device using crystalline silicon-germanium to implement synapses. But are these devices only research aids and curiosities, or will they have practical applications? After all, conventional deep-learning networks, for all their training costs and—probably under-appreciated—limitations, are quite good at some kinds of pattern recognition.

It is just too early to say. Critics point out that in the four years TrueNorth has been available to researchers, the most impressive demo has been a pattern recognition implementation that was less effective than convolutional neural networks, and to make things even less impressive, was constructed by emulating a conventional neural network in the TrueNorth architecture. As for the other implementations, some were intended only for neurological research, some have been little-used, and some, like Loihi, are too recent to have been explored much.

But neuromorphic networks offer two tantalizing promises. First, because they are pulse-driven, potentially asynchronous, and highly parallel, they could be a gateway to an entirely new way of computing at high performance and very low energy. Second, they could be the best vehicle for developing unsupervised learning—a goal that may prove necessary for key applications like autonomous vehicles, security, and natural-language comprehension. Succeed or fail, they will create a lot more thesis projects.


       Contact Us  |  New User  |  Site Map  |  Privacy  |  Legal Notice 
        Copyright © 1995-2016 Altera Corporation, 101 Innovation Drive, San Jose, California 95134, USA
Update feed preferences

Opening Windows into SoC Hardware

There is a long tradition in system design for embedding special hardware to observe and manipulate the state of the system. From the beginning of digital computing, central processors have had hardware to support single-step, loading and examination of registers and memory, and setting of breakpoints for software debugging. Much later chronologically but early in their own history, integrated circuits began to include scan hardware for manufacturing test. FPGAs followed this idea with built-in logic analysis capability, allowing designers to examine their circuits in great detail.

As SoCs became more complex and inclusive, it became impractical or impossible to determine what was going on inside the system by merely observing the outside (Figure 1). So, designers experimented with building stimulus generators and checkers into their chip designs—in effect, assertions in silicon. This has become a necessary practice in some kinds of circuits such as high-speed serial transceivers, and has wider application when the SoC is implemented in an FPGA, as the specialized hardware can be removed from the design when it is no longer needed.

Figure 1. As systems get more complex, it becomes impossible to gauge internal functions from the outside.

Today, the practice is taking on new directions. System designers are grappling with challenges quite different from block-level silicon bring-up or embedded software development. Four areas in particular are demanding new attention: system integration, run-time performance optimization, system security, and functional safety. Each is making its own demands on the observability and controllability of systems increasingly locked within the confines of an SoC die. And designers are responding by embedding more dedicated hardware to open windows of observability into the chips.

The Integration Challenge

Once, most of the effort in SoC verification was at the block level. System architectures tended to be simple and CPU-centric (Figure 2), with the blocks snapped into well-defined receptacles on an industry-standard bus. Once you had the blocks working, most of the work was done.

Figure 2. In a traditional CPU-centric SoC, one debug core can see almost everything.

But today’s SoCs have turned that situation end-for-end. SoCs have several or many CPU cores with no one clear master, so the old CPU-centric organization is gone. Other blocks on the chip may be processing data and sharing memory, so even visibility into every CPU core on the die is no guarantee of success (Figure 3). Many levels of caches may be present, some or all participating in a coherency protocol, obscuring just what is actually going on with the chip, and what data is actually current. Peripherals may have direct memory access (DMA). And the old CPU-controlled synchronous bus has given way to layers of switched busses or to a complex, globally asynchronous network-on-chip (NoC). Further, many of the blocks on the chip will be allegedly pre-verified intellectual property (IP), often from third parties who are reticent about revealing design details.

Figure 3. In complex SoCs, isolated debug cores have limited visibility into the chip.

“Things are reversed now,” says Ultra SoC CEO Rupert Baines. “IP-level design tools and verification flows are excellent. There’s a very high probability the IP blocks you use will work as their designers intended. But systemic complexity has grown so that the challenge now is interactions among the blocks.”

These interactions can cause fatal system errors even when all the individual blocks are working correctly. And they can be astonishingly subtle. Caches can thrash due to interactions between tasks on different CPUs. Minor differences in the sequence of events on different parts of the SoC can cause huge differences in task latencies: two processors deadlock; a high-priority interrupt service routine on one CPU calls a subroutine on another CPU that happens to be running at a lower priority; or a seemingly minor firmware change alters the order in which commands arrive at a shared DRAM controller, triggering a string of page misses and slashing effective memory bandwidth.

Against these sorts of time-dependent interactions even the best isolated CPU debug tools and bus monitors can be ineffectual, failing even to isolate the failure, never mind identifying a root cause. You need to be able to capture the full state of the system, set a trigger on a state—or more likely, a sequence of states—that defines the failure’s symptom, and then examine a trace buffer holding state history up to the trigger event. Often you may need to keep the system running at full speed during this process. In other words, you need the facilities of the best CPU hardware debug cores, but for the entire SoC, not just one core at a time.
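In software terms, what is being asked for looks something like the following: a rolling trace buffer of system state plus a trigger defined over a *sequence* of states. The state encoding, trigger, and buffer depth below are invented for illustration.

```python
from collections import deque

class SystemTraceMonitor:
    """Sketch of chip-wide trace capture: keep a rolling history of
    system states and freeze the buffer on a trigger sequence."""
    def __init__(self, trigger_sequence, depth=8):
        self.trigger = list(trigger_sequence)
        self.history = deque(maxlen=depth)   # circular trace buffer
        self.triggered = False

    def capture(self, state):
        if self.triggered:
            return                           # buffer frozen after trigger
        self.history.append(state)
        tail = list(self.history)[-len(self.trigger):]
        if tail == self.trigger:
            self.triggered = True

# Hypothetical symptom: CPU0 stalls immediately after a DMA burst on the NoC.
mon = SystemTraceMonitor([("noc", "dma_burst"), ("cpu0", "stall")], depth=4)
for state in [("cpu0", "run"), ("noc", "idle"), ("noc", "dma_burst"),
              ("cpu0", "stall"), ("cpu0", "run")]:
    mon.capture(state)
print(mon.triggered, list(mon.history))
```

After the trigger fires, the frozen buffer holds the state history leading up to the symptom, which is exactly what an integrator needs to work backwards toward a root cause.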

What we are implying is, in essence, a custom logic analyzer built into the SoC, with state-monitoring or estimating hardware in each functional block of the chip. We are also implying a chip-wide interconnect network capable of bringing the data from these state detectors together, aligning them chronologically, and setting complex triggers on the resulting picture of the system state. Finally, we are suggesting a user interface that makes all of this intelligible to human users.

What most system designers have instead, Baines says, is an often-incomplete collection of siloed tools based on individual blocks and varying widely in quality. CPU IP vendors generally provide a debug module for software developers, allowing single-step, breakpoint, trace, and dump via JTAG or a dedicated debug port. Such modules vary in quality from real-time and comprehensive to ad-hoc or absent altogether. They often have limited or no ability to see state outside the CPU core without considerable software intervention.

Once you get beyond the CPU cores, the situation even for sensing state gets more challenging. Vendors of DSP cores or dedicated accelerators, as for cryptography, video CODECs, vision processing, or neural-network inference, may feel that access to their debug facilities, or even knowledge of the state of their engine, is too sensitive to share with any but the biggest customers. These blocks may be black boxes. Understanding the state of a GPU may be possible, but so difficult and code-dependent as to render it a black box too, for all but skilled GPU programmers.

In-house IP, especially if reused from a previous project, can be even more challenging. If, for instance, a custom dataflow machine ever had a real-time debug module, and if it were adequately documented, it still might not suit a new application. Reuse guidelines aren’t always clear about reusable debug facilities.

Beyond this, there are utility blocks in SoCs—NoC switches and gaskets, DRAM controllers and network interfaces, DMA and interrupt controllers—not always intended to offer much visibility to system developers. Yet knowledge of their state may be vital to system integrators. Altogether, the problem of capturing the state of a full SoC, while technically possible, may be a design problem not a lot smaller than the original design itself.

Field Optimization

Once the system is working, the need for deep visibility for system integration is—one hopes—over. But a new set of needs may arise: not for debug access, but for system optimization.

Certainly in the data center world, where workloads can change in milliseconds, it is clear that SoCs can benefit from continuous retuning. There are gross adjustments like how many cores are assigned to a task, which tasks share which cores, and how hardware accelerators are assigned. And there are finer adjustments, such as DRAM allocation, and even finer tweaks like interrupt priorities, client priorities in multi-client DRAM controllers, and the marvelous range of adjustments available in NoC switches.

As embedded systems move from dedicated, single-CPU architectures to dynamically allocated multi-core designs, many of these same considerations begin to apply. One might argue that the workload for an embedded system is known in detail at design time, and that is when the chip optimizations should be done. Often this is still true. But increasingly, the shape of an embedded workload is not obvious until after system integration—particularly with highly data-dependent tasks like neural-network inference. So embedded designs, like data-center servers, may need post-integration tuning.

And this sort of tuning also requires deep visibility into the SoC, but a different kind of visibility than debug or integration. Where integration needs to recognize and record sequences of system-wide state, tuning more often depends on aggregate or statistical data: data rates, device utilization percentages, idle-time profiles, and the like. In order to tune, you look for over- and under-utilized resources.

With one notable exception, this sort of statistical information can be hard to come by. CPU debug hardware is generally designed to gather short bursts of trace data, not utilization or throughput statistics or cache profiles. Statistics may have to come from random sampling of trace data or from external monitors. Which brings us to that exception. NoCs, touching virtually all traffic between blocks in the SoC, can be ideal for collecting traffic and some activity statistics. Once again for this purpose much of the data may be directly or indirectly available, but it may come down to the design team to collect and assemble it.
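The kind of counters a NoC switch could export, and how a tuning tool might flag over- and under-utilized resources, can be sketched as follows. Link names, the sampling scheme, and the thresholds are all illustrative.

```python
class LinkStats:
    """Per-link activity counters of the sort a NoC endpoint could export
    (names and granularity are invented for illustration)."""
    def __init__(self, name):
        self.name = name
        self.busy_cycles = 0
        self.total_cycles = 0

    def sample(self, busy):
        self.total_cycles += 1
        self.busy_cycles += 1 if busy else 0

    def utilization(self):
        return self.busy_cycles / self.total_cycles if self.total_cycles else 0.0

links = {n: LinkStats(n) for n in ("cpu0->dram", "gpu->dram", "dma->dram")}
activity = {"cpu0->dram": [1, 1, 1, 1, 1, 1, 1, 0],   # nearly saturated
            "gpu->dram":  [1, 0, 0, 0, 0, 0, 0, 0],   # mostly idle
            "dma->dram":  [1, 1, 0, 0, 1, 1, 0, 0]}
for name, samples in activity.items():
    for busy in samples:
        links[name].sample(busy)

# Flag candidates for retuning: over- and under-utilized resources.
hot = [n for n, l in links.items() if l.utilization() > 0.8]
cold = [n for n, l in links.items() if l.utilization() < 0.2]
print(hot, cold)  # → ['cpu0->dram'] ['gpu->dram']
```

Aggregates like these, rather than cycle-accurate traces, are what drive decisions such as rebalancing DRAM client priorities or reassigning cores.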

Security

With the growing awareness of cyber security, another set of run-time needs is arising for embedded systems designers. Designers of multitasking systems have long relied on the memory protection units (MPUs) attached to processor cores to protect one task’s memory from inspection or corruption by another task. That works, so long as all the memory accesses in the system go through CPUs and all the MPU registers are set correctly. But in a multicore system with numerous blocks doing DMA, and with cyber attacks, neither of those conditions is guaranteed.

One line of defense has been to only make MPU settings accessible from a secure operating mode such as ARM’s TrustZone. Theoretically, a task could only enter this privileged mode by presenting a valid credential. But as publicity about the Meltdown and Spectre vulnerabilities has shown, and as previous, less publicized incidents of attacks through hypervisors had warned, even secure execution modes can be compromised.

Such risks have led some developers to turn to embedded monitoring hardware in the SoC. Monitors on cache and system busses, DRAM controllers, and NoCs can be another line of defense, continuously validating that a task or peripheral is staying within the bounds of its assigned memory.
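Such a monitor is conceptually simple: a table of allowed address windows per bus initiator, checked on every transaction. A minimal sketch, with invented initiator names and address ranges:

```python
class BusMonitor:
    """Sketch of a bus/NoC monitor validating that each initiator stays
    within its assigned address windows (ranges are illustrative)."""
    def __init__(self):
        self.windows = {}    # initiator -> list of (base, limit) windows

    def allow(self, initiator, base, limit):
        self.windows.setdefault(initiator, []).append((base, limit))

    def check(self, initiator, addr):
        # True if the access falls inside any window assigned to the initiator.
        return any(base <= addr < limit
                   for base, limit in self.windows.get(initiator, []))

mon = BusMonitor()
mon.allow("dma0", 0x8000_0000, 0x8010_0000)   # dma0 may touch this 1 MB region
mon.allow("cpu0", 0x0000_0000, 0x9000_0000)

print(mon.check("dma0", 0x8000_4000))   # → True: inside its window
print(mon.check("dma0", 0x9000_0000))   # → False: violation, raise an alarm
```

Unlike an MPU, a check like this sits on the interconnect itself, so it covers DMA traffic that never passes through a CPU core.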

Safety

If we generalize this notion of monitoring the system for forbidden sequences of states, we get a much more powerful idea. Embedded monitoring, if it comprehends the state of the entire SoC, could recognize when the system is about to do something physically dangerous—like move a tool into an unsecured area or close a switch in an AC power grid without checking for phase matching—and could force the system into a safe state. This ability to anticipate and avoid bad outcomes is the essence of functional safety.

We have come full circle now, once again asking the embedded monitor to collect state information from all the significant blocks in the SoC, and to correlate this data into a coherent view of the chip’s overall state. We’ve seen that in some, but far from all, key blocks there is already circuitry in place to collect this data. It remains to bring the data together—a task that often cannot be relegated to software because of unpredictable latencies and contention for system resources, not to mention security questions.

We are left with the alternative of capturing the state in each significant block, time-stamping it at the source, and routing it to a central collection point using dedicated routing resources. The good news is that a number of vendors are working toward this goal.

One such effort is at ARM, where the CoreSight* developers, anticipating the challenges of multi-core debug, have extended the reach of their hardware-based tools across multiple instances of ARM* IP cores and busses. Another movement comes from the NoC vendors—for example, Arteris, Netspeed, and Sonics—who have a natural path to extend the profiling facilities already available in their endpoints and switches into a chip-wide state monitoring and reporting network.

A third source is an IP vendor dedicated to the problem, Ultra SoC. This company has developed the routing and collection stations to bring time-stamped state information together from across the SoC. They have developed gaskets to extract information from CoreSight and some other core vendors’ debug modules. And they are working with at least one NoC provider. Ultra SoC also develops visualization and analysis software so that state information uploaded from the SoC can be useful to humans.

That seems to be the current situation for commercial tools. There is still much to do in improving visibility into a wider range of processors. Ultra SoC is working with the RISC-V architecture, for instance, and there is obvious application to FPGA accelerators.

There are other kinds of system blocks for which there is no agreement about even what part of their internal state is relevant. And there are enticing questions. How much of the SoC’s internal state can be inferred from a few points rather than measured directly?  Could the industry agree on a standard interface between processing elements and a monitoring network? Could some form of deep-learning network learn from masses of state data to infer root causes of failures, or to anticipate functional safety faults? There is much to do.



          Offer - Special offer HP Z840 Workstation Rental and sale Chennai - INDIA      Cache   Translate Page   Web Page Cache   
Built for high-end computing and visualization, the HP Z840 delivers outstanding performance in one of the industry’s most expandable chassis Product Highlights Processor: Intel® Xeon® E5-2683 v4 Memory : 40GB DDR4-2400 Graphics: NVIDIA® NVS™ 310 Hard Disk: 300 GB up to 1.2 TB SAS (10000 rpm) Contact Rental India Name – Chackravarthy (8754542653) Name – Anushree (8971423090) Visit Us: https://shop.rental-india.com/product/hp-z840-workstation-available-on-rental-sale/ Mail Us: enquiry@rental-india.com Mandaveli, Chennai-28.
          Azure Cosmos DB      Cache   Translate Page   Web Page Cache   

I started my sabbatical work with the Microsoft Azure Cosmos DB team recently. I have been in talks and collaboration with the Cosmos DB people, and specifically with Dharma Shukla, for over 3 years. I have been very impressed with what they were doing and decided that this would be the best place to spend my sabbatical year.

The travel and settling down took time. I will write about those later. I will also write about my impressions of the greater Seattle area as I discover more about it. This was a big change for me after having stayed in Buffalo for 13 years. I love the scenery: everywhere I look I see a gorgeous lake or hill/mountain scene. And, oh my God, there are blackberries everywhere! It looks like the Himalayan blackberry is an invasive species here, but I can't complain. As I go on an evening stroll with my family, we snack on blackberries growing along the sidewalk. It seems like we are the only ones doing so ---people in the US are not much used to eating fruit straight off trees and bushes.


Azure Cosmos DB

Ok, coming back to Cosmos DB... My first impressions--from an insider perspective this time--about Cosmos DB are also very positive and overwhelming. It is hard not to get overwhelmed. Cosmos DB provides a global highly-available low-latency all-in-one database/storage/querying/analytics service to heavyweight demanding businesses. Cosmos DB is used ubiquitously within Microsoft systems/services, and is also one of the fastest-growing services used by Azure developers externally. It manages 100s of petabytes of indexed data, and serves 100s of trillions of requests every day from thousands of customers worldwide, and enables customers to build highly-responsive mission-critical applications.

I find that there are a lot of things to learn before I can start contributing in a meaningful and significant way to the Cosmos DB team. So I will use this blog to facilitate, speed up, and capture my learning. The process of writing helps me detach, see the big picture, and internalize stuff better. Moreover my blog also serves as my augmented memory and I refer back to it for many things.

Here is my first attempt at an overview post. As I get to know Cosmos DB better, I hope to give you other more-in-depth overview posts.

What is Cosmos DB?

Cosmos DB is Azure's cloud-native database service.

(The term "cloud-native" is a loaded key term, and the team doesn't use it lightly. I will try to unpack some of it here, and I will revisit this in my later posts.)

It is a database that offers frictionless global distribution across any number of Azure regions ---50+ of them! It enables you to elastically scale throughput and storage worldwide on-demand quickly, and you pay only for what you provision. It guarantees single-digit-millisecond latencies at the 99th percentile, supports multiple consistency models, and is backed by comprehensive service level agreements (SLAs).

I am most impressed with its all-in-one capability. Cosmos DB seamlessly supports many APIs, data formats, consistency levels, and needs across many regions. This alleviates data integration pains which is a major problem for all businesses. The all-in-one capability also eliminates the developer effort wasted into keeping multiple systems with different-yet-aligned goals in sync with each other. I had written earlier about the Lambda versus Kappa architectures, and how the pendulum is all the way to Kappa. Cosmos DB all-in-one gives you the Kappa benefits.

This all-in-one capability backed with global-scale distribution enables new computing models as well. The datacenter-as-a-computer paper from 2009 talked about the vision of warehouse-scale machines. By providing a frictionless globe-scale replicated database, Cosmos DB opens the way to thinking about the globe-as-a-computer. One of the use cases I heard from some Cosmos DB customers amazed me. Some customers allocate a spare region (say Australia) where they have no read/write clients as an analytics region. This spare region still gets consistent data replication, stays very up-to-date, and is employed for running analytics jobs without jeopardizing the access latencies of real read-write clients. Talk about disaggregated computation and storage! This is disaggregated storage, computing, analytics, and serverless across the globe. Under this model, the globe becomes your playground.

This disaggregated yet all-in-one computing model also manifests itself in customer acquisition and settling in Cosmos DB. Customers often come for the query serving level, which provides high throughput and low-latency via SSDs. Then they get interested and invest into the lower-throughput but higher/cheaper storage options to store terabytes and petabytes of data. They then diversify and enrich their portfolio further with analytics, event-driven lambda, and real-time streaming capabilities provided in Cosmos DB.

There is a lot to discuss, but in this post I will only make a brief introduction to the issues/concepts, hoping to write more about them later. My interests are of course at the bottom of the stack at the core layer, so I will likely dedicate most of my coming posts to the core layer.


Core layer

The core layer provides capabilities that the other layers build upon. These include global distribution, horizontally and independently scalable storage and throughput, guaranteed single-digit millisecond latency, tunable consistency levels, and comprehensive SLAs.

Resource governance is an important and pervasive component of the core layer. Request units (allocating CPU, memory, throughput) is the currency to provision the resources. Provisioning a desired level of throughput through dynamically changing access patterns and across a heterogeneous set of database operations presents many challenges. To meet the stringent SLA guarantees for throughput, latency, consistency, and availability, Cosmos DB automatically employs partition splitting and relocation. This is challenging to achieve as Cosmos DB also handles fine-grained multi-tenancy with 100s of tenants sharing a single machine and 1000s of tenants sharing a single cluster each with diverse workloads and isolated from the rest. Adding even more to the challenge, Cosmos DB supports scaling database throughput and storage independently, automatically, and swiftly to address the customer's dynamically changing requirements/needs.
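The request-unit model can be illustrated with a toy throttle: each tenant has a provisioned RU budget per second, and requests that would exceed it are rejected. The numbers are invented and this is not Cosmos DB's internal implementation; in practice Cosmos DB signals throttling with an HTTP 429 response.

```python
class RequestUnitThrottle:
    """Toy model of provisioned throughput: a tenant gets `provisioned`
    request units (RUs) per second; requests that would exceed the
    remaining budget are rate-limited."""
    def __init__(self, provisioned_rus):
        self.capacity = provisioned_rus
        self.available = provisioned_rus

    def refill(self):                 # called once per second in this model
        self.available = self.capacity

    def try_request(self, ru_cost):
        if ru_cost <= self.available:
            self.available -= ru_cost
            return True               # request admitted
        return False                  # throttled

throttle = RequestUnitThrottle(provisioned_rus=400)
results = [throttle.try_request(100) for _ in range(5)]  # fifth exceeds budget
print(results)            # → [True, True, True, True, False]
throttle.refill()         # a new second begins
print(throttle.try_request(100))  # → True
```

The hard part the text describes is everything this sketch hides: heterogeneous operations cost different RU amounts, hundreds of tenants share one machine, and partitions must split and migrate to keep each tenant inside its SLA.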

To provide another important functionality, global distribution, Cosmos DB enables you to designate each region as a "read", "write", or "read/write" region. Using Azure Cosmos DB's multi-homing APIs, the app always knows where the nearest region is (even as you add and remove regions to/from your Cosmos DB database) and sends the requests to the nearest datacenter. All reads are served from a quorum local to the closest region to provide low latency access to data anywhere in the world.
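The effect of multi-homing can be sketched as preference-ordered region selection: route each request to the nearest region that supports the desired operation, and fail over when a region goes down. The region names, latencies, and mode assignments below are invented for illustration.

```python
# Hypothetical view of a multi-region account from one client's vantage point.
regions = {
    "West US":        {"latency_ms": 12,  "modes": {"read", "write"}, "up": True},
    "North Europe":   {"latency_ms": 95,  "modes": {"read"},          "up": True},
    "Australia East": {"latency_ms": 180, "modes": {"read"},          "up": True},
}

def pick_region(regions, mode):
    """Choose the lowest-latency region that is up and supports `mode`."""
    candidates = [(info["latency_ms"], name) for name, info in regions.items()
                  if info["up"] and mode in info["modes"]]
    return min(candidates)[1] if candidates else None

print(pick_region(regions, "read"))    # → West US (nearest region)
regions["West US"]["up"] = False       # simulate a regional outage
print(pick_region(regions, "read"))    # → North Europe (automatic failover)
print(pick_region(regions, "write"))   # → None (no writable region left)
```

The real SDKs keep this region list refreshed automatically as regions are added or removed, so the application never hard-codes an endpoint.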



Cosmos DB allows developers to choose among five well-defined consistency models along the consistency spectrum. (Yay, consistency levels!) You can configure the default consistency level on your Cosmos DB account (and later override the consistency on a specific read request). About 73% of Azure Cosmos DB tenants use session consistency and 20% prefer bounded staleness. Only 2% of Azure Cosmos DB tenants override consistency levels on a per request basis. In Cosmos DB, reads served at session, consistent prefix, and eventual consistency are twice as cheap as reads with strong or bounded staleness consistency.

This lovely technical report explains the consistency models through publishing of baseball scores via multiple channels. I will write a summary of this paper in the coming days. The paper concludes: "Even simple databases may have diverse users with different consistency needs. Clients should be able to choose their desired consistency. The system cannot possibly predict or determine the consistency that is required by a given application or client. The preferred consistency often depends on how the data is being used. Moreover, knowledge of who writes data or when data was last written can sometimes allow clients to perform a relaxed consistency read, and obtain the associated benefits, while reading up-to-date data."
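The baseball example is easy to model: the write log is the sequence of score updates, and each consistency choice bounds how much of that log a read may miss. The sketch below is a heavy simplification of the report's read guarantees, with an invented game.

```python
# The write log: the (home, visitors) score after each run is batted in.
writes = [(1, 0), (2, 0), (2, 1), (3, 1), (4, 1)]

def read_strong(log):
    """Strong consistency: always the latest write."""
    return log[-1]

def read_bounded(log, staleness):
    """Bounded staleness: at most `staleness` writes behind the latest."""
    return log[max(0, len(log) - 1 - staleness)]

def read_eventual(log, lag):
    """Eventual consistency: some earlier prefix of the log; `lag` stands in
    for however far behind this replica happens to be."""
    return log[max(0, len(log) - 1 - lag)]

print(read_strong(writes))                   # → (4, 1): the umpire's view
print(read_bounded(writes, staleness=1))     # → (3, 1): one run behind at most
print(read_eventual(writes, lag=3))          # → (2, 0): a lagging scoreboard
```

The report's point falls out immediately: the umpire needs `read_strong`, a casual scoreboard watcher is fine with `read_eventual`, and each client should get to choose, since cheaper reads buy lower latency and cost.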

Data layer

Cosmos DB supports and projects multiple data models (documents, graphs, key-value, table, etc.) over a minimalist type system and core data model: the atom-record-sequence (ARS) model.

A Cosmos DB resource container is a schema-agnostic container of arbitrary user-generated JSON items and JavaScript-based stored procedures, triggers, and user-defined functions (UDFs). Container and item resources are further projected as reified resource types for a specific type of API interface. For example, while using document-oriented APIs, container and item resources are projected as collection and document resources respectively. Similarly, for graph-oriented API access, the underlying container and item resources are projected as graph, node/edge resources respectively.

The overall resource model of an application using Cosmos DB is a hierarchical overlay of the resources rooted under the database account, and can be navigated using hyperlinks.

API layer

Cosmos DB supports three main classes of developers: (1) those familiar with relational databases and prefer SQL language, (2) those familiar with dynamically typed modern programming languages (like JavaScript) and want a dynamically typed efficiently queryable database, and (3) those who are already familiar with popular NoSQL databases and want to transfer their application to Azure cloud without a rewrite.

In order to meet developers wherever they are, Cosmos DB supports SQL, MongoDB, Cassandra, Gremlin, Table APIs with SDKs available in multiple languages.

The ambitious future

There is a palpable buzz in the air in Cosmos DB offices due to the imminent multimaster general availability rollout (which I will also write about later). The team members keep it to themselves working intensely most of the time, but would also have frequent meetings and occasional bursty standup discussions. This is my first deployment in a big team/company, so I am trying to take this in as well. (Probably a post on that is coming up as well.)

It looks like the Cosmos DB team caught a good momentum. The team wants to make Cosmos DB the prominent cloud database and even the go-to all-in-one cloud middleware. Better analytic support and better OLAP/OLTP integration is in the works to support more demanding more powerful next generation applications.

Cosmos DB already has great traction in enterprise systems. I think it will be getting more love from independent developers as well, since it provides serverless computing and all-in-one system with many APIs. It is possible to try it for free at https://docs.microsoft.com/en-us/azure/cosmos-db/ . To keep up to date with the latest news and announcements, you can follow @AzureCosmosDB and #CosmosDB on Twitter.


My work at Cosmos DB

In the short term, as I learn more about Cosmos DB, I will write more posts like this one. I will also try to learn and write about the customer use cases, workloads, and operational issues without revealing details. I think learning about the real-world use cases and problems will be one of the most important benefits I will be able to get from my sabbatical.

In the medium term, I will work on TLA+/PlusCal translation of consistency levels provided by Cosmos DB and share them here. Cosmos DB uses TLA+/PlusCal to specify and reason about its protocols. This helps prevent concurrency bugs and race conditions, and helps with the development efforts. TLA+ modeling has been instrumental in Cosmos DB's design, which integrated global distribution, consistency guarantees, and high-availability from the ground up. (Here is an interview where Leslie Lamport shares his thoughts on the foundations of Azure Cosmos DB and his influence in the design of Azure Cosmos DB.) This is very dear to my heart, as I have been employing TLA+ in my distributed systems classes for the past 5 years.

Finally, as I get a better mastery of Cosmos DB internals, I would like to contribute to protocols on multimaster multirecord transaction support. I would also like to learn more about and contribute to Cosmos DB's automatic failover support during one or more regional outages. Of course, these protocols will all be modeled and verified with TLA+.

MAD questions

1. What would you do with a frictionless cloud middleware? Which new applications can this enable?

Here is something that comes to my mind. Companies are already uploading IoT sensor data from cars to Azure Cosmos DB continuously. The next step would be to build more ambitious applications that make sense of correlated readings and use
          Attitudes, realities and challenges: The role of digital technologies      Cache   Translate Page   Web Page Cache   
Fri, 08/10/2018
Sponsored

What do IT leaders in higher ed think about the role of digital learning technologies? Dr. Kenneth Green, renowned authority on information technology in higher ed, discusses findings and implications from his most recent Campus Computing Project survey, the largest ongoing study of IT’s role in American higher ed.


          Senior Systems Engineer - AI Services - Amazon.com - Seattle, WA      Cache   Translate Page   Web Page Cache   
The Amazon Web Services team is innovating new ways of building massively scalable distributed systems and delivering the next generation of cloud computing...
From Amazon.com - Sun, 15 Jul 2018 20:48:54 GMT - View all Seattle, WA jobs
          Support Engineer - AI Services - Amazon.com - Herndon, VA      Cache   Translate Page   Web Page Cache   
The Amazon Web Services team is innovating new ways of building massively scalable distributed systems and delivering the next generation of cloud computing...
From Amazon.com - Thu, 02 Aug 2018 01:41:06 GMT - View all Herndon, VA jobs
          Offer - Special offer HP Z840 Workstation Rental and sale Chennai - INDIA      Cache   Translate Page   Web Page Cache   
Built for high-end computing and visualization, the HP Z840 delivers outstanding performance in one of the industry’s most expandable chassis. Product Highlights: Processor: Intel® Xeon® E5-2683 v4; Memory: 40GB DDR4-2400; Graphics: NVIDIA® NVS™ 310; Hard Disk: 300 GB up to 1.2 TB SAS (10000 rpm). Contact Rental India: Chackravarthy (8754542653), Anushree (8971423090). Visit Us: https://shop.rental-india.com/product/hp-z840-workstation-available-on-rental-sale/ Mail Us: enquiry@rental-india.com Mandaveli, Chennai-28.
          How to Reinstall and Re-register all the built-in apps in Windows 10 Creator Update      Cache   Translate Page   Web Page Cache   
Current Revision posted to TechNet Articles by Richard Mueller on 8/10/2018 7:43:23 AM



Introduction:

Windows apps play a very important role in Windows 10. As we have progressed through various builds, the number of apps has also increased:

  • Build 1507: 24 Apps
  • Build 1607: 26 Apps
  • Build 1703: 31 Apps [Creator Update]
In Windows 10 Creator Update we have the following default apps:

  1. Microsoft.3DBuilder
  2. Microsoft.BingWeather
  3. Microsoft.DesktopAppInstaller
  4. Microsoft.Getstarted
  5. Microsoft.Messaging
  6. Microsoft.Microsoft3DViewer
  7. Microsoft.MicrosoftOfficeHub
  8. Microsoft.MicrosoftSolitaireCollection
  9. Microsoft.MicrosoftStickyNotes
  10. Microsoft.MSPaint
  11. Microsoft.Office.OneNote
  12. Microsoft.OneConnect
  13. Microsoft.People
  14. Microsoft.SkypeApp
  15. Microsoft.StorePurchaseApp
  16. Microsoft.Wallet
  17. Microsoft.Windows.Photos
  18. Microsoft.WindowsAlarms
  19. Microsoft.WindowsCalculator
  20. Microsoft.WindowsCamera
  21. microsoft.windowscommunicationsapps
  22. Microsoft.WindowsFeedbackHub
  23. Microsoft.WindowsMaps
  24. Microsoft.WindowsSoundRecorder
  25. Microsoft.WindowsStore
  26. Microsoft.XboxApp
  27. Microsoft.XboxGameOverlay
  28. Microsoft.XboxIdentityProvider
  29. Microsoft.XboxSpeechToTextOverlay
  30. Microsoft.ZuneMusic
  31. Microsoft.ZuneVideo


How To View Default Apps:

To view default apps you can use PowerShell. Open PowerShell as Administrator and run the following PowerShell command:

Get-ProvisionedAppXPackage -Online | Select DisplayName




Problem:

Sometimes you may find that some default apps are not working properly, or you may have deleted some by accident.


Resolution:

If you opt to reset Windows 10, all the default apps will be reinstalled. However, this process will also remove your documents, pictures, videos, etc.

So, using PowerShell to reset or re-install the default apps is the easiest solution.

How To:

  • Open PowerShell as Administrator and copy-paste the following command:

Get-AppxPackage -allusers | foreach {Add-AppxPackage -register "$($_.InstallLocation)\appxmanifest.xml" -DisableDevelopmentMode}

  • This will reinstall and re-register all the built-in apps.
  • Restart your system.

Reference:

Windows App
Tags: Apps

          Infrastructure Engineer (End User Computing)      Cache   Translate Page   Web Page Cache   
Highly sought after contract opportunity for an Infrastructure Engineer (End User Computing) to embark on a new venture with my highly renowned Public Sector client at their HQ near Canary Wharf. This is an initial three-month contract which is deemed Inside IR35. Their behind-the-scenes end user computing team makes sure a range of systems are working effectively for their people. The team is mainly working with Citrix and Microsoft SCCM technologies – all within a hot desking environment that’s highly mobile. As an infrastructure engineer, you’ll provide third and fourth line support when colleagues escalate queries from our people and other IT teams, along with maintaining the general health of the infrastructure – but you’ll also be involved in projects to improve our systems and services. Where you see trends developing, you’ll be quick to address them with the wider team, and help develop the analysis, identification and workarounds with them. You’ll also work directly with my client's vendors to help keep things running smoothly. Their infrastructure engineers are working as part of a team to create the best technical environment for their people. To succeed, you’ll need to have a solid background in a varied technical support environment using Microsoft servers, desktop operating systems and both Windows 7 and 10 – ideally you’ll also be comfortable with Citrix XenDesktop and Microsoft SCCM 2012. This role will suit someone with experience of third line support, who loves the idea of getting the best out of IT systems and making them better in a hot desking enterprise environment. Competitive day rate and a great working environment, so please don't delay in applying today! Reed Specialist Recruitment Limited is an employment agency and employment business
          mobile digital education      Cache   Translate Page   Web Page Cache   
Here are some ideas for a Middle School computing course. Feel free to steal them although I would appreciate if you acknowledge the source. Many thanks to Roland and Paul for initially suggesting these ideas to me.

THEMES
  • digital wearables – take home – ownership - affordable
  • at $25 the micro:bit (cheaper with a bulk buy) could be bought by each student – real ownership of the micro:bit is empowering and invites further exploration
  • the mobile phone has become the socially preferred computer – it is desirable to find a way that students can use their phone to enhance their education, as distinct from entertainment
  • electronics, can be linked to the micro:bit (electronics tends to be a neglected subject)
  • computer coding – far more accessible these days due to the drag-and-drop tiles of Scratch / Makecode (apparently the official term is block-based coding, since you drag blocks of code around)
  • Maker Education themes
SOFTWARE
  • Scratch – introduction to coding
  • Makecode (MS) – free online or as a free app download – for coding affordable hardware such as the BBC micro:bit (wicked simulator included)
  • MIT App Inventor – writing apps for your mobile phone
HARDWARE and PRELIMINARY COSTING

BBC micro:bit $24.95 (plus $3.95 micro USB cable plus $2.41 battery holder and batteries) from Core Electronics (to be owned by each student)
Features – technical specification, listed at the end

PCs to access makecode (computer lab)

Android or iOS phones run a micro:bit app – programs can be sent to the micro:bit over Bluetooth

Mobile phone programmable by app inventor

Electronics: Breakout board, e.g. Kitronik Inventor's Kit (for class use) $39.95 from Core Electronics

RATIONALE:

We are rapidly moving towards a world of smart homes / cities, driverless cars and digital wearables for fitness monitoring, health care and fashion statements. Commercially, the Apple Watch incorporates all of this. We can anticipate a future where computers are ubiquitous in our environment, e.g. the smart fridge that will suggest a suitable recipe for its contents. Computers will become as common as dust or oxygen. Refer to MIT's Project Oxygen.

This course outline represents a small beginning towards adapting the school curriculum to preparing students for this future.

BBC MICRO:BIT TECHNICAL FEATURES
32-bit ARM Cortex-M0 CPU
256KB Flash
16KB RAM
5x5 Red LED Array
Two Programmable Buttons
Onboard Light, Compass, Accelerometer and Temp Sensors
BLE Smart Antenna
Three Digital/Analog Input/Output Rings
Two Power Rings — 3V and GND
20-pin Edge Connector
MicroUSB Connector
JST-PH Battery Connector (Not JST-XH)
Reset Button with Status LED
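To give a taste of the text-based coding students can graduate to from blocks, here is a plain-Python sketch (runs anywhere, no micro:bit hardware required) of the colon-separated image strings that micro:bit MicroPython uses to describe the 5x5 LED array. The HEART string follows that built-in format; the parser and ASCII renderer are my own illustration, standing in for the real LEDs.

```python
# Plain-Python sketch of the micro:bit 5x5 LED image format (no hardware
# needed). MicroPython on the micro:bit describes an image as 5 rows of
# digits 0-9 (LED brightness), separated by colons.

HEART = "09090:99999:99999:09990:00900"

def parse_image(s):
    """Turn a colon-separated image string into a 5x5 grid of brightness values."""
    rows = [[int(ch) for ch in row] for row in s.split(":")]
    assert len(rows) == 5 and all(len(r) == 5 for r in rows), "expected a 5x5 image"
    return rows

def render(rows):
    """ASCII preview: '#' for a lit LED, '.' for an unlit one."""
    return "\n".join("".join("#" if px else "." for px in row) for row in rows)

print(render(parse_image(HEART)))
# .#.#.
# #####
# #####
# .###.
# ..#..
```

Students can invent their own icons as strings and preview them on a PC before flashing them to the board, which makes a nice bridge between the Makecode simulator and real text-based programming.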
          How did the TimeHop data breach happen?      Cache   Translate Page   Web Page Cache   

In July 2018, TimeHop, in a very transparent manner, disclosed a breach of its service that affected approximately 21 million records, some of which included personal identifying information (PII) such as name, email, phone number, and date of birth, while others contained only a subset of those fields.

Reviewing the sequence of events, we see that a trusted insider placed the company’s data at risk when the employee's credentials were used by a third party to log into TimeHop’s cloud computing environment.

How the intruder obtained the employee’s log-in credentials is unknown.

To read this article in full, please click here


          How Edge Computing Challenges Our Industry, Part Two -- Hyperscale Clouds And SI Industry      Cache   Translate Page   Web Page Cache   
Edge computing introduces changes to the system integrators -- changes that will introduce opportunities along with challenges.
          Explores report on Telecom Cloud Market CAGR of +23% by 2018: Studied in Detail by Focusing on Product Type with Top Companies like Amazon Verizon Communication, AT&T, Vodafone Group Plc, Amdocs      Cache   Translate Page   Web Page Cache   
A telecom cloud provider is a telecommunications company that has shifted a significant part of its business from landline service to devote resources to providing cloud computing services. This research study provides an overview of the summary that includes global

          What’s new in Julia: Version 1.0 is here      Cache   Translate Page   Web Page Cache   

After nearly a decade in development, Julia, an open source, dynamic language geared to numerical computing, reached its Version 1.0 production release status on August 8, 2018. The previous version was the 0.6 beta.

Julia, which vies with Python for scientific computing, is focused on speed, optional typing, and composability. Programs compile to native code via the LLVM compiler framework. Created in 2009, Julia has syntax geared to math; numeric types and parallelism are supported as well. The standard library provides asynchronous I/O as well as process control, logging, and profiling.

To read this article in full, please click here

(Insider Story)
          Going multicloud? Avoid these 3 pitfalls      Cache   Translate Page   Web Page Cache   

Going multicloud? This typically means using a mix of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to create a multicloud cocktail that should provide you with better value and more flexibility.

To read this article in full, please click here

(Insider Story)
          Adjunct Instructor, Adult Basic Education-English as a Second Language (ESL) - Laramie County Community College - Laramie, WY      Cache   Translate Page   Web Page Cache   
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products... $23.19 an hour
From Laramie County Community College - Thu, 02 Aug 2018 00:37:52 GMT - View all Laramie, WY jobs
          Adjunct Instructor, Chemistry - Laramie County Community College - Cheyenne, WY      Cache   Translate Page   Web Page Cache   
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Sat, 14 Jul 2018 06:37:29 GMT - View all Cheyenne, WY jobs
          Adjunct Instructional Faculty, Mathematics - Laramie County Community College - Cheyenne, WY      Cache   Translate Page   Web Page Cache   
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Sat, 14 Jul 2018 06:37:24 GMT - View all Cheyenne, WY jobs
          Radiography Adjunct Instructor - Laramie County Community College - Cheyenne, WY      Cache   Translate Page   Web Page Cache   
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Wed, 11 Jul 2018 06:37:25 GMT - View all Cheyenne, WY jobs
          Adjunct Instructor Pool, Communication - Laramie County Community College - Cheyenne, WY      Cache   Translate Page   Web Page Cache   
Proficient skill with use of personal computing applications – specifically Microsoft Office Suite (e.g., Word, Excel, Outlook, and PowerPoint), Adobe products...
From Laramie County Community College - Thu, 05 Jul 2018 06:38:22 GMT - View all Cheyenne, WY jobs
          Field Marketing Manager - SaaS, Cloud, Hardware, B2B      Cache   Translate Page   Web Page Cache   
MN-Bloomington, If you are a Field Marketing Manager - SaaS, Cloud, Hardware, B2B with experience, please read on! We are a Supercomputing and HPC powerhouse looking for a very talented Field Marketing Specialist. HPC experience is a plus, but candidates must come from the software/hardware industry (SaaS, Cloud Computing, Platforms, Analytics). Willingness to travel also important -25%. This is a great opportuni
          Engineering and Computing Resume Workshop      Cache   Translate Page   Web Page Cache   
Additional details
          Prepare for the Engineering and Computing Fair      Cache   Translate Page   Web Page Cache   
Additional details
          Test Technician 2nd Shift: $17-$20/hour      Cache   Translate Page   Web Page Cache   
OH-Delaware, AES designs, builds and services critical infrastructure that enables vital applications for data centers, communication networks, and commercial and industrial facilities. We support today's growing mobile and cloud computing markets with a portfolio of power, thermal and infrastructure management solutions. Position Summary: Depending on operation, the Test Technician will perform any or all the
          Has Synchronoss Technologies Finally Gotten Back on Track?      Cache   Translate Page   Web Page Cache   
At long last, the cloud computing company gave investors some numbers to look at.
          Adaptive Parsons problems, and the role of SES and Gesture in learning computing: ICER 2018 Preview      Cache   Translate Page   Web Page Cache   
  Next week is the 2018 International Computing Education Research Conference in Espoo, Finland. The proceedings are (as of this writing) available here: https://dl.acm.org/citation.cfm?id=3230977. Our group has three papers in the 28 accepted this year. “Evaluating the efficiency and effectiveness of adaptive Parsons problems” by Barbara Ericson, Jim Foley, and Jochen (“Jeff”) Rick These are […]
          Senior Bios Engineer - ZT Systems - Austin, TX      Cache   Translate Page   Web Page Cache   
Join us at this critical growth inflection point as we engineer the hardware infrastructure powering a world of cloud computing, cloud storage, artificial...
From ZT Systems - Thu, 31 May 2018 00:20:51 GMT - View all Austin, TX jobs
          Samsung DeX Isn’t an Accessory — It’s Your Mobile-Powered Computing Platform      Cache   Translate Page   Web Page Cache   
Recently, I’ve been getting two very strong messages, again and again: Employees wish they could just use their phone for everything — they already … Finally, if you have some Windows applications t…
          Open Source at Indeed: Sponsoring the Apache Software Foundation      Cache   Translate Page   Web Page Cache   

As Indeed continues to grow our commitment to the open source community, we are pleased to announce our sponsorship of the Apache Software Foundation. Earlier this year, we joined the Cloud Native Computing Foundation and began sponsoring the Python Software Foundation. For Indeed, this is just the beginning of our work with open source initiatives.  […]

The post Open Source at Indeed: Sponsoring the Apache Software Foundation appeared first on Indeed Engineering Blog.


          Open Source at Indeed: Sponsoring the Python Software Foundation      Cache   Translate Page   Web Page Cache   

At Indeed, we’re committed to taking a more active role in the open source community. Earlier this year, we joined the Cloud Native Computing Foundation. This week, we are pleased to announce that Indeed is sponsoring the Python Software Foundation.  We write lots of Python code at Indeed — it’s one of our major languages […]

The post Open Source at Indeed: Sponsoring the Python Software Foundation appeared first on Indeed Engineering Blog.


          Indeed Expands its Commitment to Open Source      Cache   Translate Page   Web Page Cache   

At Indeed, we’re committed to an active role in open source communities. We’re proud to announce that we’ve joined the Cloud Native Computing Foundation (CNCF), an open source software foundation dedicated to making cloud-native computing universal and sustainable. The CNCF, part of The Linux Foundation, is a vendor-neutral home for fast-growing projects. It promotes collaboration […]

The post Indeed Expands its Commitment to Open Source appeared first on Indeed Engineering Blog.


          Have Your Say In The House Of Lords’ Select Committee On Science And Technology      Cache   Translate Page   Web Page Cache   
Controversy has been raging around ISO 17025 ever since the standard was adopted for digital forensics back in October 2017. Although many people who work in the industry agree that standardisation is advisable and probably necessary if we are to keep moving forward, there have been many criticisms of ISO 17025 and its effectiveness when it comes to digital forensics. The root of the problem seems to be that ISO 17025 was not specifically designed for digital forensics; instead, it takes the standards of ‘wet’ or traditional forensics and applies them to computing devices. This has a number of issues, not least the fact that technological advances are constantly happening; in a field where most large apps are updated at least a couple of times per month, it becomes very difficult to properly standardise tools and methodologies.
          Neo-Nazi deletes anti-Semitic posts from 'alt-right' Twitter       Cache   Translate Page   Web Page Cache   
A neo-Nazi deleted two posts on Gab, a social media company popular with the alt-right and white supremacists, after Microsoft's cloud computing service threatened to block the platform. Gab said in a tweet Thursday that ...
          Supercomputing HIV-1 Replication at TACC - insideHPC      Cache   Translate Page   Web Page Cache   

We discovered, in collaboration with other researchers, that HIV uses this small molecule to complete its function,” said Juan R. Perilla, Department of Chemistry and Biochemistry, University of Delaware. “This is a molecule that's extremely available ...

          Nombramientos Estudio Marval O´Farrel & Mairal      Cache   Translate Page   Web Page Cache   
Marval promotes three to partner


 Marval, O’Farrell & Mairal promoted three senior associates to partner: Diego Fernández, Gustavo Morales Oliver and Martín Mosteirin. These promotions strengthen the development of three of the firm’s most innovative, cutting-edge practice areas: Information Technology & Privacy; Compliance, Anticorruption & Investigations; and Life Sciences. The new partners already have a wealth of experience and are highly specialized in their respective areas of expertise:


  • Diego Fernández is a technology expert, with 14 years of experience. His wide range of expertise includes IT law, software licensing, E-commerce, IT agreements, due diligence and IT compliance, privacy, data protection, cyber security, Internet law, and cloud computing. Diego is ranked as a leading lawyer in the Argentina TMT section of Chambers Latin America. He has a Master’s in Information Technology and Privacy Law from The John Marshall Law School, Chicago, and is a former foreign associate in the IT & Privacy group at Foley & Lardner, Chicago. He is also a Board Member and Vice-Chair of the South America Committee of the International Technology Law Association (ITechLaw), co-chair of the Argentina Chapter of International Association of Privacy Professionals (IAPP), and member of the Technology Committee of the IBA, the Internet Committee of the INTA, and the Open Source Software Committee of the ABA.


  • Gustavo Morales Oliver is a key member of Marval’s compliance and anti-corruption practice. He is fully dedicated to this practice area, giving him unique, hands-on experience of compliance programs, investigations and related litigation. He has advised international companies from a range of industries and participated in many local and international investigations and cases in this highly specialized field. Gustavo teaches Compliance at the Universidad Torcuato Di Tella Law School. He earned an LL.M. from the University of Illinois and is admitted to practice in both Argentina and New York, USA.


  • Martín Mosteirin advises leading multinational companies on regulatory strategies and compliance in the pharmaceutical, healthcare, biotech, medical and medical-technology devices, dental products, cosmetics, toiletries and perfumes, household cleaning products and food sectors. He has strong expertise in both contentious and non-contentious matters. Martín holds postgraduate qualifications in Pharmaceutical Regulatory Matters and Corporate Advice on the International Trade of Goods, Financial Operations and Payment Methods in Contemporary Commercial Law.

Santiago Carregal, chair of Marval, O’Farrell & Mairal’s executive board, commented on the announcement: “These latest promotions demonstrate Marval’s commitment to developing the Argentine legal market and maintaining the firm’s position at the forefront of legal services in the country. We are proud to enhance these innovative practice areas with the promotions of Diego, Gustavo and Martín. All three have outstanding careers in their fields and offer counsel of the highest quality. Marval continues strengthening its leadership in Argentina and expanding its practice to non-traditional areas in order to grant a genuinely full-service offering that is unique in the Argentine market.”

Marval appointed three new partners
On August 1, 2018, Marval, O’Farrell & Mairal appointed three of its senior associates as partners: Diego Fernández, Gustavo Morales Oliver and Martín Mosteirin. With these promotions, the firm is boosting the development of three novel practice areas: Information Technology and Privacy; Compliance, Anti-corruption and Investigations; and Life Sciences. The new partners have extensive experience and solid training in their specialties.


  • Diego Fernández: A technology expert with 14 years of experience. His broad expertise includes information technology law, software licensing, e-commerce, IT agreements, IT due diligence and compliance, privacy, data protection, cybersecurity, Internet law and cloud computing. Diego has been recognized as a leading TMT lawyer in Argentina by Chambers Latin America. He completed a Master's in Information Technology and Privacy Law at The John Marshall Law School, Chicago, and worked as a foreign associate in the IT & Privacy practice group at Foley & Lardner, Chicago. He is a Board Member and chair of the South America Committee of the International Technology Law Association (ITechLaw), vice-chair of the Buenos Aires KnowledgeNet chapter of the International Association of Privacy Professionals (IAPP), a member of the Technology Committee of the International Bar Association (IBA) and of the Internet Committee of the International Trademark Association (INTA), and a member of the Open Source Software Committee of the American Bar Association (ABA).


  • Gustavo Morales Oliver: A key member of the Compliance and Anti-corruption practice area, Gustavo dedicates 100% of his time to this practice, which has given him unique, hands-on experience of compliance programs, investigations and related litigation.

          Best Buy Sales Consultant – Computing and DI - Best Buy - Macon, GA      Cache   Translate Page   Web Page Cache   
Use innovative training tools to stay current, confident and complete, driving profitable growth and achieving individual and department goals....
From Best Buy - Sat, 04 Aug 2018 04:27:05 GMT - View all Macon, GA jobs
          Energy Secretary Rick Perry cheers on fusion energy, science education in visit to PPPL      Cache   Translate Page   Web Page Cache   

The Princeton Plasma Physics Laboratory’s (PPPL) mission of doing research to develop fusion as a viable source of energy is vital to the future of the planet, U.S. Energy Secretary Rick Perry said during an Aug. 9 visit. 

“It’s important not just to PPPL, not just to the DOE (Department of Energy) but to the world,” Perry told staff members during an all-hands meeting. “If we’re able to deliver fusion energy to the world, we’re able to change the world forever.” 

Perry received a standing ovation from the audience in the Melvin B. Gottlieb Auditorium following the brief speech and question-and-answer session.

Perry, the 14th U.S. Secretary of Energy, was governor of Texas from 2000 to 2015. He was twice a candidate for president. Before becoming governor, he was elected lieutenant governor in 1998 and served two terms as Texas Commissioner of Agriculture and three terms in the Texas House of Representatives. A graduate of Texas A&M University, he served in the Air Force from 1972 to 1977.

A tour of NSTX-U and Science Education Laboratory

Before the all-hands meeting, Secretary Perry toured PPPL accompanied by Princeton University President Christopher L. Eisgruber, Princeton University Vice President for PPPL Dave McComas, PPPL Director Steven Cowley, and Princeton Site Office Head Pete Johnson. The group first visited the National Spherical Torus Experiment-Upgrade (NSTX-U) test cell, where they learned about PPPL’s flagship experiment. Stefan Gerhardt, head of Experimental Research Operations, told Perry that numerous scientists at other national laboratories, universities and institutions around the world collaborate on the experiment when it is operating.

The group then visited the Science Education Laboratory where they met with Science Education staff, graduate students and summer research interns. Program leader Shannon Greco told Perry about PPPL’s Young Women’s Conference for 7th -to-10th-grade girls, as well as PPPL’s high school internships, college internships through the DOE’s Science Undergraduate Laboratory Internship (SULI) and Community College Internship and other programs. Arturo Dominguez, Science Education Program Leader, showed Perry the Remote Glow Discharge Experiment (RGDX), which allows anyone in the world to learn about plasma through a remote access plasma experiment.

“The coolest job”

In introducing Perry to a packed crowd in the Auditorium and cafeteria, Cowley noted that Perry has called the job of being Energy Secretary “the coolest job” he’s ever had.

In his remarks, Eisgruber said PPPL’s mission intersects with the University’s mission. “Princeton University has always been a place of innovation, a place where we tackle problems in novel and innovative ways,” he said. “Cracking the code on fusion could crack the code on the energy future of the world. Princeton University is proud to be part of that endeavor.”

Perry teased Cowley about the director’s recent knighthood. Perry said that he often visits England but prefers the hot climate of Texas. He was stationed near Cambridge while in the Air Force and his father served in England as well.

Perry gave a bit of his own history. He grew up on a tenant farm 16 miles from the nearest post office and 200 miles from Fort Worth. He rarely left home until he went to Texas A&M University. After serving in the Air Force, he went on to become the governor of Texas, which he said has the 12th largest economy in the world, for 14 years. Perry said being governor was “the best job he ever had.” But being Secretary of Energy is “the coolest,” he said, because “I get to work with some of the coolest people in the world.”

A shout out to Science Education staff

Perry was particularly impressed by PPPL’s science education programs. He gave a shout out to Shannon Greco, a program leader in Science Education, and Deedee Ortiz, the program manager. “When I go back to Texas, I’m going to know there are people here that are passionate, that are potentially changing the world with what you do with that program,” Perry said.

Science education programs should not only be funded adequately but should also be better publicized so that Americans “understand how important it is for us to have this pipeline of engineers and scientists and technicians coming in.” “We’re at a juncture in this world, particularly when it comes to nuclear energy and fusion energy, when we have to make sure that we have the brain power,” he added. “That’s one of our great challenges.”

Discussing ITER views

At the end of his remarks, Perry answered questions from PPPL staff. The first question was how he views the international ITER experiment in the south of France, which is funded by the United States and other countries. Perry said that the project was “poorly managed” and “was not well run” in the past. However, Bernard Bigot, the current director-general of the ITER Organization, “has done a very good job managing the construction of it and now they’re on track,” Perry said. He said he recently visited the site and feels “more comfortable” with the progress of the experiment. However, “That’s not to say all is well and here’s the check and fill out whatever amount you need.”

Perry was also asked his thoughts about private efforts such as TAE Technologies (formerly Tri Alpha Energy) to develop fusion energy, and whether the DOE would expand funding for such private enterprises. Perry said that he couldn’t comment on TAE specifically but he is generally “a big believer, a big supporter of public/private partnerships.” “There are people who don’t think the government needs to do anything,” he said. “I’m not one of those. We need to be smart about it, we need to be thoughtful about it, we need to bring Steve Cowley in and have him say, ‘this one looks pretty good.’”

Perry said that fusion energy is just one example of scientific research supported by the DOE that could change the future. “We think about fusion and clean energy and harnessing the power of the sun and the stars but all of these things come along when America really focuses on science and technology and we fund it and celebrate it,” he said. “That’s the beauty and greatness of what this is all about.”

PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Headline: 
Energy Secretary Rick Perry cheers on fusion energy, science education in visit to PPPL

          Assistant/Associate Web Development Professor - Computing and New Media Technologies (CNMT) - UW Stevens Point - Stevens Point, WI      Cache   Translate Page   Web Page Cache   
Tim Krause, Computing and New Media Technologies Chair. Position Title: Web Development Professor....
From University of Wisconsin System - Thu, 31 May 2018 19:32:31 GMT - View all Stevens Point, WI jobs
          A Distributed Classifier for MicroRNA Target Prediction with Validation Through TCGA Expression Data      Cache   Translate Page   Web Page Cache   
Background: MicroRNAs (miRNAs) are approximately 22-nucleotide-long regulatory RNAs that mediate RNA interference by binding to cognate mRNA target regions. Here, we present a distributed kernel-SVM-based binary classification scheme to predict miRNA targets. It captures the spatial profile of miRNA-mRNA interactions via smooth B-spline curves, fit separately for each family of input features, such as thermodynamic and sequence-based features. Further, we use a principled approach to uniformly model both canonical and non-canonical seed matches, using a novel seed enrichment metric. Finally, we verify our miRNA-mRNA pairings using an Elastic Net-based regression model on TCGA expression data for four cancer types to estimate the miRNAs that together regulate any given mRNA. Results: We present a suite of algorithms for miRNA target prediction, under the banner Avishkar, with superior prediction performance over the competition. Specifically, our final kernel SVM model, with an Apache Spark backend, achieves an average true positive rate (TPR) of more than 75 percent, at a false positive rate of 20 percent, for non-canonical human miRNA target sites. This is an improvement of over 150 percent in TPR for non-canonical sites over the best-in-class algorithm. We achieve this performance by representing the thermodynamic and sequence profiles of miRNA-mRNA interaction as curves, devising a novel seed enrichment metric, and learning an ensemble of miRNA family-specific kernel SVM classifiers. We provide an easy-to-use system for large-scale interactive analysis and prediction of miRNA targets. All operations in our system (candidate-set generation, feature generation and transformation, training, prediction, and computation of performance metrics) are fully distributed and scalable.
Conclusions: We have developed an efficient SVM-based model for miRNA target prediction using recent CLIP-seq data, demonstrating superior performance as evaluated by ROC curves across species (human and mouse) and target types (canonical and non-canonical). We also analyzed the agreement between target pairings derived from CLIP-seq data and from expression data for four cancer types. To the best of our knowledge, we provide the first distributed framework for miRNA target prediction based on Apache Hadoop and Spark. Availability: All source code and sample data are publicly available at https://bitbucket.org/cellsandmachines/avishkar. Our scalable implementation of kernel SVM using Apache Spark, which can be used to solve large-scale non-linear binary classification problems, is available at https://bitbucket.org/cellsandmachines/kernelsvmspark.
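The core classification step described in the abstract can be illustrated with a small, self-contained sketch. The code below is not the Avishkar implementation: it uses a dual-form kernel perceptron as a stand-in for the paper's kernel SVM solver, runs on a single machine rather than Spark, and the two features (a "seed enrichment" score and a "free energy" summary) are fabricated toy values, not real CLIP-seq-derived features.

```python
import numpy as np

rng = np.random.default_rng(42)

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_perceptron(X, y, gamma=1.0, epochs=20):
    """Dual-form perceptron with an RBF kernel; labels y are in {-1, +1}."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            # Misclassified (or zero-score) point: increase its dual weight.
            if np.sign(K[i] @ (alpha * y)) != y[i]:
                alpha[i] += 1.0
    return alpha

def predict(X_train, y_train, alpha, X_new, gamma=1.0):
    K = rbf_kernel(X_new, X_train, gamma)
    return np.sign(K @ (alpha * y_train))

# Toy candidate sites: the two columns are illustrative stand-ins for a
# seed-match enrichment score and a binding free-energy summary.
pos = rng.normal([2.0, -8.0], 0.5, size=(30, 2))   # "functional target" sites
neg = rng.normal([0.5, -2.0], 0.5, size=(30, 2))   # "non-target" sites
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(30), -np.ones(30)])

alpha = train_kernel_perceptron(X, y, gamma=0.5)
pred = predict(X, y, alpha, X, gamma=0.5)
print(f"training accuracy: {np.mean(pred == y):.2f}")
```

The dual formulation is what makes the kernel trick work here: the decision function depends on the data only through pairwise kernel evaluations, which is also what allows implementations such as the paper's to distribute the kernel computation.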
          Structural Target Controllability of Linear Networks      Cache   Translate Page   Web Page Cache   
Computational analysis of the structure of intra-cellular molecular interaction networks can suggest novel therapeutic approaches for systemic diseases like cancer. Recent research in the area of network science has shown that network control theory can be a powerful tool in the understanding and manipulation of such bio-medical networks. In 2011, Liu et al. developed a polynomial-time algorithm for computing the size of a minimal set of nodes controlling a linear network. In 2014, Gao et al. generalized the problem to target control, minimizing the set of nodes controlling a given target set within a linear network. The authors developed a greedy approximation algorithm while leaving the complexity of the optimization problem open. We prove here that the target controllability problem is NP-hard in all practical setups, i.e., when the control power of any individual input is bounded by some constant. We also show that the algorithm of Gao et al. fails to provide a valid solution in some special cases, so an additional validation step is required. We fix and improve their algorithm using several heuristics, obtaining in the end an up to 10-fold decrease in running time as well as a decrease in the size of the solutions.
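The notion of target controllability in the abstract can be made concrete with a numerical Kalman-style rank test: a target set is (output-)controllable when the rows of the controllability matrix restricted to the target nodes have full row rank. The sketch below is illustrative, not the paper's algorithm; the function name and the toy chain network are made up, and random nonzero edge weights stand in for a generic realization of the structured system (for which the numeric rank equals the generic rank with probability 1).

```python
import numpy as np

def target_controllable(A, B, targets):
    """Kalman-style rank test for target (output) controllability.

    A: (n, n) dynamics matrix of the linear network dx/dt = A x + B u,
    B: (n, m) input matrix, targets: indices of nodes to steer.
    The target set is controllable iff the rows of the controllability
    matrix [B, AB, ..., A^(n-1)B] restricted to the targets have full
    row rank, i.e. rank == len(targets).
    """
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    ctrb = np.hstack(blocks)                    # n x (n*m) controllability matrix
    return np.linalg.matrix_rank(ctrb[targets, :]) == len(targets)

# A 4-node directed chain 1 -> 2 -> 3 -> 4, driven only at node 1.
rng = np.random.default_rng(1)
A = np.zeros((4, 4))
for i in range(3):
    A[i + 1, i] = rng.uniform(0.5, 1.5)        # edge i -> i+1, generic weight
B = np.zeros((4, 1))
B[0, 0] = 1.0                                   # single driver node

print(target_controllable(A, B, [1, 3]))        # two targets down the chain: True
print(target_controllable(A, B, [0, 1, 2, 3]))  # the whole network: True
```

The NP-hardness result in the paper concerns choosing a *minimum* driver set achieving this rank condition; the check above only verifies a given driver/target configuration.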
          Microservices Monitor Prometheus Emerges from CNCF Incubation      Cache   Translate Page   Web Page Cache   

Software originally created by SoundCloud to monitor a complex set of dynamically provisioned software services has graduated from an incubation program sponsored by the Cloud Native Computing Foundation (CNCF). Prometheus is the second open source software program to graduate from the CNCF, following the Kubernetes open source container orchestration engine, originally developed by Google. The CNCF announced the graduation at the annual […]

The post Microservices Monitor Prometheus Emerges from CNCF Incubation appeared first on The New Stack.


          VMware App Volumes Technical Overview      Cache   Translate Page   Web Page Cache   
This brief video is a technical overview of VMware App Volumes. Learn what App Volumes does, how it does it, and understand its architecture. VMware End-User Computing (EUC) solutions empower the digital workspace by simplifying app & access management, unifying endpoint management & transforming Windows delivery. To learn more…



          (USA-VA-Hampton) Electrical Controls Designer      Cache   Translate Page   Web Page Cache   
**Electrical Controls Designer**

**Description**

Jacobs has partnered with NASA to support space flight programs for more than 40 years. While the majority of work is directly in support of LaRC, other industry partners and Government agencies may be supported at remote sites as directed by the Contracting Officer (CO). The Center Maintenance, Operations, and Engineering (CMOE) contract comprises three major categories of work: operations, maintenance, and engineering (OME), as described below.

* Maintenance – Research and institutional facility maintenance includes, but is not limited to, preventive maintenance, trouble calls, repairs, Reliability Centered Maintenance (RCM), Facility Condition Assessment (FCA), and maintenance/operation of central utilities (e.g., electrical power distribution, potable water, storm water drainage)
* Operations – Facilities operations support includes, but is not limited to, wind tunnel, laboratory, and test stand testing; instrumentation calibration/repair; plant and utilities (e.g., high pressure air, Liquid Nitrogen (LN2), and Steam Plant); and technology development and administration, including, but not limited to, Facility Automation Systems (FAS) and Data Acquisition Systems (DAS)
* Engineering – Facility engineering includes, but is not limited to, engineering studies, design, construction, construction management, configuration management, tactical engineering, pressure system recertification, and project management/planning support

Responsibilities:

* Provide technical controls and instrumentation system designs and drafting; coordinate work with project management, senior engineers, and drafters
* Perform creative design from general concepts, basic and independent engineering analysis, liaison, and field monitoring
* Perform complex drafting assignments with general instructions, working from furnished or self-generated computations and sketches
* Evaluate functional feasibility of designs and their conformance to specifications; use company and/or client drafting standards and specifications, engineering computations, and sketches to assess the accuracy and acceptability of drawings prepared by other drafters or designers; instruct originators in the correction of noted deficiencies and certify completeness of final drawing packages
* Prepare sketches to be drawn by others; maintain drafting standards and suggest new standards or changes to existing standards; operate CAD and Revit software proficiently in the preparation of design documents
* Provide technical guidance to the assigned team, assisting senior engineers, senior designers, and project managers in providing design support to projects, including performing electrical controls assignments and working from designs of others; prepare design concepts/layouts; independently perform basic engineering analysis to size and select equipment and materials
* Compile data, perform elementary design computations, prepare estimates, prepare equipment arrangement plans and conduit/cabling diagrams, select components and develop bills of materials, develop controls and instrumentation schematics, ladder logic diagrams, and interconnection wiring diagrams, support P&ID development, PLC controls, I/O lists, instrument devices, and control panel assembly and layout diagrams, consult manufacturers, perform vendor research, evaluate materials, write specifications, prepare construction cost estimates, provide professional electrical/controls engineering support within allocated task resources, and work under supervision. Aut