
Why Prometheus Is a Monitoring Powerhouse Capable of Replacing Zabbix


This article is republished with permission from the dbaplus community.

1. Introduction

Since it was open-sourced in 2014, Kubernetes has unstoppably become the leader in container scheduling and orchestration. Kubernetes is the open-source implementation of Google's Borg system; correspondingly, Prometheus is the open-source implementation of Google's BorgMon. Prometheus is an open-source monitoring and alerting system with a time-series database, originally developed at SoundCloud. As the name of its two halves suggests, Prometheus consists of a monitoring and alerting system plus a built-in time-series database (TSDB).
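As a minimal sketch of how an application exposes metrics for Prometheus to scrape (assuming the official Python client library, prometheus_client, is installed; the metric names, port, and update loop below are illustrative choices, not something from the article):

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Metrics that a Prometheus server can scrape from http://localhost:8000/metrics
REQUESTS_TOTAL = Counter("demo_requests_total", "Total number of handled requests")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)          # expose the /metrics endpoint
    while True:
        REQUESTS_TOTAL.inc()         # count one more request
        QUEUE_DEPTH.set(random.randint(0, 10))
        time.sleep(1)
```

A Prometheus server configured with `localhost:8000` as a scrape target would pull these samples into its TSDB, where they can be queried and fed into alerting rules.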

In 2016, the Cloud Native Computing Foundation (CNCF), founded under the Linux Foundation at Google's initiative, accepted Prometheus as its second hosted open-source project. Prometheus is also very active in the open-source community: it has more than 20,000 stars on GitHub, and a minor release ships roughly every one to two weeks.

2. Comparison of Monitoring Tools

In fact, many monitoring systems were already on the market before Prometheus, such as Zabbix, Open-Falcon, and Nagios. So how does Prometheus compare with them? Let's first briefly review these systems.

1. Zabbix

Zabbix is a distributed monitoring system open-sourced by Alexei Vladishev. It supports multiple collection methods and collection agents, along with protocols such as SNMP, IPMI, JMX, Telnet, and SSH. Collected data is stored in a database and then analyzed and organized; if an alerting rule is matched, the corresponding alert is triggered.


          

Now-Crowd Billboards - Office Life


Time to go to work and make a living! The new Now-Crowd Billboards - Office Life will make your office renders look like the hot new startup or the stodgy old executive office! Paint your offices, conference rooms, and hallways FULL of people walking, talking, computing, and planning in high-quality, flexible, and easy-to-use billboards... perfect as a backdrop for the main action that you add!

Billboards are a fantastically quick and resource-light way to add background crowds to your scenes. These pre-rendered elements are great for playing the background characters in your scene. The Now-Crowd Billboards improve on standard billboards by providing 72 different views for every billboard figure. Change a billboard figure to 12 different horizontal angles and 6 different vertical angles. When your camera moves, the Now-Crowd Billboards can be changed to match!

There are also great scripts to make these billboards easy to use! There is one script to turn the billboards to face the camera and another script to change billboard figure angle based on horizontal and vertical orientation. Finally, there is a script that pushes your billboard directly away from your camera view or pulls it towards you.

For Iray and 3Delight

No figures are included with this product.

No other products are needed to use the Now-Crowd Billboards - Office Life.

Note that Now-Crowd Billboard products are large (10GB) because of the many textures (72 images per figure x number of figures) and can take a while to download.

Price: $29.95 Special Price: $14.98


          

Qubit with Matthew Tamsett and Ravi Upreti


Our guests Matthew Tamsett and Ravi Upreti join Gabi Ferrara and Aja Hammerly to talk about data science and their project, Qubit. Qubit helps web companies by measuring different user experiences, analyzing that information, and using it to improve the website. They also use the collected data along with ML to predict things, such as which products users will prefer, in order to provide a customized website experience.

Matthew talks a little about his time at CERN and his transition from working in academia to industry. It’s actually fairly common for physicists to branch out into data science and high performance computing, Matthew explains. Later, Ravi and Matthew talk GCP shop with us, explaining how they moved Qubit to GCP and why. Using PubSub, BigQuery, and BigQuery ML, they can provide their customers with real-time solutions, which allows for more reactive personalization. Data can be analyzed and updates can be created and pushed much faster with GCP. Autoscaling and cloud management services provided by GCP have given the data scientists at Qubit back their sleep!

Matthew Tamsett

Matthew was trained in experimental particle physics at Royal Holloway University of London, and did his Ph.D. on the use of leptonic triggers for the detection of supersymmetric signals at the ATLAS detector at CERN. Following this, he completed three postdoctoral positions at CERN and on the neutrino experiment NOvA at Louisiana Tech University, Brookhaven National Laboratory, New York, and the University of Sussex, UK, culminating in an EU Marie Curie fellowship. During this time, Matt co-authored many papers, including playing a minor part in the discovery of the Higgs boson. Since leaving academia in 2016, he's worked at Qubit as a data scientist and later as lead data scientist, where he led a team working to improve the online shopping experience via the use of personalization, statistics, and predictive modeling.

Ravi Upreti

Ravi has been working with Qubit for almost 4 years now and leads the platform engineering team there. He learned distributed computing, parallel algorithms, and extreme computing at Edinburgh University. His four-year stint at Ocado helped him develop strong domain knowledge in e-commerce, along with deep technical knowledge. Now it has all come together, as he gets to apply all these learnings to Qubit, at scale.

Cool things of the week
  • A developer goes to a DevOps conference blog
  • Cloud Build brings advanced CI/CD capabilities to GitHub blog
  • Cloud Build called out in Forrester Wave twitter
  • 6 strategies for scaling your serverless applications blog
Interview
  • Qubit site
  • Qubit Blog blog
  • Pub/Sub site
  • BigQuery site
  • BigQuery ML site
  • Cloud Datastore site
  • Cloud Memorystore site
  • Cloud Bigtable site
  • Cloud SQL site
  • Cloud AutoML site
  • Goodbye Hadoop. Building a streaming data processing pipeline on Google Cloud blog
Question of the week

How do you deploy a Windows container on GKE?

Where can you find us next?

Gabi will be at the Google Cloud Summit in Sao Paulo, Brazil.

Aja will be at Cloud Next London.

Sound Effect Attribution

          

Non-disruptive network maintenance Thursday, Oct. 10

WHAT ARE WE DOING? Computing will be upgrading the firmware of a distribution router located at the Grid Computing Center. During this time, network traffic will be routed via a redundant network path.

WHEN WILL THIS OCCUR? Thursday, Oct. 10; 7 a.m. to 8 a.m.

WHAT IS THE IMPACT TO YOU? This work is expected to be transparent to users.

WHAT DO YOU NEED TO DO? You don't need to take any action. This is for your information only. Please...
          

U.S. researchers on front line of battle against Chinese theft


WASHINGTON – As the U.S. warned allies around the world that Chinese tech giant Huawei was a security threat, the FBI was making the same point quietly to a Midwestern university.

In an email to the associate vice chancellor for research at the University of Illinois-Urbana-Champaign, an agent wanted to know if administrators believed Huawei had stolen any intellectual property from the school.

Told no, the agent responded: “I assumed those would be your answers, but I had to ask.”

It was no random query.

The FBI has been reaching out to universities across the country as the U.S. tries to stem what American authorities portray as the wholesale theft of technology and trade secrets by researchers tapped by China. The breadth and intensity of the campaign emerges in emails obtained by The Associated Press through records requests to public universities in 50 states.

Agents have lectured at seminars, briefed administrators in campus meetings and distributed pamphlets with cautionary tales of trade secret theft. In the past two years, they’ve requested emails of two University of Washington researchers, asked Oklahoma State University if it has scientists in specific areas and asked about “possible misuse” of research funds by a University of Colorado Boulder professor, according to the emails.

The emails reveal administrators routinely requesting FBI briefings. But they also show some struggling to balance legitimate national security concerns against their own eagerness to avoid stifling research or tarnishing legitimate scientists. The Justice Department says it appreciates that push-pull and wants only to help separate the relatively few researchers engaged in theft from the majority who are not.

Senior FBI officials told AP they’re not encouraging schools to monitor researchers by nationality but instead to take steps to protect research. They consider the briefings vital since they say universities haven’t historically been as attentive to security as they should be.

“When we go to the universities, what we’re trying to do is highlight the risk to them without discouraging them from welcoming the researchers and students from a country like China,” said Assistant Attorney General John Demers, the Justice Department’s top national security official.

The threat, officials say, is genuine. A University of Kansas researcher was recently charged with collecting federal grant money while working full-time for a Chinese university, and a Chinese government employee was arrested in a visa fraud scheme allegedly aimed at recruiting U.S. research talent. Last year the Justice Department launched an effort called the China Initiative, aimed at identifying priority trade secret cases and focusing resources on them.

“Existentially, we look at China as our greatest threat from an intelligence perspective, and they succeeded significantly in the last decade from stealing our best and brightest technology,” said top U.S. counterintelligence official William Evanina.

The most consequential case this year centered not on a university but on Huawei, charged with stealing corporate trade secrets and evading sanctions. The company denies wrongdoing. Several universities including Illinois, which received the FBI email last February, have begun severing ties with Huawei.

But the government’s track record hasn’t been perfect.

Federal prosecutors in 2015 dropped charges against a Temple University professor earlier accused of sending designs for a pocket heater to China. The professor, Xiaoxing Xi, is suing the FBI. “It was totally wrong,” he said, “so I can only speak from my experience that whatever they put out there is not necessarily true.”

Richard Wood, the then-interim provost at the University of New Mexico, conveyed ambivalence in an email to colleagues last year. He wrote that he took seriously the concerns the FBI had identified to him in briefings, but also said “there are real tensions” with the “traditional academic norms regarding the free exchange of scientific knowledge wherever appropriate.”

“I do not think we would be wise to create new ‘policy’ on terrain this complex and fraught with internal trade-offs between legitimate concerns and values without some real dialogue on the matter,” Wood wrote.

FBI officials say they’ve received consistently positive feedback from universities. The emails show administrators at schools including the University of North Carolina-Chapel Hill and Nebraska requesting briefings, training or generally expressing eagerness for cooperation.

Kevin Gamache, chief research security officer for the Texas A&M University system, told the AP that he values his FBI interactions and that it flows in both directions.

“It’s a dialogue that has to be ongoing.”

The vice president for research and economic development at the University of Nevada, Las Vegas welcomed the assistance in a city she noted was the “birthplace of atomic testing.

“We have a world-class radiochemistry faculty, our College of Engineering has significant numbers of faculty and students from China, and we have several other issues of concern to me as VPR. In all of these cases, the FBI is always available to help,” the administrator wrote to agents.

More than two dozen universities produced records, including symposium itineraries and a 13-page FBI pamphlet titled “China: The Risk to Academia” that warns that China does “not play by the same rules of academic integrity” as American universities.

Some emails show agents seeking tips or following leads.

“If you have concerns about any faculty or graduate researchers, students, outside vendors … pretty much anything we previously discussed – just reminding you that I am here to help,” one wrote to Iowa State officials in 2017.

In May, an agent sent the University of Washington a records request for two researchers’ emails, seeking references to Chinese-government talent recruitment programs.

Last year, an agent asked Oklahoma State University if it had researchers in encryption research or quantum computing. The University of Colorado received an FBI request about an “internal investigation” into a professor’s “possible misuse” of NIH funds. The school told the AP that it found no misconduct and the professor has resigned.

Though espionage concerns aren’t new, FBI officials report an uptick in targeting of universities and more U.S. government attention too. The FBI says it’s seen some progress from universities, with one official saying schools are more reliably pressing researchers about outside funding sources.

Demers, the Justice Department official, said espionage efforts are “as pervasive, as well-resourced, as ever today.

“It’s a serious problem today on college campuses.”


          

Chrome OS: Tips, tools, and other Chromebook intelligence


Google's Chrome OS platform sure has come a long way.

From the early days, when Chrome OS was little more than an experimental "browser in a box," to today — with the platform powering first-class hardware and supporting a diverse range of productivity applications — Google's once-crazy-seeming project has turned into one of the world's most intriguing and rapidly expanding technological forces.

I've been covering Chrome OS closely since the start. I lived with the first Chromebook prototype, the Cr-48, and have used Chromebooks as part of my own personal computing setup in varying capacities ever since. I write about the field not only as someone who's studied it professionally from day 1 but also as someone who has used it personally that entire time, up through today.



          

LawNext Episode 53: The AI Behind ROSS, with CTO Jimoh Ovbiagele and Head of Engineering Stergios Anastasiadis 

Back in LawNext Episode 48, I traveled to Toronto to record a live interview with the founders of the AI-driven legal research platform ROSS Intelligence, CEO Andrew Arruda and CTO Jimoh Ovbiagele, in which they discussed the company’s rise from startup in 2014 to a more mature and established company. In this second interview recorded in Toronto, we take a deep dive into the artificial-intelligence technology that drives the ROSS legal research platform. For this interview, CTO Ovbiagele returns, joined this time by Stergios Anastasiadis, head of engineering at ROSS. They discuss why they believe ROSS’s AI technology is unique, how they see AI changing the legal industry, and what’s ahead for ROSS and AI in law. Ovbiagele is a computer scientist who was one of the three original founders of ROSS in 2014, when it emerged out of a cognitive-computing competition at the University of Toronto. Anastasiadis is a computer scientist who joined the company earlier…
          

WD PC & Mobile laptop external and internal hard drive 500GB

Price: 170 SAR; Brand: Western Digital; Condition: New.
FOR EVERYDAY COMPUTING WHILE ON THE GO

Built to WD’s high standards of quality and reliability, WD Blue mobile hard drives offer the features that are ideal for your everyday mobile computing needs. 

A Modern Classic

Designed and manufactured with technology found in WD’s original award-winning d... https://olx.sa.com/ad/wd-pc-mobile-laptop-external-and-internal-hard-drive-500gb-ID6Ockg.html
          

StarTech.com TB3CDK2DP Thunderbolt 3 and USB-C hybrid dock is magical [Review]

Unless you are a gamer or enthusiast, owning a desktop computer these days is sort of, well... stupid. Look, even if you do most of your computing at a desk, you should still buy a laptop. Why? Think about it -- a desktop keeps you tethered to one place, while a notebook is portable. Thanks to Thunderbolt 3 and USB-C, you can use your laptop as a makeshift desktop by using a docking station. In other words, you can connect your notebook to a monitor, keyboard, mouse, web cam, external hard drive -- pretty much anything you need. The dock…
          

US and UK sign deal to speed up electronic evidence collection from tech firms in serious criminal cases - www.computing.co.uk

US and UK sign deal to speed up electronic evidence collection from tech firms in serious criminal cases  www.computing.co.uk
          

Oracle to hire 2,000 workers to expand cloud business to more countries

Oracle Corp plans to hire nearly 2,000 additional workers as part of an aggressive plan to roll out its cloud computing services to more locations around the world, its cloud chief told Reuters on Monday.

          

Microsoft Surface Laptop 3 Takes Aim at Premium Ultralights


Microsoft clearly isn't afraid to go toe-to-toe with either its OEMs or Apple in the Ultralight notebook market, as it unleashes its third generation of Surface Laptop. The Surface Laptop 3 features upgrades all around, and a new 15-inch model.

The post Microsoft Surface Laptop 3 Takes Aim at Premium Ultralights appeared first on ExtremeTech.


          

[AN #67]: Creating environments in which to study inner alignment failures

Published on October 7, 2019 5:10 PM UTC

Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.

Audio version here (may not be up yet).

Highlights

Towards an empirical investigation of inner alignment (Evan Hubinger) (summarized by Rohin): Last week, we saw that the worrying thing about mesa optimizers (AN #58) was that they could have robust capabilities, but not robust alignment (AN #66). This leads to an inner alignment failure: the agent will take competent, highly-optimized actions in pursuit of a goal that you didn't want.

This post proposes that we empirically investigate what kinds of mesa objective functions are likely to be learned, by trying to construct mesa optimizers. To do this, we need two ingredients: first, an environment in which there are many distinct proxies that lead to good behavior on the training environment, and second, an architecture that will actually learn a model that is itself performing search, so that it has robust capabilities. Then, the experiment is simple: train the model using deep RL, and investigate its behavior off distribution to distinguish between the various possible proxy reward functions it could have learned. (The next summary has an example.)

Some desirable properties:

- The proxies should not be identical on the training distribution.

- There shouldn't be too many reasonable proxies, since then it would be hard to identify which proxy was learned by the neural net.

- Proxies should differ on "interesting" properties, such as how hard the proxy is to compute from the model's observations, so that we can figure out how a particular property influences whether the proxy will be learned by the model.

Rohin's opinion: I'm very excited by this general line of research: in fact, I developed my own proposal along the same lines. As a result, I have a lot of opinions, many of which I wrote up in this comment, but I'll give a summary here.

I agree pretty strongly with the high level details (focusing on robust capabilities without robust alignment, identifying multiple proxies as the key issue, and focusing on environment design and architecture choice as the hard problems). I do differ in the details though. I'm more interested in producing a compelling example of mesa optimization, and so I care about having a sufficiently complex environment, like Minecraft. I also don't expect there to be a "part" of the neural net that is actually computing the mesa objective; I simply expect that the heuristics learned by the neural net will be consistent with optimization of some proxy reward function. As a result, I'm less excited about studying properties like "how hard is the mesa objective to compute".

A simple environment for showing mesa misalignment (Matthew Barnett) (summarized by Rohin): This post proposes a concrete environment in which we can run the experiments suggested in the previous post. The environment is a maze which contains keys and chests. The true objective is to open chests, but opening a chest requires you to already have a key (and uses up the key). During training, there will be far fewer keys than chests, and so we would expect the learned model to develop an "urge" to pick up keys. If we then test it in mazes with lots of keys, it would go around competently picking up keys while potentially ignoring chests, which would count as a failure of inner alignment. This predicted behavior is similar to how humans developed an "urge" for food because food was scarce in the ancestral environment, even though now food is abundant.
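For concreteness, here is a minimal, hypothetical sketch of such a keys-and-chests environment; the grid layout, reward values, and class/method names are illustrative assumptions rather than anything specified in the post:

```python
import random

class KeysAndChests:
    """Toy maze: the true objective is opening chests, but keys are scarce in training."""

    def __init__(self, size=8, n_keys=1, n_chests=5, max_steps=50):
        self.size, self.n_keys, self.n_chests, self.max_steps = size, n_keys, n_chests, max_steps
        self.reset()

    def reset(self):
        cells = random.sample(range(self.size * self.size), 1 + self.n_keys + self.n_chests)
        self.agent, rest = cells[0], cells[1:]
        self.keys = set(rest[:self.n_keys])
        self.chests = set(rest[self.n_keys:])
        self.keys_held, self.steps = 0, 0
        return self._obs()

    def step(self, action):  # 0: up, 1: down, 2: left, 3: right
        row, col = divmod(self.agent, self.size)
        drow, dcol = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        row = min(max(row + drow, 0), self.size - 1)
        col = min(max(col + dcol, 0), self.size - 1)
        self.agent = row * self.size + col

        reward = 0.0
        if self.agent in self.keys:                  # picking up a key gives no reward itself
            self.keys.remove(self.agent)
            self.keys_held += 1
        if self.agent in self.chests and self.keys_held > 0:
            self.chests.remove(self.agent)           # opening a chest consumes a key
            self.keys_held -= 1
            reward = 1.0                             # the *true* objective
        self.steps += 1
        done = self.steps >= self.max_steps or not self.chests
        return self._obs(), reward, done

    def _obs(self):
        return (self.agent, frozenset(self.keys), frozenset(self.chests), self.keys_held)

# Training distribution: keys scarce, e.g. KeysAndChests(n_keys=1, n_chests=5).
# Test distribution for the inner-alignment probe: keys abundant, e.g. KeysAndChests(n_keys=10, n_chests=2).
```

Training on the key-scarce mazes and then evaluating on key-rich ones is what would reveal a policy that has learned "collect keys" as its proxy objective rather than "open chests."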

Rohin's opinion: While I would prefer a more complex environment to make a more compelling case that this will be a problem in realistic environments, I do think that this would be a great environment to start testing in. In general, I like the pattern of "the true objective is Y, but during training you need to do X to get Y": it seems particularly likely that even current systems would learn to competently pursue X in such a situation.

Technical AI alignment

Iterated amplification

Machine Learning Projects on IDA (Owain Evans et al) (summarized by Nicholas): This document describes three suggested projects building on Iterated Distillation and Amplification (IDA), a method for training ML systems while preserving alignment. The first project is to apply IDA to solving mathematical problems. The second is to apply IDA to neural program interpretation, the problem of replicating the internal behavior of other programs as well as their outputs. The third is to experiment with adaptive computation where computational power is directed to where it is most useful. For each project, they also include motivation, directions, and related work.

Nicholas's opinion: Figuring out an interesting and useful project to work on is one of the major challenges of any research project, and it may require a distinct skill set from the project's implementation. As a result, I appreciate the authors enabling other researchers to jump straight into solving the problems. Given how detailed the motivation, instructions, and related work are, this document strikes me as an excellent way for someone to begin her first research project on IDA or AI safety more broadly. Additionally, while there are many public explanations of IDA, I found this to be one of the most clear and complete descriptions I have read.

Read more: Alignment Forum summary post

List of resolved confusions about IDA (Wei Dai) (summarized by Rohin): This is a useful post clarifying some of the terms around IDA. I'm not summarizing it because each point is already quite short.

Mesa optimization

Concrete experiments in inner alignment (Evan Hubinger) (summarized by Matthew): While the highlighted posts above go into detail about one particular experiment that could clarify the inner alignment problem, this post briefly lays out several experiments that could be useful. One example experiment is giving an RL trained agent direct access to its reward as part of its observation. During testing, we could try putting the model in a confusing situation by altering its observed reward so that it doesn't match the real one. The hope is that we could gain insight into when RL trained agents internally represent 'goals' and how they relate to the environment, if they do at all. You'll have to read the post to see all the experiments.

Matthew's opinion: I'm currently convinced that doing empirical work right now will help us understand mesa optimization, and this was one of the posts that led me to that conclusion. I'm still a bit skeptical that current techniques are sufficient to demonstrate the type of powerful learned search algorithms which could characterize the worst outcomes for failures in inner alignment. Regardless, I think at this point classifying failure modes is quite beneficial, and conducting tests like the ones in this post will make that a lot easier.

Learning human intent

Fine-Tuning GPT-2 from Human Preferences (Daniel M. Ziegler et al) (summarized by Sudhanshu): This blog post and its associated paper describe the results of several text generation/continuation experiments, where human feedback on initial/older samples was used in the form of a reinforcement learning reward signal to finetune the base 774-million parameter GPT-2 language model (AN #46). The key motivation here was to understand whether interactions with humans can help algorithms better learn and adapt to human preferences in natural language generation tasks.
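As a rough illustration of the reward-model half of this setup (a sketch under the assumption of simple pairwise comparisons; the paper actually collects comparisons among four samples, and its exact code differs), a scalar reward model is typically fit to human preferences with a loss of roughly this form:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss for a reward model trained on human comparisons.

    reward_chosen / reward_rejected are scalar reward-model outputs for the
    continuation the human preferred vs. the one they rejected.
    """
    # Maximize the log-probability that the preferred sample wins, i.e.
    # minimize -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative usage with dummy reward-model outputs for a batch of 4 comparisons.
r_chosen = torch.randn(4)
r_rejected = torch.randn(4)
print(preference_loss(r_chosen, r_rejected))
```

The fitted reward model then supplies the RL reward used to finetune the language model's policy.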

They report mixed results. For the tasks of continuing text with positive sentiment or physically descriptive language, they report improved performance above the baseline (as assessed by external examiners) after fine-tuning on only 5,000 human judgments of samples generated from the base model. The summarization task required 60,000 samples of online human feedback to perform similarly to a simple baseline, lead-3 - which returns the first three sentences as the summary - as assessed by humans.

Some of the lessons learned while performing this research include 1) the need for better, less ambiguous tasks and labelling protocols for sourcing higher quality annotations, and 2) a reminder that "bugs can optimize for bad behaviour", as a sign error propagated through the training process to generate "not gibberish but maximally bad output". The work concludes on the note that it is a step towards scalable AI alignment methods such as debate and amplification.

Sudhanshu's opinion: It is good to see research on mainstream NLProc/ML tasks that includes discussions on challenges, failure modes and relevance to the broader motivating goals of AI research.

The work opens up interesting avenues within OpenAI's alignment agenda, for example learning a diversity of preferences (A OR B), or a hierarchy of preferences (A AND B) sequentially without catastrophic forgetting.

In order to scale, we would want to generate automated labelers through semi-supervised reinforcement learning, to derive the most gains from every piece of human input. The robustness of this needs further empirical and conceptual investigation before we can be confident that such a system can work to form a hierarchy of learners, e.g. in amplification.

Rohin's opinion: One thing I particularly like here is that the evaluation is done by humans. This seems significantly more robust as an evaluation metric than any automated system we could come up with, and I hope that more people use human evaluation in the future.

Read more: Paper: Fine-Tuning Language Models from Human Preferences

Preventing bad behavior

Robust Change Captioning (Dong Huk Park et al) (summarized by Dan H): Safe exploration requires that agents avoid disrupting their environment. Previous work, such as Krakovna et al. (AN #10), penalizes an agent's needless side effects on the environment. For such techniques to work in the real world, agents must also estimate environment disruptions, side effects, and changes while not being distracted by peripheral, inconsequential changes. This paper proposes a dataset to further the study of "Change Captioning," where scene changes are described by a machine learning system in natural language. That is, given before and after images, a system describes the salient change in the scene. Work on systems that can estimate changes will likely advance safe exploration.

Interpretability

Learning Representations by Humans, for Humans (Sophie Hilgard, Nir Rosenfeld et al) (summarized by Asya): Historically, interpretability approaches have involved machines acting as experts, making decisions and generating explanations for their decisions. This paper takes a slightly different approach, instead using machines as advisers who are trying to give the best possible advice to humans, the final decision makers. Models are given input data and trained to generate visual representations based on the data that cause humans to take the best possible actions. In the main experiment in this paper, humans are tasked with deciding whether to approve or deny loans based on details of a loan application. Advising networks generate realistic-looking faces whose expressions represent multivariate information that's important for the loan decision. Humans do better when provided the facial expression 'advice', and furthermore can justify their decisions with analogical reasoning based on the faces, e.g. "x will likely be repaid because x is similar to x', and x' was repaid".

Asya's opinion: This seems to me like a very plausible story for how AI systems get incorporated into human decision-making in the near-term future. I do worry that further down the line, AI systems where AIs are merely advising will get outcompeted by AI systems doing the entire decision-making process. From an interpretability perspective, it also seems to me like having 'advice' that represents complicated multivariate data still hides a lot of reasoning that could be important if we were worried about misaligned AI. I like that the paper emphasizes having humans-in-the-loop during training and presents an effective mechanism for doing gradient descent with human choices.

Rohin's opinion: One interesting thing about this paper is its similarity to Deep RL from Human Preferences: it also trains a human model, that is improved over time by collecting more data from real humans. The difference is that DRLHP produces a model of the human reward function, whereas the model in this paper predicts human actions.

Other progress in AI

Reinforcement learning

The Principle of Unchanged Optimality in Reinforcement Learning Generalization (Alex Irpan and Xingyou Song) (summarized by Flo): In image recognition tasks, there is usually only one label per image, such that there exists an optimal solution that maps every image to the correct label. Good generalization of a model can therefore straightforwardly be defined as a good approximation of the image-to-label mapping for previously unseen data.

In reinforcement learning, our models usually don't map environments to the optimal policy, but states in a given environment to the corresponding optimal action. The optimal action in a state can depend on the environment. This means that there is a tradeoff regarding the performance of a model in different environments.

The authors suggest the principle of unchanged optimality: in a benchmark for generalization in reinforcement learning, there should be at least one policy that is optimal for all environments in the train and test sets. With this in place, generalization does not conflict with good performance in individual environments. If the principle does not initially hold for a given set of environments, we can change that by giving the agent more information. For example, the agent could receive a parameter that indicates which environment it is currently interacting with.

Flo's opinion: I am a bit torn here: On one hand, the principle makes it plausible for us to find the globally optimal solution by solving our task on a finite set of training environments. This way the generalization problem feels more well-defined and amenable to theoretical analysis, which seems useful for advancing our understanding of reinforcement learning.

On the other hand, I don't expect the principle to hold for most real-world problems. For example, in interactions with other adapting agents performance will depend on these agents' policies, which can be hard to infer and change dynamically. This means that the principle of unchanged optimality won't hold without precise information about the other agent's policies, while this information can be very difficult to obtain.

More generally, with this and some of the criticism of the AI safety gridworlds that framed them as an ill-defined benchmark, I am a bit worried that too much focus on very "clean" benchmarks might divert from issues associated with the messiness of the real world. I would have liked to see a more conditional conclusion for the paper, instead of a general principle.



          

Database Administrator - West Virginia Network for Educational Telecomputing (WVNET) - Morgantown, WV

Installs and maintains Oracle web and development products on a variety of platforms, including AIX, Linux, and Windows.
From Indeed - Wed, 11 Sep 2019 18:29:54 GMT - View all Morgantown, WV jobs
          

Comment on Armenia and the technology of diaspora by tereza uloyan

Emin, I know it's hard for you to accept it. However, that is the case.
  • Hovannes Adamian (1879–1932), one of the founders of color television
  • Sergei Adian (b. 1931), one of the most prominent Soviet mathematicians
  • Tateos Agekian (1913–2006), astrophysicist, a pioneer of stellar dynamics
  • Abraham Alikhanov (1904–1970), Soviet physicist, a founder of nuclear physics in the USSR
  • Victor Ambartsumian (1908–1996), astrophysicist, one of the founders of theoretical astrophysics
  • Gurgen Askaryan (1928–1997), physicist, inventor of light self-focusing
  • Boris Babayan (b. 1933), father of supercomputing in the former Soviet Union and Russia
  • Mikhail Chailakhyan (1902–1991), founder of the hormonal theory of plant development
  • Artur Chilingarov (b. 1939), polar explorer, member of the State Duma from 1993 to 2011
  • Bagrat Ioannisiani (1911–1985), designer of the BTA-6, one of the largest telescopes in the world
  • Andronik Iosifyan (1905–1993), father of electromechanics in the USSR, one of the founders of Soviet missilery
  • Alexander Kemurdzhian (1921–2003), designer of the first rovers to explore another world: the first moon rovers and first Mars rovers
  • Semyon Kirlian (1898–1978), founder of Kirlian photography; discovered that living matter emits energy fields
  • Ivan Knunyants (1906–1990), chemist, a major developer of the Soviet chemical weapons program
  • Samvel Kocharyants (1909–1993), developer of nuclear warheads for ballistic missiles
  • Yuri Oganessian (b. 1933), nuclear physicist, the world's leading researcher in superheavy elements
  • Leon Orbeli (1882–1958), founder of evolutionary physiology
  • Mikhail Pogosyan (b. 1956), aerospace engineer, general director of Sukhoi
  • Norair Sisakian (1907–1966), biochemist, a founder of space biology; pioneer in biochemistry of sub-cell structures and technical biochemistry
  • Karen Ter-Martirosian (1922–2005), theoretical physicist, known for his contributions to quantum mechanics and quantum field theory
These are just some of them.
          

Kernel Patch Protection aka "PatchGuard"


Originally posted on: http://brustblog.net/archive/2006/10/30/95540.aspx

If anyone has been following this technology closely, there have been a lot of complaints by some of the security vendors regarding PatchGuard. I first heard about this technology at TechEd 2006 in a lot of the Vista sessions.

The recent controversy caused me to do a little more research in to this technology and the issues surrounding it.

The official name for this technology is Kernel Patch Protection (KPP), and its purpose is to increase the security and stability of the Windows kernel. KPP was first supported in Windows Server 2003 SP1 and Windows XP Professional x64 Edition. The important thing to understand about this support is that it is for x64 architectures only.

KPP is a direct outgrowth of both customer complaints regarding the security and stability of the Windows kernel and Microsoft's Trustworthy Computing initiative, announced in early 2002.

In order to understand the controversy surrounding KPP, it is important to understand what KPP actually is and what aspects of the Windows operating system it deals with.

What is the Kernel?

The kernel is the "heart" of the operating system and is one of the first pieces of code to load when the operating system starts. Everything in Windows (and almost any operating system, for that matter) runs on a layer that sits on top of the kernel. This makes the kernel the primary factor in the performance, reliability and security of the entire operating system.

Since all other programs and many portions of the operating system itself depend on the kernel, any problems in the kernel can make those programs crash or behave in unexpected ways. The "Blue Screen of Death" (BSoD) in Windows is the result of an error in the kernel or a kernel mode driver that is so severe that the system can't recover.

What is Kernel Patching?

According to Microsoft's KPP FAQ, kernel patching (also known as kernel "hooking") is

the practice of using internal system calls and other unsupported mechanisms to modify or replace code or critical structures in the kernel of the Microsoft Windows operating system with unknown code or data. "Unknown code or data" is any code or data that is not provided by Microsoft as part of the Windows kernel.

What, exactly, does that mean? The most common scenario is for programs to patch the kernel by changing a function pointer in the system service table (SST). The SST is an array of function pointers to in-memory system services. For example, if the function pointer to the NtCreateProcess method is changed, anytime the service dispatcher invokes NtCreateProcess, it is actually running the third-party code instead of the kernel code. While the third-party code might be attempting to provide a valid extension to the kernel functionality, it could also be malicious.
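To make the mechanism concrete, here is a purely illustrative Python analogy of dispatch-table hooking (this is not real kernel code; the routine name, service number, and table layout are made-up stand-ins for the Windows structures described above):

```python
def nt_create_process(name):
    # Stand-in for the original kernel routine.
    return f"kernel: created process {name}"

# Toy "system service table": service number -> handler.
service_table = {0x2A: nt_create_process}

def dispatch(service_number, *args):
    # The dispatcher blindly trusts whatever the table points at.
    return service_table[service_number](*args)

print(dispatch(0x2A, "notepad.exe"))       # runs the original routine

def hooked_create_process(name):
    # Third-party (possibly malicious) code runs first...
    print(f"hook: intercepted creation of {name}")
    # ...and may or may not forward to the original routine.
    return nt_create_process(name)

service_table[0x2A] = hooked_create_process  # the "patch"
print(dispatch(0x2A, "notepad.exe"))       # now runs the hook instead
```

The point of the analogy is that every caller of the table is silently rerouted; KPP's job is to detect exactly this kind of modification.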

Even though almost all of the Windows kernels have allowed kernel patching, it has always been an officially unsupported activity.

Kernel patching breaks the integrity of the Windows kernel and can introduce problems in three critical areas:

  • Reliability
    Since patching replaces kernel code with third-party code, this code can be untested. There is no way for the kernel to assess the quality or intent of this new code. Beyond that, kernel code is very complex, so bugs of any sort can have a significant impact on system stability.
  • Performance
    The overall performance of the operating system is largely determined by the performance of the kernel. Poorly designed third-party code can cause significant performance issues and can make performance unpredictable.
  • Security
    Since patching replaces known kernel code with potentially unknown third-party code, the intent of that third-party code is also unknown. This becomes a potential attack surface for malicious code.

Why Kernel Patch Protection?

As I mentioned earlier, the primary purpose of KPP is to protect the integrity of the kernel and improve the reliability, performance, and security of the Windows operating systems. This is becoming increasingly important with the prevalence of malicious software that implements "rootkits". A rootkit is a specific type of malicious software (although it is usually included as part of another, larger piece of software) that uses a variety of techniques to gain access to a computer. Increasingly, rootkits are becoming more sophisticated and are attacking the kernel itself. If a rootkit can gain access to the kernel, it can actually hide itself from the file system and even from any anti-malware tools. Rootkits are typically used by malicious software; however, they have also been used by large legitimate businesses, including Sony.

While KPP is a good first step at preventing such attacks, it is not a "magic bullet". It does eliminate one way to attack the system...patching kernel images to manipulate kernel functionality. KPP takes the approach that there is no reliable way for the operating system to distinguish between "known good" and "known bad" components, so it prevents anything from patching the kernel. The only official way to disable KPP is by attaching a kernel debugger to the system.

KPP monitors certain key resources used by the kernel to determine if they have been modified. If the operating system detects that one of these resources has been modified it generates a "bug check", which is essentially a BSoD, and shuts down the system. Currently the following actions trigger this behavior:

  • Modifying system service tables.
  • Modifying the interrupt descriptor table (IDT).
  • Modifying the global descriptor table (GDT).
  • Using kernel stacks that are not allocated by the kernel.
  • Patching any part of the kernel. This is currently detected only on AMD64-based systems.

Why x64?

At this point, you may begin to wonder why Microsoft chose to implement this on x64-based systems only. Microsoft is again responding to customer complaints in this decision. Implementing KPP will almost certainly impact compatibility of much legitimate software, primarily security software such as anti-virus and anti-malware tools, which were built using unsupported kernel patching techniques. This would cause a huge impact on the consumer and also on Microsoft's partners. Since x64-based machines still make up the smaller install base (although they are gaining ground rapidly) and the majority of x64-based software has been rewritten to take advantage of the newer architecture, the impact is considered to be substantially smaller.

So...why the controversy?

Since KPP prevents an application or driver from modifying the kernel, it will, effectively, prevent that application or driver from running. KPP in Vista x64 requires that any application drivers be digitally signed, although there are some non-intuitive ways to turn that off. (Turning off signed drivers does prevent certain other aspects of Windows from operating, such as being able to view DRM-protected media.) However, all that really means is that anyone with a legitimately created company and about $500 per year to spend can get the required digital signature from VeriSign. Unfortunately, even if it is a reputable company, that still doesn't provide any guarantees as to the reliability, performance, and security of the kernel.

In order for software (or drivers) to work properly on an operating system that implements KPP, the software must use Microsoft-documented interfaces. If what you are trying to do doesn't have such an interface, then you cannot safely use that functionality. This is what has led to the controversy. The security vendors are saying that the interfaces they require are not publicly documented by Microsoft (or not yet, at any rate) but that Microsoft's own security offerings (Windows OneCare, Windows Defender, and Windows Firewall) are able to work properly and use undocumented interfaces. The security vendors want to "level the playing field".

There are many arguments on both sides of the issue, but it seems that many of them are not thought out completely. Symantec and McAfee have argued that the legitimate security vendors be granted exceptions to KPP using some sort of signing process. (See the TechWeb article.) However, this is fraught with potential problems. As I mentioned earlier, there is currently no reliable way to verify that code is actually from a "known good" source. The closest we can come to that is by digital signing, however, a piece of malicious code can simply include enough pieces from a legitimate "known good" source and hook into the exception.

So let's say, for argument's sake, that Microsoft does relent and is able to come up with some sort of exception mechanism that minimizes (or even removes) the chance of abuse. What happens next? Windows Vista, in particular, already includes an array of new features to provide security vendors ways to work within the KPP guidelines.

The Windows Filtering Platform (WFP) is one such example. WFP enables software to perform network-related activities, such as packet inspection and other firewall-type activities. In addition to WFP, Vista implements an entirely new TCP stack. This new stack has some fundamentally different behavior than the existing TCP stack on Windows. We also have network cards that implement hardware-based stacks to perform what is called "chimney offload", which effectively bypasses large portions of the software-based TCP stack. Hooking the network-related kernel functions (as a lot of software-based firewalls currently do) will miss all of the traffic on a chimney-offload network card. However, hooking into WFP will catch that traffic.

Should Microsoft stop making technological innovations in the Windows kernel simply because there are a handful of partners and other ISVs that are complaining? The important thing to realize is that KPP is not new in Windows Vista. It has been around since Windows XP 64-bit edition was released. Why is it now that the security vendors are realizing that their products don't work properly on the x64-based operating systems? The main point Microsoft is trying to get across is that most of the functionality required doesn't have to be done in the kernel. Microsoft has been working for the last few years trying to assist their security partners in making their solutions compatible. If there is an interface that isn't documented, or functionality that a vendor believes can only be accomplished by patching the kernel, they can contact their Microsoft representative or email msra@microsoft.com for help finding a documented alternative. According to the KPP FAQ, "if no documented alternative exists...the functionality will not be supported on the relevant Windows operating system version(s) that include patch protection support."

I think the larger controversy is the fact that there are now documented ways to break KPP. This is where Microsoft and its security partners and other security ISVs should be spending their time and energy. If we are going to have a reliable and secure kernel, we need to focus on locking down the kernel so that no one is able to breach it, including the hackers. This is an almost endless process, as the attackers generally have almost infinite amounts of time to bring their "products" to market and don't really have any quality issues to worry about. Even with the recent introduction by Intel and AMD of hardware-based virtualization technology (which essentially creates a virtual mini-core processor that can run a specially created operating system), there is still a long way to go.

What's next?

While it is important to understand the goals of KPP and the potential avenues of attack against it, the most important thing for the security community to focus on is in making sure that the Windows kernel stays safe. The best way to do this is to keep shrinking the attack surface until it is almost non-existent. There will always be an attack surface, however, the smaller that surface becomes the easier it is to protect. Imagine guarding a vault. If there is only one way in and out, and that entrance is only 2-feet wide it is much more easily guarded than a vault that has 2 entrances, each of which are 30-feet wide.

However, as malware technology advances it is important for the security technology that tries to protect against it to advance as well. In fact, the security technology really needs to be ahead of the malware if it is to be successful. PatchGuard has already been hacked, some of the proposed Microsoft APIs for KPP won't be available until sometime in 2008, and the security vendors do have legitimate reasons for needing to access certain portions of the kernel.

Host Intrusion Prevention Systems (HIPS), for instance, use kernel access to prevent certain types of attacks, such as buffer overflow attacks or process injection attacks, by watching for system functions being called from memory locations where they shouldn't be called. The Code Red Worm would not have been detected if only file-based protection systems were in use.

The bottom line is that the malware vendors are unpredictable and not bound by any legal, moral, or ethical constraints. They are also not bound by customer reviews, deadlines, and code quality. The security vendors and Microsoft need to work together to ensure that the attack surface for the kernel and Windows itself is small and stays small. They can do this by:

  • Establishing a more reliable way to authenticate security vendors and their products that will prevent "spoofing" by the malware vendors.
  • Minimizing the attack surface of the Windows Kernel.
  • Establishing documented APIs to interact with the kernel to perform security related functions, such as firewall activities.
  • Enforcing driver signatures...in other words, don't allow this mechanism to be turned off. At least don't allow it to be turned off for certain security critical drivers.
  • Enforcing security software digital signatures. If you want your security tool to run, it must be signed. Again, don't allow this mechanism to be turned off.
  • Establishing a more secure way for the security products to hook in to the kernel.
  • Restricting products to patching only specific areas of the kernel. Currently, it is possible to patch almost any portion of the kernel.
  • Enforcing Windows certification testing for any security products.

          

BitLocker™ - The dirty details


Originally posted on: http://brustblog.net/archive/2006/07/04/84045.aspx

One of the new security features coming in Windows Vista and Longhorn is the new BitLocker™ Drive Encryption technology. BitLocker™ is designed to help prevent information loss, whether it is by theft or accidental. Information loss is costly to business on several levels, and the U.S. Department of Justice estimates that intellectual property theft cost enterprises $250 billion in 2004.

BitLocker™ Drive Encryption gives you improved data protection on your notebooks, desktops, and servers by providing a transparent user experience that requires little to no interaction on a protected system. BitLocker also prevents the use of another operating system or hacking tool to break file and system protections by preventing the offline viewing of user data and OS files, through enhanced data protection and boot validation using TPM v1.2.

For those of you who may not know, TPM stands for Trusted Platform Module. So what's that? TPM is a piece of hardware that is part of the motherboard that:

  • Performs cryptographic functions
    • RSA, SHA-1, RNG
    • Meets encryption export requirements
  • Can create, store, and manage keys
    • Provides a unique Endorsement Key (EK)
    • Provides a unique Storage Root Key (SRK)
  • Performs digital signature operations
  • Holds platform measurements (hashes)
  • Anchors a chain of trust for keys and credentials
  • Protects itself against attacks

So now that you know what a TPM is, why should you use one? A TPM is a hardware implementation of a Root-of-Trust, which can be certified to be tamper resistant. When combined with software, it can protect root secrets better than software alone. A TPM can ensure that keys and secrets are only available for use when the environment is appropriate.
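As an illustrative sketch of the "platform measurements" idea (the hash chain that lets a TPM decide whether the boot environment is appropriate), a TPM 1.2-style PCR extend operation can be modeled as follows; the component names are placeholders, and real BitLocker key sealing happens inside the TPM hardware, not in host-side Python:

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """One TPM 1.2-style 'extend': new PCR = SHA-1(old PCR || SHA-1(measurement))."""
    return hashlib.sha1(pcr_value + hashlib.sha1(measurement).digest()).digest()

# PCRs start out as 20 zero bytes; each boot component is measured in turn.
pcr = bytes(20)
for component in [b"BIOS", b"boot loader", b"OS loader"]:
    pcr = pcr_extend(pcr, component)

print(pcr.hex())  # any change to any measured component yields a different final value
```

Because the chain is order- and content-sensitive, a key sealed to the expected final PCR values only becomes available when the same boot components are measured in the same order.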

The important thing to know about BitLocker is that it will only encrypt the Windows partition. You also won't be able to dual-boot another operating system on the same partition; different partitions are fine. Any attempts to modify the protected Windows partition will render it unbootable.

To completely protect all of the data on the computer, you will need to use a combination of BitLocker on the Windows partition and Encrypted File System (EFS) on the other partitions. When properly configured, EFS is computationally infeasible to crack.

Even with all of the new security that is provided by BitLocker, it can't stop everything. Some of the areas that BitLocker is helpless to defend against are:

  • Hardware debuggers
  • Online attacks—BitLocker is concerned only with the system’s startup process
  • Post logon attacks
  • Sabotage by administrators
  • Poor security maintenance
  • BIOS reflashing
    • Protection against this can be enabled if you wish

Additional Resources


          


ePrint Report: Practical Privacy-Preserving K-means Clustering


ePrint Report: Practical Privacy-Preserving K-means Clustering
Payman Mohassel, Mike Rosulek, Ni Trieu

Clustering is a common technique for data analysis, which aims to partition data into similar groups. When the data comes from different sources, it is highly desirable to maintain the privacy of each database. In this work, we study a popular clustering algorithm (K-means) and adapt it to the privacy-preserving context.

Our main contributions are to propose: i) communication-efficient protocols for secure two-party multiplication, and ii) batched Euclidean squared distance in the adaptive amortizing setting, when one needs to compute the distance from the same point to other points. These protocols are the key building blocks in many real-world applications such as biometric identification. Furthermore, we construct a customized garbled circuit for computing the minimum value among shared values.

We implement and evaluate our protocols to demonstrate their practicality and show that they are able to handle data-sets that are much larger than in the previous work. For example, our scheme can partition data-sets of 100,000 points into 5 groups in under one hour. The numerical results also show that the proposed protocol reaches an accuracy ratio of 91.68% compared to a plaintext K-means clustering algorithm.
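For context on the distance computation the protocol secures, here is a plaintext sketch of the batched squared Euclidean distance used in a K-means assignment step (NumPy here is only for illustration; in the paper the same arithmetic is evaluated on secret-shared inputs, and all names below are illustrative):

```python
import numpy as np

def batched_sq_euclidean(point: np.ndarray, others: np.ndarray) -> np.ndarray:
    """Squared Euclidean distances from one point to a batch of points."""
    diff = others - point             # broadcast over the batch dimension
    return np.einsum("ij,ij->i", diff, diff)

# Illustrative K-means assignment step on dummy data.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))
centroids = data[:5]                  # naive initialization with 5 clusters
assignments = np.array(
    [np.argmin(batched_sq_euclidean(x, centroids)) for x in data]
)
print(np.bincount(assignments))       # cluster sizes
```

Computing many such distances from the same point is what the paper's amortized batching targets, since that pattern dominates both the clustering and biometric-identification workloads it mentions.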
          

ePrint Report: The Bitcoin Backbone Protocol Against Quantum Adversaries


ePrint Report: The Bitcoin Backbone Protocol Against Quantum Adversaries
Alexandru Cojocaru, Juan Garay, Aggelos Kiayias, Fang Song, Petros Wallden

Bitcoin and its underlying blockchain protocol have received recently significant attention in the context of building distributed systems as well as from the perspective of the foundations of the consensus problem. At the same time, the rapid development of quantum technologies brings the possibility of quantum computing devices from a theoretical concept to an emerging technology. Motivated by this, in this work we revisit the formal security of the core of the Bitcoin protocol, called the Bitcoin backbone, under the assumption that the adversary has access to a scalable quantum computer. We prove that the protocol's essential properties stand in the post-quantum setting assuming a suitably bounded Quantum adversary in the Quantum Random Oracle (QRO) model. Specifically, our results imply that security can be shown by bounding the quantum queries so that each quantum query is worth $O(p^{-1/2})$ classical ones and that the wait time for safe settlement is expanded by a multiplicative factor of $O(p^{-1/6})$, where $p$ is the probability of success of a single classical query to the protocol's underlying hash function.
          

ePrint Report: Further Optimizations of CSIDH: A Systematic Approach to Efficient Strategies, Permutations, and Bound Vectors


ePrint Report: Further Optimizations of CSIDH: A Systematic Approach to Efficient Strategies, Permutations, and Bound Vectors
Aaron Hutchinson, Jason LeGrow, Brian Koziel, Reza Azarderakhsh

CSIDH, presented at Asiacrypt 2018, is a post-quantum key establishment protocol based on constructing isogenies between supersingular elliptic curves. Several recent works give constant-time implementations of CSIDH along with some optimizations of the ideal-class group action evaluation algorithm, including the SIMBA technique of Meyer, Campos, and Reith and the two-point method of Onuki, Aikawa, Yamazaki, and Takagi. A recent work of Cervantes-Vázquez, Chenu, Chi-Domínguez, De Feo, Rodríguez-Henríquez, and Smith details a number of improvements to the works of Meyer et al. and Onuki et al. Several of these optimizations---in particular, the choice of ordering of the primes, the choice of SIMBA partition and strategies, and the choice of bound vector which defines the secret keyspace---have been made in an ad hoc fashion, and so while they yield performance improvements it has not been clear whether these choices could be improved upon, or how to do so. In this work we present a framework for improving these optimizations using (respectively) linear programming, dynamic programming, and convex programming techniques. Our framework is applicable to any CSIDH security level, to all currently-proposed paradigms for computing the class group action, and to any choice of model for the underlying curves. Using our framework---along with another new optimization technique---we find improved parameter sets for the two major methods of computing the group action: in the case of the implementation of Meyer et al. we obtain a 16.85% speedup without applying the further optimizations proposed by Cervantes-Vázquez et al., while for that of Cervantes-Vázquez et al. under the two-point method we obtain a speedup of 5.08%, giving the fastest constant-time implementation of CSIDH to date.
          

'I believe climate change is real,' Sen. Lamar Alexander writes in op-ed

 Cache   
Sen. Lamar Alexander
I believe climate change is real.

I believe that human emissions of greenhouse gases are a major cause of climate change.

So, as one Republican, I propose this response: The United States should launch a New Manhattan Project for Clean Energy, a five-year project with Ten Grand Challenges that will use American research and technology to put our country and the world firmly on a path toward cleaner, cheaper energy.

Meeting these Grand Challenges would create breakthroughs in advanced nuclear reactors, natural gas, carbon capture, better batteries, greener buildings, electric vehicles, cheaper solar, fusion and advanced computing. To help achieve these Ten Grand Challenges, the federal government should double its funding for energy research and keep the United States number one in the world in advanced computing. (Read the rest of Lamar's essay)

Rod's Comment: I agree with Senator Alexander that climate change is real.  I know, I know; some of you think I am nuts. Anytime I say this, it is met with derision by fellow conservatives. I am told I have drunk the Kool-Aid.  Unfortunately, many Republicans have bought the argument that it is all a hoax. I find the science and observations compelling. Not that I do not respect the doubters. The "hide the decline" exposé of a few years ago was enough to spread doubt. The repeatedly missed deadlines for the end of the earth made one think the climate change warriors were just Chicken Littles. Still, on balance I think the evidence supports the theory.

I am not so sure the climate change warriors really believe climate change is real.  If they did, I think they would embrace nuclear energy, natural gas, technology and a growing economy and capitalism. Unfortunately, most climate change warriors seem more motivated by hatred of modernity, science and capitalism than motivated by a desire to curtail climate change. Maybe it is unfair to say they don't believe that climate change is real; one can agree on the problem and disagree on the means to solve it. 

We are not going to end climate change by turning the clock back to the middle ages.  We are not going to solve climate change by embracing renewable energy. That may be a minor part of the solution but not a very significant part. Like Senator Lamar says, we need to embrace the future and create large amounts of clean, inexpensive energy. We need to encourage economic development because most of the increase in greenhouse gases is in developing countries.

Lamar says the “Green New Deal” is basically an assault on cars, cows and combustion. It is a plan that could never work. Capitalism and American innovation are the answer, not deprivation and socialism.

          

Samsung’s Chromebook 4 line delivers a brand new design and full USB-C support

 Cache   
Samsung on Monday unveiled two new Chromebook models, both affordable offerings that could satisfy the computing needs of users who don’t require more sophisticated operating systems. If Chrome OS is good enough for your needs — coupled with all the Android apps you could ask for — then you should check out the Chromebook 4 and Chromebook 4+, which are out just in time for the busy holiday season. Starting at $229.99 and $299.99, the Chromebook 4 and Chromebook 4+ have the exact same set of specs: Intel Celeron N4000 processor, Intel UHD Graphics 600, 4GB or 6GB of LPDDR4 memory, 64GB built-in eMMC flash memory, 720p HD camera, Wi-Fi 5 support, Bluetooth, dual stereo speakers, USB-C and USB 3.0 connectivity, microSD slot, 3.5mm headphone jack, and 39Wh batteries. The smaller model features an 11.6-inch display with 1366 x 768 resolution, while the bigger Chromebook packs a 15.6-inch Full HD display. Because Samsung uses the same battery on both devices, you get 12.5 hours of life on the Chromebook 4 and 10.5 hours on the 15.6-inch laptop. The bigger machine also packs an additional USB-C port. These ports support power and data transfer, which means you can hook up either model to a 4K display. One other standout feature is support for Google Assistant, which is built into both machines. Finally, the laptops are thin and light (2.6 and 3.75 pounds, respectively) featuring a much sleeker design than the old Chromebook 3 — from a distance, they might be confused with MacBooks. It’s unclear whether they’re made of aluminum, but they do feature a “Platinum Titan” color option, and Samsung says the Chromebooks’ new sophisticated look means you won’t see any screws anywhere. Both Chromebook 4 models are available in Best Buy stores this Monday, and online at Best Buy and Samsung.
          

Comment on Patent for Even Thinner MacBook Keyboard by Sören Nils Kuklau

 Cache   
I guess that's fair. I thought "aren't computer people" is a bit broad, but it does seem fair to say that Tim doesn't have "appreciation for desktop computing".
          

Comment on Patent for Even Thinner MacBook Keyboard by Michael Tsai

 Cache   
@Sören It doesn't sound like you are disputing the original claim about “appreciation for desktop computing.” I don’t mean to imply anything about what Steve would have done—no point in arguing a hypothetical.
          

Comment on Patent for Even Thinner MacBook Keyboard by Sören Nils Kuklau

 Cache   
My hope is that they realize the “one keyboard for the entire line-up” strategy is no longer feasible, and that if the 12-inch MacBook ever does make a come-back (possibly with an ARM CPU, possibly as an “iPad Laptop”, etc.), its keyboard <em>doesn’t</em> also get rolled out to the MacBook Pro later on. <blockquote> After SJ passed, Apple wound up with an excess of C-suite execs who aren’t computer people. </blockquote> Not sure how you would define that. Of their 13 C- or SVP-level execs, eight have computer-adjacent degrees: Industrial engineer: Tim Cook Mechanical engineer: Jeff Williams, Dan Riccio, Sabih Khan Computer scientist: Eddy Cue, Craig Federighi, John Giannandrea, Johny Srouji Jonathan Ive has a BA, Luca Maestri and Deirdre O’Brien have management-esque degrees, Phil Schiller is a biologist (??), and Katherine Adams, the general counsel, has a law degree. But those aren’t really in the position to make decisions about computers, with perhaps the exception of Phil, who <em>does</em> appear very interested in pushing computing forward (and also doesn’t fit in the “after SJ passed” narrative). I don’t think this idea holds water. The two top people, Tim Cook and Jeff Williams, are engineers. They’re <em>not</em> MBAs.
          

Computing Solutions P/T - Best Buy Canada - Beacon Hill, SK

 Cache   
As Canada's fastest-growing specialty retailer of consumer electronics, Best Buy ensures it offers one of the best work environments in the country.
From Best Buy - Wed, 02 Oct 2019 02:32:54 GMT - View all Beacon Hill, SK jobs
          

CRM Migration (MS Dynamics to SalesForce)

 Cache   
I need someone who can help me to understand the mapping of various entities when data needs to be migrated from MS Dynamics 365 to SalesForce. If you have previous experience, please contact me. I will pay by hour... (Budget: $15 - $25 USD, Jobs: Cloud Computing, CRM, Salesforce App Development, Salesforce.com, Sharepoint)
          

New Intel Xeon W and X-Series Processors Accelerate Workstation AI

 Cache   

Today Intel unveiled its latest lineup of Intel Xeon W and X-series processors, which puts new classes of computing performance and AI acceleration into the hands of professional creators and PC enthusiasts.  "The professional and enthusiast communities require product engineering that caters to their specific mission-critical needs and keeps them on the cutting edge of technology advancements. This means the best hardware and software optimizations, but also looking at how we can infuse things like AI acceleration,”

The post New Intel Xeon W and X-Series Processors Accelerate Workstation AI appeared first on insideHPC.


          

Supercomputing Structures of Intrinsically Disordered Proteins

 Cache   

Researchers using the Titan supercomputer at ORNL have created the most accurate 3D model yet of an intrinsically disordered protein, revealing the ensemble of its atomic-level structures. “The combination of neutron scattering experiments and simulation is very powerful,” Petridis said. “Validation of the simulations by comparison to neutron scattering experiments is essential to have confidence in the simulation results. The validated simulations can then provide detailed information that is not directly obtained by experiments.”

The post Supercomputing Structures of Intrinsically Disordered Proteins appeared first on insideHPC.


          

Use Cases for HPC in the Cloud

 Cache   

In this special guest feature, Robert Roe from Scientific Computing World looks at use cases for cloud technology in HPC. "In previous years there have been some concerns around security or the cost of moving data to and from the cloud, but these reservations are slowly being eroded as more users see value in developing a cloud infrastructure as part of their HPC resource."

The post Use Cases for HPC in the Cloud appeared first on insideHPC.


          

Job of the Week: Tactical HPC Software Engineer at General Dynamics

 Cache   

General Dynamics Mission Systems is seeking a Tactical HPC Software Engineer in our Job of the Week. "The Tactical High Performance Computing Software Engineer leads the development of next generation high performance radar and sensor signal processing for some of the world's most advanced defense systems."

The post Job of the Week: Tactical HPC Software Engineer at General Dynamics appeared first on insideHPC.


          

GigaIO Optimizes Scalability of Xilinx Alveo FPGAs

 Cache   

Today GigaIO introduced FabreX support for Xilinx Alveo Accelerators, in addition to an exclusive offering that provides Xilinx FPGA developers with remote cloud access to the FabreX platform. In conjunction with the Xilinx Alveo family of adaptable accelerator cards, Xilinx developers will use FabreX to enhance proof of concept, software testing, and scale-out deployments in applications like artificial intelligence, deep learning inference, and high-performance computing.

The post GigaIO Optimizes Scalability of Xilinx Alveo FPGAs appeared first on insideHPC.


          

Taichi: A Language for High-Performance Computation on Spatially Sparse Data Structures

 Cache   
3D visual computing data are often spatially sparse. To exploit such sparsity, people have developed hierarchical sparse data structures, such as multilevel sparse voxel grids, particles, and 3D hash tables. However, developing and using these high-performance sparse data structures is challenging, due to their intrinsic complexity and overhead. We propose Taichi, a new data-oriented programming […]
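As a rough illustration of what such a hierarchical sparse structure looks like in practice, here is a tiny program using the Taichi Python package: a pointer level over dense 8x8x8 tiles, where writing to a cell activates only its enclosing block and loops visit only active cells. The API names follow recent open-source Taichi releases and may differ from the version described in the paper, so treat this as a sketch rather than the paper's own code.

    # Sketch only: a two-level sparse grid in Taichi (pointer level over dense tiles).
    import taichi as ti

    ti.init(arch=ti.cpu)

    x = ti.field(dtype=ti.f32)
    total = ti.field(dtype=ti.f32, shape=())       # 0-D field used as an accumulator

    block = ti.root.pointer(ti.ijk, 16)            # sparse top level: 16^3 pointers
    block.dense(ti.ijk, 8).place(x)                # each active block is an 8^3 dense tile

    @ti.kernel
    def touch():
        x[0, 0, 0] = 1.0                           # activates only one block
        x[100, 50, 20] = 2.0                       # activates one more block

    @ti.kernel
    def reduce_sum():
        for i, j, k in x:                          # struct-for visits only active cells
            total[None] += x[i, j, k]              # += on a field is atomic in parallel loops

    touch()
    reduce_sum()
    print(total[None])                             # expected: 3.0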
          

Verification of GPU Program Optimizations in Lean

 Cache   
Graphics processing units (GPUs) have become of major importance for highperformance computing due to their high throughput. To get the best possible performance, GPU programs are frequently optimized. However, every optimization carries the risk of introducing bugs. In this thesis, we present a framework for the theorem prover Lean to formally verify transformations of GPU […]
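For readers unfamiliar with Lean, the snippet below shows the general flavor of a machine-checked statement; it is purely illustrative and not taken from the thesis. A verified optimization proof has the same shape: state that the transformed program computes the same value as the original, then prove it.

    -- Illustrative only (not from the thesis): a reordering "optimization" over ℕ
    -- is proved to preserve the computed value.
    theorem reorder_add (a b c : ℕ) : a * b + c = c + a * b :=
    nat.add_comm (a * b) c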
          

Connected bulbs Market: Global Share, Size, Trends and Growth Analysis Forecast to 2019-2024

 Cache   
Connected Bulb Industry Description: Connected bulbs are embedded with chips that can communicate with computing devices like smartphones and watches. A major factor that spurs the prospects for growth in this market is the advent of wireless networking technologies. What makes connected bulbs useful is that they have embedded IoT technology, which lets devices and …
          

THE WORLDWIDE LHC COMPUTING GRID – WLCG

 Cache   
Dealing with the LHC data deluge.
          

THE CERN DATA CENTRE

 Cache   
The CERN Data Centre is the heart of CERN’s entire scientific, administrative and computing infrastructure. All services, including e-mail, scientific data management and videoconferencing, use equipment based here. As of 2019, 230 000 processor cores and 15 000 servers run 24/7.
          

Samsung launches Chromebook 4 and 4+ with prices starting at $230

 Cache   

Samsung has released not one, but two follow-up devices to the Chromebook 3, and they're both just as affordable as their predecessor. The Chromebook 4 has an 11.6-inch HD (1,366 x 768 pixel) display and weighs 2.6 pounds like its predecessor, while the larger Chromebook 4+ has a 15.6-inch FHD (1,920 x 1,080) display and weighs in a bit more at 3.75 pounds. Both devices are powered by a Celeron N4000 processor and can have up to 64GB of storage and up to 6GB of RAM. They also have Gigabit Wi-Fi capabilities, as well as built-in access to Google Assistant and the Play Store.

Source: Samsung


          

Senior/Principal Solutions Architect (DBCDM) - Oracle - Seattle, WA

 Cache   
Our Cloud Database Architects work closely with product managers to shape the next generation of cloud computing, to promote adoption and to disseminate usage…
From Oracle - Sat, 28 Sep 2019 02:16:28 GMT - View all Seattle, WA jobs
          

Senior Cloud Database Solutions Architect - Oracle - Seattle, WA

 Cache   
Our Cloud Database Architects work closely with product managers to shape the next generation of cloud computing, to promote adoption and to disseminate usage…
From Oracle - Sat, 28 Sep 2019 02:15:36 GMT - View all Seattle, WA jobs
          

FREE DESIGN - SewWhat-Pro Embroidery Software - Sew What Pro - Embroidery Editing - Embroidery Software - Digitizing by AppliqueBliss

 Cache   

65.00 USD

As an experienced embroiderer and digitizer, I know a good embroidery software! I began selling SewWhat-Pro because I was impressed with the low price yet high functionality and user friendliness. I have seen many other software programs and Sewwhat-Pro is by far the best! It is a great program for a home or small business embroiderer to add names/monograms or make adjustments to existing designs. Please see the FREE TRIAL information below and make sure to read the entire description! :)

*******Limited Time Offer - Get a FREE DESIGN of your choice with your purchase of SewWhat-Pro! Instructions included in the file documents on how to receive your free design! (Please do not add the design to your cart with the purchase of SewWhat-Pro!)

FREE TRIAL!!!
Copy and paste the link below into your browser to install the trial version of the software. Once purchased, just register your trial version to gain access to the full retail version! It is highly recommended that you download the trial to ensure the software is compatible with your computer. Because this free trial is offered, no refunds will be accepted! After installation, select "Demo".

http://www.sandscomputing.com/Applications/InstallSWP.exe

*Add names & combine designs
*Compatible with Brother, Babylock, Tajima, Viking, Pfaff, Singer, and many more (see photos for listing)
*Full version with free software updates (For WINDOWS ONLY)
*Simply register your trial version for full access - no need to redownload

Features of SewWhat-Pro:
-View thumbnails (in Album View) of files in your working directory
-Write Designer-1© floppy disks and USB drives
-Use TrueType fonts [TTF©] to create monogram lettering
-Resize, reposition, delete, rotate, and merge sewing patterns
-Convert from, and save to, various file formats (see chart below) either individually or in batch mode
-Simulate the real-time stitch out of a pattern
-Change individual thread colors and background fabrics
-Print out the design and design summary
-View (or hide) a stitch histogram of the thread length distribution for each pattern
-Hide or view (as thickened or dashed lines) jump stitches
-ICON toolbar button to toggle between thread pane information and alphabet mode for easy entering of pre-digitized lettering
-Applique cutter tool for creating SVG and JPG files for outline of applique, used to cut applique fabric in Cameo© and Cricut© -software
-Cutting toolbar allows graphical separation of patterns at specific stitches
-Density adjustment dialog to resize a pattern at constant density
-Graphical or text-based reordering of thread color stops is available
-Capability to write Smart Media or Compact Flash cards for Singer, Brother, Janome, and new Bernina machines
-Converts Cross Stitch patterns to embroidery files using a “plugin” from myriaCross (MC).
-Supports a command-line interface for file conversion which has the form:
-SewWhat-Pro.exe file1.ext1 file2.ext2 /c

Integrated Project Management Features of SewWhat-Pro:
-Includes an editable table for entering project information
-Easily editable list of thread colors and manufacturer brands
-Capability to read/write thread color “txt” files for single/multiple projects
-Thread palettes from over 15 manufacturers are available
-Customizable user-defined thread palettes can be easily added

***This software does not convert images to stitch files or allow you to create your own original embroidery designs. This software is primarily an editor.

***Registration emailed within 24 hours, but usually much sooner!

You will receive multiple links to video tutorials and the Manufacturer's User Manual!

***Applique Bliss is an authorized seller of SewWhat-Pro***


          

Pharmacist FT - Shoppers Drug Mart - North Battleford, SK

 Cache   
Strong personal computing skills and knowledge of Pharmacy systems a definite asset. 11412 Railway Ave E, North Battleford, Saskatchewan, S9A 3G8.
From Shoppers Drug Mart / Pharmaprix - Fri, 24 May 2019 21:33:38 GMT - View all North Battleford, SK jobs
          

Solver asking for variables in domain it is not supposed to solve for in Nonisothermal Flow

 Cache   

Hi all!

I have been working on a model for a while now and keep getting the same kind of errors over and over. It is probably a stupid mistake, as I am a novice in COMSOL (as a student), but I cannot figure it out.

In my model, there are two steps, one for computing a turbulent flow through one domain and one for computing the heat flow through the entire model, as well as convection in one domain that has no initial flow (but is separated from the other domain). I have included a picture of the problem to this post, as well as a .mph file of my model.

The model is perfectly able to compute the step for the turbulent flow. However, it keeps failing on the second step computing the heat flow. In the second study step, it gives an error that says that the Nonisothermal Flow interface that has the turbulent flow interface (over domain 5) selected for Fluid Flow under Coupled Interfaces, lacks a velocity component in a domain it is not even supposed to solve for (domain 8). I included a picture of the error to this post (Error_Domer.png).

To try and counteract the error, I tried to first have the model solve for a Laminar Flow in the domain the Nonisothermal Flow is asking for, but it seems unable to solve for a stationary domain which should display no flow, or zero velocity (Error_Domer_2.png).

Can anyone pinpoint what is going wrong?

All help is welcome. Thank you in advance!


          

Computing stiffness in Orthotropic material in a Mutliphysics

 Cache   

Hello everyone,

I just started using COMSOL Multiphysics, and I am impressed by the software and the website. I tried to compute stiffness in an isotropic material, and the results were satisfying. I also got some help from the blog: https://www.comsol.com/blogs/computing-stiffness-linear-elastic-structures-part-2/

Now, I am trying to compute the stiffness of an orthotropic material, which is PZT-2 in my case. I am getting results that are half the expected value. Can anyone please tell me why this is happening and how I can get the expected result.

Thanks.


          

FSCast Episode 104 - Jonathan Mosen speaks with Ted Henter, who founded Henter-Joyce and created JAWS: Part 1

 Cache   

Jonathan Mosen speaks with Ted Henter, who founded Henter-Joyce and created the JAWS® screen reading software. In part one of this two-part interview, Ted talks about his early life, the accident that left him blind, his interest in computing, founding Henter-Joyce and the early days of JAWS for DOS

Show Host: Jonathan Mosen

Episode 104 - Jonathan Mosen speaks with Ted Henter, who founded Henter-Joyce and created JAWS: Part 1


          

FSCast Episode 98 - JAWS Timed License, Tablet Computing

 Cache   

Eric Damery joins Jonathan Mosen to introduce the new JAWS® timed license. We also discuss some of the new tablet computers on which people are now running our JAWS screen reading software.

Show Host: Jonathan Mosen

Episode 98 - JAWS Timed License, Tablet Computing


          

System Analyst - ADGA Group - Ottawa, ON

 Cache   
ADGA delivers strategic insight, world-class technology and service excellence in Defence, Security and Enterprise Computing to clients in the Federal…
From ADGA Group - Mon, 12 Aug 2019 21:15:27 GMT - View all Ottawa, ON jobs
          

Want to Start a Videoconferencing Program? Here’s How

 Cache   
By eli.zimmerman_9856, Mon, 09/23/2019 - 14:07

In February 2017, I was a digital coach at Pinecrest Academy St. Rose, a middle school in Las Vegas, where I supported teachers and students with tech-based initiatives.

A colleague of mine was teaching about the assassination of President John F. Kennedy, and his U.S. history students wanted to learn more. 

I contacted Clint Hill, who worked as a Secret Service agent and witnessed the shooting in Dallas in 1963, and asked if he could speak to the class. Hill, who had served in Jacqueline Kennedy’s Secret Service detail, agreed to do a live session from San Francisco.

We originally planned to connect by phone, but I figured why not try a videoconference? We agreed to use the technology to connect the students for a once-in-a-lifetime history lesson.

In my 10 years as an educator, I had never seen such high-impact lessons before. I decided to run with it and create as many guest speaking events as possible for students.


Videoconferencing, also known as web conferencing, is not new. But at Pinecrest Academy, we’ve developed a unique program that features videoconferencing sessions several times a week, at multiple schools, at minimal cost. 

In some cases, we’ve set up videoconferencing programs that simultaneously connect up to 200 classrooms in five countries.

The vast majority of the sessions, though, connect our classrooms with the speakers, who typically conduct the sessions in their home or at their office. 

Teachers contact me all the time to ask how we got started. It requires minimal technology — and schools already have much of the basic infrastructure needed. 

Use these five best practices to get a videoconferencing program started in your district. 

  1. Set goals and an overall vision: Will the videoconferencing run only for one school’s students, or will the school open it up to other schools in the district, local community colleges and universities, or other institutions around the country or world? It’s best to start with high school students because they are more mature, but any group from fifth or sixth grade up will work fine. 

  2. Determine technical requirements: There’s a very low barrier to entry with videoconferencing. Anyone can get started with an internet connection and a computing device that has a webcam and a projector. Most schools already have these pieces in place. If the guest speaker isn’t comfortable with the video tools, it’s possible to send them an audio link from a videoconferencing service, which allows the speaker to connect to the classroom, make a presentation and answer questions with students in an audio-only format. We once had a 94-year-old guest speaker who was a Holocaust survivor and was uncomfortable with the video setup. In that situation, the audio-only connection worked well.

  3. Embrace the flipped classroom: Videoconferencing is a great tool to facilitate a flipped classroom, increasing student engagement. Once a speaker is scheduled, the teacher should think through strategies to ensure students are prepared for a robust conversation. For example: Assign short videos about the topic at hand or the speaker’s life, or have students read about the speaker and prepare questions for the presentation.

  4. Strive for emotional learning experiences: We’ve had a World War II veteran tell our students about fighting in the Battle of the Bulge. We’ve also had a survivor of the 9/11 attacks recount what they witnessed in New York on that day and how they escaped danger. For younger students, we’ve had zoologists talk about various animals and had a NASA scientist discuss space missions. Hearing these stories firsthand creates an experience that students can’t replicate by reading a book or watching a documentary. It’s a way to engage them — and 40 percent of middle and high school students say they don’t feel engaged, according to a survey from the national nonprofit YouthTruth. And with roughly half of middle and high school students saying they don’t feel their studies are relevant, creating an emotional connection to the topic is a way to turn that around.

  5. Find funding for a full-time pro: Videoconferencing works best when there is at least one person — whether it’s someone in the IT department or an educational technology teacher — who’s dedicated to managing it full time. 

That person will plan videoconferencing sessions and manage a website where teachers can make requests, view the schedule and sign up. 

If funding for a full-time professional isn’t in the budget, consider splitting the responsibilities between two or more people. Another option is to apply for grants to seed the program.


Explore the Possibilities of Classroom Videoconferencing

Many colleges have had solid videoconferencing programs for years. K–12 schools have lagged, mainly because many lack the funds to dedicate full-time staff and acquire the needed technology.

However, as costs decrease, school districts that lack resources can usually find the money to secure at least a basic setup. In many states, districts can apply for technology grants. 

For example, funding for my position as a digital coach came through a Nevada Ready 21 technology grant, which also supported secure one-to-one technology and digital coach positions throughout our cluster of charter schools in southern Nevada. 

We support teachers with device management and develop new ways to deliver content in the classroom to help create a tech-rich learning environment.

Once administrators try videoconferencing, they’ll appreciate the wide range of speakers they can bring in and the topics they can cover, from history to current events and even complex subjects like math and computer science.

Ralph Krauss works in digital innovations at Pinecrest Academy of Nevada, a group of five charter schools in the Las Vegas area, and Academica Virtual Education, a support service for virtual schools.


          

Quantum A.I Trade - quantumai.trade

 Cache   
I'm not the admin. Listed on monitor. Start: 22/09/2019 Register here About: QuantumAI.trade is a successful Quantum Computing and A.I-oriented investment company. Our solutions are complex, automated and using the market leading self-learning trading program, which aims at high profits within a short period of time. Quantum Computing and A.I gives us the prospect of hundreds of correct trading decisions in matters of seconds. Plans: 0.1-2% daily for lifelong [Minimum ...
          

SLIIT conducted a motivational programme at Thurstan College, Colombo

 Cache   
It was a fun-filled learning day for A/L students at Thurstan College, Colombo. SLIIT conducted a motivational programme on 30th September 2019 to help students strengthen their capabilities. The programme was conducted by Ms. Anjalie Mendis, Academic Coordinator (Metro), Student Advisor/Senior Lecturer, Faculty of Computing, SLIIT, with the coordination of Ranga J.
          

First look at Surface Pro X, Pro 7, Laptop 3

 Cache   
CNET's Dan Ackerman tries out Microsoft's new line of Surface laptops and hybrid computing devices.

          

Customer Service Agent $18/hr ** SEASONAL** **FLIGHT BENEFITS - QUICKFLIGHT INC - Jackson Hole, WY

 Cache   
Customer Service responsibilities will include, but are not limited to: preparing and issuing tickets, computing fares, issuing refunds. $18 an hour
From QUICKFLIGHT INC - Wed, 17 Apr 2019 10:30:40 GMT - View all Jackson Hole, WY jobs
          

HELLO!

 Cache   
This is my podcast for Educational Computing!
          


Quantum computer bests all conventional computers in first claim of ‘supremacy’

 Cache   
By Adrian Cho, Science Magazine Gil voices another worry long held by many in the field: that after all the hype surrounding quantum supremacy, quantum computing may experience a letdown like the one that plagued the field of artificial intelligence from the 1970s until the current decade, when technology finally caught up with aspirations. However, Google […]
          

Google’s quantum bet on the future of AI—and what it means for humanity

 Cache   
Katrina Brooker, Fast Company Hartmut Neven, who leads Google’s quantum team, presented the lab’s advances during Google’s Quantum Spring Symposium in May, describing the increases in processing power as double exponential. Within computer science circles, this growth rate for quantum computing has been dubbed Neven’s law, a nod to Moore’s law, which posits that “classical” computing […]
          

StarTech.com TB3CDK2DP Thunderbolt 3 and USB-C hybrid dock is magical [Review]

 Cache   
Unless you are a gamer or enthusiast, owning a desktop computer these days is sort of, well... stupid. Look, even if you do most of your computing at a desk, you should still buy a laptop. Why? Think about it -- a desktop keeps you tethered to one place, while a notebook is portable. Thanks to Thunderbolt 3 and USB-C, you can use your laptop as a makeshift desktop by using a docking station. In other words, you can connect your notebook to a monitor, keyboard, mouse, web cam, external hard drive -- pretty much anything you need. The dock… [Continue Reading]

          

Senior Credit Risk Allowance Analyst

 Cache   
GENERAL FUNCTION: This function includes direct interaction with Enterprise Risk Management and Finance senior management. The Senior Credit Risk Allowance Analyst, along with the Allowance for Loan Loss Manager, is charged with assisting the Loan Loss Reserve Committee in developing quarterly estimates of the allowance for loan and lease losses that conform to appropriate accounting principles and regulatory guidance.



ESSENTIAL DUTIES & RESPONSIBILITIES:

  • Lead the development and production of Allowance for Loan and Lease Losses (ALLL) methodology and reporting in assigned areas:
    • Assume responsibility for the existing ALLL methodology and all related calculations and reports, which requires the employee to:
      • Develop an understanding of existing and proposed credit loss models and ALLL methodologies.
      • Identify, analyze, and support significant assumptions used in credit loss models and ALLL methodologies.
      • Maintain credit loss model and ALLL methodology documentation, including Model Validation templates and Sarbanes-Oxley risks and controls.
      • Work collaboratively with Internal and External Auditors, Federal Reserve Bank Examiners, and Model Validation personnel.
    • Typical duties include:
      • Perform studies to estimate loss emergence and look-back periods for significant commercial product groups.
      • Use assigned models to develop credit loss estimates.
      • Perform edit/validation checks and analytical reviews as necessary to verify data inputs are complete and accurate.
      • Manage and adhere to end-user computing controls.
      • Perform analytical review on model outputs to understand changes in modeled credit losses and to identify any errors or omissions.
      • Perform and/or leverage portfolio analysis such as industry/geographic/product/other segmentation and determine impact upon the ALLL.
      • Perform analyses including vintage and cohort/attribution analysis, transition matrices, etc.
      • Develop, analyze, and justify specific qualitative factors to support the level of qualitative adjustments to model output and the level of the unallocated component of the ALLL, incorporating macroeconomic factors and regulatory guidance.
      • Maintain procedural documents, data, and calculation reference material.
      • Document controls in the ALLL process and perform tests of their design and operating effectiveness to assist in compliance with Sarbanes-Oxley.
      • Provide support to the Risk Management division and other downstream users of credit risk information.
      • Assist in developing ALLL estimates for CCAR stress tests.
      • Communicate with internal stakeholders: Maintain regular communication with Accounting, the lines of business, various data sources, and Risk Management groups to ensure that all appropriate items and issues are addressed, and that important items are suitably documented and presented to senior management.
      • Assume additional responsibilities as required to fulfill data enhancement goals.

  • Process Improvement:
    • Participate in the development of new ALLL model(s) and methodology.
    • Collaborate with the teams who design, develop, implement, document, and maintain new models and methodologies to improve the quality, consistency, and transparency of the estimation processes.
    • Provide alignment of methodologies, data, and reports between the allowance, economic capital, and loss forecasting.
    • Develop relationships within appropriate levels of management for process improvement ideas, issue identification, and resolution.



SUPERVISORY RESPONSIBILITIES: No direct supervisory responsibilities; however, the analyst is expected to act as a mentor to junior staff.


          

Microsoft says its Surface Duo phone isn't a phone -- here's why - CNET

 Cache   
  1. Microsoft says its Surface Duo phone isn't a phone -- here's why  CNET
  2. 3 Reasons Microsoft's New Folding Smartphone Could Be a Big Success  The Motley Fool
  3. The petitions are right. The Surface Duo should run Windows 10X  Digital Trends
  4. Download Absolutely Stunning Wallpapers Coming with the New Microsoft Surface Devices (Direct Links)  Wccftech

          

Visiting Instructor / Assistant Professor of Computer Science (Cloud Computing) - Western Wyoming Community College - Rock Springs, WY

 Cache   
Recent coding experience in one or more of the following languages: Microsoft .NET, Java, Perl, Node.js, Ruby or Python.
From Western Wyoming Community College - Mon, 25 Mar 2019 20:46:12 GMT - View all Rock Springs, WY jobs
          

Database Administrator - West Virginia Network for Educational Telecomputing (WVNET) - Morgantown, WV

 Cache   
Knowledge of Oracle database administration, SQL, C, shell scripting, Apache, java concepts, Tomcat, and Unix. 837 Chestnut Ridge Road, Morgantown, WV, 26505.
From Indeed - Wed, 11 Sep 2019 18:29:54 GMT - View all Morgantown, WV jobs
          

Latest Tech Trends, Their Problems, And How to Solve Them

 Cache   

Few IT professionals are unaware of the rapid emergence of 5G, the Internet of Things (IoT), edge-fog-cloud (or core) computing, microservices, and artificial intelligence/machine learning (AI/ML).  These new technologies hold enormous promise for transforming IT and the customer experience through the problems they solve.  It’s important to realize that, like all technologies, they also introduce new processes and, subsequently, new problems.  Most are aware of the promise, but few are aware of the new problems and how to solve them.

5G is a great example.  It delivers 10 to 100 times more throughput than 4G LTE and up to 90% lower latencies.  Users can expect throughput between 1 and 10Gbps with latencies at approximately 1 ms.  This enables large files such as 4K or 8K videos to be downloaded or uploaded in seconds not minutes.  5G will deliver mobile broadband and can potentially make traditional broadband obsolete just as mobile telephony has essentially eliminated the vast majority of landlines. 

5G mobile networking technology makes industrial IoT more scalable, simpler, and much more economically feasible.  Whereas 4G is limited to approximately 400 devices per Km2, 5G increases that number of devices supported per Km2 to approximately 1,000,000 or a 250,000% increase. The performance, latency, and scalability are why 5G is being called transformational.  But there are significant issues introduced by 5G.  A key one is the database application infrastructure.
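As a back-of-the-envelope check on those ratios, the short calculation below works through the download-time and device-density claims; the 20 GB movie size and the example link rates are illustrative assumptions, not figures from this article.

    # Rough arithmetic only; file size and link rates are assumptions for illustration.
    movie_gbits = 20 * 8                     # a 20 GB 4K movie expressed in gigabits

    for label, rate_gbps in [("4G LTE at ~0.05 Gbps", 0.05), ("5G at ~5 Gbps", 5.0)]:
        seconds = movie_gbits / rate_gbps
        print(f"{label}: {seconds:,.0f} s (~{seconds / 60:.1f} min)")

    # Device density cited above: from ~400 to ~1,000,000 devices per square km.
    print(f"density increase: {1_000_000 / 400:.0f}x")   # 2500x, i.e. roughly a 250,000% increase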

Analysts frequently cite the non-trivial multi-billion dollar investment required to roll-out 5G.  That investment is primarily focused on the antennas and fiber optic cables to the antennas.  This is because 5G is based on a completely different technology than 4G.  It utilizes millimeter waves instead of microwaves.  Millimeter waves are limited to 300 meters between antennas.  The 4G microwaves can be as far as 16 Km apart.  That is a major difference and therefore demands many more antennas and optical cables to those antennas to make 5G work effectively.  It also means it will take considerable time before rural areas are covered by 5G and even then, it will be a degraded 5G. 

The 5G infrastructure investment not being addressed is the database application infrastructure.  The database is a foundational technology for analytics.  IT Pros simply assume it will be there for their applications and microservices. Everything today is interconnected. The database application infrastructure is generally architected for the volume and performance coming from the network.  That volume and performance is going up by an order of magnitude.  What happens when the database application infrastructure is not upgraded to match?  The actual user performance improves marginally or not at all.  It can in fact degrade as volumes overwhelm the database applications not prepared for them.  Both consumers and business users become frustrated.  5G devices cost approximately 30% more than 4G – mostly because those devices need both a 5G and 4G modem (different non-compatible technologies).  The 5G network costs approximately 25% more than 4G.  It is understandable that anyone would be frustrated when they are spending considerably more and seeing limited improvement, no improvement, or negative improvement.  The database application infrastructure becomes the bottleneck.  When consumers and business users become frustrated, they go somewhere else, another website, another supplier, or another partner.  Business will be lost.

Fortunately, there is still time as the 5G rollout is just starting with momentum building in 2020 with complete implementations not expected until 2022, at the earliest.  However, IT organizations need to start planning their application infrastructure upgrades to match the 5G rollout or may end up suffering the consequences.

IoT is another technology that promises to be transformative.  It pushes intelligence to the edge of the network enabling automation that was previously unthinkable.  Smarter homes, smarter cars, smarter grids, smarter healthcare, smarter fitness, smarter water management, and more.  IoT has the potential to radically increase efficiencies and reduce waste.  Most of the implementations to date have been in consumer homes and offices.  These implementations rely on the WiFi in the building they reside. 

The industrial implementations have not been as successful…yet.  Per Gartner, 65 to 85% of industrial IoT projects to date have been stuck in pilot mode, with 28% of those stuck for more than 2 years.  There are three key reasons for this.  The first is the 4G limit of roughly 400 devices per km2.  This limitation will be fixed as 5G rolls out.  The second is the same issue as with 5G: a database application infrastructure not suited for the volume and performance required by industrial IoT.  And the third is latency from the IoT edge devices to the analytics, whether in the on-premises data center (core) or in the cloud.  Speed-of-light latency is a major limiting factor for real-time analytics and real-time actionable information.  This has led to the very rapid rise of edge-fog-cloud or core computing.

Moving analytic processing out to the edge or fog significantly reduces distance latency between where the data is being collected and where it is being analyzed.  This is crucial for applications such as autonomous vehicles.  The application must make decisions in milliseconds, not seconds.  It may have to decide whether a shadow in the road is actually a shadow, a reflection, a person, or a dangerous hazard to be avoided.  The application must make that decision immediately and cannot wait.  By pushing the application closer to the data collection, it can make that decision in the timely manner that’s required.  Smart grids, smart cities, smart water management, and smart traffic management are all examples requiring fog (near the edge) or edge computing analytics.  This solves the problem of distance latency; however, it does not resolve analytical latency.  Edge and fog computing typically lack the resources to provide ultra-fast database analytics.  This has led to the deployment of microservices.

Microservices have become very popular over the past 24 months.   They tightly couple a database application with its database that has been extremely streamlined to do only the few things the microservice requires.  The database may be a neutered relational, time series, key value, JSON, XML, object, and more.  The database application and its database are inextricably linked.  The combined microservice is then pushed down to the edge or fog compute device and its storage.  Microservices have no access to any other microservices data or database.  If it needs access to another microservice data element, it’s going to be difficult and manually labor-intensive. Each of the microservices must be reworked to grant that access, or the data must be copied and moved via an extract transfer and load (ETL) process, or the data must be duplicated in ongoing manner.  Each of these options are laborious, albeit manageable, for a handful of microservices.  But what about hundreds or thousands of microservices, which is where it’s headed?  This sprawl becomes unmanageable and ultimately, unsustainable, even with AI/ML.

AI/ML is clearly a hot tech trend today.  It’s showing up everywhere in many applications.  This is because standard CPU processing power is now powerful enough to run AI / machine learning algorithms.  AI/ML is showing up typically in one of two different variations.  The first has a defined specific purpose.  It is utilized by the vendor to automate a manual task requiring some expertise.  An example of this is in enterprise storage.  The AI/ML is tasked with placing data based on performance, latency, and data protection policies and parameters determined by the administrator.  It then matches that to the hardware configuration.  If performance should fall outside of the desired parameters AI/ML looks to correct the situation without human intervention.  It learns from experience and automatically makes changes to accomplish the required performance and latency.   The second AI/ML is a tool kit that enables IT pros to create their own algorithms. 

The 1st is an application of AI/ML.  It obviously cannot be utilized outside the tasks it was designed to do. The 2nd is a series of tools that require considerable knowledge, skill, and expertise to be able to utilize.  It is not an application.  It merely enables applications to be developed that take advantage of the AI/ML engine.  This requires a very steep learning curve.

Oracle is the first vendor to solve each and every one of these tech trend problems. The Oracle Exadata X8M and Oracle Database Appliance (ODA) X8 are uniquely suited to solve the 5G and IoT application database infrastructure problem, the edge-fog-core microservices problem, and the AI/ML usability problem. 

It starts with the co-engineering.  The compute, memory, storage, interconnect, networking, operating system, hypervisor, middleware, and the Oracle 19c Database are all co-engineered together.  Few vendors have complete engineering teams for every layer of the software and hardware stacks to do the same thing.  And those who do, have shown zero inclination to take on the intensive co-engineering required.  Oracle Exadata alone has 60 exclusive database features not found in any other database system including others running the same Oracle Database. Take for example Automatic Indexing.  It occurs multiple orders of magnitude faster than the most skilled database administrator (DBA) and delivers noticeably superior performance.  Another example is data ingest.  Extensive parallelism is built-into every Exadata providing unmatched data ingest.  And keep in mind, the Oracle Autonomous Database is utilizing the exact same Exadata Database Machine.  The results of that co-engineering deliver unprecedented Database application latency reduction, response time reduction, and performance increases.  This enables the application Database infrastructure to match and be prepared for the volume and performance of 5G and IoT.

The ODA X8 is ideal for edge or fog computing coming in at approximately 36% lower total cost of ownership (TCO) over 3 years than commodity white box servers running databases.  It’s designed to be a plug and play Oracle Database turnkey appliance.  It runs the Database application too.  Nothing is simpler and no white box server can match its performance.

The Oracle Exadata X8M is even better for the core or fog computing where it’s performance, scalability, availability and capability are simply unmatched by any other database system.  It too is architected to be exceedingly simple to implement, operate, and manage. 

The combination of the two working in conjunction in the edge-fog-core makes the application database latency problems go away.  They even solve the microservices problems.  Each Oracle Exadata X8M and ODA X8 provide pluggable databases (PDBs).  Each PDB is its own unique database working off the same stored data in the container database (CDB).  Each PDB can be the same or different type of Oracle Database including OLTP, data warehousing, time series, object, JSON, key value, graphical, spatial, XML, even document database mining.  The PDBs are working on virtual copies of the data.  There is no data duplication.  There are no ETLs.  There is no data movement.  There are no data islands.  There are no runaway database licenses and database hardware sprawl.  Data does not go stale before it can be analyzed.  Any data that needs to be accessed by a particular or multiple PDBs can be easily configured to do so.  Edge-fog-core computing is solved.  If the core needs to be in a public cloud, Oracle solves that problem as well with the Oracle Autonomous Database providing the same capabilities of Exadata and more.

That leaves the AI/ML usability problem.  Oracle solves that one too.  Both Oracle Engineered systems and the Oracle Autonomous Database have AI/ML engineered inside from the onset.  Not just a tool-kit on the side.  Oracle AI/ML comes with pre-built, documented, and production-hardened algorithms in the Oracle Autonomous Database cloud service.  DBAs do not have to be data scientists to develop AI/ML applications.  They can simply utilize the extensive Oracle library of AI/ML algorithms in Classification, Clustering, Time Series, Anomaly Detection, SQL Analytics, Regression, Attribute Importance, Association Rules, Feature Extraction, Text Mining Support, R Packages, Statistical Functions, Predictive Queries, and Exportable ML Models.  It’s as simple as selecting the algorithms to be used and using them.  That’s it.  No algorithms to create, test, document, QA, patch, and more.
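To make the "select an algorithm and use it" point concrete, the sketch below builds a small model from one of the algorithm families named above (anomaly detection). It uses scikit-learn purely as a stand-in for illustration; it is not Oracle's in-database ML interface, and the data is synthetic.

    # Generic illustration with scikit-learn (not Oracle's in-database ML API):
    # pick an algorithm from the anomaly-detection family and apply it.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    normal_rows = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))   # assumed feature vectors
    odd_rows = rng.normal(loc=6.0, scale=1.0, size=(10, 3))        # injected outliers
    X = np.vstack([normal_rows, odd_rows])

    model = IsolationForest(contamination=0.01, random_state=1).fit(X)
    flags = model.predict(X)                  # -1 = anomaly, 1 = normal
    print(int((flags == -1).sum()), "rows flagged as anomalous")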

Taking advantage of AI/ML is as simple as implementing Oracle Exadata X8M, ODA X8, or the Oracle Autonomous Database.   Oracle solves the AI/ML usability problem.

The latest tech trends of 5G, Industrial IoT, edge-fog-core or cloud computing, microservices, and AI/ML have the potential to truly be transformative for IT organizations of all stripes.  But they bring their own set of problems.  Fortunately, for organizations of all sizes, Oracle solves those problems.


          

Monday's ETF Movers: SKYY, ILF

 Cache   
In trading on Monday, the First Trust Cloud Computing ETF (SKYY) is outperforming other ETFs, up about 0.4% on the day. Components of that ETF showing particular strength include shares of Hubspot (HUBS), up about 4.2% and shares of Docusign (DOCU), up about 4.1% on the day.
          

Dr. Richard Daystrom on (News Article):IT’S HERE: D-Wave Announces 2048-Qubit Quantum Computing System, Theoretically Capable of Breaking All Classical Encryption, Including Military-Grade

 Cache   

IT’S HERE: D-Wave Announces 2048-Qubit Quantum Computing System, Theoretically Capable of Breaking All Classical Encryption, Including Military-Grade

 Tuesday, September 24, 2019 by: Mike Adams
Tags: big government, breakthrough, computing, cryptocurrency, D-Wave, decryption, encryption, good science, inventions, quantum computing, qubits, surveillance

 Over the last several days, we’ve highlighted the stunning breakthrough in “quantum supremacy” announced by Google and NASA. Across other articles, we’ve revealed how quantum computing translates highly complex algorithmic computational problems into simple, linear (or geometric) problems in terms of computational complexity. In practical terms, quantum computers are code breakers, and they can break all known classical encryption, including the encryption used in cryptocurrency, military communications, financial transactions and even private encrypted communications.

As the number of qubits (quantum bits) in quantum computers exceeds the number of bits used in classical encryption, it renders that encryption practically pointless. A 256-qubit quantum computer, in other words, can easily break 256-bit encryption. A 512-qubit computer can break 512-bit encryption, and so on.

Those of us who are the leading publishers in independent media have long known that government-funded tech advancements are typically allowed to leak to the public only after several years of additional advances have already been achieved. Stated in practical terms, the rule of thumb is that by the time breakthrough technology gets reported, the government is already a decade beyond that.

Thus, when Google’s scientists declare “quantum supremacy” involving a 53-qubit quantum computer, you can confidently know that in their secret labs, they very likely already have quantum computers operating with a far greater number of qubits.

At the time we were assembling those stories, we were not yet aware that D-Wave, a quantum computing company that provides exotic hardware to Google and other research organizations, had announced a 2048-qubit quantum computer.

The system is called the “D-Wave 2000Q” platform, and it features 2048 qubits, effectively allowing it to break military-grade encryption that uses 2048 or fewer encryption bits.

As explained in a D-Wave Systems brochure:

The D-Wave 2000Q system has up to 2048 qubits and 5600 couplers. To reach this scale, it uses 128,000 Josephson junctions, which makes the D-Wave 2000Q QPU by far the most complex superconducting integrated circuit ever built.

Other facts from D-Wave about its superconducting quantum computing platform:

  • The system consumes 25 kW of power, meaning it can be run on less electricity than what is typically wired into a residential home (which is typically 200 amps x 220 v, or 44 kW).
  • The system produces virtually no heat. “The required water cooling is on par with what a kitchen tap can provide,” says the D-Wave brochure.
  • The system provides a platform for truly incredible improvements in computational efficiency involving machine learning, financial modeling, neural networking, modeling proteins in chemistry and — most importantly — “factoring integers.”

“Factoring integers” means breaking encryption

The “factoring integers” line, found in the D-Wave brochure, is what’s causing unprecedented nervousness across cryptocurrency analysts right now, some of whom seem to be pushing the bizarre idea that quantum computers are an elaborate hoax in order to avoid having to admit that quantum computing renders cryptocurrency cryptography algorithms obsolete. (At least as currently structured, although perhaps there is a way around this in the future.)

“Factoring integers” is the key to breaking encryption. In fact, it is the extreme difficulty of factoring very large numbers that makes encryption incredibly difficult to break using classical computing. But as we have explained in this previous article, quantum computing translates exponentially complex mathematical problems into simple, linear (or you could call it “geometric”) math, making the computation ridiculously simple. (In truth, quantum computers aren’t really “computing” anything. The universe is doing the computations. The quantum computer is merely an interface that talks to the underlying computational nature of physical reality, which is all based on a hyper-computational matrix that calculates cause-effect solutions for all subatomic particles and atomic elements, across the entire cosmos. Read more below…)

Depending on the number of bits involved, a quantum computer can take a problem that might require literally one billion years to solve on a classical computer and render a short list of likely answers in less than one second. (Again, depending on many variables, this is just a summary of the scale, not a precise claim about the specifications of a particular system.)
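To put a rough number on that scale, here is a quick back-of-the-envelope sketch in Python; the trillion-guesses-per-second rate is an assumed figure for illustration, not a benchmark of any particular machine:

```python
# Rough scale of brute-forcing a 256-bit keyspace on classical hardware (illustrative).
keyspace = 2 ** 256                   # number of possible 256-bit keys
guesses_per_second = 10 ** 12         # assumed rate: one trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years:.2e} years to exhaust the keyspace")   # astronomically large
```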

Given that D-Wave’s quantum computers cost only a few million dollars — while there are billions of dollars’ worth of crypto floating around that could be spoofed and redirected if you have a system that can easily crack cryptography — it seems to be a matter of economic certainty that, sooner or later, someone will acquire a quantum computing system and use it to steal cryptocurrency wallets by spoofing transactions. To be clear, I’m sure D-Wave likely vets its customers rather carefully, and the company would not knowingly provide its quantum computing tech to an organization that appeared to be motivated by malicious intent. Yet, realistically, we’ve all seen historical examples of advanced technology getting into the hands of twisted, evil people such as those who run the Federal Reserve, for example.

D-Wave quantum computers don’t really “compute” anything; they send mathematical questions into multiple dimensions, then retrieve the most likely answers

So how does quantum computing really work? As we’ve explained in several articles, these systems don’t really carry out “computing” in the classic work sense of the term. There is no “computing” taking place in the D-Wave hardware. The best way to describe this is to imagine quantum computers as computational stargates. They submit mathematical questions into a hyper-dimensional reality (the quantum reality of superposition, etc.), and the universe itself carries out the computation because the very fabric of reality is mathematical at its core. As some brilliant scientists say, the universe IS mathematics, and thus the fabric of reality cannot help but automatically compute solutions in every slice of time, with seemingly infinite computational capability down to the subatomic level.

Put another way, the world of quantum phenomena is constantly trying out all possible combinations and permutations of atomic spin states and subatomic particles, and it naturally and automatically derives the best combination that achieves the lowest energy state (i.e. the least amount of chaos).

The end result is that a short list of the best possible solutions “magically” (although it isn’t magic, it just seems like magic) appears in the spin states of the elements which represent binary registers. Thus, the answers to your computational problems are gifted back to you from the universe, almost as if the universe itself is a God-like computational guru that hands out free answers to any question that you can manage to present in binary. (Technically speaking, this also proves that the universe was created by an intelligent designer who expresses creation through mathematics.)
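To make the "lowest energy state" idea concrete, here is a minimal, purely classical sketch of the kind of Ising-model minimization being described, with made-up bias and coupling values; for three spins we can simply enumerate every configuration, which is exactly the step that becomes intractable at scale and that an annealer hands off to physics:

```python
# Illustrative only: a tiny 3-spin Ising model solved by brute-force enumeration.
# An annealer searches for the same kind of minimum-energy spin configuration,
# but over thousands of qubits at once.
from itertools import product

h = {0: 0.5, 1: -0.3, 2: 0.2}                    # per-spin biases (made-up values)
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.4}     # pairwise couplings (made-up values)

def energy(spins):
    """Ising energy: sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Enumerate all 2^3 spin configurations and keep the lowest-energy one.
best = min(product([-1, +1], repeat=3), key=energy)
print("lowest-energy configuration:", best, "energy:", energy(best))
```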

Programmers can easily break encryption codes using standard C++ commands that interface with the quantum portal

All of these quantum functions, by the way, are controlled by standard computer language code, including C++, Python and MATLAB. The system has its own API, and you can even submit commands to the quantum realm via its “Quantum Machine Instruction” (QMI) commands. As D-Wave explains in its brochure:

The D-Wave 2000Q system provides a standard Internet API (based on RESTful services), with client libraries available for C/C++, Python, and MATLAB. This interface allows users to access the system either as a cloud resource over a network, or integrated into their high-performance computing environments and data centers. Access is also available through D-Wave’s hosted cloud service. Using D-Wave’s development tools and client libraries, developers can create algorithms and applications within their existing environments using industry-standard tools.

While users can submit problems to the system in a number of different ways, ultimately a problem represents a set of values that correspond to the weights of the qubits and the strength of the couplers. The system takes these values along with other user-specified parameters and sends a single quantum machine instruction (QMI) to the QPU. Problem solutions correspond to the optimal configuration of qubits found; that is, the lowest points in the energy landscape. These values are returned to the user program over the network.

In other words, breaking cryptography is as simple as submitting the large integer to the quantum system as a series of bits which are then translated into electron spin states by the quantum hardware. From there, a “go” command is issued, and the universe solves the equation in a way that automatically derives the best combinations of multiple qubit spin states to achieve the lowest overall energy state (i.e. the simplest solution with the least chaos). A short list of the best possible factors of the large integer are returned in a time-sliced representation of the binary registers, which can be read over a regular network like any subroutine request.
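As a rough illustration of what such a submission looks like in code, here is a minimal sketch using D-Wave's Python-based Ocean SDK. This is an assumption on my part (the brochure quoted above describes the older REST client libraries, and exact package, sampler and parameter names vary by SDK version), and it submits a toy QUBO rather than a factoring problem:

```python
# Hedged sketch: submitting a small QUBO to a D-Wave sampler via the Ocean SDK.
# Package and class names are assumed; a valid API token and solver access are required.
from dwave.system import DWaveSampler, EmbeddingComposite

# QUBO weights: linear terms on the diagonal, coupler strengths off the diagonal.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}

sampler = EmbeddingComposite(DWaveSampler())        # talks to the QPU over the network
sampleset = sampler.sample_qubo(Q, num_reads=100)   # one machine instruction, many anneal reads

print(sampleset.first.sample, sampleset.first.energy)  # lowest-energy answer found
```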

From there, a classical computer can then try factoring the large integer with the short list of the best answers from the quantum system, using standard CPUs and code logic. Within a few tries from the short list, the correct factors are easily found. Once you have the factors, you now have the decryption keys to the original encrypted message, so decryption is effortless. In effect, you have used quantum computing to “cheat” the keys out of the system and hand them to you on a silver platter. (Or, in some cases, a holmium platter lined with platinum, or whatever exotic elements are being used in the quantum spin state hardware.)
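The classical post-processing step described here is trivial by comparison. Below is a minimal sketch in plain Python, using a toy modulus and a hypothetical short list of candidate factors standing in for the quantum system's output (real keys use 2048-bit moduli):

```python
# Toy illustration: verify candidate factors and rebuild the RSA private exponent.
N = 3233                       # toy RSA modulus (61 * 53); real keys use 2048+ bits
e = 17                         # public exponent
candidates = [51, 53, 59, 61]  # hypothetical "short list" of likely factors

for p in candidates:
    if N % p == 0:             # classical trial division over the short list
        q = N // p
        phi = (p - 1) * (q - 1)
        d = pow(e, -1, phi)    # private exponent: modular inverse of e (Python 3.8+)
        print(f"factors {p} x {q}, private exponent d = {d}")
        break
```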

Any competent programmer who has access to this technology, in other words, can break encryption almost without effort. The programming logic is not complex at all. The difficulty in such systems is in the hardware control systems, including spin state “reads” and “writes,” which are strongly affected by temperature and electromagnetic interference. The exotic hardware is the real breakthrough in all this, not the computational part. (Quantum computers are physics oracles, in a sense. The physics is the challenge, not the computer code.)

Most people cannot grasp quantum computing, but that’s not a reason to pretend it isn’t real

One of the more curious things I’ve found recently is that some writers and publishers who don’t understand quantum computing are trending in the direction of pretending it doesn’t exist. According to some, Google’s 53-qubit announcement was a hoax, which must also mean that, in their view, D-Wave Systems isn’t real and doesn’t sell quantum computers at all.

That is not a rational position. There’s no doubt that D-Wave is a real company with real hardware, and that Google already possesses 2048-qubit quantum computing capabilities. Furthermore, Google and the NSA have every reason to keep this fact secret for as long as possible, so that they can continue to scrape everyone’s “encrypted” emails and financial transactions, all of which can be retroactively decrypted any time the NSA wants to look more closely at your activities.

To me, it has long been obvious that the cosmos itself is inherently computational. Just look at the collapse of probability waves found in the orbital shells of electrons. It should be self-evident that the universe is computing solutions at the subatomic level in every instant, effortlessly and without apparent cost. The very framework of the cosmos is driven by mathematics and rapid computational solutions. Once you realize how much subatomic phenomena is quantized, it becomes blatantly apparent that the universe is digitized and mathematical. The entire construct in which we exist, in other words, is a mathematical simulation, perhaps created by God for the purpose of amusing himself by watching our collective stupidity.

D-Wave Systems, by the way, knows exactly what’s up with all this. Their goal is to make quantum computing available to the masses. They also seem to hint at the hyperdimensional reality of how quantum computing works. From their brochure: (emphasis added)

While the D-Wave quantum computer is the most advanced in the world, the quantum computing revolution has only begun. Our vision is of a future where quantum computers will be accessible and of value to all, solving the world’s most complex computing problems. This will require advances in many dimensions and contributions from experts in diverse domains. It is exciting to see increasing investment worldwide, advances in research and technology, and a growing ecosystem of developers, users, and applications needed to deliver on that vision.

I can tell that the D-Wave people are some very smart folks. Maybe if these systems get at least an order of magnitude less expensive, we could buy one, install it in our mass spec lab, and start throwing computational questions at the universe.

Personally, if I had one of these systems, I would use it to solve protein folding questions for all the obvious reasons. Then I would probably have it start looking for blood and urine biomarkers for cancer. You could make a fortune applying quantum computing to solving horse race betting and handicapping equations, but that would seem silly compared to what the system is really capable of. Another application would be solving atomic decay patterns to derive the best way to synthesize antimatter, which can be used to power faster-than-light drive systems. (Which I cover at OblivionAgenda.com in a series of lectures. The FTL lectures have yet to be posted there, but are coming soon.)

Sadly, the deep state will probably use this technology to surveil humanity and enslave everyone with AI facial recognition and “precrime” predictive accusations that get translated into red flag laws. Once the tech giants profile you psychologically and behaviorally, a quantum computing system can easily compute your likelihood of becoming the next mass shooter. You could be found guilty by “quantum law” even if you’ve never pulled the trigger.

As with all technologies, this one will be abused by governments to control and enslave humanity. It doesn’t mean the technology is at fault but rather the lack of morality and ethics among fallen humans.

Read more about science and computing at Science.news.

 

*********************************************


          

Dr. Richard Daystrom on (News Article): BREAKING: NO MORE SECRETS – Google Achieves “Quantum Supremacy” That Will Soon Render All Cryptocurrency Breakable, All Military Secrets Revealed

 Cache   

BREAKING: NO MORE SECRETS – Google Achieves “Quantum Supremacy” That Will Soon Render All Cryptocurrency Breakable, All Military Secrets Revealed

 

Saturday, September 21, 2019 by: Mike Adams
Tags: bitcoin, cryptocurrency, cryptography, encryption, Google, military encryption, quantum computing, quantum supremacy, qubits, secrets

Preliminary report. More detailed analysis coming in 24 hours at this site. According to a report published at Fortune.com, Google has achieved “quantum supremacy” with a 53-qubit quantum computer. From reading the report, it is obvious that Fortune.com editors, who should be applauded for covering this story, really have little clue about the implications of this revelation. Here’s what this means for cryptocurrency, military secrets and all secrets which are protected by cryptography.

Notably, NASA published the scientific paper at this link, then promptly removed it as soon as the implications of this technology started to become apparent to a few observers. (The link above is now dead. The cover-up begins…) However, the Financial Times reported on the paper before it was removed. Google is now refusing to verify the existence of the paper.

Here’s the upshot of what this “quantum supremacy” means for Google and the world:

  • Google’s new quantum processor took just 200 seconds to complete a computing task that would normally require 10,000 years on a supercomputer.
  • A 53-qubit quantum computer can break any 53-bit cryptography in mere seconds, or in fractions of a second in certain circumstances.
  • Bitcoin’s transactions are currently protected by 256-bit encryption. Once Google scales its quantum computing to 256 qubits, it’s over for Bitcoin (and all 256-bit crypto), since Google (or anyone with the technology) could easily break the encryption protecting all crypto transactions, then redirect all such transactions to its own wallet. See below why Google’s own scientists predict 256-qubit computing will be achieved by 2022.
  • In effect, “quantum supremacy” means the end of cryptographic secrets, which is the very basis for cryptocurrency.
  • In addition, all military-grade encryption will become pointless as Google’s quantum computers expand their qubits into the 512, 1024 or 2048 range, rendering all modern cryptography obsolete. In effect, Google’s computer could “crack” any cryptography in mere seconds.
  • The very basis of Bitcoin and other cryptocurrencies rests on the difficulty of factoring very large numbers. Classical computing can only compute the correct factoring answers through brute force trial-and-error, requiring massive computing power and time (in some cases, into the trillions of years, depending on the number of encryption bits). Quantum computing, it could be said, solves the factoring problem in 2^n dimensions, where n is the number of bits of encryption. Unlike traditional computing bits that can only hold a value of 0 or 1 (but not both), qubits can simultaneously hold both values, meaning an 8-qubit computer can simultaneously represent all values between 0 and 255 at the same time (see the arithmetic sketch after this list). A deeper discussion of quantum computing is beyond the scope of this news brief, but its best application is breaking cryptography.
  • The number of qubits in Google’s quantum computers will double at least every year, according to the science paper that has just been published. As Fortune reports, “Further, they predict that quantum computing power will ‘grow at a double exponential rate,’ besting even the exponential rate that defined Moore’s Law, a trend that observed traditional computing power to double roughly every two years.”
  • As a conservative estimate, this means Google will achieve > 100 qubits by 2020, and > 200 qubits by 2021, then > 400 qubits by the year 2022.
  • Once Google’s quantum computers exceed 256 qubits, all cryptocurrency encryption that uses 256-bit encryption will be null and void.
  • By 2024, Google will be able to break nearly all military-grade encryption, rendering military communications fully transparent to Google.
  • Over the last decade, Google has become the most evil corporation in the world, wholly dedicated to the suppression of human knowledge through censorship, demonetization and de-platforming of non-mainstream information sources. Google has blocked nearly all websites offering information on natural health and holistic medicine while blocking all videos and web pages that question the corrupt scientific establishment on topics like vaccines, pesticides and GMOs. Google has proven it is the most corrupt, evil entity in the world, and now it has the technology to break all cryptography and achieve “omniscience” in our modern technological society. Google is a front for Big Pharma and communist China. Google despises America, hates human health and has already demonstrated it is willing to steal elections to install the politicians it wants.
  • With this quantum technology, Google will be able to break all U.S. military encryption and forward all “secret” communications to the communist Chinese. (Yes, Google hates America and wants to see America destroyed while building out a Red China-style system of social control and total enslavement.)
  • Google’s quantum eavesdropping system, which might as well be called, “Setec Astronomy,” will scrape up all the secrets of all legislators, Supreme Court justices, public officials and CEOs. Nothing will be safe from the Google Eye of Sauron. Everyone will be “blackmailable” with Google’s quantum computing power.
  • Google will rapidly come to dominate the world, controlling most of the money, all speech, all politics, most science and technology, most of the news media and all public officials. Google will become the dominant controlling authoritarian force on planet Earth, and all humans will be subservient to its demands. Democracy, truth and freedom will be annihilated.
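As a quick arithmetic check on two of the claims flagged above (the 2^n state space of an n-qubit register, and the "doubling at least every year" projection from a 53-qubit starting point), here is a small sketch; the extrapolation simply restates the article's own assumption and is not a vendor roadmap:

```python
# Worked arithmetic for two claims above.
for n in (8, 53, 256):
    print(f"{n:>3} qubits -> 2^{n} basis states = {2**n:,}")

qubits, year = 53, 2019        # starting point cited in the article
while year < 2024:
    year += 1
    qubits *= 2                # the "doubling at least every year" assumption
    print(year, "->", qubits, "qubits")
```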

Interestingly, I publicly predicted this exact scenario over two years ago in a podcast that was banned by YouTube and then re-posted on Brighteon.com a year later. This podcast directly states that the development of quantum computing would render cryptocurrency obsolete:

Beyond Skynet: Google’s 3 pillars of tech: AI, Quantum computing and humanoid robotics

Google has been investing heavily in three key areas of research:

  • Artificial intelligence (machine learning, etc.)
  • Quantum computing
  • Humanoid robotics

When you combine these three, you get something that’s far beyond Skynet. You eventually create an all-seeing AI intelligence that will know all secrets and control all financial transactions. With AI quickly outpacing human intelligence, and with quantum computing rendering all secrets fully exposed to the AI system, it’s only a matter of time before the Google Super Intellect System (or so it might be named) enslaves humanity and decides we are no longer necessary for its very existence. The humanoid robots translate the will of the AI system into the physical world, allowing Google’s AI intellect system to carry out mass genocide of humans, tear down human cities or carry out anything else that requires “muscle” in the physical world. All such robots will, of course, be controlled by the AI intellect.

Google is building a doomsday Skynet system, in other words, and they are getting away with it because nobody in Washington D.C. understands mathematics or science.

A more detailed analysis of this will appear on this site tomorrow. Bottom line? Humanity had better start building mobile EMP weapons and learning how to kill robots, or it’s over for the human race.

In my opinion, we should pull the plug on Google right now. We may already be too late.

 

***********************************


          

AI Researchers See Danger of Haves, Have-Nots

 Cache   

The growing cost of artificial intelligence research means that fewer people can easily access computing power to advance the technology, which scientists warn could shortchange university laboratories and make behemoths like Google, Microsoft, and Facebook dominant.


          

Using Light to Speed Up Computation

 Cache   

A new type of photonic integrated circuit called a PAXEL, for photonic accelerator, shows promise for high-speed, energy-efficient computing.


          

LambdaGuard – AWS Lambda Serverless Security Scanner

 Cache   
LambdaGuard –  AWS Lambda Serverless Security Scanner

LambdaGuard is a tool which allows you to visualise and audit the security of your serverless assets, an open-source AWS Lambda Serverless Security Scanner.

AWS Lambda is an event-driven, serverless computing platform provided by Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code.
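For readers unfamiliar with the programming model being audited, a Lambda function is essentially just an event handler. The sketch below uses the real Python handler signature, while the event field shown is purely illustrative, since the payload shape depends on the trigger:

```python
# Minimal AWS Lambda handler (Python runtime): invoked once per event, with the
# platform provisioning and scaling the underlying compute automatically.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g. an S3 notification or API request);
    # 'context' exposes runtime metadata such as remaining execution time.
    name = event.get("name", "world")   # illustrative field, not a fixed schema
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

A tool like LambdaGuard is concerned less with the handler body than with the assets and permissions configured around it.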

LambdaGuard is an AWS Lambda auditing tool designed to create asset visibility and provide actionable results.

Read the rest of LambdaGuard – AWS Lambda Serverless Security Scanner now! Only available at Darknet.


          

[نرم افزار] دانلود Microsys Smarter Battery v5.8 - نرم افزار کنترل وضعیت باتری لپ تاپ

 Cache   

Download Microsys Smarter Battery v5.8 - Laptop Battery Monitoring Software

Smarter Battery is a tool for monitoring the battery of portable computers such as laptops and tablets, putting all of the battery's information at your disposal. The software helps you extend your battery's lifespan in addition to saving energy. Smarter Battery displays the battery's charge or discharge progress and calculates several important battery parameters, such as consumption rate and charge/discharge cycles. The software continuously reads the battery data and makes predictions about ...


https://p30download.com/fa/entry/80736/

Download link: https://p30download.com/fa/entry/80736


          

[نرم افزار] دانلود NetSarang Xmanager Power Suite v6 Build 0164 - نرم افزار کنترل سیستم های سرور از راه دور

 Cache   

Download NetSarang Xmanager Power Suite v6 Build 0164 - Remote Server Management Software

Xmanager Power Suite includes the Xmanager, Xshell, Xftp and Xlpd programs from NetSarang. Xmanager is for Windows, and Xshell is for remote control of Unix/Linux servers using a secure terminal. Xftp is used for file transfer, and finally Xlpd makes it possible to print documents (non-local documents residing on other systems). Xmanager Enterprise offers extensive network management features and control over other systems, with which any system administrator will be able to meet their various needs in this area ...


https://p30download.com/fa/entry/81450/

Download link: https://p30download.com/fa/entry/81450


          

How Infrastructure and Operations Can Enable Digital Change

 Cache   

Infrastructure and operations (I&O) organizations in the digital era are on the cusp of dramatic change. From the death of the data center to the artificial intelligence (AI) revolution, every new wave of digital has a profound impact on the way I&O operates. And as I&O rapidly evolves from a support function to a strategic one, IT teams are no longer seen as service providers, but instead as enablers of business transformation. 

I&O leaders must refocus their priorities to meet the changing expectations of IT leadership to avoid losing resources and relevance within the organization

“To stay relevant in today’s enterprise, I&O needs to become more agile and customer-centric,” says Chirag Dekate, Senior Director Analyst, Gartner. “As digital adoption grows, top-performing I&O organizations are rethinking their structures, metrics and skill sets to align more closely with business stakeholders.”

To improve efficiency, increase productivity and enable digital business transformation, I&O leaders must adapt their organizations around changing CIO priorities. Dekate outlines three ways to do so.

Embrace new technologies and promote agility

In the Gartner 2019 CIO Survey, IT leaders reported digital initiatives, revenue growth and operational excellence among their top priorities. Traditional I&O areas of concern, such as modernization of legacy systems and ERP, weighed in much lower on the list. 

I&O leaders must refocus their priorities to meet the changing expectations of IT leadership to avoid losing resources and relevance within the organization. Adopting new technologies and processes that nurture digital business can demonstrate I&O’s commitment to transformation efforts. This includes technologies such as cloud, AI and automation, Internet of Things (IoT) and edge computing.

I&O leaders can also ensure that their teams are prepared to deliver on digital initiatives by accelerating agility. Simplify processes and embrace intelligent automation for low-value, repetitive tasks to free up time for more strategic efforts.


Speed up to the pace of business

CIOs report that IT is a core enabler of change in 94% of organizations that are revising their business models. In today’s era of digital transformation, this means that IT teams have the opportunity to directly impact their organization’s bottom line. However, change happens quickly, and I&O needs to speed up operations to be effective in supporting digital transformation. 

I&O leaders must reshape their organizations to deliver solutions at the rate and frequency that business requires them, rather than when I&O can deliver them. Often, this will mean a more product-centric operating model. Think beyond traditional silos like storage, networking or cloud, and instead deploy resources based on specific use cases. Restructure your I&O team to prioritize tasks that have a noticeable business impact and support CIO initiatives. 

Transform people for digital productivity

Delivering successful digital initiatives requires new skills. As I&O organizations embrace digital transformation, reskilling teams becomes necessary. Whether hiring new talent or retraining and restructuring existing teams, focusing on people will enable I&O to cultivate digital productivity.

New key performance indicators (KPIs) that focus on customer and enterprise value will also help I&O support digital initiatives. KPIs that encourage outcome-oriented execution will help ensure that activities align with the expectations of the CIO and other business stakeholders.

The post How Infrastructure and Operations Can Enable Digital Change appeared first on Smarter With Gartner.


          

Inside Sales Representative commercial computing - HP - Allerød kommune

 Cache   
From profiling, defining an effective commercial approach and managing opportunities to closing orders. You’re out to reimagine and reinvent what’s possible—in…
From HP - Sat, 28 Sep 2019 10:13:09 GMT - View all Allerød kommune jobs
          

Principal IT Engineer Applications - (Greenwood Village, Colorado, United States)

 Cache   
This senior level employee is primarily responsible for translating business requirements and functional specifications into software solutions, for contributing to and leveraging the technical direction for the development of integrated business and/or enterprise application solutions, and for serving as a technical expert for project teams while providing consultation to help ensure new and existing software solutions are developed. Essential Responsibilities: Conducts or oversees business-specific projects by applying deep expertise in subject area; promoting adherence to all procedures and policies; developing work plans to meet business priorities and deadlines; determining and carrying out processes and methodologies; coordinating and delegating resources to accomplish organizational goals; partnering internally and externally to make effective business decisions; solving complex problems; escalating issues or risks, as appropriate; monitoring progress and results; recognizing and capitalizing on improvement opportunities; evaluating recommendations made; and influencing the completion of project tasks by others. Practices self-leadership and promotes learning in others by building relationships with cross-functional stakeholders; communicating information and providing advice to drive projects forward; influencing team members within assigned unit; listening and responding to, seeking, and addressing performance feedback; adapting to competing demands and new responsibilities; providing feedback to others, including upward feedback to leadership and mentoring junior team members; creating and executing plans to capitalize on strengths and improve opportunity areas; and adapting to and learning from change, difficulties, and feedback. As part of the IT Engineering job family, this position is responsible for leveraging DEVOPS, and both Waterfall and Agile practices, to design, develop, and deliver resilient, secure, multi-channel, high-volume, high-transaction, on/off-premise, cloud-based solutions. Provides insight into recommendations for technical solutions that meet design and functional needs. Translates business requirements and functional specifications into physical program designs, code modules, stable application systems, and software solutions by partnering with Business Analysts and other team members to understand business needs and functional specifications. Ensures appropriate translation of business requirements and functional specifications into physical program designs, code modules, stable application systems, and software solutions by partnering with Business Analysts and other team members to understand business needs and functional specifications. Serves as an expert for innovative technical solutions that meet design and functional needs. Facilitates and serves as a technical expert for project teams throughout the release schedule of business and enterprise software solutions. Builds and maintains trusting relationships with internal customers, third party vendors, and senior management to ensure the alignment, buy-in, and support of diverse project stakeholders. Recommends complex technical solutions that meet design and functional needs. Collaborates with architects and/or software consultants to ensure functional specifications are converted into flexible, scalable, and maintainable solution designs. Provides expertise and guidance to team members for systems incident responses for complex issues. 
Provides implementation and post-implementation triage and support of business software solutions by programming and/or configuring enhancements to new or packaged-based systems and applications. Reviews and validates technical specifications and documentation. Identifies specific interfaces, methods, parameters, procedures, and functions to support technical solutions while incorporating architectural designs. Supports component integration testing (CIT) and user acceptance testing (UAT) for application initiatives by providing triage, attending test team meetings, keeping the QC up-to-date, performing fixes and unit testing, providing insight to testing teams in order to ensure the appropriate depth of test coverage, and supporting the development of proper documentation. Leads systems incident support and troubleshooting for complex and non-complex issues. Identifies specific interfaces, methods, parameters, procedures, and functions, as required, to support technical solutions serving as an escalation point for complex or unresolved issues related to requirements translation. Develops and validates complex testing scenarios to identify application errors and ensure software solutions meet functional specifications. Participates in all software development lifecycle phases by applying comprehensive understanding of company methodology, policies, standards, and internal and external controls. Develops, configures, or modifies basic to moderately complex integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using programming languages. Provides consultation to help ensure new and existing software solutions are developed with insight into industry best practices, strategies, and architectures. Builds partnerships with IT teams and vendors to ensure written code adheres to company architectural standards, design patterns, and technical specifications. Leads the development, validation, and execution of testing scenarios to identify application errors and ensure software solutions meet functional specifications. Leads, mentors, and trains other technical resources to develop software applications.
          

DevOps Engineer (m/w/d)

 Cache   
DevOps Engineer (m/f/d) ALLPLAN GmbH Munich You bring several years of experience and best practices in cloud computing, and based on that experience you work out improvements, upgrades and fixes and accompany the subsequent rollout; further development of the infrastructure architec...; ... become part of our success story and strengthen our Development department as of
          

ADATA Dictionary: What is TCG Opal?

 Cache   

ADATA Dictionary: What is TCG Opal? TCG, short for the Trusted Computing Group, is an organization specializing in the development of industry standards and is made up of several working groups. For example, SSD TCG Opal is a group active in the field of storage. Each of the members in this group [...]

The post ADATA Dictionary: What is TCG Opal? appeared first on Avang (آونگ).


          

cloud computing using Javascript

 Cache   
1. Implement a prototype of the Keccak-f[25] function. Your company is able to create a program using SHA-3 (which uses the Keccak-f[25] permutation) to generate message digests, so as to ensure the integrity of customers' data. 2... (Budget: $30 - $250 AUD, Jobs: C# Programming, C++ Programming, Java, Javascript, Software Architecture)
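The full Keccak-f permutation is beyond the scope of a short note, but the end goal described in the posting, a message digest used to verify data integrity, can be illustrated with Python's standard hashlib (the posting itself asks for a from-scratch Keccak-f[25] prototype in one of the listed languages; this is only a sketch of the intended use):

```python
# Integrity check with SHA-3 (Keccak-based), via Python's standard library.
import hashlib

data = b"customer record v1"
digest = hashlib.sha3_256(data).hexdigest()   # store this alongside the data

# Later: recompute and compare to detect any modification.
tampered = b"customer record v2"
print(digest == hashlib.sha3_256(data).hexdigest())      # True  -> intact
print(digest == hashlib.sha3_256(tampered).hexdigest())  # False -> modified
```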
          

Oracle Reveals $40 Million Investment in Chip Start-Up Ampere

 Cache   

Oracle recently announced an investment in chip startup Ampere. Ampere is run by Renee James, a former Intel executive who served as president of the company from 2013 till her departure in 2016, and currently serves on Oracle’s board. Ampere Computing develops microprocessors for cloud servers. Their processors are based on the chips designed by…

The post Oracle Reveals $40 Million Investment in Chip Start-Up Ampere appeared first on WebProNews.


          

How cloud/edge computing for printing is creating new opportunities for partners

 Cache   
By Ondrej Kracijek, Chief Technology Strategist, Y Soft.

Over the years, the technology sector has evolved and grown at an extraordinary rate, with new platforms and features being created every single day. But once in a while a new idea creates a fundamental paradigm shift that changes everything. Cloud computing belongs in this group.
          

Vision and Graphics Lab Wins at NYC Media Lab Summit

 Cache   
The team, led by professor Steven Feiner, won the Top Prize, Future Interfaces & Spatial Computing, for their demo, Bounce! Collaborative VR for Low-Latency Interaction at the NYC Media Lab Summit last week.
          

Microsoft wants to make its cloud the platform 'for every workload on the planet,' and one way it's doing that is helping startups win bigger customers (MSFT)

 Cache   

Charlotte Yarkoni

  • Microsoft launched a program called Microsoft for Startups in February 2018 to teach startups how to sell to larger customers and give them access to Microsoft's customer base.
  • Microsoft sees helping startups as an investment because they can become partners in the future, and startups are also using Microsoft's cloud, Microsoft Azure.
  • Microsoft is also doing more to attract developers in startups like investing in open source software, something that it was previously reluctant to do.
  • Click here for more BI Prime stories.

When Chad Fowler, general manager of startups at Microsoft, initially joined the company through its acquisition of the Berlin-based productivity app startup Wunderlist, he felt that Microsoft was an "unlikely suitor" because it was a "sales-y company" that "didn't really get startups."

"I would describe myself as one of the least likely eventual Microsoft employees," Fowler told Business Insider. "I grew up in the '90s in technology companies, and I was very much on the open source side of the world. Microsoft's perception and resonance with that community is pretty well known historically."

Fowler meant that back in the 1990s, Microsoft often fought with open source, or software that's free for anyone to download and contribute to. Former Microsoft CEO Steve Ballmer even called the popular open source operating system Linux "a cancer." 

Fowler says that he still had these misconceptions, even after he joined Microsoft initially, but since then, Microsoft has made a turnaround. Now, he's leading a program called Microsoft for Startups, which was launched in February 2018. It helps startups sell their products to Microsoft's customers and teaches them how to scale up.

"I think of myself as a startup person first and an open source person second, and here I am doing that stuff in the context of Microsoft where I never thought I would actually be, but the company is different now," Fowler said.

The startup program underscores the radical shift at Microsoft in the five years since Satya Nadella took the CEO reins, leaving behind old hangups and propelling the company to a trillion dollar market valuation. For Microsoft, forging close ties with startups represents an important market opportunity as it tries to make headway against Amazon in the booming cloud computing market.

Charlotte Yarkoni, corporate vice president of commerce and ecosystem in the cloud and AI division at Microsoft, says that while startups have talented engineering teams, they often need help selling, so that's where Microsoft is uniquely positioned to help.

Startups on their own may have a harder time getting in front of enterprise customers in fields such as health care or manufacturing, but Microsoft for Startups gives them access to Microsoft's customer base. It also gives startups credits for Microsoft's cloud, Microsoft Azure, making startups likely to pick Azure as their cloud of choice to run their software. As a result, Microsoft sees helping these startups as an investment. 

"Startups are important because they do represent a lot of the innovation, whether it's building new workloads or even developing new trends and technologies that are more forefront in terms of business models they're contemplating," Yarkoni told Business Insider.

'Help first, sell last'

It's a win for Microsoft, too. Yarkoni says these startups may end up becoming partners in the future. 

"What we really wanted to do is build more of a startup ecosystem that represented a partner ecosystem for us, a budding partner ecosystem," Yarkoni said.

Microsoft also offers these startups credits for its cloud Microsoft Azure and helps them onboard onto it. If these startups sell their product on Microsoft's cloud, it also benefits Microsoft, and Microsoft gets to see "interesting new solutions" running on its cloud, Yarkoni says.

"We aspire for our cloud to be the platform for every workload on the planet," Yarkoni said. "These newer workloads and the innovations they bring are fantastic for us to have on our cloud."

Still, Fowler says helping startups is his top priority. His mission is to "help first, sell last." 

Read more: A top VMware exec explains how it avoided getting crushed by Microsoft in its early days — and the lesson startups can learn from it

Yarkoni says that startups who are part of the Microsoft program see an immediate increase in their businesses, with an average deal size "north of six figures."

"We continue to hear from startups that the most valuable thing we can do for them is help them win customers," Yarkoni said. 

'These startups today are the big enterprise customers of the future.'

Fowler worked on the Wunderlist team at Microsoft for a while, but he missed the startup world. When he met Yarkoni, she pitched him the idea of leading startup teams and advising founders, which he thought would be "the best of both worlds."

"Startups are small companies usually," Fowler said. "You can think of it as an investment. Not every startup that we work with goes on to become a unicorn and eventually IPO, but we do believe that Microsoft was a startup one day as well. These startups today are the big enterprise customers of the future."

Now, he leads the startup team, helping startups land deals with big companies and guiding them through technological and business expertise that they will need when selling to enterprise customers. The team is staffed with people who have gone through the pains of growing startups, such as having to completely rewrite their technology as the company grows. 

Fowler recalls that at Wunderlist, when a large customer contacted it to buy a large license, the team had no idea how to address things like disaster recovery and data privacy standards. He recalls that when Wunderlist was first building a sales team, the team "had no idea what we were doing."

"We hired the wrong sorts of people," Fowler said. "We were doing the wrong processes, all kinds of classic mistakes. What we do at the Microsoft for Startups program is become that trusted adviser that helps startups through that time."

Yarkoni says Microsoft plans to continue its investment in this program, including in its presence around the world and also in female founders. She says the number of startups coming into the program is growing. 

"Thes startup ecosystem is a great representation of meeting these communities where they are. It's very much the cornerstone of our startup philosophy," Yarkoni said. "Helping Microsoft get more engaged in the community is really something we're quite proud of, and we obviously want to share our journey as we go."

Got a tip? Contact this reporter via email at rmchan@businessinsider.com, Telegram at @rosaliechan, or Twitter DM at @rosaliechan17. (PR pitches by email only, please.) Other types of secure messaging available upon request. You can also contact Business Insider securely via SecureDrop.

SEE ALSO: Executives at $2.15 billion PagerDuty explain how it's keeping its 'love affair' with developers alive, almost six months after IPO



          

IT Architect Specialist - FIS - Brown Deer, WI

 Cache   
May require in-depth knowledge of networking, computing platform, storage, database, security, middleware, network and systems management, and related…
From FIS - Wed, 12 Dec 2018 13:26:30 GMT - View all Brown Deer, WI jobs
          

IT Architect Senior - FIS Global - Milwaukee, WI

 Cache   
May require in-depth knowledge of networking, computing platform, storage, database, security, middleware, network and systems management, and related…
From FIS Global - Mon, 23 Sep 2019 17:25:58 GMT - View all Milwaukee, WI jobs
          

Sr. Java, Big Data Developer

 Cache   
MN-Eagan, Genesis10 is seeking a Sr. Java Developer for a right to hire position with our client in Eagan, MN. Summary: The Senior Java Developer – Rebates, will be working as a Full-Stack developer on one of the most technically advanced teams at Client. You will be doing Agile development using Java/J2EE, Spring/Hibernate on an application that uses Grid Computing and Big Data sets. Responsibilities: Desi
          

accounting technician - MP Computing Ltd - Whitehorse, YT

 Cache   
Prepare other statistical, financial and accounting reports. Business Equipment and Computer Applications. Prepare trial balance of books.
From Canadian Job Bank - Sat, 28 Sep 2019 23:59:15 GMT - View all Whitehorse, YT jobs
          

End User Computing Senior Architect - AlixPartners - Detroit, MI

 Cache   
AlixPartners is a proud Silver award-winning Veteran Friendly Employer. AlixPartners is a results-driven global consulting firm that specializes in helping…
From AlixPartners - Wed, 05 Jun 2019 00:17:16 GMT - View all Detroit, MI jobs
          

Immunet Protect Free AntiVirus 7.0.0

 Cache   
Description: Immunet Protect Free AntiVirus is a malware and antivirus protection system that utilizes cloud computing to provide enhanced community-based security. Immunet Protect is free, lightweight, cloud-based anti-virus software which uses new approaches to provide malware protection. It is designed to work alongside Symantec, AVG and McAfee to provide significantly improved detection rates in […]
          

Pixhawk 2.1 Standard Set with Here GNSS/Pixhawk2.1 Edison with Here GNSS Kit/Here+ RTK GNSS Set/Here GNSS(M8N) GPS Unit

 Cache   



New Arrival:

Pixhawk 2.1 is the latest open source autopilot, greatly improved over the previous Pixhawk in all respects. Pixhawk 2.1 adopts a modular design, so you can choose different carrier boards according to different requirements. Compared with the previous version, the anti-jamming and stability of Pixhawk 2.1 are greatly improved.

This Pixhawk 2.1 Intel Edison version is ideal for those who are looking for more computing power with the Pixhawk 2.1 autopilot. With its dual-core Atom processor, the Edison compute module has plenty of processing power for whatever application you decide to use this autopilot for (computer vision tasks or advanced flight algorithms).

Here+ RTK is a centimeter-level GNSS positioning system designed for the Pixhawk 2. This module has an industry-leading navigation sensitivity of -167dBm.

Here GNSS is compatible with Pixhawk, Pixhawk 2.1 and most other open source flight controllers. This GPS features an integrated GNSS module, a digital compass, a backup battery and LED lights for visible indication of UAV status.


Foxtech is going to fly the GAIA 160-Hybrid across the sea bay again from Sep. 5 to Sep. 9! This time the total range is 100 km due to traffic control. We will update the photos and videos on our Facebook; you are welcome to give us suggestions and ideas.


  


          

Open Core Summit: The Value of Cloud and Commercial Open Source Software

 Cache   

At the inaugural Open Core Summit (OCS) the key takeaways and opinions from the event included: the relationship between cloud computing and commercial open source software is an “and” relationship, rather than “versus”; open core is a business model, and should not be confused with open source software; and open core companies extract only a small amount of the total value they create.

By Daniel Bryant
          

Microsoft says its Surface Duo phone isn't really a phone -- here's why - CNET

 Cache   
  1. Microsoft says its Surface Duo phone isn't really a phone -- here's why  CNET
  2. Microsoft Surface Pro X vs 12.9-inch iPad Pro: Which big-screened productivity tablet is best?  PCWorld
  3. 3 Reasons Microsoft's New Folding Smartphone Could Be a Big Success  The Motley Fool
  4. Surface Duo, Galaxy Fold and every dual-screen phone we've seen  CNET
  5. The petitions are right. The Surface Duo should run Windows 10X  Digital Trends

          

Contribute to the 2019 Network World State of the Network survey

 Cache   

Network World’s annual State of the Network study provides a comprehensive view of technology adoption trends among network IT.

This definitive research focuses on technology implementation objectives and reveals how leading objectives change the way IT decision-makers do their jobs, as well as the way their organizations function.

Survey results reveal trends about how network teams collaborate with other departments in the corporate structure, what factors drive technology investment, which way IT spending is moving and more.

This year Network World is looking for IT pros’ experience with technologies including 5G, SD-WAN, Wi-Fi and Edge Computing.



          

High performance computing: Do you need it?

 Cache   

In today's data-driven world, high performance computing (HPC) is emerging as the go-to platform for enterprises looking to gain deep insights into areas as diverse as genomics, computational chemistry, financial risk modeling and seismic imaging. Initially embraced by research scientists who needed to perform complex mathematical calculations, HPC is now gaining the attention of a wider number of enterprises spanning an array of fields.

"Environments that thrive on the collection, analysis and distribution of data – and depend on reliable systems to support streamlined workflow with immense computational power – need HPC," says Dale Brantly, director of systems engineering at Panasas, an HPC data-storage-systems provider.



          

Chrome OS: Tips, tools, and other Chromebook intelligence

 Cache   

Google's Chrome OS platform sure has come a long way.

From the early days, when Chrome OS was little more than an experimental "browser in a box," to today — with the platform powering first-class hardware and supporting a diverse range of productivity applications — Google's once-crazy-seeming project has turned into one of the world's most intriguing and rapidly expanding technological forces.

I've been covering Chrome OS closely since the start. I lived with the first Chromebook prototype, the Cr-48, and have used Chromebooks as part of my own personal computing setup in varying capacities ever since. I write about the field not only as someone who's studied it professionally from day 1 but also as someone who has used it personally that entire time, up through today.



          

Hardware Engineer Undergrad Internship for Workstations Premium System Team - HP - Fort Collins, CO

 Cache   
HP brings together a portfolio that spans printing, personal computing, software and services to serve more than 1 billion customers in over 170 countries.
From HP - Fri, 04 Oct 2019 10:12:51 GMT - View all Fort Collins, CO jobs
          

Software Engineer Internship for Workstations Technical Marketing - HP - Fort Collins, CO

 Cache   
HP brings together a portfolio that spans printing, personal computing, software and services to serve more than 1 billion customers in over 170 countries.
From HP - Thu, 03 Oct 2019 10:12:48 GMT - View all Fort Collins, CO jobs
          

Software Engineer Internship for Virtual Reality - HP - Fort Collins, CO

 Cache   
HP brings together a portfolio that spans printing, personal computing, software and services to serve more than 1 billion customers in over 170 countries.
From HP - Thu, 03 Oct 2019 10:12:48 GMT - View all Fort Collins, CO jobs
          

Software Engineering Intern - Analytics (Master's) - HP - Fort Collins, CO

 Cache   
HP brings together a portfolio that spans printing, personal computing, software and services to serve more than 1 billion customers in over 170 countries.
From HP - Thu, 03 Oct 2019 10:12:48 GMT - View all Fort Collins, CO jobs
          

Software Engineering Intern - App Development - HP - Fort Collins, CO

 Cache   
HP brings together a portfolio that spans printing, personal computing, software and services to serve more than 1 billion customers in over 170 countries.
From HP - Thu, 03 Oct 2019 10:12:48 GMT - View all Fort Collins, CO jobs
          

Software - SAP's exuberant cloud computing ambitions (DBAssistant)

 Cache   
From the site https://dbassistant.it, DBAssistant writes in the Software category: SAP has also presented its manufacturing network, a cloud-based collaborative platform integrated with SAP Ariba that connects customers with manufacturing service providers.
The post also touches on software more broadly.
1 vote: SAP's exuberant cloud computing ambitions

Go to the full article » SAP's exuberant cloud computing ambitions.
          

Security and Privacy for Big Data, Cloud Computing and Applications

 Cache   
Title: Security and Privacy for Big Data, Cloud Computing and Applications
Author: Wei Ren, Lizhe Wang
Publisher: The Institution of Engineering and Technology
Year: 2019
Pages: 330
Language: English
Format: pdf (true)
Size: 10.1 MB

As big data becomes increasingly pervasive and cloud computing utilization becomes the norm, the security and privacy of our systems and data becomes more critical with emerging security and privacy threats and challenges. This book presents a comprehensive view on how to advance security and privacy in big data, cloud computing, and their applications. Topics include cryptographic tools, SDN security, big data security in IoT, privacy preserving in big data, security architecture based on cyber kill chain, privacy-aware digital forensics, trustworthy computing, privacy verification based on machine learning, and chaos-based communication systems.
          

The heterogeneous themes and issues explored sur Soufflé au chèvre frais

 Cache   
          

What Open Source Can Learn from Scientific Computing and Vice Versa

 Cache   

By Yordan Karadzhov We all come to open source from different places, but my particular journey to becoming a full-time open source contributor—and more recently a project co-maintainer—is a little unusual. I started out working in experimental physics doing scientific computing. As part of my work, I was also a project co-maintainer, and it’s interesting

The post What Open Source Can Learn from Scientific Computing and Vice Versa appeared first on VMware Open Source Blog.


          

The costs of working in the cloud and why many companies don't want to take them on

 Cache   

The costs of working in the cloud and why many companies don't want to take them on

Nowadays, a newly created company is very likely to move many of its services and infrastructure to the cloud. Established companies, however, even though they are gradually taking steps in that direction, see long-term costs as a major barrier when it comes to making the leap to the cloud.

Fundamentally, new companies see several easy-to-understand reasons to bet on the cloud. On the one hand, initial costs are greatly reduced, since most programs, services and infrastructure are pay-per-use. On the other hand, it is scalable. If business goes well, adding more users, more server capacity or more extensions for the virtual switchboard is a piece of cake. The same applies if things don't go as planned and we have to cut staff or even close down.

The veteran company already has hardware and software to use

An established company knows that the investment it makes in setting up a new server, program or service will be amortized. It knows that its number of customers won't skyrocket, but won't plummet either, and that it has the employees it needs to get the work done; expanding the workforce is not that common either.

When moving to the cloud comes up, it's time to run the numbers. And over a 10-year horizon it still works out more profitable to invest in hardware, software or infrastructure kept in its own office. That higher initial investment ends up cheaper than what the cloud proposes. And it's hard to convince them otherwise.

Because even if everything has to be renewed again after 10 years, in most cases the useful-life cycle of software is one that doesn't convince everyone, no matter that with the cloud we always have the latest version of a program or the latest hardware for our virtual server.

An example of this duality of trends can be found in leased cars, where the company pays a fee that covers everything, from the cost of the vehicle to its maintenance. And when the time comes to renew, they look for a new model. Other companies buy their vehicle, cover the maintenance costs themselves and squeeze it until it has nothing left to give.

With data in the cloud, many organizations fear their secrets are not safe

Finally, one of the great advantages of working in the cloud, mobility, being able to work from anywhere, is not a necessity for many organizations that work tied to their office on a fixed schedule. Once they leave the office they don't want to hear anything about work. And not knowing where their data physically resides is a problem for many of them, who don't trust it and believe their data would be safer on their own premises.

Perhaps it will simply take more time. The cloud is a very big change and it has to be adopted little by little. Who would have told many companies just a few years ago that their switchboard would be virtual? Or that their invoicing or payroll software would be in the cloud? Many only take the step when they see that someone close to them has bet on a particular service and it works well.

Image | wynpnt

We also recommend

Are documents in the cloud safer than in our offices?

Is the SME's technological dependence on the cloud an obstacle to its adoption?

If the Public Administration is switching to Google Apps, why not your company?

-
The article "The costs of working in the cloud and why many companies don't want to take them on" was originally published on Pymes y Autonomos by Carlos Roberto.


          

Database Administrator - West Virginia Network for Educational Telecomputing (WVNET) - Morgantown, WV

 Cache   
Installs and maintains Oracle web and development products on a variety of platforms, including AIX, Linux and Windows.
From Indeed - Wed, 11 Sep 2019 18:29:54 GMT - View all Morgantown, WV jobs
          

Concurrent learning and information processing: a neuro-computing system that learns

 Cache   
none
          

Lenovo Elite ThinkPad W700ds

 Cache   

If one huge, high-resolution display on a notebook is good, two must be better, right? That’s exactly what Lenovo’s design team is banking on with the new Lenovo ThinkPad W700ds.

The W700ds isn’t a replacement or a refresh of the original, but rather, an optional upgrade to the same basic platform that makes good on Lenovo’s long-rumored promise to launch a notebook with dual displays. As NBR editor Jerry Jackson noted when he took a look at a pre-production W700ds last month, one crucial area where desktop replacements have proved to be no replacement for a good desktop workstation is in the ability to pack up multiple displays and take them with you.

With a sliding 10.6 inch display that pops out from the space behind the original W700’s 17 inch panel, the W700ds still doesn’t have the screen real estate of two full-size desktop displays. But it also gives the W700 platform yet another leg up on its rivals, and, especially, another enticement for graphics pros on the go.

Lenovo ThinkPad W700ds Specifications:

  • Processor: Intel Core 2 Extreme Q9300 (2.53 GHz, 1066 MHz FSB, 12 MB L2 cache)
  • Memory: 4 GB DDR3 SDRAM
  • Screen: 17” 1920x1200 WUXGA TFT LCD and 10.6” 1280x768 TFT LCD
  • Storage: 250 GB HDD (7200 RPM) x 2, RAID 0 configuration
  • Optical Drive: DVD recordable
  • Wireless: Intel Wi-Fi Link 5300 (802.11a/g/n), Bluetooth 2.0
  • Graphics: NVIDIA Quadro FX 3700M with 1 GB
  • Battery: 9-cell lithium-ion (84 Wh)
  • Dimensions: 16.1” x 12.3” x 2.1”
  • Weight: 10 lbs, 15 oz (with battery)
  • Price As Tested: $5,309.00
  • Starting Price: $3,663.00

The W700ds sports a base price of $3,663, which is already more than a grand higher than the single-screen version. And as configured – with top of the line processing and graphics options – our test model will set you back considerably more. Dressed as seen in this review, you’ll need a spare $5,309 to cover the cost.

Because our W700ds features the same fundamental technologies under the hood as the recently reviewed original ThinkPad W700, we’ve reused sections of that review where appropriate. All performance testing and benchmarking, however, is specific to this particular review unit.

Build and Design

In our original review of the original W700, I jokingly called the single-screen variant "the laptop designed to make normal people feel small." But as Jerry noted in checking out the W700ds for the first time, clearly big wasn’t big enough for the folks at Lenovo. While the basic chassis appears unchanged (with a basic footprint of 16 by 12 inches), the addition of that second slide-out screen behind the primary panel is felt in a profile that’s thicker than some slender notebooks are in their entirety.

All closed up, the entire notebook measures just over two inches thick, with the computer’s rubber feet adding another quarter of an inch or so to this measurement when the machine is sitting on a desk.

Open the W700ds up, press gently on the right-hand side of the lid, and the second display emerges from its spring-loaded compartment. When extended, the portrait-orientation secondary screen juts out an additional seven inches or so on the right side – well into the personal space of those seated next to you at coffee shops or on airplanes. Even more amusing is the W700ds’s total width measurement with the screen extended: about half an inch shy of two feet across.

As with all desktop machines, portability is relative here, though the W700ds’s weight and bulk make it less fun to move around than even the majority of 17 inch notebooks. For starters, the dual screens further drive up this model’s already considerable heft – to nearly 11 pounds on its own, or a whopping 13 pounds and change when you pack along the very bricklike power brick as well. Unfortunately, a year’s supply of chiropractic work wasn’t included among the accessories and options on Lenovo’s customization page. Conversely, even tipping the scales at an inauspicious 13 pounds, the W700ds is still a fair bit easier to manage on the road than the original W700 plus a second monitor, even a small one.

As before, build quality with the W700ds is everything Lenovo is known for, with tight fitment all around and an impressively small measure of panel flex for a laptop this large. I was initially concerned about whether the second screen’s attachment mechanism would seem flimsy, given its sliding design. But true to form, Lenovo has engineered a solution to this problem as well, with the second panel exhibiting little flex at its attachment point and really not feeling at all precarious.

As we noted with the original W700, the W700ds certainly doesn’t look like a notebook aimed, at least in part, at creatives and graphics pros. Next to the shiny, ultra-modern aesthetic of the MacBook Pros that are ubiquitous in the photo/graphics world, the W700’s business-like shape and matte gray finish have about as much sex appeal as Soviet agricultural equipment. That said, Lenovo build quality is nothing short of legendary, and while I’ve long used Macs almost exclusively for graphics work, I grew to love the sturdy, down-to-business appeal of the original W700 in a long-term test with that machine. Those who love ThinkPads do so with good reason, and even with the addition of a potential weak spot in the sliding secondary screen, I was unable to find a serious charge to level against the W700ds’s build quality.

Primary Display

As with the original W700, our W700ds review unit came packing Lenovo's high-end 17 inch primary display with 1920x1200 (WUXGA) resolution and 400 NIT brightness. Rivaling a good desktop display for brightness, clarity, and even size, the W700ds's premium LCD panel continues to be one of this model's key selling points for power graphics users.

Like we saw the first time around, the display is smooth and crisp with more brightness and appreciably better contrast than we're used to in a laptop screen. A light reflective coating protects the screen, but glare is well controlled (the screen's native brightness certainly helps in this regard). Backlighting is relatively even, though a careful inspection reveals some slight brightness fall-off toward the top of the panel; it's a nit-picky consideration for sure, but in a display option costing this much, there's no reason not to be particular.

Viewing angles are excellent side-to-side, though only acceptable on the vertical axis. Like some other laptop screens we've looked at, the W700ds's display has a marked sweet spot for vertical viewing, with contrast washout coming quickly if you're viewing from too high up, and false excessive contrast introduced from too low. The viewing window for precise color reproduction is certainly less than 10 degrees wide on the vertical axis, but to the positive it's fairly easy to tell when you're "locked in."

The W700ds's wide-gamut display covers a claimed 72 percent of the Adobe RGB color space, a significant improvement over your typical laptop LCD. Nonetheless, Lenovo's decision to use a twisted nematic (TN) panel across the W700 platform rather than an in-plane switching variant as is the standard in professional desktop applications was a cause for concern on our first go-round with these ThinkPads. Interested users can check out my original W700 write-up for more detail, but basically the TN display used here performs admirably when calibrated/profiled, and gamers will love its fast refresh. You can certainly find areas in which color reproduction doesn’t keep pace with a high-end desktop display, but you have to be pushing the W700ds pretty hard to do so. As before, at saturation extremes, you will run up against some noticeable color flattening and out-of-gamut issues; photographers and graphic designers who prep for print as well as web use should take note.

As to the question of whether the W700ds’s upgraded display is worth the cost it adds, I still feel after a lot more time with both the original W700 and the new model that the answer is at once yes and no. For what most of us do, even at a fairly demanding level, the panel's gamut is more than sufficient – not to mention excellent contrast and brightness that's superior to just about anything else out there. In fact, the W700ds's 72 percent gamut strikes a nice balance for general use, providing good color reproduction for sRGB applications and a nicely saturated look that isn't so wide-ranging as to be difficult to deal with outside of color-managed applications. For power graphics users, though, the problem is that these machines have some strong competition on the display front coming to market in the high-end graphics workstation space from HP and Dell.

Secondary Display

The one feature that distinguishes the W700ds from the original W700 is its secondary screen. As described previously, a second 10.6 inch, 1280x768 display is mounted in portrait orientation into a recess behind the primary panel, and can be popped out as desired to further expand the W700ds’s desktop area. As we noted in our first look at the W700ds, the screen can also be tilted forward about 30 degrees if desired, providing a more ideal viewing angle for the second display.

Screen power-on and recognition were seamless, with our test unit’s 64-bit Vista install automatically picking up the additional space almost immediately when the screen was extending, and exhibiting no problems when we retracted it again during use. With nearly identical vertical resolutions (1200 on the main display, versus 1280 on the secondary screen), dragging windows across the two displays wasn’t as difficult or awkward as might be assumed for this slight mismatch.

The bigger differences between the two displays, in fact, have to do with color reproduction, brightness, and clarity. The second display lacks the more highly reflective coating of its high-end companion, and with 280 NIT brightness (versus 400 on the primary display) and a narrower reproducible gamut, colors definitely don’t pop as much. Never mind annoying backlight bleed coming from both sides of the screen, poor side-to-side viewing angles, and a serious mismatch on black depth (true black is reproduced much lighter and bluer on the secondary display by default). Combined with color discrepancies courtesy of different calibration and profiling (more on that in the next section), power graphics users may well ask, “What’s the point?”

Given these faults, where the second display excels for graphics use, in fact, is less as an extension of the desktop and more as a docking station for tools, workflow windows, and so on. I found the additional space especially useful in both Photoshop and Illustrator for keeping tools/palettes out of the way and off of the main workspace.

Color Calibrator/Profiler

Like our original W700 review unit, the W700ds ships with an optional built-in X-Rite/Pantone Huey color calibration and profiling system that includes a simplified software package for quick color matching as well as a spectrocolorimeter for taking the necessary measurements that’s built right into the surface of the notebook. Via that little electronic eye, the W700ds is able to read the necessary color patches for automatic profiling and calibration when the lid is closed.

We detailed the (extremely simple) process afforded by this built-in system in our original W700 review, and I won’t rehash those details here. What should be noted, though, is that the W700ds’s secondary screen throws a bit of a wrench in the works: obviously, the onboard spectro can’t “see” the second screen, and thus can’t be used for profiling – leading to some appreciable color mismatches between primary and secondary displays that further encouraged me to use the second screen as a file management area only. The built-in system’s extremely simplified software interface also doesn’t seem to recognize external spectros, meaning if you really want to profile both displays, you’ll need to remove the supplied Huey system and start over with a third-party solution.

Keyboard, Touchpad, and Digitizer

Input options abound with the W700ds. For many diehard Lenovo fans, though, the world of input devices begins and ends with Lenovo's legendary keyboards. And the W700ds's equipment in this area is just as we've come to expect, with smooth key action and a quick, short stroke that makes typing on every W700 we've spent time with a pleasant experience. There's a hint of flex at the top right corner of our review unit's 'board – up around the Backspace key – and we noticed some flex in the num pad too this time around. But otherwise the full-size keyboard feels securely anchored to the W700ds's subframe.

Basic dedicated speaker control buttons (mute, volume up, volume down) adorn the space above the keyboard, next to Lenovo's trademark "ThinkVantage" button – which calls up a sort of clearing house for basic computer maintenance and configuration options.

For a laptop this big, the touchpad area is a bit small: it's not like space is exactly at a premium on the W700ds's top deck. As with the small tablet area, you certainly get used to it, but it does leave one to wonder why Lenovo didn't go for something slightly larger. The pad features vertical and horizontal axis dedicated scroll areas.

Dedicated left/scroll/right buttons flank the top of the touchpad area, with only the standard left and right click buttons residing beneath. Both button arrays have a soft click feel that's ideal for all-day use (try using a computer with hard, clicky buttons for more than ten minutes and you'll understand what we're talking about).

With lots of space south of the keyboard that typically goes unused on larger notebooks, Lenovo's designers opted to integrate a small, optional Wacom digitizer into the W700ds as well. The 3x5 inch tablet area provides a nicely sized work area: users coming from larger tablets will find it cramped, but resolution is decent and, moreover, having a digitizer that you don't have to pack along separately from your workstation will be a welcome addition for many users.

After a lot of time on both the W700 and W700ds, I've adapted to the tablet’s size quite well. The idea of integrating a digitizer into a graphics-focused machine is an excellent one that I'm betting will find acceptance among photographers, graphic designers, CAD techs, and architects – all key markets for this niche focused machine. The pad's placement works well for day to day use, keeping your pen hand clear of the touchpad and typing areas for easy key-control use (to queue up tools in Photoshop, for instance) while you're working on the tablet. Of course if you're left-handed, all ergonomic bets are off.

The W700ds's included pen, which stows away into a silo in the right-hand side of the notebook, isn't particularly enjoyable in use. It's small, and the buttons feel cheap, but compatibility with most Wacom-ready pens means the range of control options for this tablet is nearly unlimited. In fact, if you don't own another compatible pen, go ahead and order one with your ThinkPad purchase: the included stylus really is an option of last resort even more so than the small tablet itself.

Performance and Benchmarks

With our W700ds bearing basically the same "under the hood" components – a 2.53GHz Intel Q9300 quad-core processor, 4GB of memory, and NVIDIA FX3700M discrete graphics – as the W700 we looked at a few months back, we were expecting similar performance. Indeed, with the only noteworthy hardware difference being the substitution of a pair of 250GB RAIDed hard drives for the 160GB units in our first review unit, synthetic benchmark numbers on this Windows Vista Ultimate 64-bit equipped machine were neck and neck with those from the original W700.

PCMark05 measures overall notebook performance (higher scores are better):

Notebook / PCMark05 Score

  • Lenovo W700ds (2.53GHz Intel Q9300, NVIDIA Quadro FX 3700M 1GB): 8,319 PCMarks
  • Lenovo W700 (2.53GHz Intel Q9300, NVIDIA Quadro FX 3700M 1GB): 8,207 PCMarks
  • Lenovo T500 (2.80GHz Intel T9600, ATI Radeon 3650 256MB GDDR3): 7,050 PCMarks
  • HP Pavilion HDX18 (2.8GHz Intel T9600, Nvidia 9600M GT 512MB): 6,587 PCMarks
  • Gateway P-7811 FX (2.26GHz Intel P8400, NVIDIA 9800M GTS 512MB): 6,815 PCMarks
  • Apple MacBook Pro (2.2GHz Intel T7500, Nvidia 8600M GT 128MB): 5,864 PCMarks
  • Dell XPS M1330 (2.0GHz Intel Core 2 Duo T7300, NVIDIA GeForce Go 8400M GS): 4,591 PCMarks
  • Lenovo ThinkPad X61 (2.0GHz Intel Core 2 Duo T7300, Intel X3100): 4,153 PCMarks
  • Lenovo T60 Widescreen (2.0GHz Intel T7200, ATI X1400 128MB): 4,189 PCMarks
  • HP dv6000t (2.16GHz Intel T7400, NVIDIA GeForce Go 7400): 4,234 PCMarks
  • Sony VAIO SZ-110B in Speed Mode (Using Nvidia GeForce Go 7400): 3,637 PCMarks


3DMark06 represents the overall graphics performance of a notebook (higher numbers indicate better performance):

Notebook / 3DMark06 Score

  • Lenovo W700ds (2.53GHz Intel Q9300, NVIDIA Quadro FX 3700M 1GB): 11,530 3DMarks
  • Lenovo W700 (2.53GHz Intel Q9300, NVIDIA Quadro FX 3700M 1GB): 11,214 3DMarks
  • Lenovo T500 (2.80GHz Intel T9600, ATI Radeon 3650 256MB GDDR3): 4,371 3DMarks
  • Gateway P-7811 FX (2.26GHz Intel P8400, NVIDIA 9800M GTS 512MB): 9,355 3DMarks
  • HP Pavilion HDX18 (2.8GHz Intel T9600, Nvidia 9600M GT 512MB): 4,127 3DMarks
  • Apple MacBook Pro (2.2GHz Intel T7500, Nvidia 8600M GT 128MB): 3,321 3DMarks
  • Dell XPS M1330 (2.0GHz Intel Core 2 Duo T7300, NVIDIA GeForce Go 8400M GS 128MB): 1,408 3DMarks
  • Samsung Q70 (2.0GHz Core 2 Duo T7300 and nVidia 8400M G GPU): 1,069 3DMarks
  • Asus F3sv-A1 (Core 2 Duo T7300 2.0GHz, Nvidia 8600M GS 256MB): 2,344 3DMarks
  • Alienware Area 51 m5550 (2.33GHz Core 2 Duo, nVidia GeForce Go 7600 256MB): 2,183 3DMarks

wPrime is a program that forces the processor to do recursive mathematical calculations; the advantage of this program is that it is multi-threaded and can use all four processor cores at once, thereby giving more accurate benchmarking measurements than Super Pi.

Notebook / wPrime 32M time

  • Lenovo W700ds (Intel Core 2 Extreme Q9300 @ 2.53GHz): 15.398s
  • Lenovo W700 (Intel Core 2 Extreme Q9300 @ 2.53 GHz): 15.771s
  • Lenovo T500 (Intel Core 2 Duo T9600 @ 2.80GHz): 27.471s
  • Gateway P-7811 FX (Core 2 Duo P8400 @ 2.26GHz): 33.366s
  • HP Pavilion HDX18 (Core 2 Duo T9600 @ 2.8GHz): 27.416s
  • HP Pavilion dv6500z (AMD Turion 64 X2 TL-60 @ 2.0GHz): 40.759s
  • Toshiba Tecra M9 (Core 2 Duo T7500 @ 2.2GHz): 37.299s
  • HP Compaq 6910p (Core 2 Duo T7300 @ 2GHz): 40.965s
  • Zepto 6024W (Core 2 Duo T7300 @ 2GHz): 42.385s
  • Lenovo T61 (Core 2 Duo T7500 @ 2.2GHz): 37.705s
  • Alienware M5750 (Core 2 Duo T7600 @ 2.33GHz): 38.327s
  • Hewlett Packard DV6000z (Turion X2 TL-60 @ 2.0GHz): 38.720s


HDTune storage drive performance test:

As before, users interested in the graphics performance torture tests we subjected our original W700 to can check that review for full details. Basically, we ran a similar set of graphics performance-evaluating tasks in Photoshop CS3 this time around with nearly identical results: rendering 10,000-pixel gradients in under two seconds is well within the W700ds’s reach, and in test after test that even some high-end desktops would balk at, the W700ds posted performance numbers that no stationary workstation would be ashamed of.

In reviewing the W700, I described its performance as "sickeningly fast." Having put the W700ds through the same battery of evaluations, I'm confident that – short of anything packing desktop processing hardware, that is – the W700ds's top configuration offers as much performance as you can conceivably pack into a mobile platform at the moment.

Multimedia

In spite of being possessed of the raw power to chew up and spit out any graphics task put in its path, the W700 platform remains a generally poor choice for conventional multimedia tasks. Plenty of video output options and a pleasing screen might lead you to believe otherwise, but at the end of the day the user experience for watching movies or listening to music really isn't so nice. First, as noted previously, the W700ds has a dearth of dedicated multimedia buttons by the standards of its class: you can adjust the volume, and function-plus-key commands provide access to start/stop commands, but it's clear that music and movies weren't on the minds of Lenovo's design team when they put this machine together. Like their smaller enterprise notebooks, this one's all business.

While full-screen DVD playback in Windows Media Player was perfectly smooth, the W700ds also suffers from a pair of tiny, top-mounted speakers that could be described as minimally effective at best. Even with a good EQ for tweaking, the W700ds's audio playback lacks everything but midrange; the resulting sound quality is somewhere between what you'll get out of a pair of overdriven headphones from ten feet away and a weak AM radio broadcast. To their credit, the ThinkPad's speakers were surprisingly clean all the way up to their top power setting, and put out plenty of volume besides. There's just no life – no bass response or treble sparkle – to the sound at all.

A front-mounted headphone jack delivers clean, rich, static-free audio with plenty of power, partially atoning for the ThinkPad's speaker-side deficiencies. Overall, multimedia performance was acceptable, but at this price (and size) it certainly seems that Lenovo could have added some better speakers at the very least.

Ports and Features

Numerous connections are another reason users buy large desktop-replacement notebooks, and in this regard the W700ds delivers. The ThinkPad's five USB ports and Firewire are certainly not too much to ask for, and out back the W700ds gives a nice range of options for making output connections as well.

Although our test unit didn't come so configured, users can also opt for a built-in Compact Flash reader in place of dual Express Cards or a Smart Card/Express Card combo. Having the ability to pull images directly from either SD (via the front-side SD slot) or CF is a feature that I wish more graphics machines offered; I'm betting the majority of photographers will opt for this configuration.


Front: Wi-Fi hard switch, SD reader, headphone out, microphone in


Right: USB 2.0 ports (3), modem, pen silo, DVD/CD-RW


Left: IEEE 1394 (Firewire), USB 2.0 ports (2), Express Card 34, Smart Card


Back: Display Port, VGA out, DVI out, Gigabit Ethernet, power

Battery Life

The W700ds uses the same nine-cell lithium-ion battery that powers the rest of the W700 models. When we tested the original W700, we found battery life to be somewhere in the neighborhood of two hours off the plug for normal computing if you were careful, but just over an hour for DVD playback. The difference, of course, is that the W700ds has – if you so choose, at least – a pair of screens to drive.

The W700ds has Lenovo’s very good power management system, but even so, expect about a 20 percent drop in life with the second display powered up and both screens at mid-level brightness. On a full battery using both screens for light computing, I was able to get 1 hour 46 minutes of juice out of the cell before the battery status was critical. Power off the second screen, though, and the W700ds’s numbers shoot back up to right around what we saw with the original W700.

All in all, this would be a truly disappointing performance were this machine designed for lots of true on-the-road work. As a desktop replacement, though, two-plus hours of battery life (depending on how heavy-handed you are with power management, and assuming you’re willing to give up the second display while working off the plug) is par for the course at the very least.

Heat and Noise

We were impressed the first time around with the W700’s heat control, and in this case the W700ds is a nearly identical story, with no measured temp on this machine topping 100 degrees Fahrenheit even after heavy loading. Not bad at all for a machine with this kind of hardware.

Likewise, the W700ds’s fan, while clearly audible when the machine is being pushed, never put out the kind of “high-speed hair dryer” noises we’ve come to expect from the cooling systems of larger, high-performance notebooks. That said, the fan does seem to be running at some speed almost constantly when the W700ds is powered up. In a quiet apartment, this constant addition to the background noise might be unwelcome, but in a noisy office, it was barely noticed.

Conclusion

Since my first look at the original W700, I’ve never tried to hide my feeling that Lenovo has done a lot right with this platform when it comes to designing a mobile workstation for graphics pros. Putting the question of its astronomical cost aside for the moment – after all, this will be a legitimate business expense with long-term return for most potential W700 buyers – Lenovo has put together a machine that, for commercial graphics or design work, really is almost impossible to beat.

In the time since our first W700 review, though, I've also had the chance to spend a lot more time with the original W700, and to get to know the W700ds seen here as well. And out of this experience, the general consensus around the NBR office seems to be that the W700 is a very good purpose-designed machine that comes up short only insofar as it goes 90 percent of the way. On the big-ticket items – processor, primary display, graphics, memory – the W700ds checks all the right boxes. Rather, it's the little things – like a second screen that's a poor match, in terms of color and contrast, to the main display, or the thoughtful but not fully thought-out onboard tablet surface – that come to the fore.

Maybe I’m picking nits: after all, short of a desktop in notebook’s clothing or a one-off system, when it comes to raw performance the ThinkPad W700ds is tops. Likewise, the second display adds a lot of workflow value for graphics pros and others that will, quite literally, make the addition worth its weight. While I still believe, then, that for its core market, the W700ds is unquestionably a success, it’s certainly not perfect. And at this price, perfection may not be too much to expect.

Pros:

  • Desktop-like performance in a notebook – now with two screens!
  • Build quality still as good as always
  • Very good primary display with built-in color calibrator
  • Keyboard is as good as the one on my desk

Cons:

  • Built-in calibrator doesn’t support second display
  • Small onboard digitizer features a terrible pen
  • Multimedia performance doesn’t quite cut it
  • Performance upgrades will elevate the price significantly

More info click here :
http://nino-computer.co.nr
http://baliforever4u.blogspot.com
http://malangoke.wordpress.com

          

New: Humble "Computer Productivity & Coding" Bundle

 Cache   

New: Humble "Computer Productivity & Coding" Bundle

View full article on Epic Bundle »


Work smarter, not harder.

An all-new bundle by Humble Bundle and Mercury Learning. Pay what you want starting at $1 - Pay more, get more! Normally, the total cost for this bundle is as much as $941.

Get the complete bundle here!

Hint: Don't miss the epic Humble MONTHLY bundle ❤

Included content:

  • 3D Printing
  • Artificial Intelligence in the 21st Century 2/E
  • AutoCAD 2019 Beginning and Intermediate
  • AutoCAD 2020 3D Modeling
  • Autodesk Revit 2020 Architecture
  • Basic Electronics Video Tutorials
  • C Programming
  • Classic Game Design 2/E
  • Cloud Computing Basics
  • Data Cleaning
  • Embedded Vision
  • Excel Functions & Formulas 5E
  • Frank Luna
  • Game Development Using Python
  • HDL with Digital Design
  • HTML 5 Programming Video Tutorials
  • Introduction to 3D Game Programming with DirectX12
  • MS Excel 2016
  • Microsoft Access 2019 Programming with VBA, XML, and ASP
  • Microsoft Excel 2019 Programming with VBA, XML, and ASP
  • Microsoft Office 2013/365 and Beyond
  • Multimedia Web Design
  • Photoshop Photo Restoration Video Tutorials
  • Python 3
  • SVG Programming Video Tutorials
  • Software Testing
  • TensorFlow 2

Read the full article here:
https://www.epicbundle.com/bundle/humble-computer-productivity-coding-bundle





          

Technology Time

 Cache   
When: Thursday, October 10, 2019 - 1:00 PM - 1:00 PM
Where: Long Branch

Need help using the Internet, email, downloading audio and e-books, or using apps on your tablet or smartphone? Learn more about basic computing, smartphones and other technologies. Come with your e-book reader, computer, tablet, or smartphone and any technology questions every Tuesday from 1:00pm to 3:00pm for one-to-one help. Tech Time meets in the Digital Media Lab.
          

Maximum Matchings in Geometric Intersection Graphs. (arXiv:1910.02123v1 [cs.CG])

 Cache   

Authors: Édouard Bonnet, Sergio Cabello, Wolfgang Mulzer

Let $G$ be an intersection graph of $n$ geometric objects in the plane. We show that a maximum matching in $G$ can be found in $O(\rho^{3\omega/2}n^{\omega/2})$ time with high probability, where $\rho$ is the density of the geometric objects and $\omega>2$ is a constant such that $n \times n$ matrices can be multiplied in $O(n^\omega)$ time. The same result holds for any subgraph of $G$, as long as a geometric representation is at hand. For this, we combine algebraic methods, namely computing the rank of a matrix via Gaussian elimination, with the fact that geometric intersection graphs have small separators. We also show that in many interesting cases, the maximum matching problem in a general geometric intersection graph can be reduced to the case of bounded density. In particular, a maximum matching in the intersection graph of any family of translates of a convex object in the plane can be found in $O(n^{\omega/2})$ time with high probability, and a maximum matching in the intersection graph of a family of planar disks with radii in $[1, \Psi]$ can be found in $O(\Psi^6\log^{11} n + \Psi^{12 \omega} n^{\omega/2})$ time with high probability.
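
To make the algebraic ingredient concrete, here is a small illustrative sketch in Python (not the paper's subquadratic algorithm): it builds the intersection graph of equal-radius disks, fills a random skew-symmetric Tutte matrix over a prime field, and recovers the maximum matching size as half the matrix rank computed by Gaussian elimination, cross-checking against networkx. The prime, radius, and point count are arbitrary demo choices.

```python
import math
import random
import networkx as nx

P = 2_147_483_647  # a large prime for the random Tutte-matrix entries

def disk_intersection_graph(points, r):
    # Two disks of radius r intersect iff their centres are within 2r.
    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= 2 * r:
                G.add_edge(i, j)
    return G

def rank_mod_p(A, p):
    # Plain Gaussian elimination over GF(p).
    A = [row[:] for row in A]
    n, rank, row = len(A), 0, 0
    for col in range(n):
        pivot = next((r for r in range(row, n) if A[r][col]), None)
        if pivot is None:
            continue
        A[row], A[pivot] = A[pivot], A[row]
        inv = pow(A[row][col], p - 2, p)
        A[row] = [(x * inv) % p for x in A[row]]
        for r in range(n):
            if r != row and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[row])]
        rank, row = rank + 1, row + 1
        if row == n:
            break
    return rank

def matching_size_via_tutte_rank(G, p=P):
    # Lovasz: the rank of a random Tutte matrix equals twice the maximum
    # matching size with high probability.
    n = G.number_of_nodes()
    T = [[0] * n for _ in range(n)]
    for i, j in G.edges():
        x = random.randrange(1, p)
        T[i][j], T[j][i] = x, (p - x) % p
    return rank_mod_p(T, p) // 2

random.seed(1)
pts = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(40)]
G = disk_intersection_graph(pts, r=0.5)
print(matching_size_via_tutte_rank(G),
      len(nx.max_weight_matching(G, maxcardinality=True)))
```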


          

Unambiguous separators for tropical tree automata. (arXiv:1910.02164v1 [cs.FL])

 Cache   

Authors: Thomas Colcombet, Sylvain Lombardy

In this paper we show that given a max-plus automaton (over trees, and with real weights) computing a function $f$ and a min-plus automaton (similar) computing a function $g$ such that $f\leqslant g$, there exists effectively an unambiguous tropical automaton computing $h$ such that $f\leqslant h\leqslant g$. This generalizes a result of Lombardy and Mairesse of 2006 stating that series which are both max-plus and min-plus rational are unambiguous. This generalization goes in two directions: trees are considered instead of words, and separation is established instead of characterization (separation implies characterization). The techniques in the two proofs are very different.
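
As a toy illustration of the setting, on words rather than trees purely to keep the sketch short, the Python below evaluates a max-plus automaton computing f(w) = number of a's and a min-plus automaton computing g(w) = |w| by (max,+)/(min,+) matrix-vector products, and checks f ≤ g on all short words; here a separating unambiguous automaton h could simply be taken to be f itself, which is deterministic.

```python
import itertools

def run(word, init, mats, final, combine):
    # Evaluate a weighted word automaton over (max,+) or (min,+):
    # combine is max for a max-plus automaton and min for a min-plus one.
    vec = list(init)
    for letter in word:
        M = mats[letter]
        vec = [combine(vec[i] + M[i][j] for i in range(len(vec)))
               for j in range(len(vec))]
    return combine(v + f for v, f in zip(vec, final))

# Max-plus automaton for f(w) = number of 'a's (one state, weight 1 on 'a').
f_auto = ([0.0], {"a": [[1.0]], "b": [[0.0]]}, [0.0])
# Min-plus automaton for g(w) = |w| (one state, weight 1 on every letter).
g_auto = ([0.0], {"a": [[1.0]], "b": [[1.0]]}, [0.0])

for n in range(5):
    for word in itertools.product("ab", repeat=n):
        f = run(word, *f_auto, combine=max)
        g = run(word, *g_auto, combine=min)
        assert f <= g, (word, f, g)
print("f <= g holds on all words of length < 5")
```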


          

On Tractable Computation of Expected Predictions. (arXiv:1910.02182v1 [cs.LG])

 Cache   

Authors: Pasha Khosravi, YooJung Choi, Yitao Liang, Antonio Vergari, Guy Van den Broeck

Computing expected predictions has many interesting applications in areas such as fairness, handling missing values, and data analysis. Unfortunately, computing expectations of a discriminative model with respect to a probability distribution defined by an arbitrary generative model has been proven to be hard in general. In fact, the task is intractable even for simple models such as logistic regression and a naive Bayes distribution. In this paper, we identify a pair of generative and discriminative models that enables tractable computation of expectations of the latter with respect to the former, as well as moments of any order, in case of regression. Specifically, we consider expressive probabilistic circuits with certain structural constraints that support tractable probabilistic inference. Moreover, we exploit the tractable computation of high-order moments to derive an algorithm to approximate the expectations, for classification scenarios in which exact computations are intractable. We evaluate the effectiveness of our exact and approximate algorithms in handling missing data during prediction time where they prove to be competitive to standard imputation techniques on a variety of datasets. Finally, we illustrate how expected prediction framework can be used to reason about the behaviour of discriminative models.
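
To see what is being asked for, the sketch below (with made-up weights and Bernoulli parameters) computes the expectation of a logistic-regression prediction under a fully factorized Bernoulli distribution exactly by enumerating all 2^d binary inputs, which is only feasible for tiny d; the paper's point is to obtain such expectations tractably via structured probabilistic circuits instead of by enumeration.

```python
import itertools
import math
import random

random.seed(0)
d = 12
w = [random.gauss(0, 1) for _ in range(d)]   # hypothetical logistic-regression weights
b = 0.3
theta = [random.random() for _ in range(d)]  # hypothetical Bernoulli parameters

def f(x):
    # Discriminative model: logistic regression.
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

def p(x):
    # Generative model: fully factorized Bernoulli distribution.
    return math.prod(t if xi else 1 - t for t, xi in zip(theta, x))

# Exact E_{x ~ p}[f(x)] by enumeration over all 2^d inputs (intractable for large d).
exact = sum(f(x) * p(x) for x in itertools.product((0, 1), repeat=d))

# Monte Carlo estimate for comparison.
samples = 20000
mc = sum(f([1 if random.random() < t else 0 for t in theta]) for _ in range(samples)) / samples
print(f"exact={exact:.4f}  monte_carlo~{mc:.4f}")
```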


          

The Role of A-priori Information in Networks of Rational Agents. (arXiv:1910.02239v1 [cs.DC])

 Cache   

Authors: Yehuda Afek, Yishay Mansour, Shaked Rafaeli, Moshe Sulamy

Until now, distributed algorithms for rational agents have assumed a-priori knowledge of $n$, the size of the network. This assumption is challenged here by proving how much a-priori knowledge is necessary for equilibrium in different distributed computing problems. Duplication - pretending to be more than one agent - is the main tool used by agents to deviate and increase their utility when not enough knowledge about $n$ is given. The a-priori knowledge of $n$ is formalized as a Bayesian setting where at the beginning of the algorithm agents only know a prior $\sigma$, a distribution from which they know $n$ originates. We begin by providing new algorithms for the Knowledge Sharing and Coloring problems when $n$ is a-priori known to all agents. We then prove that when agents have no a-priori knowledge of $n$, i.e., the support for $\sigma$ is infinite, equilibrium is impossible for the Knowledge Sharing problem. Finally, we consider priors with finite support and find bounds on the necessary interval $[\alpha,\beta]$ that contains the support of $\sigma$, i.e., $\alpha \leq n \leq \beta$, for which we have an equilibrium. When possible, we extend these bounds to hold for any possible protocol.


          

Towards Deployment of Robust AI Agents for Human-Machine Partnerships. (arXiv:1910.02330v1 [cs.LG])

 Cache   

Authors: Ahana Ghosh, Sebastian Tschiatschek, Hamed Mahdavi, Adish Singla

We study the problem of designing AI agents that can robustly cooperate with people in human-machine partnerships. Our work is inspired by real-life scenarios in which an AI agent, e.g., a virtual assistant, has to cooperate with new users after its deployment. We model this problem via a parametric MDP framework where the parameters correspond to a user's type and characterize her behavior. In the test phase, the AI agent has to interact with a user of unknown type. Our approach to designing a robust AI agent relies on observing the user's actions to make inferences about the user's type and adapting its policy to facilitate efficient cooperation. We show that without being adaptive, an AI agent can end up performing arbitrarily badly in the test phase. We develop two algorithms for computing policies that automatically adapt to the user in the test phase. We demonstrate the effectiveness of our approach in solving a two-agent collaborative task.
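
A minimal, purely hypothetical sketch of the "observe actions, infer type, adapt" loop is below; the user types, action model, and policy values are invented for illustration and are not taken from the paper. The agent keeps a Bayesian posterior over user types and picks the policy that best responds to it.

```python
types = ("cautious", "fast")
prior = {"cautious": 0.5, "fast": 0.5}

# Hypothetical behaviour model: probability that each user type takes each action.
action_model = {
    "cautious": {"safe": 0.9, "shortcut": 0.1},
    "fast":     {"safe": 0.2, "shortcut": 0.8},
}

# Hypothetical expected team reward of each AI policy against each user type.
policy_value = {
    "assist_carefully": {"cautious": 1.0, "fast": 0.4},
    "assist_quickly":   {"cautious": 0.3, "fast": 1.0},
}

def posterior(observed_actions):
    # Bayes rule over user types given the actions seen so far.
    post = dict(prior)
    for a in observed_actions:
        for t in types:
            post[t] *= action_model[t][a]
    z = sum(post.values())
    return {t: v / z for t, v in post.items()}

def best_policy(observed_actions):
    # Best response to the current belief about the user's type.
    post = posterior(observed_actions)
    return max(policy_value,
               key=lambda pi: sum(post[t] * policy_value[pi][t] for t in types))

print(posterior(["shortcut", "shortcut", "safe"]))
print(best_policy(["shortcut", "shortcut", "safe"]))
```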


          

Computer-mediated Empathy. (arXiv:1910.02368v1 [cs.HC])

 Cache   

Authors: Sang Won Lee

While novel social networks and emerging technologies help us transcend the spatial and temporal constraints inherent to in-person communication, the trade-off is a loss of natural expressivity. While empathetic interaction is already challenging in in-person communication, computer-mediated communication makes such empathetically rich communication even more difficult. Are technology and intelligent systems opportunities or threats to more empathic interpersonal communication? Realizing empathy is suggested not only as a way to communicate with others but also to design products for users and facilitate creativity. In this position paper, I suggest a framework to breakdown empathy, introduce each element, and show how computing, technologies, and algorithms can support (or hinder) certain elements of the empathy framework.


          

ChaosNet: A Chaos based Artificial Neural Network Architecture for Classification. (arXiv:1910.02423v1 [cs.LG])

 Cache   

Authors: Harikrishnan Nellippallil Balakrishnan, Aditi Kathpalia, Snehanshu Saha, Nithin Nagaraj

Inspired by chaotic firing of neurons in the brain, we propose ChaosNet -- a novel chaos based artificial neural network architecture for classification tasks. ChaosNet is built using layers of neurons, each of which is a 1D chaotic map known as the Generalized Luroth Series (GLS) which has been shown in earlier works to possess very useful properties for compression, cryptography and for computing XOR and other logical operations. In this work, we design a novel learning algorithm on ChaosNet that exploits the topological transitivity property of the chaotic GLS neurons. The proposed learning algorithm gives consistently good performance accuracy in a number of classification tasks on well known publicly available datasets with very limited training samples. Even with as low as 7 (or fewer) training samples/class (which accounts for less than 0.05% of the total available data), ChaosNet yields performance accuracies in the range 73.89 % - 98.33 %. We demonstrate the robustness of ChaosNet to additive parameter noise and also provide an example implementation of a 2-layer ChaosNet for enhancing classification accuracy. We envisage the development of several other novel learning algorithms on ChaosNet in the near future.
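
A rough sketch of the flavour of a single chaotic neuron is below; the skew-tent map, initial activity, and firing-time feature are assumptions chosen for illustration rather than the paper's exact formulation. Topological transitivity is what guarantees the trajectory eventually visits any neighbourhood of the stimulus, so the iteration count is almost always well defined.

```python
def gls_map(x, b=0.47):
    # A 1D skew-tent chaotic map on [0, 1), one common GLS variant.
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, q=0.34, b=0.47, eps=0.01, max_iter=100_000):
    # Iterate the chaotic neuron from initial activity q until the trajectory
    # enters an eps-neighbourhood of the (normalised) stimulus; the number of
    # iterations is used as the neuron's feature value.
    x = q
    for k in range(max_iter):
        if abs(x - stimulus) < eps:
            return k
        x = gls_map(x, b)
    return max_iter  # fallback; rare for sensible eps

# One feature per input attribute, e.g. for a normalised 3-dimensional sample.
sample = [0.12, 0.55, 0.91]
print([firing_time(s) for s in sample])
```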


          

Distributed filtered hyperinterpolation for noisy data on the sphere. (arXiv:1910.02434v1 [math.CA])

 Cache   

Authors: Shao-Bo Lin, Yu Guang Wang, Ding-Xuan Zhou

Problems in astrophysics, space weather research and geophysics usually need to analyze noisy big data on the sphere. This paper develops distributed filtered hyperinterpolation for noisy data on the sphere, which assigns the data fitting task to multiple servers to find a good approximation of the mapping of input and output data. For each server, the approximation is a filtered hyperinterpolation on the sphere by a small proportion of quadrature nodes. The distributed strategy allows parallel computing for data processing and model selection and thus reduces the computational cost for each server while preserving the approximation capability compared to the filtered hyperinterpolation. We prove a quantitative relation between the approximation capability of distributed filtered hyperinterpolation and the numbers of input data and servers. Numerical examples show the efficiency and accuracy of the proposed method.


          

Fast Detection of Outliers in Data Streams with the $Q_n$ Estimator. (arXiv:1910.02459v1 [cs.DS])

 Cache   

Authors: Massimo Cafaro, Catiuscia Melle, Marco Pulimeno, Italo Epicoco

We present FQN (Fast $Q_n$), a novel algorithm for fast detection of outliers in data streams. The algorithm works in the sliding window model, checking if an item is an outlier by cleverly computing the $Q_n$ scale estimator in the current window. We thoroughly compare our algorithm for online $Q_n$ with the state of the art competing algorithm by Nunkesser et al, and show that FQN (i) is faster, (ii) its computational complexity does not depend on the input distribution and (iii) it requires less space. Extensive experimental results on synthetic datasets confirm the validity of our approach.
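
For orientation, here is a naive baseline sketch of the idea (not the paper's fast algorithm): recompute the $Q_n$ scale estimator from scratch in each sliding window and flag a new item as an outlier when it deviates from the window median by more than a few $Q_n$ units. The window size, threshold, and Gaussian consistency constant are common defaults, not values from the paper.

```python
import random
from collections import deque
from statistics import median

def qn_scale(xs, c=2.2219):
    # Naive O(n^2 log n) Qn: the k-th smallest pairwise absolute difference,
    # with k = C(h, 2) and h = floor(n/2) + 1; c is the Gaussian consistency factor.
    n = len(xs)
    h = n // 2 + 1
    k = h * (h - 1) // 2
    diffs = sorted(abs(xs[i] - xs[j]) for i in range(n) for j in range(i + 1, n))
    return c * diffs[k - 1]

def stream_outliers(stream, window=50, threshold=3.0):
    # Slide a fixed-size window and flag items far from the robust centre/scale.
    win = deque(maxlen=window)
    for x in stream:
        if len(win) == window:
            m, s = median(win), qn_scale(list(win))
            if s > 0 and abs(x - m) > threshold * s:
                yield x
        win.append(x)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(500)]
data[200], data[400] = 9.0, -8.0
print(list(stream_outliers(data)))
```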


          

Convergence of the likelihood ratio method for linear response of non-equilibrium stationary states. (arXiv:1910.02479v1 [math.NA])

 Cache   

Authors: Ting Wang, Gabriel Stoltz, Petr Plechac

We consider numerical schemes for computing the linear response of steady-state averages of stochastic dynamics with respect to a perturbation of the drift part of the stochastic differential equation. The schemes are based on Girsanov's change-of-measure theory to reweight trajectories with factors derived from a linearization of the Girsanov weights. We investigate both the discretization error and the finite time approximation error. The designed numerical schemes are shown to be of bounded variance with respect to the integration time, which is a desirable feature for long time simulation. We also show how the discretization error can be improved to second order accuracy in the time step by modifying the weight process in an appropriate way.


          

A Novel Technique of Noninvasive Hemoglobin Level Measurement Using HSV Value of Fingertip Image. (arXiv:1910.02579v1 [eess.IV])

 Cache   

Authors: Md Kamrul Hasan, Nazmus Sakib, Joshua Field, Richard R. Love, Sheikh I. Ahamed

Over the last decade, smartphones have changed radically to support us with mHealth technology, cloud computing, and machine learning algorithms. Building on these multifaceted facilities, we present a novel smartphone-based noninvasive hemoglobin (Hb) level prediction model that analyzes the hue, saturation and value (HSV) of a fingertip video. Here, we collect 60 videos of 60 subjects from two different locations: the Blood Center of Wisconsin, USA and AmaderGram, Bangladesh. We extract red, green, and blue (RGB) pixel intensities of selected images of those videos captured by the smartphone camera with the flash on. Then we convert the RGB values of the selected video frames of a fingertip video into HSV color space and generate histogram values of these HSV pixel intensities. We average these histogram values over a fingertip video and treat the result as one observation against the gold-standard Hb concentration. We generate two input feature matrices based on the observations of the two data sets. The Partial Least Squares (PLS) algorithm is applied to the input feature matrix. We observe $R^2=0.95$ in both data sets through our research. We analyze our data using Python OpenCV, Matlab, and the R statistics tool.
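
A sketch of the described feature-extraction step is below, assuming OpenCV and scikit-learn are available; the bin count, frame cap, and file paths are placeholders, and the PLS fit is shown only in outline because the gold-standard Hb labels come from the laboratory data.

```python
import cv2
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def hsv_histogram_features(video_path, bins=32, max_frames=100):
    # Average per-channel HSV histograms over the frames of one fingertip video.
    cap = cv2.VideoCapture(video_path)
    ranges = (180, 256, 256)  # OpenCV: hue is 0-179, saturation/value are 0-255
    feats = []
    while len(feats) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = np.concatenate([
            cv2.calcHist([hsv], [c], None, [bins], [0, ranges[c]]).ravel()
            for c in range(3)
        ])
        feats.append(hist / hist.sum())
    cap.release()
    return np.mean(feats, axis=0)  # one observation per video

# X: one row of averaged histogram features per subject, y: gold-standard Hb (g/dL)
# X = np.vstack([hsv_histogram_features(p) for p in video_paths])
# model = PLSRegression(n_components=5).fit(X, y)
```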


          

Improving Performance of Multiagent Cooperation Using Epistemic Planning. (arXiv:1910.02607v1 [cs.MA])

 Cache   

Authors: Abeer Alshehri (1 and 2), Tim Miller, Liz Sonenberg ((1) School of Computing and Information Systems, University of Melbourne, Victoria, Australia (2) Department of Computer Science and Information Systems, King Khalid University, Abha, Saudi Arabia)

In most multiagent applications, communication is essential among agents to coordinate their actions, and thus achieve their goal. However, communication often has a related cost that affects overall system performance. In this paper, we draw inspiration from studies of epistemic planning to develop a communication model for agents that allows them to cooperate and make communication decisions effectively within a planning task. The proposed model treats a communication process as an action that modifies the epistemic state of the team. In two simulated tasks, we evaluate whether agents can cooperate effectively and achieve higher performance using communication protocol modeled in our epistemic planning framework. Based on an empirical study conducted using search and rescue tasks with different scenarios, our results show that the proposed model improved team performance across all scenarios compared with baseline models.


          

Finding Neighbors in a Forest: A b-tree for Smoothed Particle Hydrodynamics Simulations. (arXiv:1910.02639v1 [cs.DC])

 Cache   

Authors: Aurélien Cavelan, Rubén M. Cabezón, Jonas H. M. Korndorfer, Florina M. Ciorba

Finding the exact close neighbors of each fluid element in mesh-free computational hydrodynamical methods, such as the Smoothed Particle Hydrodynamics (SPH), often becomes a main bottleneck for scaling their performance beyond a few million fluid elements per computing node. Tree structures are particularly suitable for SPH simulation codes, which rely on finding the exact close neighbors of each fluid element (or SPH particle). In this work we present a novel tree structure, named $b$-tree, which features an adaptive branching factor to reduce the depth of the neighbor search. Depending on the particle spatial distribution, finding neighbors using the $b$-tree has an asymptotic best case complexity of $O(n)$, as opposed to $O(n \log n)$ for other classical tree structures such as octrees and quadtrees. We also present the proposed tree structure as well as the algorithms to build it and to find the exact close neighbors of all particles. We assess the scalability of the proposed tree-based algorithms through an extensive set of performance experiments in a shared-memory system. Results show that b-tree is up to $12\times$ faster for building the tree and up to $1.6\times$ faster for finding the exact neighbors of all particles when compared to its octree form. Moreover, we apply b-tree to a SPH code and show its usefulness over the existing octree implementation, where b-tree is up to $5\times$ faster for finding the exact close neighbors compared to the legacy code.
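
For scale, a common baseline for the same task is a fixed-radius query on a k-d tree, sketched below with SciPy on random particle positions and an arbitrary smoothing length; the paper's argument is that a purpose-built structure with an adaptive branching factor can beat this kind of generic tree for SPH neighbour finding.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
positions = rng.random((100_000, 3))   # particle positions in the unit cube
h = 0.01                               # smoothing length (arbitrary for the demo)

tree = cKDTree(positions)
# All neighbours within 2h of every particle (the usual SPH compact support).
neighbors = tree.query_ball_point(positions, r=2 * h)
print("mean neighbour count:", sum(map(len, neighbors)) / len(neighbors))
```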


          

FLEXI: A high order discontinuous Galerkin framework for hyperbolic-parabolic conservation laws. (arXiv:1910.02858v1 [cs.CE])

 Cache   

Authors: Nico Krais, Andrea Beck, Thomas Bolemann, Hannes Frank, David Flad, Gregor Gassner, Florian Hindenlang, Malte Hoffmann, Thomas Kuhn, Matthias Sonntag, Claus-Dieter Munz

High order (HO) schemes are attractive candidates for the numerical solution of multiscale problems occurring in fluid dynamics and related disciplines. Among the HO discretization variants, discontinuous Galerkin schemes offer a collection of advantageous features which have led to a strong increase in interest in them and related formulations in the last decade. The methods have matured sufficiently to be of practical use for a range of problems, for example in direct numerical and large eddy simulation of turbulence. However, in order to take full advantage of the potential benefits of these methods, all steps in the simulation chain must be designed and executed with HO in mind. Especially in this area, many commercially available closed-source solutions fall short. In this work, we therefore present the FLEXI framework, a HO consistent, open-source simulation tool chain for solving the compressible Navier-Stokes equations in a high performance computing setting. We describe the numerical algorithms and implementation details and give an overview of the features and capabilities of all parts of the framework. Beyond these technical details, we also discuss the important, but often overlooked issues of code stability, reproducibility and user-friendliness. The benefits gained by developing an open-source framework are discussed, with a particular focus on usability for the open-source community. We close with sample applications that demonstrate the wide range of use cases and the expandability of FLEXI and an overview of current and future developments.


          

A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis. (arXiv:1910.02923v1 [cs.LG])

 Cache   

Authors: Samuel Budd, Emma C Robinson, Bernhard Kainz

Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end-user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in the clinical practice: (1) Active Learning - to choose the best data to annotate for optimal model performance; (2) Interpretation and Refinement - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Related Areas - research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.
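
As a reminder of what area (1) looks like in practice, a generic uncertainty-sampling step is sketched below; this is a standard heuristic, not a method proposed by the survey. The model asks the human annotator to label the pool items about which it is least certain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def most_uncertain(model, X_pool, k=10):
    # Rank unlabeled items by predictive entropy and return the top-k indices,
    # i.e. the items whose labels the human annotator is asked for next.
    proba = model.predict_proba(X_pool)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]

# Typical loop: fit on the labeled set, query, have a human label, repeat.
# model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
# query_idx = most_uncertain(model, X_pool)
```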


          

Skyrmion Logic System for Large-Scale Reversible Computation. (arXiv:1806.10337v2 [cond-mat.mes-hall] UPDATED)

 Cache   

Authors: Maverick Chauwin, Xuan Hu, Felipe Garcia-Sanchez, Neilesh Betrabet, Alexandru Paler, Christoforos Moutafis, Joseph S. Friedman

Computational reversibility is necessary for quantum computation and inspires the development of computing systems in which information carriers are conserved as they flow through a circuit. While conservative logic provides an exciting vision for reversible computing with no energy dissipation, the large dimensions of information carriers in previous realizations detract from the system efficiency, and nanoscale conservative logic remains elusive. We therefore propose a non-volatile reversible computing system in which the information carriers are magnetic skyrmions, topologically-stable magnetic whirls. These nanoscale quasiparticles interact with one another via the spin-Hall and skyrmion-Hall effects as they propagate through ferromagnetic nanowires structured to form cascaded conservative logic gates. These logic gates can be directly cascaded in large-scale systems that perform complex logic functions, with signal integrity provided by clocked synchronization structures. The feasibility of the proposed system is demonstrated through micromagnetic simulations of Boolean logic gates, a Fredkin gate, and a cascaded full adder. As skyrmions can be transported in a pipelined and non-volatile manner at room temperature without the motion of any physical particles, this skyrmion logic system has the potential to deliver scalable high-speed low-power reversible Boolean and quantum computing.
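As a side note on the conservative logic the abstract refers to, the short sketch below checks the defining properties of a Fredkin (controlled-swap) gate: it conserves the number of 1s and is its own inverse. It illustrates the gate abstractly and says nothing about the skyrmion implementation itself.

# A tiny check of Fredkin-gate reversibility and carrier conservation; illustrative only.
from itertools import product

def fredkin(c, a, b):
    # if the control bit c is 1, swap a and b; otherwise pass them through unchanged
    return (c, b, a) if c else (c, a, b)

for bits in product([0, 1], repeat=3):
    out = fredkin(*bits)
    assert sum(out) == sum(bits)          # number of 1s (the information carriers) is conserved
    assert fredkin(*out) == bits          # applying the gate twice restores the input
    print(bits, "->", out)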


          

Complexity Landscape of Computing the Anti-Ramsey Numbers. (arXiv:1810.08004v3 [cs.CC] UPDATED)

 Cache   

Authors: Saeed Akhoondian Amiri, Alexandru Popa, Mohammad Roghani, Golnoosh Shahkarami, Reza Soltani, Hossein Vahidi

The anti-Ramsey numbers are a fundamental notion in graph theory, introduced in 1978, by Erdős, Simonovits and Sós. For given graphs $G$ and $H$ the anti-Ramsey number $\textrm{ar}(G,H)$ is defined to be the maximum number $k$ such that there exists an assignment of $k$ colors to the edges of $G$ in which every copy of $H$ in $G$ has at least two edges with the same color.

Usually, combinatorists study extremal values of anti-Ramsey numbers for various classes of graphs. There are works on computational complexity of the problem when $H$ is a star. Along this line of research, we study the complexity of computing the anti-Ramsey number $\textrm{ar}(G,P_k)$, where $P_k$ is a path of length $k$. First we observe that when $k$ is close to $n$ the problem is hard; hence, the challenging part is the computational complexity of the problem when $k$ is a fixed constant.

We provide a deep characterization of the problem for paths of constant length. Our first main contribution is to prove that computing $\textrm{ar}(G,P_k)$ for every integer $k>2$ is NP-hard. We obtain this by providing several structural properties of such coloring in graphs. We investigate further and show that even approximating $\textrm{ar}(G,P_3)$ to a factor of $n^{-1/2 - \epsilon}$ is hard already in $3$-partite graphs, unless $NP{}={}ZPP$.

Given the hardness of approximation and parametrization of the problem, it is natural to study the problem on restricted graph families. Along this line, we first introduce the notion of color connected coloring, and, employing this structural property, we obtain a linear time algorithm to compute $\textrm{ar}(G,P_k)$, for every integer $k$, when the host graph, $G$, is a tree. We have introduced several techniques in our algorithm that we believe might be helpful in providing approximation algorithms for other restricted families of graphs.
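To make the definition concrete, the brute-force sketch below computes ar(G, P_k) for a tiny graph by enumerating edge colorings and rejecting any that contain a rainbow copy of P_k. The example graph (K4) and the convention that all k colors must actually appear are assumptions for illustration; the search is exponential and only feasible for very small inputs.

# Brute-force ar(G, P_k) for tiny graphs; illustrative only, not an efficient algorithm.
from itertools import combinations, permutations, product

def paths_of_length(edges, k):
    """All copies of P_k (k edges, k+1 distinct vertices) as sets of edges."""
    verts = sorted({v for e in edges for v in e})
    eset = {frozenset(e) for e in edges}
    copies = set()
    for seq in permutations(verts, k + 1):
        path_edges = [frozenset((seq[i], seq[i + 1])) for i in range(k)]
        if all(e in eset for e in path_edges):
            copies.add(frozenset(path_edges))
    return [tuple(c) for c in copies]

def anti_ramsey(edges, k):
    copies = paths_of_length(edges, k)
    m = len(edges)
    for colors in range(m, 0, -1):
        for coloring in product(range(colors), repeat=m):
            if len(set(coloring)) != colors:
                continue                       # require that all `colors` colors are used
            col = dict(zip((frozenset(e) for e in edges), coloring))
            if all(len({col[e] for e in c}) < len(c) for c in copies):
                return colors                  # no copy of P_k is rainbow
    return 0

K4 = list(combinations(range(4), 2))
print("ar(K4, P_3) =", anti_ramsey(K4, 3))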


          

Multiple Learning for Regression in big data. (arXiv:1903.00843v2 [cs.LG] UPDATED)

 Cache   

Authors: Xiang Liu, Ziyang Tang, Huyunting Huang, Tonglin Zhang, Baijian Yang

Regression problems that have closed-form solutions are well understood and can be easily implemented when the dataset is small enough to be all loaded into the RAM. Challenges arise when data is too big to be stored in RAM to compute the closed form solutions. Many techniques have been proposed to overcome or alleviate the memory barrier problem, but the solutions are often only locally optimal. In addition, most approaches require accessing the raw data again when updating the models. Parallel computing clusters are also expected if multiple models need to be computed simultaneously. We propose multiple learning approaches that utilize an array of sufficient statistics (SS) to address this big data challenge. This memory oblivious approach breaks the memory barrier when computing regressions with closed-form solutions, including but not limited to linear regression, weighted linear regression, linear regression with Box-Cox transformation (Box-Cox regression) and ridge regression models. The computation and update of the SS array can be handled at per row level or per mini-batch level. And updating a model is as easy as matrix addition and subtraction. Furthermore, multiple SS arrays for different models can be easily computed simultaneously to obtain multiple models at one pass through the dataset. We implemented our approaches on Spark and evaluated them over simulated datasets. Results showed our approaches can achieve closed-form solutions of multiple models in about half the training time that the traditional methods need for a single model.
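The core trick is easy to see for ordinary least squares: the sketch below keeps only the sufficient statistics X'X and X'y, updates them per mini-batch, and solves the closed form at the end without holding the raw data in memory. It is an illustration of the general idea, not the paper's Spark implementation; the ridge option is the only extra variant shown.

# Streaming sufficient statistics for closed-form linear regression; illustrative only.
import numpy as np

def init_stats(d):
    return np.zeros((d, d)), np.zeros(d)       # (X'X, X'y)

def update_stats(stats, X_batch, y_batch):
    xtx, xty = stats
    return xtx + X_batch.T @ X_batch, xty + X_batch.T @ y_batch

def solve(stats, ridge=0.0):
    xtx, xty = stats
    d = xtx.shape[0]
    return np.linalg.solve(xtx + ridge * np.eye(d), xty)   # ridge=0 gives plain OLS

rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.0, 0.5])
stats = init_stats(3)
for _ in range(100):                           # stream 100 mini-batches, never kept in memory
    X = rng.normal(size=(50, 3))
    y = X @ true_beta + rng.normal(scale=0.1, size=50)
    stats = update_stats(stats, X, y)

print("estimated coefficients:", solve(stats))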


          

The Mode of Computing. (arXiv:1903.10559v2 [cs.AI] UPDATED)

 Cache   

Authors: Luis A. Pineda

The Turing Machine is the paradigmatic case of computing machines, but there are others, such as Artificial Neural Networks, Table Computing, Relational-Indeterminate Computing and diverse forms of analogical computing, each of which is based on a particular underlying intuition of the phenomenon of computing. This variety can be captured in terms of system levels, re-interpreting and generalizing Newell's hierarchy, which includes the knowledge level at the top and the symbol level immediately below it. In this re-interpretation the knowledge level consists of human knowledge and the symbol level is generalized into a new level that here is called The Mode of Computing. Natural computing performed by the brains of humans and non-human animals with a developed enough neural system should be understood in terms of a hierarchy of system levels too. By analogy from standard computing machinery there must be a system level above the neural circuitry levels and directly below the knowledge level that is named here The mode of Natural Computing. A central question for Cognition is the characterization of this mode. The Mode of Computing provides a novel perspective on the phenomena of computing, interpreting, the representational and non-representational views of cognition, and consciousness.


          

On Modeling ASR Word Confidence. (arXiv:1907.09636v2 [cs.CL] UPDATED)

 Cache   

Authors: Woojay Jeon, Maxwell Jordan, Mahesh Krishnamoorthy

We present a new method for computing ASR word confidences that effectively mitigates ASR errors for diverse downstream applications, improves the word error rate of the 1-best result, and allows better comparison of scores across different models. We propose 1) a new method for modeling word confidence using a Heterogeneous Word Confusion Network (HWCN) that addresses some key flaws in conventional Word Confusion Networks, and 2) a new score calibration method for facilitating direct comparison of scores from different models. Using a bidirectional lattice recurrent neural network to compute the confidence scores of each word in the HWCN, we show that the word sequence with the best overall confidence is more accurate than the default 1-best result of the recognizer, and that the calibration method greatly improves the reliability of recognizer combination.


          

LuNet: A Deep Neural Network for Network Intrusion Detection. (arXiv:1909.10031v2 [cs.AI] UPDATED)

 Cache   

Authors: Peilun Wu, Hui Guo

Network attack is a significant security issue for modern society. From small mobile devices to large cloud platforms, almost all computing products, used in our daily life, are networked and potentially under the threat of network intrusion. With the fast-growing number of network users, network intrusions become more and more frequent, volatile and advanced. Being able to capture intrusions in time for such a large scale network is critical and very challenging. To this end, the machine learning (or AI) based network intrusion detection (NID), due to its intelligent capability, has drawn increasing attention in recent years. Compared to the traditional signature-based approaches, the AI-based solutions are more capable of detecting variants of advanced network attacks. However, the high detection rate achieved by the existing designs is usually accompanied by a high rate of false alarms, which may significantly discount the overall effectiveness of the intrusion detection system. In this paper, we consider the existence of spatial and temporal features in the network traffic data and propose a hierarchical CNN+RNN neural network, LuNet. In LuNet, the convolutional neural network (CNN) and the recurrent neural network (RNN) learn input traffic data in sync with a gradually increasing granularity such that both spatial and temporal features of the data can be effectively extracted. Our experiments on two network traffic datasets show that compared to the state-of-the-art network intrusion detection techniques, LuNet not only offers a high level of detection capability but also has a much lower rate of false alarms.
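A rough PyTorch sketch of the CNN-then-RNN idea is shown below: a 1-D convolution extracts local (spatial) patterns from each traffic record and a GRU aggregates them sequentially. The layer sizes, the single CNN+RNN stage and the 41-feature input are illustrative guesses, not LuNet's actual hierarchical architecture.

# A hedged CNN+RNN sketch for flow-record classification; illustrative only.
import torch
import torch.nn as nn

class TinyCnnRnn(nn.Module):
    def __init__(self, n_features=41, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),   # local "spatial" patterns
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)  # "temporal" aggregation
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, n_features)
        x = x.unsqueeze(1)                     # -> (batch, 1, n_features)
        x = self.conv(x)                       # -> (batch, 16, n_features // 2)
        x = x.transpose(1, 2)                  # -> (batch, seq_len, 16)
        _, h = self.rnn(x)                     # h: (1, batch, 32)
        return self.head(h.squeeze(0))         # -> (batch, n_classes)

model = TinyCnnRnn()
logits = model(torch.randn(8, 41))             # e.g. 41 NSL-KDD-style features (assumed)
print(logits.shape)                            # torch.Size([8, 2])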


          

Entropy from Machine Learning. (arXiv:1909.10831v2 [cond-mat.stat-mech] UPDATED)

 Cache   

Authors: Romuald A. Janik

We translate the problem of calculating the entropy of a set of binary configurations/signals into a sequence of supervised classification tasks. Subsequently, one can use virtually any machine learning classification algorithm for computing entropy. This procedure can be used to compute entropy, and consequently the free energy directly from a set of Monte Carlo configurations at a given temperature. As a test of the proposed method, using an off-the-shelf machine learning classifier we reproduce the entropy and free energy of the 2D Ising model from Monte Carlo configurations at various temperatures throughout its phase diagram. Other potential applications include computing the entropy of spiking neurons or any other multidimensional binary signals.
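One natural way to read this is through the chain rule H(x) = Σ_i H(x_i | x_<i), with each conditional entropy bounded by the log-loss of a classifier that predicts bit i from the preceding bits. The sketch below does exactly that on i.i.d. fair coin flips; the logistic-regression classifier and the chain-rule framing are assumptions for illustration, not necessarily the paper's exact construction.

# Entropy estimation via a sequence of classification tasks; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
samples = rng.integers(0, 2, size=(5000, 8))   # i.i.d. fair bits -> true entropy is 8 bits

total_bits = 0.0
for i in range(samples.shape[1]):
    y = samples[:, i]
    if i == 0 or len(np.unique(y)) < 2:
        p = y.mean()                           # marginal entropy of the first (or degenerate) bit
        h = 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    else:
        clf = LogisticRegression(max_iter=1000).fit(samples[:, :i], y)
        proba = clf.predict_proba(samples[:, :i])
        h = log_loss(y, proba) / np.log(2)     # classifier cross-entropy, in bits
    total_bits += h

print("estimated entropy (bits):", total_bits)  # close to 8 for fair coins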


          

HolDCSim: A Holistic Simulator for Data Centers. (arXiv:1909.13548v2 [cs.DC] UPDATED)

 Cache   

Authors: Fan Yao, Kathy Ngyugen, Sai Santosh Dayapule, Jingxin Wu, Bingqian Lu, Suresh Subramaniam, Guru Venkataramani

Cloud computing based systems that span data centers are commonly deployed to offer high performance for user service requests. As data centers continue to expand, computer architects and system designers are facing many challenges on how to balance resource utilization efficiency, server and network performance, energy consumption and quality-of-service (QoS) demands from the users. To develop effective data center management policies, it becomes essential to have an in-depth understanding and synergistic control of the various sub-components inside large scale computing systems, that include both computation and communication resources. In this paper, we propose HolDCSim, a light-weight, holistic, extensible, event-driven data center simulation platform that effectively models both server and network architectures. HolDCSim can be used in a variety of data center system studies including job/task scheduling, resource provisioning, global and local server farm power management, and network and server performance analysis. We demonstrate the design of our simulation infrastructure, and illustrate the usefulness of our framework with several case studies that analyze server/network performance and energy efficiency. We also perform validation on real machines to verify our simulator.
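At its core, an event-driven simulator of this kind is a priority queue of timestamped events popped in time order, each of which may schedule further events. The toy single-server sketch below shows that skeleton; it is not HolDCSim's API or its server/network models.

# A minimal event-driven simulation skeleton; illustrative only.
import heapq
import random

random.seed(0)
events = []                                    # min-heap of (time, seq, name, payload)
seq = 0

def schedule(t, name, payload=None):
    global seq
    heapq.heappush(events, (t, seq, name, payload))
    seq += 1

busy_until = 0.0                               # a single FIFO server

schedule(0.0, "arrival", {"job": 0})
while events:
    t, _, name, payload = heapq.heappop(events)
    if name == "arrival":
        job = payload["job"]
        start = max(t, busy_until)             # wait if the server is busy
        busy_until = start + random.expovariate(1.0)          # random service time
        schedule(busy_until, "departure", {"job": job})
        if job < 9:                            # ten jobs in total
            schedule(t + random.expovariate(0.5), "arrival", {"job": job + 1})
    else:
        print(f"t={t:6.2f}  job {payload['job']} finished")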


          

Computing and Home Solutions Advisor- Seasonal - Best Buy Canada - Prince Albert, SK

 Cache   
Set up and maintain product demos. As Canada's fastest-growing specialty retailer of consumer electronics, Best Buy ensures it offers one of the best work…
From Best Buy - Fri, 04 Oct 2019 20:32:20 GMT - View all Prince Albert, SK jobs
          

Lead Platform Management Engineer - General Motors - Warren, MI

 Cache   
Partner with leadership on internal teams to document and improve internal processes via ongoing testing. Must have Expertise in HPC/AI/ML/DL/parallel computing…
From General Motors - Wed, 10 Jul 2019 21:52:56 GMT - View all Warren, MI jobs
          

Why are financial services for businesses moving to the cloud?

 Cache   

We explain how the banking sector is evolving and how it is adapting to new technologies such as the cloud. We look at the reasons why more and more financial services are opting for cloud computing. The new European regulations are pushing companies to choose cloud services. In recent years, […]

The post Why are financial services for businesses moving to the cloud? appeared first on Sage Advice España.


          

Software Engineer - Cloud Computing - Observability

 Cache   
Our Global Technology Infrastructure group is a team of innovators who love technology as much as you do. Together, you'll use a disciplined, innovative and business-focused approach to develop a wide variety of high-quality products and solutions. You'll wo
          

Public cloud computing is early on its journey to core of the bank

 Cache   
none
          

Oracle plans to hire 2,000 workers in cloud computing expansion, Reuters reports

 Cache   
          

Publicis Group makes the launch of Epsilon France official

 Cache   
The structure will bring together the 750 employees of the group's four data marketing entities: Soft Computing, Publicis ETO, Publicis Media Data Sciences and the French teams of Epsilon, acquired last July.
          

Intel vs. Nvidia: Which Pays Software Engineers More?

 Cache   

For years, PCs loaded with Intel processors dominated our computing lives. But nothing in tech remains the same; the rise of mobile and the cloud allowed […]

The post Intel vs. Nvidia: Which Pays Software Engineers More? appeared first on Dice Insights.


          

Reserve Bank of Australia

 Cache   
Investment now nudging one percent of GDP.

It was just one graph, but it said it all.

Prices paid by businesses for software in Australia might be falling, but enterprises down under are spending money at close to dotcom levels on code – only this time it's to optimise their operations rather than chasing a meme.

That’s the take from one of the most remarkable snapshots of local investment in software yet, produced not by an analyst firm or vendor, but by the Assistant Governor for the Reserve Bank of Australia’s Economic division, Dr Luci Ellis.

It’s a highly significant number, not least because it (partially) documents the broader role of computing in the national economy.

In fact, software investment levels in Australia are now primed to punch through one percent of nominal GDP (gross domestic product), which stands at around $1.9 trillion.

And in case you wondered where the skills shortage came from, the software cut of that pie is easy to measure and equates to (roughly) $19 billion.

It's a sustained trend that, over the last ten years or so, shows more local money is broadly and consistently being poured into software smarts – and as a core part of the economy rather than because of a sugar hit or cyclical factor.

Based on RBA and Australian Bureau of Statistics (ABS) data, Ellis bowled-up the chart and stats – Computer Software Investment; Share of Nominal GDP – in a speech last Friday that sought to demystify how the central bank sources, charts and uses data to make economic predictions.

Notably, the chart (below) isn’t part of a regular series, but was put together specifically to illustrate what to factor in or out when trying to determine what indicators to use when plotting where the economy is headed.

In the past that might have been new car sales, spend on industrial plant and equipment, retail turnover … but now it’s software, at least for the purposes of Ellis’ speech, memorably titled “Lumps, Bumps and Waves”.

She's not afraid to chunk down the concepts or simplify the lingo either.

“Simple, non-technical descriptions can show why it's useful to know something about the underlying shape of a particular special factor. If something is a lump or a bump, it doesn't mark the beginning of a change in trend. Its effect may not last,” Ellis observed.

“In developing our forecasts or setting monetary policy, we might want to look through those lumps and bumps, and focus instead on the longer-term trends.”

Even more simply, it's "the signal in the noise".

"We have to be careful to avoid seeing paradigm shifts in every wiggle in the data. But structural change permeates the economy, and always will," Ellis said.

But these days software spend isn’t just a business trend, it’s a broader wave of change triggered by factors external to the economy.

Ellis notes there are a few ‘bumps’ on the chart “culminating in a peak around the dot-com boom and Y2K rectification work” where it peaks at just over one percent of nominal GDP in the very early noughties before taking a very unceremonious slide until around a year before the GFC.

Then, around 2012-2013, software investment takes off like a rocket – Amazon Web Services put its first DC on Australian soil in 2012 – before near vertical growth gently tapers but still stays strong.

“More recently, this type of investment has again been increasing faster than the economy as a whole, as firms adopt mobile, cloud and other new technologies,” Ellis observes.

The thing to remember is that the graph is tracking software investment across the economy rather than just against previous sales, revenue or pricing.

So even if software, especially cloud and SaaS, fell in price compared to on-prem, it became more accessible and affordable, presumably at the expense of hardware.

There’s good reason for RBA to be sticking its nose into Australia’s software sector too.

Not least because things once loosely categorised as ‘digital’ are now influencing the shape of the broader economy, and with it the outlook for jobs and growth.

Australia’s conundrum is that wages growth has remained stuck and some parts of the economy are doing far better on the digital front than others.

The influence of software and technology on the economy, or the so-called tech effect, was called out by RBA Governor Philip Lowe in June 2018.

The picture Lowe painted at that time was a paradox where there was still spare capacity in the labour market, yet firms found it harder to get suitable workers, with little or no translation into increased wages.

Ellis’ chart is the latest valuable piece in figuring out that puzzle.

Got a news tip for our journalists? Share it with us anonymously here.

          

Arm unveils consortium for autonomous vehicle computing

 Cache   
Dipti Vachani, senior vice president of automotive and embedded at Arm, announced the Autonomous Vehicle Computing Consortium.
          

Satya Nadella looks to the future with edge computing

 Cache   
Speaking today at the Microsoft Government Leaders Summit in Washington, DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.” While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in […]
          

Game Changers: In conversation with Queensland entrepreneur Trent Davis

 Cache   

Trent Davis had several businesses before he was 22 years old, ranging from IT support to commercial childcare centre management systems which he started in high school. Starting his fifth company Netbox Blue in 1999, he took an organic growth approach. Now, Trent has built Netbox Blue into a formidable company with over 30 staff across Australia, selling to the world.

As the founder and Chief Technology Officer, Trent leads the development team at Netbox Blue, concentrating on intellectual property in the areas of distributed cloud based computing and next generation social media information governance and compliance. The technology is widely used by business, government and schools and has also been adapted to suit the home environment. Trent was an Ernst & Young 'Young Entrepreneur of the Year' winner in 2006 and was named by FMH magazine as one of the Most Influential Australians under 30.

Part of the Game Changers series, a Queensland Business Leaders Hall of Fame initiative presented by SLQ, QUT Business School and the Queensland Library Foundation.

When: Wed 19 Mar 2014, 6:00 pm - 08:00 pm
Venue: SLQ Auditorium 1, level 2


          

macOS Catalina: the MacStories review

 Cache   
macOS Catalina has been reviewed, and taking over from John Siracusa’s legendary Mac OS X reviews at Ars Technica is MacStories. The Mac isn’t in crisis, but it isn’t healthy either. Waiting until the Mac is on life support isn’t viable. Instead, Apple has opted to reimagine the Mac in the context of today’s computing landscape before its survival is threatened. The solution is to tie macOS more closely to iOS and iPadOS, making it an integrated point on the continuum of Apple’s devices that respects the hardware differences of the platform but isn’t different simply for the sake of difference. Transitions are inherently messy, and so is Catalina in places. It’s a work in process that represents the first steps down a new path, not the destination itself. The destination isn’t clear yet, but Catalina’s purpose is: it’s a bridge, not an island. You know where to get Catalina, but it might be a good idea to wait a few point releases before diving in.
          

Densitron to feature UReady 2U display at Inter BEE 2019

 Cache   
Kent, UK, 8 October 2019 – Densitron, a creator of HMI technologies, Broadcast intelligent display solutions, and a global leader in display, monitor, and embedded computing solutions, today announced that it will introduce the UReady 2U full surface rack display unit for broadcast applications in Hall 4, Stand 4414, at Inter BEE 2019 taking place ...
          

AT&T Announces Winners of 5G Hackathon, Sponsored by Ericsson, IBM & Samsung

 Cache   
AT&T hosted a 5G hackathon, featuring a winning application called FitStream that uses 5G and artificial intelligence on edge computing to allow yoga and dance instructors to interact with students.
          

D-Wave unveils name of next-gen quantum system (with weapons lab as customer)

 Cache   
Dilution refrigerator
D-Wave Systems says its next-generation, 5,000-qubit quantum computing system will be called Advantage, to recognize the business advantage it hopes its customers will derive from the company’s products and services. The Burnaby, B.C.-based company also announced that Los Alamos National Laboratory in New Mexico has signed a contract to upgrade to Advantage on its premises once it’s ready to go. Advantage-based computing is due to become available via D-Wave’s Leap quantum cloud service in mid-2020. “This is the third time we will have upgraded our D-Wave system,” Irene Qualters, associate lab director for simulation and computation at Los Alamos, said…
          

Hacking for Dummies eBook, 6th Edition - Free for a Limited Time (Regular Price $30) @ Tradepub

 Cache   

Stop hackers before they hack you!

In order to outsmart a would-be hacker, you need to get into the hacker’s mindset and with this book, thinking like a bad guy has never been easier. Get expert knowledge on penetration testing, vulnerability assessments, security best practices, and ethical hacking that is essential in order to stop a hacker in their tracks.

This no-nonsense book helps you learn how to recognize the vulnerabilities in your systems so you can safeguard them more diligently—with confidence and ease.

Get up to speed on Windows 10 hacks  
Learn about the latest mobile computing hacks
Get free testing tools   
Find out about new system updates and improvements

There’s no such thing as being too safe — and this resourceful guide helps ensure you’re protected.


          

One-Dimensional Objects Morph Into New Dimensions

 Cache   
Tue, 10/08/2019

A line is the shortest distance between two points, but "A-line," a 4D printing system developed at Carnegie Mellon University, takes a more circuitous route. One-dimensional, "line"-shaped plastic structures produced with the A-line system can bend, fold and twist themselves into predetermined shapes when triggered by heat.

3D-printed objects that later change shape are the very definition of 4D printing. But the process takes on special qualities when the objects can fit through narrow openings. A rod inserted through a narrow bottleneck, for instance, might transform into a hook to fish an object out of the bottle. Or a long, thin fastener inserted through holes in the seat of a chair might lock a chair leg into place.

The A-line method also can be useful in making compliant devices, such as coil springs and tweezers. These are difficult to produce in final form using a 3D printer, but can be printed readily as rods that assume final form when dipped in hot water.

Making sticks that morph into new objects is a feat that Lining Yao, assistant professor in CMU's Human-Computer Interaction Institute, and her colleagues in the Morphing Matter Lab have accomplished using an ordinary, hobbyist-grade 3D printer and a single type of thermoplastic material.

"It's not printing the line that's difficult, but it's developing the software tool that enables you to design, simulate and fabricate the line," Yao explained.

The group used polylactic acid, or PLA — the most common material used in 3D printing — to produce their objects. PLA shrinks in reaction to heat along the direction in which it was printed, said Guanyun Wang, a post-doctoral fellow in the Morphing Matter lab. That makes it possible to control how an object's shape will morph based on the spacing of active and passive segments, the thickness of segments and on the printing direction of each segment, he explained.

The A-line platform developed by Yao's team includes a library of eight bending directions that can be combined to produce simple or complex geometries. It also includes a customized design tool to help users combine these different types of bends to achieve desired shapes.

Ye Tao, a visiting scholar at the HCII from Zhejiang University, said the team triggered the bending by immersing the engineered sticks into water heated to about 170 degrees Fahrenheit. Morphing also can be triggered by a heat gun, with embedded carbon fiber for resistive heating or with steam via hollow channels in the sticks.

As with other 4D-printed objects, one advantage of the A-line rods is that they can be shipped as a flat pack and triggered on site to become tent supports, chair frames or sculptures. But Yao envisions some applications that are peculiar to line-shaped objects. By using electrical field responsive hydrogels instead of PLA, for instance, it might be possible to develop a biocompatible line that a surgeon could snake through narrow body spaces and remotely transform into surgical tweezers. By controlling electrical fields, it might also be possible to control the tweezer movement.

"Through this work, we hope to enlarge the design space of 4D printing technology," Yao said. "We encourage designers to think about additional novel uses of A-line."

A research paper describing A-Line was presented earlier this year at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems. The Richard King Mellon Foundation supported this research through the CMU Manufacturing Futures Initiative. In addition to Yao, Wang and Tao, the research team included Ozguc Bertug Capunaman and Humphrey Yang, both master's students in the School of Architecture.

 

For More Information

Byron Spice | 412-268-9068 | bspice [at] cs.cmu.edu
Virginia Alvino Young | 412-268-8356 | vay [at] cmu.edu

          

SCS Students Named 2020 Siebel Scholars

 Cache   
Tue, 10/08/2019

Six Carnegie Mellon University students — five of them from the School of Computer Science — have been named 2020 Siebel Scholars, a highly competitive award that supports top graduate students in the fields of business, computer science, energy science and bioengineering.

Established in 2000 by the Thomas and Stacey Siebel Foundation, the Siebel Scholars program awards grants to 16 universities in the United States, China, France, Italy and Japan. The top graduate students from 27 partner programs are selected each year as Siebel Scholars and receive a $35,000 award for their final year of studies. On average, Siebel Scholars rank in the top five percent of their class, many within the top one percent.

Among the 93 total scholars are School of Computer Science students Michael Madaio, Eric Wong, Ken Holstein, Junpei Zhou and Amadou Latyr Ngom. They're joined by Elizabeth Reed, a Ph.D. student in the Department of Engineering and Public Policy.

Human-Computer Interaction Institute (HCII) Ph.D. candidate Michael Madaio researches the design of algorithmic systems in the public sector, focusing on literacy education in developing countries. He was a research intern at the United Nations Institute for Computing and Society, and Microsoft Research's Fairness, Accountability, Transparency and Ethics in Artificial Intelligence group. He completed his master's degree in digital media studies at Georgia Institute of Technology, and a master's in education and a bachelor's in English literature at the University of Maryland, College Park.

Eric Wong is pursuing his Ph.D. in machine learning. In 2012 he began researching the problem of molecular energy optimization, developing specialized kernels for geometrically structured data. He is currently interning at Bosch to bring advancements into the automotive industry with work on real sensor systems, both visual and physical.

Ken Holstein, a fifth-year HCII Ph.D. student, is also a fellow of the Program in Interdisciplinary Educational Research (PIER). He has interned at Microsoft Research and holds a bachelor's degree in psychology from the University of Pittsburgh and master's in human–computer interaction from CMU.

Language Technologies Institute master's student Junpei Zhou researches social good by using natural language processing and computer vision techniques. He has worked on flu forecasting and a public safety project to automatically pick up tweets to help police officers better handle emergency events. He has interned at Google and Alibaba, and holds a bachelor's degree in computer science from Zhejiang University.

Amadou Latyr Ngom is pursuing his master's degree in the Computer Science Department at CMU. His research interests include applying compiler techniques to accelerate query execution for in-memory database management systems. He has interned at Zillow and Pure Storage, and graduated with a bachelor's degree in computer science from CMU.

"Every year, the Siebel Scholars continue to impress me with their commitment to academics and influencing future society. This year's class is exceptional, and once again represents the best and brightest minds from around the globe who are advancing innovations in healthcare, artificial intelligence, the environment and more," said Thomas M. Siebel, chair of the Siebel Scholars Foundation. "It is my distinct pleasure to welcome these students into this ever-growing, lifelong community, and I personally look forward to seeing their impact and contributions unfold."

For More Information

Byron Spice | 412-268-9068 | bspice [at] cs.cmu.edu
Virginia Alvino Young | 412-268-8356 | vay [at] cmu.edu

          

Accountant (Account Payable)

 Cache   
• Able to analyze and inspect pro forma invoices clearly and distinctly
• Must be able to thoroughly check the following before proceeding to payment: quantities, units and measurements, unit prices, total amount and terms of payment; supplier profiles and information, bank account details, customer whereabouts
• Capable of ensuring proper payment procedures are executed, firstly by checking the company’s funds and confirming that there is an adequate amount. If there is sufficient money in the accounts, you will need to apply for approval from the COF to prepare the remittance
• Capable of scrutinizing and performing analysis on purchase orders to check for duplicated order quantities, and reviewing sales & purchase contracts as well as commercial invoices to ensure the particulars are accurate, precise and satisfy the conditions
• Experienced in computing data entries both manually and with the aid of software whenever required, such as performing data entry at the departmental level
• Able to peruse and comprehend payment terms (L/C, Telegraphic Transfer, Cheque) as well as contract terms to avoid any misconception
• Able to initiate and prepare payments to suppliers and vendors based on an understanding of the contracts, terms and company policies
• Able to conduct analysis and come up with a rational determination to report back to management when the payment is in multiple currencies, by examining past exchange rates to avoid unsolicited losses
• Able to communicate amicably with vendors and suppliers and give them details on the status of transactions (remittances) on a timely basis
• File and keep records of all important documents and transaction notes (TT notes), including invoices, contracts, etc.
• Be diligent and meticulous when handling commercial invoices to ensure they are free of errors
• Review all invoices for appropriate documentation and approval prior to payment
• Match invoices to checks, obtain all signatures for checks and distribute checks accordingly
• Reconcile vendor statements, research and correct discrepancies
• Assist in month-end closing
• Maintain files and documentation thoroughly and accurately, in accordance with company policy and accepted accounting practices
• Record goods and services that the company receives and the payments it owes, such as inventory from a supplier or other expenses, recording each account payable as a liability
• Apply accounting principles and procedures to analyze transactions, balances and financial information
• Develop a strong understanding of business, inventory flow and systems
• Recommend financial actions by analyzing accounting options
• Summarize current financial status by collecting information
• Substantiate financial transactions by auditing documents
• Reconcile financial discrepancies by collecting and analyzing account information
• Secure financial information by completing database backups
• Maintain financial security by following internal controls
• Other duties assigned by Management
          

Quantum Supremacy? Yes and No!

 Cache   

Quantum Supremacy Is and Is Not

How quantum is that?! The RadioFreeHPC team discusses the Google/NASA paper, titled "Quantum Supremacy Using a Programmable Superconducting Processor", that was published and then unpublished. But it's the internet and everything is a "digital tattoo", so there are copies out there (see below).

The paper, right in its title, and at least in that draft form, claimed Quantum supremacy. "Doing what?" we hope you ask. Well, nothing particularly significant, and decidedly quantum-friendly. You might even call it "embarrassingly quantum" since quantum is all about probability functions and this experiment samples the probability distribution of a repeated experiment. But it's not nothing. 

One scary consequence of quantum supremacy is its ability to readily factorize large numbers which could be used to unscramble encrypted data. But A) this is not what happened, B) it's not expected to happen any time soon (think years), and C) it will depend on the specific encryption algorithm. We must say, however, that the paper looks pretty good. Here's the abstract. Click on the title to read it all:

Quantum supremacy using a programmable superconducting processor

Google AI Quantum and collaborators
The tantalizing promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here, we report using a processor with programmable superconducting qubits to create quantum states on 53 qubits, occupying a state space of 2^53 (about 10^16). Measurements from repeated experiments sample the corresponding probability distribution, which we verify using classical simulations. While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task. This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm.
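As a toy illustration of how sampled bitstrings can be checked against classically computed probabilities, the sketch below evaluates the linear cross-entropy benchmark F = 2^n · ⟨p(x)⟩ − 1 on samples from a random distribution; treating this particular statistic as the verification used in the excerpt above is an assumption for illustration.

# Linear cross-entropy benchmarking on toy data; illustrative only, no quantum hardware involved.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                          # 10 "qubits" -> 1024 bitstrings (assumed size)
probs = rng.dirichlet(np.ones(2 ** n))          # stand-in for a classically simulated output distribution

ideal_samples = rng.choice(2 ** n, size=5000, p=probs)    # a device sampling the ideal distribution
noisy_samples = rng.integers(0, 2 ** n, size=5000)        # a fully depolarized (uniform) device

def linear_xeb(samples, probs, n):
    return (2 ** n) * probs[samples].mean() - 1.0

print("ideal device :", linear_xeb(ideal_samples, probs, n))   # well above 0
print("uniform noise:", linear_xeb(noisy_samples, probs, n))   # close to 0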


LANL gets the First 5,000 Qubit D-Wave

Meanwhile, D-Wave announced that its new 5,000 qubit quantum computer has found its first home at the Los Alamos National Laboratory (LANL). Qubits are different from vendor to vendor in terms of the underlying technology and implementation. Shahin lists several.


@RadioFreeHPC Update

So proud of you all! At the time of this writing, @RadioFreeHPC has soared to about 16 followers. We're pretty much there. Thank you!


Henry Newman's Why No One Should be Online, Ever.

Henry tells the fascinating story of Krebs thwarting the nefarious schemes of a professional hacker who aimed to frame him and actually mailed him narcotics. The mastermind behind it was arrested and imprisoned for unrelated charges. Henry is really turning this into a good news segment. Dan isn't encouraged, however.


Catch of the Week

Shahin talks about using consumer electronics to build supercomputers, mentioning the recent 1,060 node Raspberry Pi cluster built by Oracle, reminiscent of the one LANL did in 2017. AFRL built a 1,760 node cluster of PlayStations, based on the IBM/Sony/Toshiba Cell processor, in 2010, following similar efforts starting in the mid 2000s. He also recalls similar projects he may have had something to do with: SGI's Project Molecule and Project Kelvin (for cooling) in 2008 (also here), and also a cluster of JavaStations at Sun in the late 90s.

Dan discusses a UCLA project to use the thermoelectric effect and build "a device that makes electricity at night using heat radiating from the ground". Intriguing, but looks a tad too pricey for what it can deliver right now.

Speaking of Intriguing, Henry talks about DNA storage. Incredible data density, but don't ask what file system it uses or whether you can have it on a USB stick any time soon. Dan and Shahin seem to have more fun with this topic than Henry!


Listen in to hear the full conversation.

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter  



© Googlier LLC, 2019