
          

Watch Clouds on Mars Drift by in Supercomputer Simulations

Weather models are a daily staple of life on Earth, but they can go interplanetary as well, sometimes with a boost from Earth's most sophisticated computers.
          

Supercomputer in Maribor to be among world’s most powerful

With the press of a large red button at the main offices of the University of Maribor on Wednesday, the prototype HPC RIVR @ UM supercomputer was symbolically booted up.
          

Parallel Data Lab Receives Computing Cluster from Los Alamos National Lab

Carnegie Mellon University has received a supercomputer from Los Alamos National Lab (LANL) that will be reconstructed into a computing cluster operated by the Parallel Data Lab (PDL) and housed in the Data Center Observatory. The new cluster will augment the existing Narwhal, also from LANL and built from parts of the decommissioned Roadrunner supercomputer, the fastest supercomputer in the world from June 2008 to June 2009.

This new supercomputer, tentatively named Wolf, will be an important part of educating CMU's next generation of computer science professionals, researchers and educators. The system recently was retired from LANL's open institutional computing environment. While no longer efficient for simulation science, it still has high value as a training tool and for computer science research. Wolf is made up of 616 computing nodes, each containing two eight-core Intel Xeon Sandy Bridge processors, totaling 9,856 processing cores across the entire cluster. The cluster interconnect is QDR InfiniBand, providing a network that is 30 times faster than Narwhal. Altogether, it will have the capability of about 200 teraflops, where one teraflop represents one trillion computations per second.
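
The core count follows directly from the node specification, and the quoted throughput is consistent with a plausible per-core rate; a quick back-of-the-envelope check in Python (the per-core GFLOPS value is an assumption for illustration, not a figure from the article):

```python
# Back-of-the-envelope check of the Wolf cluster figures quoted above.
nodes = 616                      # computing nodes
sockets_per_node = 2             # two Xeon "Sandy Bridge" processors per node
cores_per_socket = 8             # eight cores per processor

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)               # 9856, matching the article

# Rough peak estimate assuming ~20 GFLOPS per core (hypothetical value,
# not from the article): 9,856 cores * 20 GFLOPS ~ 197 TFLOPS,
# i.e. "about 200 teraflops".
gflops_per_core = 20.0
peak_tflops = total_cores * gflops_per_core / 1000
print(round(peak_tflops))        # ~197
```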

"Wolf's processing cores are each significantly faster than the previous system, and it consists of about 50 percent more computing nodes," said George Amvrosiadis, assistant research professor of electrical and computer engineering and the Parallel Data Lab (PDL). "We will be retiring the Narwhal nodes. Our experienced PDL team, with Jason Boles leading the installation effort, is doing this gradually to make sure everything works as expected."

In the five years since receiving Narwhal from LANL, the researchers of the Parallel Data Lab have used the computing cluster in several projects in service of educating the world's next thought leaders in areas of computer science including scalable storage, cloud computing, machine learning and operating systems.

 


          

Atos grows Santos Dumont Supercomputer capacity fivefold, making it the largest in Latin America

Atos, a global leader in digital transformation, announces the extension of the capacity of the Santos Dumont supercomputer. Based on Atos' BullSequana technology, the machine was originally delivered in 2015 to the National Laboratory for Scientific Computing (LNCC). This update confirms its position as the most powerful supercomputer in Latin America dedicated to research and puts it back on the world's Top500 list. The supercomputer is installed in Petrópolis, Rio de Janeiro, and with...

Read the full story at https://www.webwire.com/ViewPressRel.asp?aId=250819


          

Nvidia Quietly Adds the V100s to the Tesla Product Range

Chances are you have heard of the Nvidia Tesla range of graphics cards but have likely never encountered one. That is no criticism of your PC know-how, as the Tesla range (not to be confused with the car brand) is largely aimed at the workstation/supercomputer market. In other words, it’s all about […]

The post Nvidia Quietly Adds the V100s to the Tesla Product Range appeared first on eTeknix.


          

Ayar Labs chosen as optical partner in Intel’s DARPA PIPES project

Optical communications accelerator also announces customer sampling program this week at Supercomputing.
          

A New Model for Predicting Earthquakes

The new numerical model pinpoints the source of the precursor to seismic signals. This research could one day make accurate earthquake prediction possible.

Numerical simulations have pinpointed the source of the acoustic signals emitted by stressed faults in laboratory earthquake machines. The work further unpacks the physics driving geologic faults, knowledge that could one day enable accurate earthquake prediction.

Dr. Ke Gao, (1) a computational geophysicist in the Geophysics group at Los Alamos National Laboratory, explains: "Previous machine-learning studies found that the acoustic signals detected from an earthquake fault can be used to predict when the next earthquake will occur. This new modeling work shows us that the collapse of stress chains inside the earthquake gouge emits that signal in the lab, pointing to mechanisms that may also be important on Earth." Ke Gao is the lead author of the paper published in Physical Review Letters. (2)
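
The machine-learning studies Gao refers to treat statistics of the continuous acoustic signal as features and regress them against the time remaining before the next laboratory slip event. Below is a minimal illustrative sketch of that general idea in Python, using purely synthetic data and scikit-learn; the rolling-statistics features and the random-forest model are assumptions for illustration, not the Los Alamos pipeline:

```python
# Illustrative sketch: regress time-to-failure on summary statistics of an
# acoustic-emission-like signal. Synthetic data only; not the LANL workflow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_windows = 2000
# Pretend each time window is summarized by the variance and kurtosis of the
# signal, and that both grow as the fault approaches failure.
time_to_failure = rng.uniform(0.0, 10.0, n_windows)          # seconds remaining
variance = 1.0 / (time_to_failure + 0.5) + rng.normal(0.0, 0.05, n_windows)
kurtosis = 3.0 + 2.0 / (time_to_failure + 0.5) + rng.normal(0.0, 0.1, n_windows)
X = np.column_stack([variance, kurtosis])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], time_to_failure[:1500])          # train on early windows
pred = model.predict(X[1500:])                       # predict on held-out windows
print("mean absolute error (s):", np.abs(pred - time_to_failure[1500:]).mean())
```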

Stress chains are bridges composed of grains that transmit stress from one side of a fault block to the other.

Dr. Gao works on a Los Alamos team that identified the predictive acoustic signal in data from both laboratory earthquakes and regions in North America, South America and New Zealand. The signal accurately indicates the state of stress in the fault, no matter when the signal is read.

To study the cause of the acoustic signals, the team ran a series of numerical simulations on supercomputers using the Los Alamos-developed code HOSS (Hybrid Optimization Software Suite). This numerical tool implements a hybrid methodology, the combined finite-discrete element method. It merges discrete-element techniques, which describe the grain-to-grain interactions, with finite-element techniques, which describe stress as a function of deformation inside the grains and the propagation of waves away from the granular system. The simulations accurately mimic the dynamics of evolving seismic faults, such as how the materials inside the fault gouge fracture and collide with one another, and how the stress chains form and evolve over time through interactions between adjacent gouge materials.
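
As a rough illustration of the discrete-element half of such a hybrid scheme, the toy sketch below advances two circular grains under a linear spring-dashpot normal-contact law. All parameters are arbitrary illustrative values; this is not the HOSS formulation, which additionally resolves deformation inside each grain with finite elements:

```python
# Toy discrete-element step: two grains interacting through a linear
# spring-dashpot normal contact. Parameters are arbitrary illustrations.
import numpy as np

k_n, c_n = 1.0e6, 50.0          # contact stiffness (N/m) and damping (N*s/m)
radius, mass = 0.01, 0.05       # grain radius (m) and grain mass (kg)
dt = 1.0e-5                     # time step (s)

x = np.array([0.0, 0.0195])     # grain centre positions along one axis (m)
v = np.array([0.5, -0.5])       # grain velocities (m/s), moving toward each other

for step in range(2000):
    gap = (x[1] - x[0]) - 2 * radius          # negative gap = overlap
    if gap < 0.0:
        rel_v = v[1] - v[0]
        f = -k_n * gap - c_n * rel_v          # repulsive contact force on grain 1
    else:
        f = 0.0
    a = np.array([-f, f]) / mass              # equal and opposite accelerations
    v += a * dt
    x += v * dt

print("final velocities:", v)                 # grains rebound after the contact
```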

Los Alamos has funded a multi-year, multi-million-dollar program of experiments, numerical modeling and machine-learning efforts to develop and test a highly innovative approach to probing the earthquake cycle and to detecting and locating stressed faults that are approaching failure.

Los Alamos National Laboratory, (3) a multidisciplinary research institution engaged in strategic science on behalf of national security, is managed by Triad, a public-service-oriented national security science organization equally owned by its three founding members: Battelle Memorial Institute (Battelle), the Texas A&M University System (TAMUS) and the Regents of the University of California (UC), for the Department of Energy's National Nuclear Security Administration.

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, the environment, infrastructure, health and global security concerns.

References:

(1) Ke Gao

(2) From Stress Chains to Acoustic Emission

(3) Los Alamos National Lab: National Security Science

Photo description: These before-and-after simulations show the collapse of a stress chain after a laboratory earthquake. - Credit: Los Alamos National Laboratory.

Summary translation and linguistic adaptation by: Edoardo Capuano / Original article: Numerical model pinpoints source of pre-cursor to seismic signals


          

SC’19: Mistral Ranks 80th on the TOP500 List

During the Supercomputing Conference (SC‘19), the 54th edition of the TOP500 list was published on November 19 in Denver, USA. Even after four years of operation, the DKRZ supercomputer Mistral still achieves a respectable 80th place. Looking at the storage capacity of data centres, Mistral ranks 4th worldwide.


          

Supercomputer-Based Weather Forecasting System Improves Harvests

IBM and its subsidiary The Weather Company have announced the worldwide rollout of a new supercomputing-based weather forecasting system that will deliver more up-to-date, higher-quality predictions in parts of the world that have never before had access to advance weather data. Known as IBM GRAF, Global High-Resolution Atmospheric Forecasting, the system runs […]
          

Two Phase Immersion Liquid Cooling at Supercomputing 2019

It would now appear we are saturated with two phase immersion liquid cooling (2PILC) – pun intended. One common element of the annual Supercomputing trade show, as well as the odd system at Computex and Mobile World Congress, is the push from some parts of the industry towards fully immersed systems in order to drive cooling. Last year at SC18 we saw a large number of systems featuring this technology – this year the presence was limited to a few key deployments.


          

Spotted at Supercomputing 2019: A 256 GB Gen-Z Memory Module

As a millennial, everything in the media that ‘Gen Z’ does often gets lumped into the millennial category. Thankfully there’s another type of Gen-Z in the world: the cache-coherent memory-semantic standard. Where standards like CXL are designed to work inside a node, Gen-Z is meant to work between nodes, providing a switched fabric or point-to-point connectivity for memory, storage, accelerators, and even other servers.

Earlier this year we saw the announcement of a Gen-Z switch which provides a fabric backbone to which hardware can be connected. The switch allows for fabric management, switching, routing, and security, and allows hardware configuration mixes of storage, compute, and accelerators. We found one such add-on at this year’s Supercomputing: the ZMM, or Gen-Z Memory Module.

What we had in front of us was actually a dummy unit for show purposes, but it represents a 3-inch wide memory device that adds distributed memory to the network such that different nodes can take advantage of it when needed. Inside is 256 GB of DRAM, providing 30 GB/s of bandwidth through the Gen-Z interface: the equivalent of dual-channel DDR4-1866. The total latency is listed as 400 ns, several times slower than local main memory. Ultimately this is slower than traditional memory-controller-attached DRAM, but aims to be faster than network-attached storage.
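
The dual-channel DDR4-1866 comparison and the latency gap are easy to reproduce; a quick back-of-the-envelope check in Python (the 64-bit channel width and the ~100 ns local-DRAM latency are assumed typical values, not figures from the module's specification):

```python
# Sanity check of the ZMM figures quoted above.
transfers_per_s = 1866e6        # DDR4-1866: 1866 MT/s
bytes_per_transfer = 8          # 64-bit DDR4 channel
channels = 2                    # dual channel

bandwidth_gbs = transfers_per_s * bytes_per_transfer * channels / 1e9
print(round(bandwidth_gbs, 1))  # ~29.9 GB/s, i.e. the quoted ~30 GB/s

zmm_latency_ns = 400            # listed total latency
dram_latency_ns = 100           # assumed typical local-DRAM access latency
print(zmm_latency_ns / dram_latency_ns)  # ~4x slower than local DRAM
```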


We also saw marketing for a PCIe Gen 5.0 compatible Gen-Z Connector

With these modules, the ultimate goal of Gen-Z is to have a 4U unit in a rack into which customers can install any number of memory modules, storage drives, accelerators, or other compute resources, without worrying about exactly where they are in the system or how the system can access them. The Gen-Z consortium is aiming for ‘rack-scale compatibility’, and wants to make these rack-level adjustments seamless to existing ecosystems without OS changes.



          

Linux for Beginners: The Science of Linux Operating System and Programming Tools for Installation, Configuration and Command Line

Title: Linux for Beginners: The Science of Linux Operating System and Programming Tools for Installation, Configuration and Command Line with a Basic Guide on Networking, Cybersecurity, and Ethical Hacking
Author: Darwin Growth
Publisher: Amazon Digital Services LLC
Year: 2019
Format: epub/azw3/pdf(conv.)
Pages: 158
Size: 10.6 Mb
Language: English

Linux is a free and freely distributed operating system inspired by the UNIX system and written by Linus Torvalds with the help of thousands of programmers. First released in 1991, one of its greatest advantages is that it is easily portable to different types of computers, so there are Linux versions for almost all types of machines, from PCs and Macs to workstations and supercomputers.
Unlike other operating systems, such as macOS (Apple's operating system), Linux is not primarily intended to be easy to use, but to be extremely flexible. Even so, it is generally about as easy to use as other operating systems, and great efforts continue to be made to facilitate its use.
          

AMD: "Zen 3" More Than 15% Faster Than "Zen 2"

Next microarchitecture "Zen 3" is not an evolution of its predecessor, but completely new

At the Supercomputing 2019 conference, AMD revealed some details about its upcoming CPU architectures. The focus was naturally on "Zen 3" compared with the current "Zen 2" microarchitecture, but also on...
          

148.6 Petaflops: Ranking: IBM Machine Remains the Fastest Supercomputer

The list of the world's fastest computers has barely changed over the past six months, at least at the top. Systems from the USA and China still dominate the world ranking, but Germany is also represented in the top ten.
          

Azure Weekly Issue 252 - 24th November 2019


Now that the Ignite dust has settled, we've had a sweep through the newsletter and spruced up the categories. You'll see a few new categories (Hybrid, Mixed Reality and Windows Virtual Desktop), and you'll see some new services as part of their corresponding category. Make sure you have a flick through to see what's changed.

This week we've been spoiled with a number of exciting announcements. Our favourite has to be that Change feed support is now available in preview for Azure Blob Storage, but we've also been told that:

This week, Gregor Suttie has written a three-part series about Microsoft Security Code Analysis for Azure DevOps, and Tobias Zimmergren has also written about the same topic in his blog: Automate Azure DevOps code security analysis with the Microsoft Security Code Analysis extensions. Finally, are you up for a daily serverless challenge in December? Have a read of this: Merry and Bright with Azure Advocates 25 Days of Serverless.


          

Latin America's Largest Supercomputer Gets an Upgrade

Petrobras and its partners in the Libra Consortium have invested R$ 63 million to expand the processing capacity of Latin America's largest supercomputer. Santos Dumont now leads the ranking of the highest-performance computers ...

The post Latin America's Largest Supercomputer Gets an Upgrade appeared first on OverBR.


          

tedu honked https://honk.tedunangst.com/u/tedu/h/m8WKpvR82N5Bn2L17y

Watching the CPU to see how many bubbles are boiled seems like a pretty slick way to monitor system load. Current load average is rolling boil...

https://www.anandtech.com/show/15166/two-phase-immersion-liquid-cooling-at-supercomputing-2019
          

Supercomputing 2019 and "Two-Phase Immersion Liquid Cooling"

At this year's Supercomputing 2019, among other things, 2PILC cooling – Two Phase Immersion Liquid Cooling – was shown in practice. It is a variation on immersion liquid cooling that makes use of new working fluids.

          


HPC and AI Are Changing the World

During the recent SC19 supercomputing conference, the top semiconductor and systems vendors discussed and demoed the highest-performance computing solutions in the world. While it’s easy to imagine these platforms solving some of the most challenging problems, and simulating everything from the human genome to climate change, there are thousands of other applications that can benefit […]
          

INL supercomputer to help predict the weather 

Idaho National Laboratory (INL) isn’t due to get its new supercomputer until next month, but it already has plans for it. Boise State University, Idaho Power and INL recently announced a collaboration to advance high-performance computing, weather modeling and workforce development in Idaho. The project calls for the collection and analysis of weather data by ...
          

Atos Inaugurates a New Supercomputing Laboratory in France

The infrastructure has been designed to deliver energy savings of 75%, thanks to a low-consumption cooling system.
          

The Barcelona Supercomputing Center Will Manage the Espai Barça

This initiative is part of the IoTwins project, which is funded by the European Commission and is based on IoT, AI and simulations.
          

ExaNoDe Develops a 3D-Integration-Based Prototype for Supercomputers

Researchers at the Barcelona Supercomputing Center are contributing a programming environment for the ExaNoDe prototype.
          

Austrian Supercomputer Officially Begins Operation

Austria's most powerful computer officially began operation on Monday: with a computing performance of 2.7 petaflops, the "Vienna Scientific Cluster 4" (VSC-4) is four times as powerful as its predecessor, the VSC-3. The eight-million-euro supercomputer is a joint project of five universities and is available for scientific computations.
          

Intel Wins IO500 10-node Challenge with DAOS

In this video from SC19, Kelsey Prantis from Intel describes how the DAOS parallel file system won the IO500 10-node Challenge with Intel Optane DC persistent memory. As an all-new parallel file system, DAOS will be a key component of the upcoming Aurora supercomputer coming to Argonne National Laboratory in 2021. "Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI."

The post Intel Wins IO500 10-node Challenge with DAOS appeared first on insideHPC.


          

Fujitsu Begins Shipping Fugaku Supercomputer

Today Fujitsu announced that the company has commenced shipping the supercomputer Fugaku. Jointly developed with RIKEN, Fugaku is slated to start general operation between 2021 and 2022. "Fugaku will be comprised of over 150,000 Arm-based Fujitsu A64FX CPUs with a proprietary 'Tofu' interconnect. The system was developed with the aim of achieving up to 100 times the application performance of the K computer with approximately three times the power consumption."
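
Taken together, those two targets imply a large gain in energy efficiency as well; a trivial check of the arithmetic, using only the 100x and roughly 3x figures quoted above:

```python
# Implied efficiency gain of Fugaku over the K computer, from the quoted targets.
performance_gain = 100   # up to 100x the application performance
power_increase = 3       # approximately 3x the power consumption

efficiency_gain = performance_gain / power_increase
print(round(efficiency_gain, 1))   # ~33x more application performance per watt
```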

The post Fujitsu Begins Shipping Fugaku Supercomputer appeared first on insideHPC.


          

Job of the Week: Systems Administrators for Servers, Clusters and Supercomputers at D.E. Shaw Research


D.E. Shaw Research is seeking Systems Administrators for Servers, Clusters and Supercomputers in our Job of the Week. "Our research effort is aimed at achieving major scientific advances in the field of biochemistry and fundamentally transforming the process of drug discovery. Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group."

The post Job of the Week: Systems Administrators for Servers, Clusters and Supercomputers at D.E. Shaw Research appeared first on insideHPC.


          

Dell Introduces Solutions to Advance High Performance Computing and AI Innovation


According to a recent press release, “At Supercomputing 2019, Dell Technologies is introducing several new solutions, reference architectures and portfolio advancements all designed to simplify and accelerate customers’ high performance computing (HPC) and artificial intelligence (AI) efforts. Continued adoption of AI to solve real-world problems has spurred growth across the HPC industry. According to a […]
The post Dell Introduces Solutions to Advance High Performance Computing and AI Innovation appeared first on DATAVERSITY.


          

Fujitsu and RIKEN Take First Place on the Green500 with Their Fugaku Supercomputer Prototype

It demonstrates the highest energy-efficiency performance in the world.
          

Agreement Signed for Installation of Supercomputer

The European High Performance Computing Joint Undertaking has signed contracts for the installation of future European supercomputers. One of them (Deucalion) will strengthen Portugal's capacity in this area. Deucalion will be installed by the end of 2020 and will be able to execute 10 thousand...
          

Cristina Romera, ComFuturo Researcher, Wins the L’Oréal-UNESCO For Women in Science Award

The L’Oréal-UNESCO For Women in Science programme this morning awarded its five annual prizes to projects carried out by women under 40 who are researching some of humanity's great scientific challenges. The projects were chosen for their innovative character, their impact and their scientific contribution, in fields such as biomedicine, biotechnology, computational biology, plant genomics and marine science, among others. "The world needs science, and science needs women who, like these researchers, help advance the resolution of the countless challenges facing the world. Once again this year the winners demonstrate the indisputable quality of science in our country, and in particular of the science done by women," said Juan Alonso de Lomas, president of L’Oréal España, during the award ceremony.

One of the great challenges recognised in the 14th edition of the Research Awards is the work of Cristina Romera, a ComFuturo researcher of the FGCSIC at the CSIC's Institute of Marine Sciences (Barcelona), who studies new ways in which marine plastic degrades. Her aim is to analyse the environmental conditions that favour the migration of organic compounds from microplastics dumped into the sea, in order to understand their effects on marine microorganisms and to discover which bacteria degrade the carbon released by the plastic.

Alongside Cristina Romera, biomedicine researcher Marta Melé of the Barcelona Supercomputing Center (Barcelona) was also honoured; the third prize went to the Centro Nacional de Investigaciones Cardiovasculares (CNIC, Madrid) for the research of Sara Cogliati. The project of Patricia Fernández Calvo at the Centro de Biotecnología y Genómica de Plantas of the Universidad Politécnica de Madrid was another of the winners. The final great challenge recognised is the research that Verónica Torrano is carrying out in the Department of Biochemistry and Molecular Biology of the Universidad del País Vasco.

All the winning projects were chosen by a jury composed of María Blasco, director of the Centro Nacional de Investigaciones Oncológicas (CNIO); María Vallet-Regí, professor of Inorganic Chemistry in the Faculty of Pharmacy of the Universidad Complutense de Madrid; Rafael Garesse, rector of the Universidad Autónoma de Madrid; and Francis Mójica, microbiologist and associate professor of Physiology, Genetics and Microbiology at the Universidad de Alicante.

 

 



