NERSC Helps BELLA Team Take Another Step Towards Tabletop Accelerators

Berkeley Lab scientists have created the first-ever, two-stage laser-plasma accelerator powered by independent laser pulses. This achievement was made possible in part by simulations run at DOE’s National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab. 

In an experiment packed with scientific firsts, Wim Leemans and his BELLA Center colleagues have now demonstrated that a laser pulse can accelerate an electron beam and couple it to a second laser plasma accelerator, where another laser pulse accelerates the beam to higher energy—a fundamental breakthrough in advanced accelerator science. The results are reported in the Feb. 1 issue of Nature.

Running on a Cray supercomputer at NERSC, the highly efficient INF&RNO code for modeling laser-plasma interactions could turn a day’s experimental data into a simulation almost overnight, like “dailies” on a movie set. Among many other questions, the simulations let the team explore the intricacies of laser timing and model how to focus the energetic but ragged beam from the gas jet, even as the serendipitous discovery of how to actually do that focusing was becoming a reality.

“Through matching to the experimental observations, simulation can see everything,” says Carlo Benedetti of the BELLA Center’s simulation team, who led development of INF&RNO. “We can see how the laser beam is behaving and understand which electrons are the ones being accelerated.”

Reminder: CS Staff All-Hands Meeting 10 a.m., Feb. 4

Computing Sciences Associate Lab Director Kathy Yelick invites all staff to an all-hands meeting at 10 a.m. Thursday, Feb. 4. The meeting will be held in the Bldg. 50 auditorium. Topics on the agenda include the CS Strategic Plan, News from Washington, Computing Sciences’ recent move into Wang Hall, and Lab Directed Research and Development awards.

Remote Registration Still Open for NERSC’s ‘Sold-out’ Advanced OpenMP Tutorial

The response to NERSC’s free, one-day training on advanced OpenMP has been overwhelming. Onsite seats are no longer available for the Thursday, Feb. 4 training by some of the field’s leading experts, but remote attendance is still available for the all-day event.

Helen He and Alice Koniges, both members of the OpenMP Language Committee, say that NERSC is hosting this advanced workshop because OpenMP is critical to scientists preparing to run their codes on NERSC’s manycore system, Cori, and others like it. “Most applications can run on systems like Cori with simple porting, but to get high performance, scaling and portability, Hybrid MPI/OpenMP is the recommended programming model,” said He.
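
For readers unfamiliar with the model He describes, the sketch below shows the basic shape of a hybrid MPI/OpenMP program: MPI ranks handle communication between nodes, while each rank spawns an OpenMP thread team for on-node parallelism. It is a minimal, illustrative example written for this newsletter, not material from the tutorial itself.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request FUNNELED support: only the main thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each MPI rank opens an OpenMP parallel region to share node-local work. */
    #pragma omp parallel
    printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
           rank, nranks, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}

On a Cray system such as Cori, a program like this would typically be built with the cc compiler wrapper plus the compiler’s OpenMP flag and launched with srun, with OMP_NUM_THREADS controlling the threads per rank; the exact flags depend on the compiler environment.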

Intel’s Michael Klemm and LLNL’s Bronis R. de Supinski, who also chairs the OpenMP Language Committee, will teach the course, which features hands-on work using the NERSC systems Cori and Babbage.

In August 2015, Ruud van der Pas, coauthor of the book Using OpenMP, also taught a basic OpenMP tutorial at NERSC. “After inviting him to give an OpenMP tutorial at HPCS 2015, I also asked him to come to NERSC,” said He. “Both these beginning and advanced tutorials allow our users to enjoy excellent training without the costs of attending conferences,” Koniges said.

Today: Berkeley Institute for Data Sciences’ Open House

Today at 3 p.m., the Berkeley Institute for Data Science (BIDS) is holding an open house and ice cream social to introduce BIDS to the campus community. Whether you are already part of the BIDS community or completely unfamiliar with its work, you are invited to learn more about what BIDS does, catch up with fellows and staff, and meet other data science enthusiasts. The social will be held on the UC Berkeley campus in room 190 of the Doe Library.

This Week’s CS Seminars

 

NERSC Brownbag
LBL Before NERSC: Supercomputing in the 1970s and ’80s

Wednesday, Feb. 3, 12–1pm, Wang Hall – Bldg 59, Rm 3101
Moderator: Richard Friedman; Panelists: Jeremy Knight, Marty Itzkowitz, Bill Gage, Bill Benson, William Johnston, Nancy Johnston and Brian Higgins

The opening of Wang Hall at LBL actually marks the return of supercomputing to the hill. Back in the late 1960s, Building 50B housed one of the world’s great high-performance computer centers of its day, supported by the AEC and later the DOE. By the ’70s and early ’80s, the LBL computer center, with its Control Data 6600s and 7600, nascent ARPANET node, and homebrew BKY operating system, won praise from researchers and the NSF for its facilities and staff. Similar centers were at NYU, CERN, NCAR, LANL, and LLNL. At this brown bag session, some of the LBL system group members from that era will join in conversation to talk about what the world of supercomputing at LBL was like almost 50 years ago — a world without UNIX, C, Java, and laptops — and describe how the BKY multiprocessing system was developed. Look for a very lively discussion and a reunion of sorts of the people who did pioneering work at LBL.

CITRIS Research Exchange
Towards an Inclusive and Participatory Design Process

Wednesday, Feb. 3, 12–1pm, Banatao Auditorium, Sutardja Dai Hall
Elizabeth Goodman, 18F

Elizabeth Goodman is a design researcher at 18F, the General Services Administration’s bid to modernize the government’s delivery of digital services. She studies how people live in order to design and develop tools that support them. Over the course of a project, she might translate relevant scholarly work into action; break down large, complicated questions into smaller projects to prioritize them; map the cultural and social landscapes that new products and services will inhabit; and facilitate the participation of future users and other project constituencies in decision-making.

Applied Math Seminar
Homogenization of Thermo-mechanical Continua Using Extensive Physical Quantities: Theory and Simulation

Wednesday, Feb. 3, 3:30–4:30pm, 939 Evans Hall, UC Berkeley
Kranthi Mandadapu, UC Berkeley

The macroscopic thermomechanical behavior of heterogeneous media may depend strongly on their microstructure. For example, the macroscopic behavior of polycrystals depends on the size and orientation of the underlying single crystals. Another situation is the complex rheological behavior of vesicle suspensions. In such cases, the constitutive behavior of the microstructure needs to be considered when modeling the bulk material response.

In this talk, I will describe a homogenization method to connect the microscopic and macroscopic scales based on extensive physical quantities assuming that both scales can be modeled using continuum mechanics. The method extends to the continuum-on-continuum setting the celebrated approach of Irving & Kirkwood, which forms the basis for upscaling atomistic variables of classical statistical mechanics, such as position, momentum and interatomic forces to continuum variables, such as stress and heat flux. An application of this extended method will be explored within the context of finite element-based homogenization of solids in quasi-static conditions. Finally, I will discuss the microscale inertial or dynamical effects on the macroscale behavior for a 1-dimensional elastic-layered medium.
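
As a rough illustration of the kind of relation such homogenization methods produce (a schematic form, not necessarily the speaker’s exact formulation), macroscopic fields are obtained as averages of their microscopic counterparts over a representative volume V(X) attached to each macroscopic point X, written here in LaTeX notation:

\bar{\sigma}(X) \;=\; \frac{1}{|V(X)|}\int_{V(X)} \sigma(x)\,\mathrm{d}v,
\qquad
\bar{q}(X) \;=\; \frac{1}{|V(X)|}\int_{V(X)} q(x)\,\mathrm{d}v,

where \sigma and q are the microscale stress and heat flux and the barred quantities are their macroscale analogues.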

CAMERA Seminar
How Should We Acquire and Interpret Data in X-ray Tomography?

Thursday, Feb. 4, 9–10am, Wang Hall – Bldg 59, Rm 4102
Doga Gursoy, Argonne National Laboratory

Tomography is a broad name for describing the process of reconstructing the interior of an object from multiple measurements taken from the outside. In this talk, after briefly explaining the data formation process of tomography and the mathematical formalism behind it, I’ll mainly focus on how we can change the way we collect data to obtain the most useful information from our samples under limited experimental conditions constrained by time and radiation dose. I’ll compare the traditional and the contemporary approaches for various tomographic reconstruction methods, and discuss how these new approaches can be adopted for imaging of dynamical systems where the sample can evolve in time.
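
For readers new to the topic, the data-formation process Gursoy refers to is, in its simplest parallel-beam form, the line-integral (Radon transform) model, given here as a generic sketch rather than in the speaker’s own notation:

p_{\theta}(t) \;=\; \int_{\mathbb{R}^2} f(x,y)\,\delta(x\cos\theta + y\sin\theta - t)\,\mathrm{d}x\,\mathrm{d}y,

where f is the unknown interior of the object, \theta is the projection angle, and p_{\theta}(t) is the measured projection; reconstruction amounts to inverting this map from a limited set of angles and doses.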

CAMERA Seminar
Recent Advances in Filter-based Tomographic Reconstruction Methods

Friday, Feb. 5, 9–10am, Wang Hall – Bldg 59, Rm 4102
Daniël Pelt, Centrum Wiskunde en Informatica, Amsterdam, The Netherlands

Various advanced tomographic reconstruction methods are available to improve reconstruction quality in the case of incomplete or noisy projection data. Most of these methods, however, are computationally expensive and difficult to implement, and are therefore not used routinely at experimental facilities. Filtered backprojection, on the other hand, is fast, easy to implement, robust, and very popular, but its reconstruction quality degrades if the data have a low signal-to-noise ratio or if only a small number of projections are available. We have recently developed a range of new reconstruction methods that improve the quality of filtered backprojection by changing the convolution filter. Different approaches to changing the filter can be used, each with its own advantages and disadvantages. The reconstruction quality of these new methods is often on par with that of slower, more advanced reconstruction methods, but because they are based on filtered backprojection, existing efficient implementations at experimental facilities can be used to deploy them with minimal effort.

Bio: Daniël M. Pelt received the M.Sc. degree in mathematics from the University of Utrecht, The Netherlands, in 2010. He is currently pursuing the Ph.D. degree at Centrum Wiskunde en Informatica, Amsterdam, The Netherlands, focusing on filter-based reconstruction algorithms for limited-data tomography problems. He is also involved in the development of the ASTRA toolbox, an open-source toolbox for tomographic reconstruction.
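
As background for the filter-based methods described above (a textbook sketch, not a statement of Pelt’s specific algorithms), standard parallel-beam filtered backprojection reconstructs an image f from projections p_{\theta} by convolving each projection with a filter h (classically the ramp filter) and smearing the result back over the image:

f(x,y) \;\approx\; \int_0^{\pi} (p_{\theta} * h)\,(x\cos\theta + y\sin\theta)\,\mathrm{d}\theta.

The methods in the talk keep this fast structure but choose the convolution filter h differently, which is why existing filtered backprojection implementations can run them with little modification.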

BIDS Data Science Lecture Series
Great Exploitations: Data Mining, Legal Modernization, and the NSA

Friday, Feb. 5, 1:10–2:30pm, 190 Doe Library, UC Berkeley
Matthew L. Jones, Columbia University

We cannot understand the programs revealed by Edward Snowden and other whistleblowers without understanding a broader set of historical developments before and after 9/11. With the growing spread of computation into everyday transactions from the 1960s into the 1990s, corporations and governments collected exponentially more information about consumers and citizens. To contend with this deluge of data, computer scientists, mathematicians, and business analysts created new fields of computational analysis, colloquially called “data mining,” designed to produce knowledge or intelligence from vast volumes of data.

Facing the growth of the Internet and the increasing availability of high-quality cryptography, national security lawyers within the Department of Justice and the National Security Agency (NSA) began developing what was called a “modernization” of surveillance and intelligence law to deal with technological developments. In addition, in the Clinton era, concerns about terrorist attacks on the United States came to focus heavily on the need to defend computer systems and networks. Protecting the “critical infrastructure” of the United States, the argument ran, required new domestic surveillance to find insecurities, and it opened the door to much greater Department of Defense capability domestically and to new NSA responsibilities.

Tools for assessing domestic vulnerabilities lent themselves easily to discerning—and exploiting—foreign ones, and traditions of acquiring and exploiting any foreign sources of communication prompted the NSA to develop ever more invasive ways of hacking into computers and networks worldwide. The job of the NSA is just “to exploit” communications networks—to make them available to policymakers; to do this, its lawyers “exploited” the law as well as technology. “Great Exploitations” tells a history of how we came to exploit communications, law, bureaucracy, and the fear of terrorism, and how we might choose to do so differently.