Updated Workflows Maximize Science at New LHC

After a massive upgrade, the Large Hadron Collider (LHC), the world’s most powerful particle collider, can now produce up to 1 billion collisions and generate up to 10 gigabytes of data in its quest to push the boundaries of known physics. What’s more, LHC operators project a ten-fold increase over the next decade.

To deal with the new data deluge, researchers working on one of the LHC’s largest experiments—ATLAS—are relying on updated workflow management tools developed primarily by a group of researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab). CRD’s Paolo Calafiura and Vakhtang Tsulaia of Berkeley Lab’s ATLAS Software group lead the development of new software tools that speed up the analysis of ATLAS data by leveraging the capabilities of next-generation Department of Energy (DOE) supercomputers like the National Energy Research Scientific Computing Center’s (NERSC’s) Cori system, as well as DOE’s current Leadership Computing Facilities. »Read more about the LHC’s updated workflows.

DOE Announces First ‘HPC for Manufacturing’ Industry Partnerships

The U.S. Department of Energy (DOE) recently announced $3 million for 10 new projects that will enable private-sector companies to use high-performance computing resources at DOE’s national laboratories to tackle major manufacturing challenges. The awards are part of DOE’s High-Performance Computing for Manufacturing (HPC4Mfg) program, which was rolled out last year. Berkeley Lab will work with GLOBALFOUNDRIES to optimize the design of transistors under a project titled “Computation Design and Optimization of Ultra-Low Power Device Architectures.” Other new HPC4Mfg projects range from improving turbine blades in aircraft engines and cutting heat loss in electronics to reducing waste in paper manufacturing and improving fiberglass production. Each project will receive approximately $300,000 to fund national lab researchers, who will partner closely with the chosen company to provide expertise and access to HPC systems aimed at high-impact challenges.

Music Urges Action on Climate Change

Not many scientists can boast that their data is so beautiful it’s been set to music, but long-time NERSC user and CRD collaborator Bill Collins can. Collins is one of the scientists, artists and musicians behind the Climate Music Project, a music-and-video piece about the causes and effects of climate change that was performed for the second time on Friday at the Chabot Space and Science Center in Oakland. KQED’s Cy Musiker wrote about the genesis and purpose of the project.

New Video Shows How Lab’s National User Facilities Work Together to Power Science

A new video produced by the Joint Genome Institute answers the question “What is a National User Facility?” using as an example how JGI, NERSC and ESnet work together to advance the frontiers of science. »Watch “National User Facilities: Empowering the Worldwide Research Ecosystem.”

This Week’s CS Seminars

CITRIS Research Exchange
Web of Systems with Florian Michahelles

Wednesday, Feb. 24, 12 to 1pm, Banatao Auditorium in Sutardja Dai Hall, UC Berkeley campus
Florian Michahelles, Siemens Web of Things

Imagine that every sensor and every machine has its own IP address. This opens formidable opportunities, for instance, in manufacturing, building technologies, energy management and healthcare. To realize this vision, machines have to speak the same language. This talk will introduce a machine-centric interpretation of the Internet of Things: in a Web of Systems, sensor nodes and machines will decide for themselves where to process data and with whom to share it. The Web of Systems is just about to move from the lab into pilot projects. This talk will present some of the recent outcomes of Siemens’ lab in Berkeley in collaboration with UC Berkeley. »Free registration required for lunch. »Live webcast available.

Applied Math Seminar
Asynchronous Parallel Computing in Signal Processing and Machine Learning

Wednesday, Feb. 24, 3:30 to 4:30pm, 939 Evans Hall, UC Berkeley Campus
Wotao Yin, University of California, Los Angeles

The performance of each CPU core stopped improving around 2005. Moore’s law, however, continues to apply, not to single-thread performance but to the number of cores in each computer. Today, workstations with 64 cores, graphics cards with thousands of GPU cores, and cellphones with eight cores are sold at affordable prices. To benefit from this multi-core Moore’s law, we must parallelize our algorithms. We study asynchronous parallel computing at a high level of abstraction: finding a fixed point of a nonexpansive operator. This problem underlies many models in numerical linear algebra, optimization, and other areas of scientific computing. To solve it, we propose ARock, an asynchronous parallel algorithmic framework in which a set of agents (machines, processors, or cores) update randomly selected coordinates of the unknown variable in an asynchronous parallel fashion. As special cases of ARock, novel algorithms for systems of linear equations, machine learning, and distributed and decentralized optimization are introduced. We show that if the nonexpansive operator has a fixed point, then with probability one the sequence of points generated by ARock converges to a fixed point. Convergence rates are also given. Very encouraging numerical performance of ARock is observed on sparse logistic regression and other large-scale problems. This is joint work with Zhimin Peng (UCLA), Yangyang Xu (IMA), and Ming Yan (Michigan State).
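As a rough illustration of the style of update loop the abstract describes (not code from the talk, and all names, sizes, and parameters here are illustrative), the following Python sketch has several threads asynchronously apply randomly selected coordinate updates of the operator T(x) = x − α(Ax − b). For a symmetric positive definite A and a small enough step α, the fixed point of T solves Ax = b.

```python
import threading
import numpy as np

# Build a well-conditioned symmetric positive definite system A x = b.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD by construction
b = rng.standard_normal(n)
alpha = 1.0 / np.linalg.norm(A, 2)   # step size <= 1/||A|| keeps updates stable

x = np.zeros(n)                       # shared iterate, updated without locks

def worker(seed, num_updates):
    # Each agent picks a random coordinate and applies one coordinate of
    # T(x) = x - alpha * (A x - b), reading a possibly stale snapshot of x.
    local_rng = np.random.default_rng(seed)
    for _ in range(num_updates):
        i = local_rng.integers(n)
        x[i] -= alpha * (A[i] @ x - b[i])

threads = [threading.Thread(target=worker, args=(s, 20000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("residual norm:", np.linalg.norm(A @ x - b))
```

The agents never synchronize between updates; the point of the ARock analysis is that convergence survives such stale reads, which this toy loop only mimics at small scale.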

CS Luis Alvarez Fellowship Seminar
The Next Generation of Brain-Computer Interfaces: Responding Implicitly to Users’ Cognitive State

Thursday, Feb. 25, 3 to 4pm, Bldg. 50B, Conf. Rm. 4205
Beste Filiz Yuksel, Luis Alvarez Fellowship Candidate, Tufts University

The human and the computer are both complex machines capable of sophisticated functions, yet there is a very narrow bandwidth of communication between them. A new generation of brain-computer interfaces (BCIs) is currently being developed that can increase this communication bandwidth by passively detecting users’ cognitive state and responding appropriately in real time. In my talk, I present two examples using a musical BCI. The first increases learning speed and accuracy in pianists by increasing task difficulty as cognitive workload decreases. The other aids users in the tricky task of creativity by adding and removing musical harmonies, based on cognitive workload, during musical improvisation. I will discuss the broader future implications of this next generation of user interfaces.

BIDs Data Science Lecture Series
Quantitative Finance: The Empirical Frontier

Friday, Feb. 26, 1:10 to 2:30pm, 190 Doe Library, UC Berkeley Campus
Jeffrey R. Bohn, State Street Global Exchange, GX Labs

The convergence of high-performance computing technologies, advances in data science, and increased data availability is materially impacting finance, as it is other domains. This discussion will survey the development of quantitative finance and the problem of characterizing asset returns and risks. In this context, we will explore the limitations of traditional mean-variance approaches and how to address them for multi-asset-class portfolios through factor modeling and stochastic simulations boosted by high-performance computing. We will introduce specific experimental architectures and approaches, as well as data visualization techniques, developed to provide a contemporary empirical framework for financial portfolio modeling.
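For readers unfamiliar with the "traditional mean-variance approach" the abstract refers to, the following minimal Python sketch (not from the talk; all numbers are illustrative) computes the classic unconstrained optimal weights, which are proportional to Σ⁻¹μ for estimated expected returns μ and covariance Σ.

```python
import numpy as np

# Toy estimates for a 5-asset portfolio (illustrative, not real data).
rng = np.random.default_rng(1)
n_assets = 5
mu = rng.uniform(0.02, 0.08, n_assets)                 # expected returns
F = rng.standard_normal((n_assets, n_assets))
Sigma = F @ F.T / n_assets + 0.01 * np.eye(n_assets)   # positive definite covariance

# Classic mean-variance step: weights proportional to Sigma^{-1} mu,
# normalized here to a fully invested portfolio.
raw = np.linalg.solve(Sigma, mu)
w = raw / raw.sum()

print("weights:", w)
```

These weights are notoriously sensitive to estimation error in μ and Σ, which is one of the limitations that motivates the factor models and simulation-based approaches mentioned above.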