Goodwill Computing Lab designs novel methods to improve the operational efficiency and cost-effectiveness of large-scale parallel computing systems and quantum computing systems.

The basic principle behind our research is very simple: hypothesize, measure, and validate. The three specific steps are: (1) pose simple, fundamental questions about a system's function and formulate hypotheses about how the system may be functioning, (2) conduct systematic experiments to learn new insights about the system's function or behavior, and (3) design new techniques that leverage these insights to solve an important problem, and validate the solution on real systems.

We develop new analytical models and tools, and devise novel techniques that improve the reliability, power efficiency, and resource utilization of large-scale data-centric systems. Our techniques and tools benefit many large-scale data-intensive applications that produce, analyze, and manage terabytes of data per day on large supercomputers. We also enthusiastically apply our expertise in resilience, high-performance computing, and data analytics to emerging interdisciplinary research domains.

Goodwill Computing Lab also focuses on preparing the next generation of students and educators to take advantage of parallel computing systems to solve problems of societal importance. We design and create new educational activities to train future HPC researchers.