Jim Fischer, electrical engineering ’74, was a key player in several groundbreaking initiatives during his 40-year career at the NASA Goddard Space Flight Center, including managing the team that designed and built a revolutionary cluster computing prototype — the foundational architecture for most of the world’s top 500 supercomputers and today’s cloud computing.
Known as the Beowulf Computing Cluster, the project was inducted into the Space Technology Hall of Fame in April 2022 in Colorado Springs, Colorado. Fischer accepted the honor alongside Thomas Sterling, the “father of Beowulf” and a professor at Indiana University.
The Beowulf cluster relies on commodity computer hardware, Ethernet and open-source Linux software to minimize cost. Beowulf systems are the basis for supercomputers that process immense amounts of data and run models that lead to scientific breakthroughs across many fields.
“The usefulness of this system was so obvious that there was just a natural inclination to double it, and double it again, to see what you could do,” Fischer said. “Whenever I hear about science advancement enabled by high-end computing, in biological sciences, geosciences, environmental sciences or mathematical and physical sciences, I see Beowulf as part of that work.”
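The “double it, and double it again” idea Fischer describes is, at heart, data parallelism: split a large job into independent pieces, compute them on many inexpensive processors at once, and combine the results. A real Beowulf cluster does this across networked nodes, typically coordinating with message-passing software. As a loose, single-machine sketch of the same pattern (the function names here are illustrative, not from the original project), Python’s worker processes can stand in for the cluster’s nodes:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" computes its share of the work independently.
    return sum(x * x for x in chunk)

def cluster_sum_of_squares(n, workers=4):
    # Scatter: split the problem into roughly equal chunks, one per worker,
    # mirroring how a Beowulf cluster distributes work across nodes.
    data = list(range(n))
    size = (n + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, n, size)]
    with Pool(workers) as pool:
        # Gather: collect the partial results and combine them.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(cluster_sum_of_squares(1000))  # → 332833500, same as the serial sum
```

On an actual Beowulf system, each chunk would travel over Ethernet to a separate commodity node rather than to a local worker process, but the scatter–compute–gather shape of the computation is the same.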
The ‘tortoise’
A native of Concord, North Carolina, Fischer first connected with NASA in 1971 through a co-op internship during his sophomore year at NC State. He joined NASA Goddard full-time in 1974 after graduation. He remains grateful to John Hamme, director of NC State’s Cooperative Education Program, who placed him with NASA.
At NASA, Fischer thrived on taking difficult tasks and seeing them to completion. In his first 10 years as an electrical engineer at Goddard, he was part of the seven-person team that developed the Massively Parallel Processor (MPP). By 1983, he had given up his technical work and become the team’s manager because no one else wanted to lead.
“In the tortoise and the hare story, I’m the tortoise,” he said. “I’m methodical, not competitive.”
The MPP exceeded the capabilities of other supercomputers at the time and drew the attention of scientists across the U.S. In 1985, Fischer prepared the MPP for remote operational use by dozens of scientific teams who applied it to numerical simulations of complex physical and biological processes, generation of interactive visual displays, satellite image analysis and knowledge-based systems.
“The MPP went from being a curiosity to being seen as a solution,” Fischer said. It is now on display under the wing of the Space Shuttle at the Steven F. Udvar-Hazy Center, part of the National Air and Space Museum.
The Beowulf revolution
Fischer’s success with the MPP laid the groundwork for his leadership of the Beowulf project. By 1992, he was managing the Earth and Space Sciences (ESS) Project, which funded Grand Challenge Investigator Teams to adapt their important scientific applications — in Earth, space and planetary science as well as astrophysics — to run on high-end parallel computing testbeds.
The Goddard team he assembled included computer scientists, modelers and Thomas Sterling as system evaluator. Fischer set a goal to develop a workstation that sustained one gigaflop (one billion floating-point operations per second) at a cost of $50,000 to complement the large parallel computers they were using — and Sterling came up with a novel solution, which became Beowulf.
“When Thomas brought the idea to me in 1993, I could see immediately that each system would have the lowest possible cost because all the hardware was commodity, and the software was open source Linux,” Fischer said. “The low cost would allow multiple systems, then impossible. Having lots of similar systems would enable many software developers.”
Over the next four years, Fischer removed obstacles and brought visibility to the work. The Goddard team augmented Linux to support parallel execution, tested it with existing applications, and then told the world how to build a Beowulf. Soon ESS Investigator Teams were building their own Beowulf clusters and using them for scientific research.
Within just a few years, Beowulf became widely adopted and dramatically reduced the cost of high-end computing.
A lasting impact
Between 1992 and 2005, Fischer’s ESS Project team funded, guided and supported several dozen Grand Challenge Investigator Teams to migrate their applications to high-end parallel computers. One that has had a lasting impact is the Earth System Modeling Framework (ESMF).
ESMF is an integrated model framework coupling previously incompatible atmospheric, ocean, land surface and sea ice components on a global scale. In 1999, at least seven government agencies and academic institutions had 10 different modeling systems with components that couldn’t be interchanged. To remedy that, the National Center for Atmospheric Research (NCAR) collaborated with the Massachusetts Institute of Technology and NASA Goddard to develop ESMF.
Today, most of the climate and weather modeling systems in the U.S. use ESMF, including NASA’s Goddard Earth Observing System models, NCAR’s Community Earth System Model, NOAA’s Unified Forecast System and Navy models, to name a few. ESMF continues to evolve in response to the changing computational environment and new model requirements.
During Fischer’s career, NASA’s operational supercomputing capacity increased by more than eight orders of magnitude, most recently with evolved Beowulf systems.
“Computation has become the third pillar of science, a new method of discovery,” Fischer said. “The best discoveries are yet to come.”