By Steve Lunderberg and Steve Frandzel
Oregon State University’s College of Engineering is now home to some of the most powerful university computing resources in the world, following its acquisition of an advanced supercomputer cluster built with NVIDIA systems.
Scott Ashford, Kearney Dean of Engineering, says this strategic investment in high-performance computing will prove essential as the college advances its work in critical, rapidly developing areas such as robotics and driverless vehicles.
“The computing power we now possess will accelerate our research in artificial intelligence and machine learning, while exposing our computer science students to the most advanced technology available in higher education,” Ashford said. “We are committed to continuing to drive innovation in both research and education and will do so by providing faculty and students with the state-of-the-art capabilities needed.”
In the summer of 2019, the college invested $2.6 million in a high-performance artificial intelligence computing cluster built using six NVIDIA DGX-2 systems, which are in operation at the Kelley Engineering Center. NVIDIA DGX-2 is the world’s first artificial intelligence system to deliver 2 petaflops of performance — an astonishing 2 quadrillion floating point operations per second — in a single node. DGX-2 integrates 16 of the world’s most advanced data center accelerators, the NVIDIA V100 Tensor Core GPU with NVIDIA NVSwitch technology, to tackle the largest data sets and most complex AI models.
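The headline figure can be sanity-checked with simple arithmetic: NVIDIA's published peak tensor-core performance for a V100 is roughly 125 teraflops, and a DGX-2 pools 16 of them in one node. A minimal sketch (the per-GPU number is the published peak, not a measured value; real workloads achieve less):

```python
# Back-of-the-envelope check of the DGX-2's 2-petaflop claim.
# Assumes NVIDIA's published peak of ~125 TFLOPS of tensor-core
# performance per V100 GPU.
V100_TENSOR_TFLOPS = 125      # peak tensor performance per GPU
GPUS_PER_DGX2 = 16            # V100s integrated in a single DGX-2 node

node_tflops = V100_TENSOR_TFLOPS * GPUS_PER_DGX2
print(f"One DGX-2 node: {node_tflops / 1000} petaflops")      # 2.0 petaflops

# The college's cluster couples six such nodes.
cluster_tflops = node_tflops * 6
print(f"Six-node cluster: {cluster_tflops / 1000} petaflops")  # 12.0 petaflops
```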
NVIDIA Corporation, a world leader in computing systems for AI and machine learning, was founded by Oregon State alumnus Jensen Huang (’84 B.S., Electrical Engineering).
The new infrastructure will give computer science students access to top-tier technology. Beyond AI and machine learning, the high-performance computing made possible by this graphics processing power will enhance the college’s work in parallel programming and medical imaging, among other fields of study, Ashford added.
“Researchers doing groundbreaking science demand the right instruments,” said Huang, NVIDIA’s chief executive officer. “For AI researchers, that instrument is a graphics processing supercomputer. Oregon State’s investment reflects the university’s commitment to its researchers and seriousness to lead in AI research.”

Mike Bailey, professor of computer science, says the extraordinary computational power of the new cluster will speed up graphics rendering times dramatically.
“Instead of taking several hours to produce stunning, realistic images, it’s going to take seconds, or, at most, a couple of minutes,” said Bailey, whose expertise includes high-performance computer graphics, GPU programming, and scientific visualization.
“That extra time will allow our students to explore more complex and sophisticated rendering methods.”
Bailey also sees his work benefiting Oregon State’s Carlson College of Veterinary Medicine by enhancing its MRI capabilities. An MRI scanner produces two-dimensional image slices, which Bailey’s students combine into 3D volumes to help researchers better visualize an animal’s internal anatomy. “The new cluster will help to make that, and other sophisticated volume data analyses, possible,” he said.
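The slice-to-volume step Bailey describes can be sketched with NumPy; the array shapes and random data below are illustrative assumptions, not the lab's actual pipeline (real slices would be read from scanner files):

```python
import numpy as np

# Hypothetical stack of 2D MRI slices: 64 slices of 256x256 pixels.
# Random data stands in for real scanner output.
rng = np.random.default_rng(0)
slices = [rng.random((256, 256), dtype=np.float32) for _ in range(64)]

# Stacking the slices along a new axis yields a 3D volume that
# visualization tools can render with volume-rendering techniques.
volume = np.stack(slices, axis=0)
print(volume.shape)   # (64, 256, 256)

# A single voxel is indexed by (slice, row, column) within the volume.
center_voxel = volume[32, 128, 128]
print(float(center_voxel))
```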
The speed of the NVIDIA cluster will allow Cindy Grimm, associate professor of mechanical engineering, to significantly accelerate the machine learning simulations needed to train robots to detect, grasp, and manipulate objects. Robotic hands are still notoriously bad at mundane tasks that humans perform easily, like picking up a cup of coffee, or sensing how full the cup is and adjusting their movements accordingly.
“These supercomputing clusters will enable us to run millions of simulated machine learning iterations instead of just a handful of low-repetition trials, so we’ll be better able to virtually re-create the multitude of variables that a robotic hand will encounter when it needs to pick up an object,” Grimm said. “The more repetitions you can manage, the better machine learning tends to work.”
Photo credit: Gale Sumida, NVIDIA