At MIT’s Lincoln Laboratory, a new supercomputer has arrived. TX-GAIN, capable of two AI-exaflops, fuses more than six hundred GPUs into a single, coordinated pulse of processing power. Built not just to calculate but to comprehend, it promises to accelerate breakthroughs in medicine, climate modeling, and beyond.
The new TX-Generative AI Next (TX-GAIN) system has come online: a supercomputer capable of the equivalent of two quintillion operations every second! That’s enough to make yesterday’s loading bar feel like ancient history.
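For a rough sense of scale, here is the back-of-the-envelope arithmetic behind that number. The laptop figure below is an assumed round number for illustration, not a benchmark of any particular machine:

```python
# Back-of-the-envelope comparison (illustrative assumptions only): how long a
# typical laptop would need to match one second of TX-GAIN's throughput.
TX_GAIN_OPS_PER_SEC = 2e18   # two AI-exaflops, per the article
LAPTOP_OPS_PER_SEC = 1e12    # assumed ~1 teraflop consumer laptop (rough guess)

seconds = TX_GAIN_OPS_PER_SEC / LAPTOP_OPS_PER_SEC
print(f"{seconds:.0f} seconds, roughly {seconds / 86_400:.0f} days")  # about 23 days
```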
To put this into context: Japan’s Fugaku claimed the title of world’s fastest supercomputer in 2020. NVIDIA’s DGX GH200, launched in 2023, then redrew the AI map by linking 256 Grace Hopper chips into a unified architecture. Now, just two years later, TX-GAIN pushes that curve again. Progress that once took decades now happens between coffee refills, and TX-GAIN is the hardware that makes the leap visible.
Built from more than six hundred NVIDIA accelerators, TX-GAIN isn’t designed for raw throughput alone. It’s meant to help researchers model complex systems, from weather and materials to disease and defense, in ways ordinary machines can’t match.
"TX-GAIN will enable our researchers to achieve scientific and engineering breakthroughs," says Jeremy Kepner, head of the Lincoln Laboratory Supercomputing Center (LLSC). "The system will play a large role in supporting generative AI, physical simulation, and data analysis across all research areas.2
Running advanced generative models, TX-GAIN enables systems that don’t just recognize patterns but create them. Instead of simply labeling a photo as a dog or a cat, researchers can now train models that compose music, draft text, or even invent new molecules.
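The distinction is easiest to see in miniature. The toy sketch below (not Lincoln Laboratory code, and a Markov chain rather than a real generative model) contrasts the two ideas: a classifier maps input to a fixed label, while a generative model learns statistics from data and then samples something new:

```python
# Toy illustration: recognizing a pattern vs. generating one.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the log"

# Discriminative view: map an input to a fixed label.
def classify(word: str) -> str:
    return "animal" if word in {"cat", "dog"} else "other"

# Generative view: learn word-to-word transitions, then sample new text.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(classify("cat"))   # recognizes a pattern: "animal"
print(generate("the"))   # creates one, e.g. "the dog sat on the mat ..."
```

Swap the word-transition table for billions of learned parameters and the same idea scales up to models that draft text or propose candidate molecules.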
Lincoln Lab teams are currently using that creative power to fill gaps in weather data, spot anomalies in network traffic, and design new materials and medicines by simulating chemical interactions that would otherwise take months in a physical lab.
"TX-GAIN is allowing us to model not only significantly more protein interactions than ever before, but also much larger proteins with more atoms," says Rafael Jaimes, of the lab’s Counter–Weapons of Mass Destruction Systems Group. "This new computational capability is a game-changer for protein characterization efforts in biological defense."
What once demanded a team fluent in arcane code is now within reach for anyone with a research question. Thanks to TX-GAIN’s interactive interface, scientists can run enormous models with the ease of opening a laptop.
"The LLSC has always tried to make supercomputing feel like working on your laptop," Kepner says. "With our user-friendly approach, people can run their model and get answers quickly from their workspace."
And the impact isn’t confined to the lab’s walls. TX-GAIN is already powering collaborations across the MIT ecosystem, from the Haystack Observatory and Center for Quantum Engineering to the Department of the Air Force–MIT AI Accelerator.
Even the facility is part of the experiment: TX-GAIN runs in an energy-efficient data center in Holyoke, Massachusetts, where new software has cut AI training power use by up to 80%.
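The article doesn’t detail how that software works, so the sketch below is only a generic illustration of the measurement side of the problem: estimating a GPU’s energy use during training by sampling its power draw with NVIDIA’s nvidia-smi tool. It assumes an NVIDIA GPU and nvidia-smi on the PATH, and is not the LLSC’s tooling:

```python
# Hypothetical illustration: estimate GPU energy use by sampling power draw.
import subprocess
import time

def read_power_watts(gpu_index: int = 0) -> float:
    """Return the instantaneous power draw of one GPU, in watts."""
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def monitor(duration_s: float = 60.0, interval_s: float = 1.0) -> float:
    """Integrate sampled power over time to estimate energy in joules."""
    energy_j = 0.0
    start = time.time()
    while time.time() - start < duration_s:
        energy_j += read_power_watts() * interval_s
        time.sleep(interval_s)
    return energy_j

if __name__ == "__main__":
    joules = monitor(duration_s=10.0)
    print(f"Estimated energy: {joules / 3600:.3f} Wh")
```

Monitoring like this is what makes savings measurable in the first place; the reported 80% reduction refers to the LLSC’s own software and methods.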
"The LLSC provides the capabilities needed to do leading-edge research, while in a cost-effective and energy-efficient manner," Kepner adds.
From the transistor pioneers of the 1950s to the dense GPU networks of today, computing’s pace is climbing an exponential curve. TX-GAIN marks the next step in that continuum.