Inquiry & Impact

The ‘atomic bomb’ of AI-driven science

Agentic AI panel discussion at the Science Center on Apr. 16, 2026. From left to right, David Parkes, Rodrigo Córdova Rosado, Doug Finkbeiner, Matt Schwartz, Michael Brenner and Chris Stubbs. Photo by Jodi Hilton for Harvard Faculty of Arts and Sciences

Harvard scientists describe promise and peril of accelerating technologies

Kermit Pattison

With artificial intelligence, a theoretical physicist completed a research paper in only two weeks — a project that otherwise would have taken him about four months, or a graduate student up to two years.

Another scholar used an AI model to scrape a dictionary of a rare Native American language — and within hours built an engine that could perform instant translations.

Other researchers have employed AI to speed up the scanning of brain microanatomy sevenfold, adjust the Rubin Observatory telescope in response to changes in ambient temperature, or automatically generate 198,000 lines of computer code to answer a question in string theory.

Harvard researchers are now using agentic AI — a new class of autonomous systems that can reason and carry out complex operations — to perform cutting-edge science. In a provocative discussion exploring both the promise and peril of AI, a panel of four scientists spoke last week before an overflow crowd at the Science Center.

“This is an atomic bomb,” said Rodrigo Córdova Rosado, a postdoctoral fellow at the Center for Astrophysics. “If you talk to people who work in these frontier labs, every single one of them talks like they’re part of a Manhattan Project.”

The panelists described employing AI to write papers, mine huge volumes of data, automate writing code, and accelerate research as much as tenfold. These systems remain prone to errors and hallucinations and still require close supervision by experts, but already have proved one of the most disruptive innovations of our lifetimes. David Parkes, dean of the Harvard John A. Paulson School of Engineering and Applied Sciences, noted that the capabilities of AI systems are doubling every seven months.

“I’ve been going around saying, ‘The sky is falling! The sky is falling!’ because I really feel that things are changing and there’s no going back,” said Professor of Physics Matthew Schwartz. “It’s completely transformative to the way that we do science.”

Last December, Schwartz demonstrated the revolutionary potential of these new tools by writing a paper with Claude Opus 4.5, guiding the model the way he would guide a second-year graduate student through a research project.

When he asked frontier AI models to complete the task, all initially failed. Claude repeatedly faked results, skipped steps, or failed to incorporate its own earlier work, so Schwartz persistently prompted the machine to correct itself. He spent more than 50 hours on oversight, exchanged more than 51,000 messages, and reviewed 110 drafts. Meanwhile, he performed his own calculations to check the model’s output but did not reveal them to the model.

After two weeks, Schwartz completed the paper and published it in January. Without AI, the same project would have taken him three to five months.

“This may be the most important paper I’ve ever written — not for the physics, but for the method,” he wrote in a blog post describing the experiment. “There is no going back.”

Michael Brenner, Catalyst Professor of Applied Mathematics and Applied Physics and of Physics, described how his team employed AI to break through the bottleneck of manually writing software code for challenges such as forecasting COVID-19 hospitalizations or mapping brain activity in zebrafish. The group used coding agents to devise multiple solutions and “tree search” methods to identify the best performers.
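The loop described above, in which coding agents devise multiple candidate solutions and a tree search identifies the best performers, can be sketched in miniature. This is a hypothetical toy illustration, not the group's actual code: a single numeric parameter stands in for a whole agent-written program, and random perturbation stands in for the agent proposing refinements.

```python
import random

# Toy sketch of "generate candidates, then tree-search for the best":
# expand the current best candidate into several variants, score each
# on held-out data, and keep whichever scores highest.

VALIDATION = [(1, 2.0), (2, 4.1), (3, 5.9)]  # (input, target) pairs

def score(candidate):
    """Higher is better: negative squared error on validation data."""
    return -sum((candidate * x - y) ** 2 for x, y in VALIDATION)

def expand(candidate, rng, n=4):
    """Stand-in for the agent proposing n refinements of a candidate."""
    return [candidate + rng.uniform(-0.5, 0.5) for _ in range(n)]

def tree_search(seed=1.0, rounds=5, rng=None):
    """Greedily expand the best-scoring node for a fixed budget."""
    rng = rng or random.Random(0)
    best = seed
    for _ in range(rounds):
        challenger = max(expand(best, rng), key=score)
        if score(challenger) > score(best):
            best = challenger
    return best

print(tree_search())  # settles near the best-fit slope for the toy data
```

A real system would score candidates by actually running agent-written programs against validation data, and would typically keep a frontier of several nodes rather than one; the expand-then-select structure is the same.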

“I’ve never become addicted to anything in my life,” said Brenner, “but I’m addicted to this.”

Córdova Rosado described being astonished by the power of Claude Code when he began to employ it in his astronomical research. A citizen of the Osage Nation, he also used Claude to scrape a dictionary of the Osage language and within hours trained a large language model (LLM) to provide instant translations.

“Give it enough information … and suddenly it’s capable of holding a conversation with you in a language that 20 people speak in the world,” he said.

But these capabilities also pose grave dangers. Córdova Rosado warned that these powerful tools are “hacking at the foundations of empirical science in ways that I think are incredibly dangerous.” Because AI can perform an ever-expanding body of tasks, younger generations may have less incentive to learn the fundamentals of science.

“This is a revolution in how we’re going to be using tools to promote pedagogy across this university, every other university, and every other institution of learning, period,” said Córdova Rosado. “If we do not get a handle on how we’re going to use these things going forward, I think the wave is just going to take us under. We’re at risk of teaching a generation to never learn how to think.”

At this point, AI systems have sped up the pace of research but produced few insights beyond those attained by humans. The boundaries, however, are shifting rapidly. Moderator Christopher Stubbs, the Samuel C. Moncher Professor of Physics and of Astronomy and FAS senior advisor on AI, wondered whether AI systems will someday announce: “I understand quantum gravity now, but I just can’t explain it to you because you’re just not smart enough.”

“Yeah, I think that’s going to happen,” Schwartz answered, “and I think it’s going to happen sooner rather than later.”

But some panelists noted that the speed of innovation does not necessarily spell doom for human researchers. Scientific research and education will, without question, be severely disrupted, but we may also witness a new era of explosive discoveries.

“We’re just going to get more done, basically,” said Brenner. He predicted that the standards for innovative research will rise: Students will no longer be able to earn Ph.D.s for boring research projects that machines can complete faster; instead, they will be obliged to explore questions that truly expand knowledge.

“Now what you should do is actually do science — try to answer questions about the world that are important and meaningful,” said Brenner.

Coincidentally, the panel took place on the day Anthropic released a more powerful version of its agentic coding tool, Claude Opus 4.7. Panelist Douglas Finkbeiner, Professor of Astronomy and of Physics, who is currently on leave to work at Anthropic, acknowledged that progress is moving incredibly fast but suggested that humans will continue to play essential roles in steering these systems, even if we do not fully comprehend how they work.

“The thing I’ve noticed is the number of things that people want to explore just grows exponentially,” he said. “Every time we get the answer to something, it raises three more questions … I think it’s entirely possible that human curiosity is unbounded, and the number of interesting questions unbounded, and that we need more people, not fewer.”

But science must adapt. Instead of being drowned by the AI tidal wave, we must teach ourselves to ride it.

“Even though we feel like that frontier is running away from us exponentially, actually our tools for keeping up with it also are improving,” added Finkbeiner. “I wouldn’t count humans out.”
