Physicist uses quantum tool to unlock AI ‘black box’

A University of Utah physicist has applied methods from quantum field theory to solve a machine learning model, uncovering new scaling laws that help predict AI performance.

At a Glance

  • Physicist Zhengkang Zhang applied methods from theoretical particle physics to understand the complex inner workings of artificial intelligence models, which often function as technological “black boxes” even to their own developers.
  • His research focused on solving for scaling laws, which are crucial power-law relationships that predict how a model’s performance improves as its size or training dataset increases.
  • Zhang used Feynman diagrams, a visual bookkeeping tool from quantum field theory, to organize the otherwise unmanageable calculations required to solve a recently proposed machine learning model.
  • This approach yielded new, more precise scaling laws that extend beyond previous research, giving more accurate predictions of how AI systems will behave at larger scales.
  • Published in Machine Learning: Science and Technology, the work highlights how physicists can contribute to the responsible development and understanding of increasingly influential artificial intelligence technologies.

As artificial intelligence becomes deeply integrated into modern society, from self-driving cars to medical diagnostics, a critical question remains: How does it actually work? This “black box” problem, where even creators do not fully understand an AI’s decision-making process, is now being tackled by an unlikely field: theoretical physics. In a new study, Zhengkang (Kevin) Zhang, an assistant professor of physics and astronomy at the University of Utah, is applying the tools of particle physics to demystify the complex inner workings of machine learning and make its development more predictable.

At the heart of many AI systems are neural networks, intricate webs of simple calculations that learn patterns from enormous datasets. A significant challenge for developers is predicting how a model will improve as it gets bigger. Training these models is incredibly expensive and energy-intensive, so understanding “scaling laws”—rules that describe how performance changes when the model or dataset size is doubled, for instance—is crucial for efficiency. “We want to be able to predict how much better the model will do at scale,” said Zhang in a university press release, explaining the need to move beyond costly trial-and-error methods.
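
To make the idea concrete, here is a minimal sketch in Python of how such a scaling law is used in practice. The numbers are made up for illustration, not taken from Zhang’s paper: loss is measured at a few small, affordable scales, a power law is fit to those measurements, and the fit is extrapolated to a scale too expensive to test directly.

```python
import numpy as np

# Hypothetical losses measured at small, cheap scales
# (illustrative numbers, not data from the paper).
N = np.array([1e6, 3e6, 1e7, 3e7])          # model size (parameters)
loss = np.array([3.10, 2.62, 2.21, 1.87])   # measured test loss

# A pure power law L(N) = a * N**(-alpha) is a straight line in
# log-log space, so a linear fit to (log N, log L) recovers alpha.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)

# Extrapolate to a much larger model before paying to train it.
N_big = 1e9
predicted = a * N_big ** (-alpha)
print(f"fitted exponent alpha = {alpha:.3f}")
print(f"predicted loss at N = {N_big:.0e}: {predicted:.3f}")
```

If the power law holds, a fit like this tells a developer what a hundredfold increase in model size buys before anyone pays for the training run.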

To solve this problem, Zhang turned to a powerful tool from his own field: Feynman diagrams. Invented in the 1940s by the renowned physicist Richard Feynman to calculate the interactions of subatomic particles, these diagrams offer a visual way to organize calculations that would otherwise be hopelessly complicated. Zhang used the technique to solve a theoretical machine learning model, calculating its scaling laws more precisely than previous analyses. His work extends earlier research by accounting for key parameters that regularize a model’s behavior, making the findings more applicable to real-world scenarios.
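
The “ridgeless limit” refers to switching off the ridge regularization parameter in the kind of solvable model this line of research studies, which is closely related to regression on random features. As a hedged illustration, assuming a random-feature setup with made-up sizes rather than the paper’s actual construction, the sketch below shows where the ridge parameter enters and how sending it to zero recovers the ridgeless case:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_features, n_train, n_test = 100, 300, 200, 1000  # illustrative sizes

# A linear "teacher" generates the data we try to learn.
w_true = rng.normal(size=d) / np.sqrt(d)
X_train, X_test = rng.normal(size=(n_train, d)), rng.normal(size=(n_test, d))
y_train, y_test = X_train @ w_true, X_test @ w_true

# Fixed random features; only the linear readout on top is trained.
W = rng.normal(size=(d, n_features)) / np.sqrt(d)
F_train, F_test = np.tanh(X_train @ W), np.tanh(X_test @ W)

def ridge_test_error(ridge):
    # Closed-form ridge solution: theta = (F^T F + ridge * I)^(-1) F^T y.
    # ridge -> 0 is the "ridgeless" limit studied in earlier work.
    gram = F_train.T @ F_train + ridge * np.eye(n_features)
    theta = np.linalg.solve(gram, F_train.T @ y_train)
    return np.mean((F_test @ theta - y_test) ** 2)

for ridge in [1e-8, 1e-2, 1.0]:
    print(f"ridge = {ridge:.0e}: test MSE = {ridge_test_error(ridge):.4f}")
```

Zhang’s contribution is analytic rather than numerical: his calculation solves for the scaling behavior of such a model with the ridge parameter kept finite instead of set to zero.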

Published in the journal Machine Learning: Science and Technology, Zhang’s research not only provides new insights into AI but also underscores the growing need for interdisciplinary collaboration. As society grapples with algorithms that influence human behavior, understanding the machines we build is more important than ever. “That’s the danger of how AI is going to change humanity,” Zhang warned. “We humans build machines that we are struggling to understand, and our lives are already deeply influenced by these machines.”


References

  • Zhang, Z. (2025). Neural scaling laws from large-N field theory: Solvable model beyond the ridgeless limit. Machine Learning: Science and Technology, 6(2), 025010. https://doi.org/10.1088/2632-2153/adc872
