Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or the areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to develop drugs, synthesize information, and drive autonomous cars.

Now the MIT spinout Themis AI is helping to quantify model uncertainty and correct outputs before they cause bigger problems. The company’s Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying AI models so they can detect patterns in their data processing that indicate ambiguity, incompleteness, or bias.

“The idea is to take a model, wrap it in Capsa, identify the uncertainties and failure modes of the model, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’re excited about offering a solution that can improve models and offer guarantees that the model is working correctly.”

Rus founded Themis AI in 2021 with Alexander Amini ’17, SM ’18, PhD ’22 and Elaheh Ahmadi ’20, MEng ’21, two former research affiliates in her lab. Since then, the company has helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.

“We want to enable AI in the highest-stakes applications of every industry,” Amini says. “We’ve all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Our software can make these systems more transparent.”

Helping models know what they don’t know

Rus’ lab has been researching model uncertainty for years. In 2018, she received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.

“That is a safety-critical context where understanding model reliability is very important,” Rus says.

In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model’s training data, showing that doing so eliminated the bias. The algorithm worked by identifying the underrepresented parts of the training data and generating new, similar data samples to rebalance it.

In 2021, the eventual co-founders showed a similar approach could help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.

“Guiding drug discovery could potentially save a lot of money,” Rus says. “That was the use case that made us realize how powerful this tool could be.”

Today Themis is working with companies in a wide variety of industries, many of which are building large language models. By using Capsa, those models can quantify their own uncertainty for each output.

“Many companies are interested in using LLMs that are based on their data, but they’re concerned about reliability,” observes Stewart Jamieson SM ’20, PhD ’24, Themis AI’s head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and the flagging of unreliable outputs.”

Themis AI is also in discussions with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.

“Normally these smaller models that run on phones or embedded systems aren’t very accurate compared to what you could run on a server, but we can get the best of both worlds: low-latency, efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they’re unsure of their output, they can forward those tasks to a central server.”
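The wrap-and-flag pattern Rus and Jamieson describe can be sketched in a few lines. Capsa’s internals aren’t public, so the sketch below uses Monte Carlo dropout as a standard stand-in technique for uncertainty estimation: a wrapped model returns a prediction plus a disagreement score across stochastic forward passes, and low-confidence outputs are flagged for escalation, the same thresholding that would let an edge device forward hard cases to a server. All names in the sketch (UncertaintyWrapper, answer_or_escalate, the 0.1 threshold) are hypothetical.

```python
# Illustrative only: Capsa's method is proprietary. Monte Carlo dropout
# stands in here as one common way to make a model report uncertainty.
import torch
import torch.nn as nn

class UncertaintyWrapper(nn.Module):
    """Wraps an existing model so each prediction comes with an
    uncertainty estimate (variance across stochastic forward passes)."""

    def __init__(self, model: nn.Module, n_samples: int = 20):
        super().__init__()
        self.model = model
        self.n_samples = n_samples

    def forward(self, x: torch.Tensor):
        # Keep dropout layers active at inference time (MC dropout).
        self.model.train()
        with torch.no_grad():
            samples = torch.stack([self.model(x) for _ in range(self.n_samples)])
        mean = samples.mean(dim=0)        # the prediction
        uncertainty = samples.std(dim=0)  # disagreement across passes
        return mean, uncertainty

def answer_or_escalate(wrapped: UncertaintyWrapper, x: torch.Tensor,
                       threshold: float = 0.1) -> dict:
    """Edge-style routing: answer locally when confident, otherwise
    flag the output so it can be forwarded to a larger server model."""
    prediction, uncertainty = wrapped(x)
    flagged = uncertainty.max().item() > threshold
    return {"prediction": prediction, "flagged": flagged}

# Usage with a toy model that contains dropout:
edge_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                           nn.Dropout(0.2), nn.Linear(64, 1))
wrapped = UncertaintyWrapper(edge_model)
result = answer_or_escalate(wrapped, torch.randn(4, 16))
```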
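The debiasing algorithm described earlier admits a similarly minimal sketch. The article says it identified underrepresented parts of the training data and rebalanced them; the toy version below estimates how common each example’s features are with simple histograms (standing in for a learned latent representation) and then oversamples rare examples (standing in for generating new, similar samples). Every name here, including rebalancing_weights, is illustrative.

```python
# Hedged sketch of density-based rebalancing: weight each training
# example inversely to how common its features are, so underrepresented
# examples are sampled more often. Histograms stand in for whatever
# representation the real algorithm learns.
import numpy as np

def rebalancing_weights(latents: np.ndarray, bins: int = 10) -> np.ndarray:
    """Return per-sample weights inversely proportional to the
    estimated density of each sample's (latent) features."""
    density = np.ones(len(latents))
    for d in range(latents.shape[1]):
        hist, edges = np.histogram(latents[:, d], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, d], edges[1:-1]), 0, bins - 1)
        density *= hist[idx] + 1e-8  # epsilon avoids division by zero
    weights = 1.0 / density
    return weights / weights.sum()

# Resample the training set so rare samples appear more often.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 2))  # stand-in latent features
weights = rebalancing_weights(latents)
resampled_idx = rng.choice(len(latents), size=len(latents), p=weights)
```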
Pharmaceutical companies can also use Capsa to improve AI models that identify drug candidates and predict their performance in clinical trials.

“The predictions and outputs of these models are very complex and hard to interpret,” Amini remarks. “Experts spend a lot of time and effort trying to make sense of them. Capsa can give insights right out of the gate to understand if the predictions are backed by evidence in the training set or are just speculation without a lot of grounding. That can accelerate the identification of the strongest predictions, and we think that has huge potential for societal good.”

Research for impact

Themis AI’s team believes the company is well-positioned to improve the cutting edge of constantly evolving AI technology. For instance, the company is exploring Capsa’s ability to improve accuracy in an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to get to an answer.

“We’ve seen signs Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” Amini says. “We think that has huge implications in terms of improving the LLM experience, reducing latencies, and reducing computation requirements. It’s an extremely high-impact opportunity for us.”

For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.

“My students and I have become increasingly passionate about going the extra step to make our work relevant for the world,” Rus says. “AI has tremendous potential to transform industries, but AI also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”