AI tool generates high-quality images faster than state-of-the-art approaches

Researchers fuse the best of two popular methods to create an image generator that uses less energy and can run locally on a laptop or smartphone.

Adam Zewe | MIT News
March 21, 2025 · 8 min read

Like human brains, large language models reason about diverse data in a general way

A new study shows LLMs represent different data types based on their underlying meaning and reason about data in their dominant language.

Adam Zewe | MIT News
Feb. 19, 2025 · 8 min read


A new way to create realistic 3D shapes using generative AI

Researchers propose a simple fix to an existing technique that could help artists, designers, and engineers create better 3D models.

Adam Zewe | MIT News
Dec. 4, 2024 · 7 min read

A causal theory for studying the cause-and-effect relationships of genes

By sidestepping the need for costly interventions, a new method could potentially reveal gene regulatory programs, paving the way for targeted treatments.

Adam Zewe | MIT News
Nov. 7, 2024 · 7 min read

Enhancing LLM collaboration for smarter, more efficient solutions

“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.

Alex Shipps | MIT CSAIL
Sept. 16, 2024 · 8 min read
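
The Co-LLM entry above describes token-level collaboration between a general-purpose model and an expert. A minimal sketch of that idea, assuming Hugging Face-style model and tokenizer objects that share a vocabulary, and using a fixed confidence threshold in place of Co-LLM's learned deferral policy:

```python
import torch


@torch.no_grad()
def generate_with_deferral(base_model, expert_model, tokenizer, prompt,
                           max_new_tokens=50, confidence_threshold=0.5):
    """Greedy decoding in which a general-purpose base model hands individual
    tokens to an expert model whenever its own confidence is low.

    Simplified stand-in for Co-LLM: the switch here is a hard probability
    threshold rather than a trained, token-level deferral gate."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        base_logits = base_model(ids).logits[0, -1]        # next-token logits
        top_prob, top_id = torch.softmax(base_logits, dim=-1).max(dim=-1)

        if top_prob < confidence_threshold:
            # Base model is unsure: let the expert pick this token instead.
            top_id = expert_model(ids).logits[0, -1].argmax(dim=-1)

        ids = torch.cat([ids, top_id.view(1, 1)], dim=-1)
        if top_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```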

A framework for solving parabolic partial differential equations

A new algorithm solves complicated partial differential equations by breaking them down into simpler problems, potentially guiding computer graphics and geometry processing.

Alex Shipps | MIT CSAIL
Aug. 28, 2024 · 7 min read
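
The entry above describes breaking a parabolic PDE into simpler subproblems. As a generic textbook illustration (not the framework from the article), a backward-Euler step for the 1D heat equation reduces each time step to a single steady, elliptic-type linear solve; the grid, step sizes, and initial condition below are made up:

```python
import numpy as np


def heat_backward_euler(u0, dx, dt, steps, alpha=1.0):
    """Solve u_t = alpha * u_xx on interior grid points with zero Dirichlet
    boundaries. Each backward-Euler step converts the time-dependent
    (parabolic) problem into the simpler linear system
    (I - dt * alpha * L) u_new = u_old, where L is the discrete Laplacian."""
    n = len(u0)
    L = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / dx ** 2
    A = np.eye(n) - dt * alpha * L          # same system matrix every step
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, u)           # one elliptic-type solve per step
    return u


# Example: diffuse an initial spike of heat on the unit interval.
x = np.linspace(0.0, 1.0, 101)[1:-1]        # interior points only
u_final = heat_backward_euler(np.exp(-200.0 * (x - 0.5) ** 2),
                              dx=x[1] - x[0], dt=1e-3, steps=100)
```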

Method prevents an AI model from being overconfident about wrong answers

More efficient than other approaches, the “Thermometer” technique could help someone know when they should trust a large language model.

Adam Zewe | MIT News
July 31, 2024 · 7 min read
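
The Thermometer entry above is about calibration. The sketch below shows only classic temperature scaling, the step such calibration methods build on; Thermometer itself is reported to predict a suitable temperature for a new task without labeled data, which is not reproduced here, and the logits and temperature values are illustrative:

```python
import numpy as np


def calibrated_confidence(logits, temperature):
    """Temperature scaling: divide logits by a temperature before the softmax
    so an overconfident model reports softer, better-calibrated probabilities."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                          # numerical stability
    return np.exp(scaled) / np.exp(scaled).sum()


# An overconfident raw distribution vs. its temperature-scaled version.
raw = calibrated_confidence([5.0, 1.0, 0.5], temperature=1.0)
cal = calibrated_confidence([5.0, 1.0, 0.5], temperature=2.5)
print(raw.max(), cal.max())   # roughly 0.97 -> 0.73 top-class confidence
```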

MIT researchers advance automated interpretability in AI models

MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.

Rachel Gordon | MIT CSAIL
July 23, 2024 · 10 min read


Reasoning skills of large language models are often overestimated

New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether they truly reason or merely rely on memorization.

Rachel Gordon | MIT CSAIL
July 11, 2024 · 6 min read

Understanding the visual knowledge of language models

LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.

Alex Shipps | MIT CSAIL
June 17, 2024 · 6 min read
