Ecologists find computer vision models’ blind spots in retrieving wildlife images

Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.

Alex Shipps | MIT CSAIL
Dec. 20, 2024 ~9 min

Need a research hypothesis? Ask AI.

MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance biologically inspired materials.

Zach Winn | MIT News
Dec. 19, 2024 ~10 min
MIT engineers grow “high-rise” 3D chips

An electronic stacking technique could exponentially increase the number of transistors on chips, enabling more efficient AI hardware.

Jennifer Chu | MIT News
Dec. 18, 2024 ~8 min

MIT researchers introduce Boltz-1, a fully open-source model for predicting biomolecular structures

With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.

Adam Zewe | MIT News
Dec. 17, 2024 ~7 min

Study reveals AI chatbots can detect race, but racial bias reduces response empathy

Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

Alex Ouyang | Abdul Latif Jameel Clinic for Machine Learning in Health
Dec. 16, 2024 ~6 min

Teaching a robot its limits, to complete open-ended tasks safely

The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.

Alex Shipps | MIT CSAIL
Dec. 12, 2024 ~6 min

Researchers reduce bias in AI models while preserving or improving accuracy

A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.

Adam Zewe | MIT News
Dec. 11, 2024 ~7 min

Study: Some language reward models exhibit political bias

Research from the MIT Center for Constructive Communication finds this effect occurs even when reward models are trained on factual data.

Ellen Hoffman | Media Lab
Dec. 10, 2024 ~7 min
Enabling AI to explain its predictions in plain language

Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.

Adam Zewe | MIT News
Dec. 10, 2024 ~7 min

Citation tool offers a new approach to trustworthy AI-generated content

Researchers develop “ContextCite,” an innovative method to track AI’s source attribution and detect potential misinformation.

Rachel Gordon | MIT CSAIL
Dec. 9, 2024 ~8 min
