Into AI

A Squad Of Open-Source LLMs Can Now Beat OpenAI’s Closed-Source GPT-4o

A deep dive into how the Mixture-of-Agents (MoA) model leverages the collective strengths of multiple open-source LLMs and beats GPT-4o

Dr. Ashish Bamania
Jul 12, 2024
Image generated with DALL·E 3

There has been a constant battle between open-source and proprietary AI.

The war has been fierce, so much so that Sam Altman once said during a visit to India that developers could try to build AI like ChatGPT, but they would never succeed in this pursuit.


But Sam has been proven wrong.

A team of researchers recently published a preprint on arXiv showing how multiple open-source LLMs can be combined to achieve state-of-the-art performance on several LLM evaluation benchmarks, surpassing GPT-4 Omni (GPT-4o), OpenAI’s flagship model.

They called this model Mixture-of-Agents (MoA).

They showed that a Mixture-of-Agents configuration built entirely from open-source LLMs scored 65.1% on AlpacaEval 2.0, compared to 57.5% for GPT-4 Omni.
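To make the idea concrete, here is a minimal sketch of the MoA layering pattern: agents in each layer all answer the prompt, their answers are folded into an aggregate-and-synthesize prompt for the next layer, and a final aggregator produces the answer. The agent names, prompt template, and layer shapes below are illustrative assumptions, not the paper’s exact setup, and the agents are toy stand-ins for real open-source LLM calls.

```python
from typing import Callable, List

# A callable that takes a prompt and returns a text response
# (in practice, a call to an open-source LLM's inference API).
Agent = Callable[[str], str]

# Hypothetical aggregation prompt; the real template is an assumption here.
AGGREGATE_TEMPLATE = (
    "You have been provided with responses from various models to the "
    "user query below. Synthesize them into a single, high-quality answer.\n\n"
    "Query: {query}\n\nResponses:\n{responses}"
)

def moa_round(query: str, layers: List[List[Agent]]) -> str:
    """Run a query through successive layers of agents.

    Every layer except the last acts as 'proposers' whose answers are
    gathered; the final layer holds a single 'aggregator' agent that
    emits the overall response.
    """
    prompt = query
    for layer in layers[:-1]:
        responses = [agent(prompt) for agent in layer]
        joined = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
        # The next layer sees the original query plus all prior answers.
        prompt = AGGREGATE_TEMPLATE.format(query=query, responses=joined)
    (aggregator,) = layers[-1]  # final layer: exactly one aggregator
    return aggregator(prompt)

# Toy stand-ins for open-source model calls (model names illustrative).
def make_agent(name: str) -> Agent:
    return lambda prompt: f"[{name}] answer based on: {prompt[:40]}..."

layers = [
    [make_agent("Qwen"), make_agent("LLaMA"), make_agent("WizardLM")],
    [make_agent("Qwen"), make_agent("LLaMA")],
    [make_agent("Aggregator")],
]
print(moa_round("What is Mixture-of-Agents?", layers))
```

In the paper’s framing, the gain comes from later layers seeing several independent drafts of the same answer, which the aggregator can reconcile into something better than any single model’s attempt.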

This is highly impressive!

This means that the future of AI is no longer in the hands of big tech building software behind closed doors but is more …

Keep reading with a 7-day free trial

Subscribe to Into AI to keep reading this post and get 7 days of free access to the full post archives.