A Squad Of Open-Source LLMs Can Now Beat OpenAI’s Closed-Source GPT-4o
A deep dive into how the Mixture-of-Agents (MoA) approach leverages the collective strengths of multiple open-source LLMs to beat OpenAI's GPT-4o
There has been a constant battle between open-source and proprietary AI.
The war has been so fierce that Sam Altman once remarked, during a visit to India, that developers could try to build AI like ChatGPT but would never succeed in the pursuit.
But Sam has been proven wrong.
A team of researchers recently published a preprint on arXiv showing how multiple open-source LLMs can be combined to achieve state-of-the-art performance on multiple LLM evaluation benchmarks, surpassing GPT-4 Omni (GPT-4o), OpenAI's flagship model.
They called this approach Mixture-of-Agents (MoA).
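At a high level, MoA is layered: several open-source "proposer" LLMs each answer the query, their responses are passed as auxiliary context to the next layer, and a final "aggregator" LLM synthesizes the last layer's outputs into one answer. Below is a minimal Python sketch of that idea; the `query_llm` helper, the prompt wording, and the model names are illustrative placeholders, not the paper's exact implementation.

```python
# A minimal sketch of the layered Mixture-of-Agents idea.
# `query_llm` is a hypothetical stand-in for any LLM API call.

AGGREGATE_PROMPT = (
    "You have been provided with responses from several models to the user "
    "query below. Synthesize them into a single, high-quality answer.\n\n"
    "Query: {query}\n\nPrevious responses:\n{responses}"
)

def query_llm(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model` and return its reply.

    Replace this stub with a real API call to a local or hosted LLM.
    """
    return f"[{model}] draft answer to: {prompt[:40]}..."

def mixture_of_agents(query: str, proposers: list[str],
                      aggregator: str, num_layers: int = 3) -> str:
    # Layer 1: every proposer answers the raw query independently.
    responses = [query_llm(m, query) for m in proposers]

    # Middle layers: each proposer refines its answer while seeing
    # all of the previous layer's responses as auxiliary context.
    for _ in range(num_layers - 2):
        prompt = AGGREGATE_PROMPT.format(
            query=query, responses="\n\n".join(responses))
        responses = [query_llm(m, prompt) for m in proposers]

    # Final layer: a single aggregator synthesizes the last set of responses.
    final_prompt = AGGREGATE_PROMPT.format(
        query=query, responses="\n\n".join(responses))
    return query_llm(aggregator, final_prompt)

print(mixture_of_agents(
    "Explain why the sky is blue.",
    proposers=["llama-3-70b", "qwen1.5-110b", "wizardlm-2"],
    aggregator="qwen1.5-110b",
))
```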
They showed that a Mixture-of-Agents configuration consisting of only open-source LLMs scored 65.1% on AlpacaEval 2.0, compared to 57.5% for GPT-4 Omni.
This is highly impressive!
This means that the future of AI is no longer in the hands of big tech building software behind closed doors but is more …