We Have Finally Found A Solution For Extremely Energy-Efficient AI
A deep dive into 'L-Mul', the linear-complexity multiplication algorithm that makes our existing AI models faster and more energy-efficient than ever before.
Running AI models is expensive, and it takes a real toll on the environment.
In early 2023, running ChatGPT consumed an average of 564 MWh of electricity every day.
This is equivalent to the total daily electricity usage of 18,000 families in the United States.
It is also estimated that, in the worst-case scenario, Google’s AI services could consume as much electricity as the entire country of Ireland.
This is quite a lot! But why does AI need so much energy?
Neural network internals work with floating-point parameters, which involve high-dimensional tensor multiplications, element-wise multiplications, and linear transformations.
And these operations are energy-expensive.
If we could reduce the amount of computation these operations require, we could make neural networks both faster and far less energy-hungry.
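To make this concrete, here is a minimal NumPy sketch (the layer sizes and variable names are illustrative, not taken from any particular model) that counts how many floating-point multiplications a single linear layer with an element-wise scale performs in one forward pass:

```python
# A minimal sketch with NumPy; the layer sizes here are illustrative,
# not taken from any particular model.
import numpy as np

batch, d_in, d_out = 32, 4096, 4096

x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
W = np.random.randn(d_in, d_out).astype(np.float32)   # weight matrix
g = np.random.randn(d_out).astype(np.float32)         # per-feature scale

y = x @ W   # linear transformation: batch * d_in * d_out multiplications
y = y * g   # element-wise multiplication: batch * d_out more of them

mults = batch * d_in * d_out + batch * d_out
print(f"fp32 multiplications in one layer pass: {mults:,}")  # ~537 million
```

Multiply that by dozens of layers and billions of tokens, and the energy bill becomes obvious.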
Remarkably, the researchers behind a recent pre-print published on arXiv have proposed a way to do exactly this.
They created an algorithm called ‘L-Mul’, or the linear-complexity multiplication algorithm.
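To give a flavour of the idea before we dig in, here is a minimal, non-authoritative Python sketch of the L-Mul approximation: instead of computing the full mantissa product of two floating-point numbers, it adds the two mantissas and the two exponents, replacing the mantissa cross term with a small constant. The l_mul helper, its mantissa_bits parameter, and the use of full-precision Python floats are my illustrative assumptions; the actual method targets low-bit tensors in hardware and handles rounding that this sketch ignores.

```python
import math

def l_mul(x: float, y: float, mantissa_bits: int = 4) -> float:
    # Approximate x * y in the spirit of L-Mul: decompose each operand as
    # (1 + m) * 2**e, then ADD mantissas and exponents instead of multiplying.
    # The exact mantissa cross term m_x * m_y is replaced by 2**-l(m).
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    xm, xe = math.frexp(abs(x))        # abs(x) == xm * 2**xe, xm in [0.5, 1)
    ym, ye = math.frexp(abs(y))
    xm, xe = 2.0 * xm - 1.0, xe - 1    # rescale to (1 + m) * 2**e, m in [0, 1)
    ym, ye = 2.0 * ym - 1.0, ye - 1
    # offset exponent l(m) for m mantissa bits, as I read the pre-print's definition
    l = mantissa_bits if mantissa_bits <= 3 else (3 if mantissa_bits == 4 else 4)
    return sign * (1.0 + xm + ym + 2.0 ** -l) * 2.0 ** (xe + ye)

print(l_mul(3.14, 2.72))  # ~8.22, an addition-only approximation
print(3.14 * 2.72)        # 8.5408, the exact product
```

Note what disappeared: the only expensive operation left is addition, which is exactly where the energy savings come from.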