You Don't Need Backpropagation To Train Neural Networks Anymore

A deep dive into the 'NoProp' algorithm that eliminates the need for the forward pass and backpropagation to train neural networks, and learning to code it from scratch.

Dr. Ashish Bamania
Apr 25, 2025

Image generated with Google ImageFX

Backpropagation, popularised in 1986, is one of the critical algorithms underlying the training of nearly all popular ML models we use today.

It is simple, easy to implement, and effective at training large neural networks.

Although widely accepted as the standard training method, it comes with some disadvantages: high memory usage during training, since every layer's activations must be stored for the backward pass, and difficulty in parallelising training, since gradients must flow sequentially from the last layer back to the first.
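
To make that sequential dependence concrete, here is a minimal sketch of a single backpropagation training step in PyTorch (a toy setup for illustration, not code from this article):

```python
import torch
import torch.nn as nn

# A toy three-layer classifier trained with standard backpropagation.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)         # dummy batch of inputs
y = torch.randint(0, 10, (32,))  # dummy labels

logits = model(x)                # forward pass: every layer's activations are kept in memory
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()                  # backward pass: gradients flow layer by layer, last to first
optimizer.step()
```

Neither pass can skip a layer or run layers out of order, which is what makes end-to-end backpropagation both memory-hungry and hard to parallelise across depth.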

Is there an algorithm that can still train neural networks effectively, but without these disadvantages?

A team of researchers from the University of Oxford has just introduced one that eliminates the need for backpropagation.

Their algorithm, called NoProp, does not even require a forward pass. It borrows the denoising principle behind diffusion models to train each layer of a neural network independently, without passing gradients between layers.
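
The exact formulation comes later in this story, but as a rough preview, here is a heavily simplified sketch of that idea (a hypothetical toy version, not the paper's exact objective or architecture): each block receives the input together with a noisy version of the target's label embedding and is trained on its own to recover the clean embedding, so gradients stay local to each block.

```python
import torch
import torch.nn as nn

num_classes, dim = 10, 32
label_emb = nn.Embedding(num_classes, dim)  # clean target embeddings, one per class

class DenoisingBlock(nn.Module):
    """One independently trained layer: denoises a noisy label embedding, conditioned on x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784 + dim, 256), nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x, z_noisy):
        return self.net(torch.cat([x, z_noisy], dim=-1))

blocks = [DenoisingBlock() for _ in range(3)]                       # the network's layers
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]  # one optimiser per block

x = torch.randn(32, 784)                  # dummy batch of inputs
y = torch.randint(0, num_classes, (32,))  # dummy labels
u_y = label_emb(y).detach()               # clean embedding of each label

for t, (block, opt) in enumerate(zip(blocks, opts)):  # blocks could train in parallel
    noise_scale = 1.0 - t / len(blocks)   # assumed noise schedule: later blocks see less noise
    z_noisy = u_y + noise_scale * torch.randn_like(u_y)
    loss = ((block(x, z_noisy) - u_y) ** 2).mean()    # each block denoises toward u_y
    opt.zero_grad()
    loss.backward()                       # gradients stay inside this one block
    opt.step()
```

Because no block's loss depends on any other block's parameters, the updates are fully decoupled: each layer could in principle be trained on a separate device, which is precisely what end-to-end backpropagation rules out.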

In this story, we take a deep dive into how this algorithm works, learn ab…
