Backpropagation
An algorithm that calculates how much each weight in a neural network contributed to errors, enabling the network to learn from mistakes.
How Neural Networks Learn
Backpropagation is the engine of neural network training. When a network makes a prediction, we measure how wrong it was. Backpropagation then figures out which weights were most responsible for that error, so we know how to adjust them.
The name comes from propagating errors backward through the network. Start at the output, calculate the error, then work backward layer by layer, computing how much each weight contributed. It's like tracing a mistake back through a supply chain to find what went wrong where.
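The backward walk described above can be sketched in a few lines. This is a hypothetical toy network (two scalar weights, no nonlinearity, squared error), not any model from the text; all names and numbers are illustrative:

```python
# Toy example: trace an error backward through a two-layer scalar network.

def forward(x, w1, w2):
    h = w1 * x          # hidden value (no nonlinearity, for simplicity)
    y = w2 * h          # output prediction
    return h, y

x, target = 2.0, 10.0
w1, w2 = 1.0, 3.0

h, y = forward(x, w1, w2)
error = y - target                  # how wrong the prediction was

# Start at the output and work backward, layer by layer:
grad_y = 2 * error                  # derivative of squared error w.r.t. y
grad_w2 = grad_y * h                # how much w2 contributed to the error
grad_h = grad_y * w2                # pass the error signal back one layer
grad_w1 = grad_h * x                # how much w1 contributed

print(grad_w1, grad_w2)             # each weight's share of the blame
```

Note that `grad_h` is the hand-off between layers: the output layer's error signal, scaled by `w2`, becomes the error signal the earlier layer works from.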
The Math Behind Learning
Technically, backpropagation computes gradients using the chain rule from calculus. Each weight gets a gradient telling it two things: which direction to change (increase or decrease) and how much impact a change would have on the error.
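Both pieces of information, direction and magnitude, live in one number. A minimal sketch, assuming a single weight, a linear prediction, and a squared-error loss (all values here are illustrative), with a finite-difference check that the chain-rule gradient really does measure the error's sensitivity to the weight:

```python
# One weight's gradient via the chain rule, verified numerically.

def loss(w, x=1.5, target=4.0):
    y = w * x                       # prediction
    return (y - target) ** 2        # squared error

w = 2.0
x, target = 1.5, 4.0

# Chain rule: dL/dw = dL/dy * dy/dw = 2*(y - target) * x
y = w * x
grad = 2 * (y - target) * x

# Numerical check: nudge w slightly and watch how the loss responds.
eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)

# grad's sign says which direction to move w (negative here: increase w);
# grad's magnitude says how strongly the error responds to that change.
print(grad, numeric)
```

The two values agree to within rounding, which is exactly the guarantee backpropagation extends to every weight in a deep network at once, far more cheaply than nudging each weight one at a time.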
This might sound like a small detail, but it's what made deep learning possible. Before efficient backpropagation, training neural networks with many layers was computationally impractical. The algorithm gives us a way to assign credit (or blame) through arbitrarily deep networks, which is essential when models have billions of parameters.
Every time you use a modern AI model, you're benefiting from the trillions of weight updates that backpropagation guided during training.