Efficient backpropagation (BP) is a fundamental technique in deep learning. Its modern form was first published by Seppo Linnainmaa in 1970, building on earlier work by Henry J. Kelley in 1960 and others. Despite these early origins, BP met skepticism for decades before it was widely accepted as a practical method for training deep neural networks, whose complex models it can now optimize efficiently. The article traces the historical development of BP and addresses misconceptions about its invention and its application to neural networks.