Computer chips are both the most complex things ever mass-produced by humans and among the most disruptive to our lives.
A 1965 article theorised that the maximum number of components that manufacturers could “cram” onto a sliver of silicon – beyond which the rising risk of failure made it uneconomic to add more – was doubling at a regular pace every year.
Its author, Gordon Moore, suggested this could be extrapolated to forecast the rate at which more complicated chips could be built at affordable costs.
The insight – later referred to as Moore’s Law – became the bedrock for the computer processor industry, giving engineers and their managers a target to hit.
Intel – the firm Mr Moore went on to co-found – says the law will have an even more dramatic impact over the next 20 years than it has had over the last five decades put together.
Although dubbed a “law”, computing’s pace of change has been driven by human ingenuity rather than any fixed rule of physics.
“Moore’s observation” would be a more accurate, if less dramatic, term. In fact, the rule itself has changed over time.
Mr Moore’s article predicted a time when computers would be sold alongside other consumer goods.
While Mr Moore’s 1965 paper talked of the number of “elements” on a circuit doubling every year, he later revised this a couple of times, ultimately stating that the number of transistors in a chip would double approximately every 24 months.
For most people, exponential growth – in which something increases at a set rate in proportion to its current size, doubling each time, for example – is much harder to imagine than linear growth, in which the same amount is repeatedly added.
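The difference is easy to see with a short sketch. The snippet below is an illustrative example, not from the article: it compares adding a fixed amount at each step with doubling at each step, the kind of growth Mr Moore described.

```python
# Compare linear growth (add the same amount each step)
# with exponential growth (double each step).
linear = 1
exponential = 1

for step in range(10):
    linear += 1        # grows by a fixed amount every step
    exponential *= 2   # grows in proportion to its current size

print(linear)       # 11
print(exponential)  # 1024
```

After just ten doublings the exponential figure is roughly a hundred times larger than the linear one, which is why forecasts built on Moore's Law so often outrun intuition.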
Mr Moore retired in 1997, but Intel still follows his lead.
In 2013, the firm’s ex-chief architect Bob Colwell made headlines when he predicted Moore’s Law would be “dead” by 2022 at the latest.
The issue, he explained, was that it was difficult to shrink transistors beyond a certain point.
Specifically, he said it would be impossible to justify the costs required to reduce the length of a transistor part, known as its gate, to less than 5nm (1nm = one billionth of a metre).
In simple terms, a transistor is a kind of tiny switch that is triggered by an electrical signal. Transistors can amplify and switch electronic signals and electrical power; by flicking on and off billions of times a second, they allow computers to carry out the calculations needed to run software.
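As a loose illustration – a toy model, not how real silicon is described in the article – a transistor can be treated as a switch that passes a signal only when its control input is on, and wiring such switches together yields the logic gates from which calculations are built:

```python
def transistor(gate: bool, source: bool) -> bool:
    """Toy model: the source signal passes through only when the gate is on."""
    return source and gate

def and_gate(a: bool, b: bool) -> bool:
    """Two transistor switches in series: output is on only if both inputs are on."""
    return transistor(b, transistor(a, True))

print(and_gate(True, True))   # True
print(and_gate(True, False))  # False
```

Real chips chain billions of such gates, which is why packing more transistors into the same area translates directly into more computing power.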