When I was a teenager in the early 1990s I bought my first computer. It was a Macintosh Classic II which ran at 16MHz. A little over a year later I upgraded to a Mac with a 25MHz processor. About a year later I upgraded to a PowerMac 6100 running at 60MHz. Two years later I bought a PowerMac G3 running at 233MHz. These were all desktop machines. Two years later I was using a PowerBook G3 laptop running at 400MHz. Four years later I bought a PowerBook G4 laptop running at 667MHz.
Each of the above upgrades represented about a doubling in CPU performance every few years. Such was computing in the 1990s.
To put that in perspective, the computer I owned in 1997 ran 20 times faster than the computer I owned in 1993, only four years earlier. The following chart shows the growth (note the vertical axis is logarithmic):
Having such dramatic CPU performance increases in such a short period of time was heady. Most people own their computer for 3–4 years. Imagine your next computer being 20 times faster than the last! Or if you kept a computer for 7–8 years it being 100 times faster!
Such speed increases don’t just make the computer faster; they completely change the way we use computers. In 1993 when I bought my first computer, Microsoft Word didn’t provide spell checking as you type, because it would’ve been too slow. Now we expect spell checking everywhere in the operating system.
Each step in performance from year to year enabled vastly different uses of computers. In the 1970s and 1980s, when CPUs ran at around 1MHz, interfaces were primarily text and command line. In the 1980s, 8MHz CPUs made a GUI operating system such as the Mac possible. At 16MHz and higher we saw full colour displays, postage-stamp-sized videos, and wireframe 3D graphics. The PowerPC, which started at 60MHz, enabled high quality 3D graphics, decent-sized video, and so on.
However, today CPU speeds have largely plateaued. Some say it is due to the physical limits of the size of the atom. But I want to argue that isn’t the case: it is actually because of a lack of demand.
The demand for faster CPUs
Where is the demand for faster CPUs?
In the early days computers were very limited; doing basic word processing or spreadsheeting was primitive, and fast CPUs were necessary even for the most basic tasks. Around 10 years ago computers hit a plateau where most users could do basic productivity tasks without requiring more CPU power.
Games always benefit from extra processor performance, but the main requirement for games is graphics performance, which comes from GPU processing power (and that has continued to increase).
For the last decade there hasn’t really been a demand for more CPU power. We have extremely realistic games and fast video editing tools. One of the benchmarks Steve Jobs used to show off was a script of common Photoshop tasks to demonstrate how fast a computer was. It wasn’t unusual in the 1990s for a graphic designer to buy a $10,000 computer just to edit photos. Today we can edit photos on the most basic computers.
But I believe there could be a demand for further CPU power; however, it is being held back by other things.
Things slower than a CPU
CPUs are very fast, so it is not hard to find things that are slower than a CPU.
Data flows around a motherboard on data buses, and the performance of the CPU is hindered by the speed of these buses. The closer a bus is to the CPU, the more its speed affects CPU performance.
RAM
The closest thing to the CPU is its RAM. Initially RAM and CPU operated at the same speed. However, CPU performance increased at a more rapid rate than RAM, and as a result the two ended up operating at different speeds. It wasn’t unusual for RAM to run at a quarter the speed of the CPU. My PowerBook G3’s processor ran at 400MHz but its memory bus was only 66MHz, six times slower!
So CPU performance was moving at quite a pace and RAM was slowing it down. In recent years, as CPU performance has plateaued, RAM performance has been catching up. In my 12" MacBook with a 1.3GHz processor, the RAM runs at 1600MHz, faster than the CPU’s base clock (although in turbo mode the CPU reaches 2.9GHz, 1.8 times faster than the RAM). You can now get DDR4 RAM running at 3.6GHz.
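As a quick back-of-envelope check (in Python, just to make the ratios concrete), using the clock figures quoted above:

```python
# CPU vs RAM clock ratios, using the figures quoted in the text (MHz).
machines = {
    "PowerBook G3 (400MHz CPU, 66MHz bus)": (400, 66),
    "12in MacBook (2.9GHz turbo, 1600MHz RAM)": (2900, 1600),
}

for name, (cpu_mhz, ram_mhz) in machines.items():
    print(f"{name}: CPU clock is {cpu_mhz / ram_mhz:.1f}x the RAM clock")
```

The G3’s CPU ran at roughly six times its memory bus; today the ratio is under two.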
Hard Drive
After RAM, the next slowest component is the hard drive. CPU and RAM operations can be timed in microseconds and nanoseconds. Spinning hard drives are measured in milliseconds, thousands of times slower. When data needs to be accessed from the hard drive (which is fairly frequent), it is the biggest bottleneck in the system.
When a computer runs out of RAM (which again is quite frequent), it swaps data from RAM onto the hard drive, which is a very slow process. On older machines you could hear the hard drive grinding away as the cursor spun and you waited for the machine to catch up. This was a very common cause of a slow machine. Booting up, launching an app, and copying and saving files were also excruciatingly slow.
More recently SSDs have replaced hard drives and are considerably faster. One of the slowest aspects of a spinning hard drive is the seek time to find a file; this mechanical movement can’t compete with the near-instantaneous random access of solid state memory. SSDs are so fast that they outrun the connections designed for spinning hard drives, such as SATA, so some computers now attach them directly over PCIe. Despite SSDs’ blazing performance, they are still significantly slower than RAM.
The fastest SSDs can read at 700MB/s and write at 500MB/s. In comparison, RAM running at 1.6GHz moves 8 bytes per cycle, which is 12.8GB/s. Nonetheless, SSDs are catching up and are much faster than older hard drives.
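The arithmetic is straightforward. Here’s a rough sketch, using the figures above and assuming a standard 64-bit memory bus:

```python
# RAM bandwidth = transfer rate x bytes moved per transfer.
ram_rate_hz = 1.6e9        # DDR3-1600: 1.6 billion transfers per second
bytes_per_transfer = 8     # a 64-bit memory bus moves 8 bytes at a time
ram_bps = ram_rate_hz * bytes_per_transfer

ssd_read_bps = 700e6       # fast SSD sequential read: 700MB/s

print(f"RAM: {ram_bps / 1e9:.1f} GB/s")       # 12.8 GB/s
print(f"SSD: {ssd_read_bps / 1e9:.1f} GB/s")  # 0.7 GB/s
print(f"RAM is about {ram_bps / ssd_read_bps:.0f}x faster")  # ~18x
```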
The Network
The next slowest components are the connections to the computer. The SSD read speed above, 700MB/s, works out to 5.6Gbps. USB 2.0 only has a bandwidth of 480Mbps, although the more recent USB 3.0 has a bandwidth of 5Gbps.
We don’t use USB to connect computers together; we use ethernet. Common 100BaseT ethernet has a capacity of 100Mbps. Most computers today have gigabit ethernet with a 1Gbps capacity. Either way, that is too slow to keep up with an SSD.
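To compare fairly we need everything in the same units: drive speeds are usually quoted in bytes per second, link speeds in bits per second. A rough sketch:

```python
# Multiplying bytes/second by 8 puts everything in Gbps for comparison.
ssd_read_gbps = 700e6 * 8 / 1e9   # 700MB/s -> 5.6Gbps

links_gbps = {
    "SSD sequential read": ssd_read_gbps,
    "USB 3.0": 5.0,
    "Gigabit ethernet": 1.0,
    "USB 2.0": 0.48,
    "100BaseT ethernet": 0.1,
}

for name, gbps in sorted(links_gbps.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20}: {gbps:5.2f} Gbps")
```

Even gigabit ethernet can carry less than a fifth of what a fast SSD can deliver.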
The Internet
Perhaps the biggest cause for a lack of demand for increase in CPU performance is the internet.
I first got on the internet in late 1994 with a connection which ran at 2400bps. I then found a provider I could connect to at 9600bps. Dial-up modems eventually maxed out at 56kbps. Broadband started at 256kbps, going up to 512kbps and 1.5Mbps. ADSL2 takes it to 8Mbps and, in theory, up to around 25Mbps (although few connections achieve that). Newer fibre-to-the-home or fibre-to-the-node options can provide up to 100Mbps.
Most people have connections somewhere between 1Mbps and 25Mbps. At these speeds the internet feels fast, and the main bandwidth-hungry thing extra speed enables is watching high resolution video on demand.
If we all had 100Mbps internet it wouldn’t really change how we use it. We could stream multiple 4K streams (requiring about 25Mbps each), but the way we use the internet would stay the same.
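As a rough illustration, assuming about 25Mbps per 4K stream, here is how many simultaneous streams each class of connection could carry:

```python
# Simultaneous 4K streams (~25Mbps each) per connection type.
STREAM_MBPS = 25

connections_mbps = {
    "56k modem": 0.056,
    "Early broadband": 1.5,
    "ADSL2": 25,
    "Fibre (100Mbps)": 100,
    "Gigabit": 1000,
}

for name, mbps in connections_mbps.items():
    print(f"{name:>16}: {int(mbps // STREAM_MBPS)} stream(s)")
```

Going from 100Mbps to a gigabit takes you from four streams to forty, but it doesn’t change what you do with them.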
Changing the way we do things
I believe that when computers get faster two things happen:
- Everything feels faster,
- Entirely new ways of interacting with a computer are enabled.
Possibly the most significant increase in computing performance in the last few years has come from SSDs. However, so much of our daily life depends on the internet, which is considerably slower than even the slowest parts of our computers. Even the outdated USB 2.0 is at least 20 times faster than most people’s internet connections.
Given how dependent we are on the internet, it is hard to imagine how quadrupling CPU performance would help when the internet is slower than a hard drive from 20 years ago. As an example, computers used to use SCSI to connect hard drives. The first version of SCSI, released in 1986, supported 40Mbps. Most people’s internet today is slower than a hard drive from 30 years ago!
Why is the internet slow?
Believe it or not, the main limit on internet speed is not your physical connection. As an example, try playing a 1080p YouTube video at peak time. Even with a 100Mbps connection it can still buffer. Why?
The reason is backhaul capacity. Each backhaul link has a fixed capacity, which is shared by providers who lease set amounts of bandwidth on the connection. If your provider has reached its maximum capacity then that bandwidth is shared among its users, and you won’t be able to utilise the full bandwidth you are paying for. This applies all the way back to the international connections.
But there is no reason why we can’t have faster backhauls. Most (if not all) backhauls are fibre and are only limited by the hardware at each end.
But we need to think about theoretical capacity. If we want 1Gbps internet for every user, a node which connects 1,000 users will require 1Tbps of capacity, a city of a million users will require 1Pbps, and a country will require multiple petabit-per-second links.
The map below shows the international cables connecting Australia to the world:
If we add up the capacity of those cables we come to about 100Tbps. Unfortunately a lot of that bandwidth is leased to commercial users. So for users to experience full gigabit speeds at peak times, we need at least 10 times more capacity on the backhaul.
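Here is the arithmetic as a sketch. The 1,000-user node is from above; the city of a million users is my illustrative assumption:

```python
# Per-user bandwidth scaled up through each level of aggregation.
per_user_gbps = 1
node_gbps = 1_000 * per_user_gbps      # 1,000 users behind one node
city_gbps = 1_000_000 * per_user_gbps  # assumed: a city of a million users
cables_gbps = 100_000                  # ~100Tbps of international cables

print(f"Node: {node_gbps / 1e3:.0f} Tbps")
print(f"City: {city_gbps / 1e6:.0f} Pbps")
print(f"One gigabit city alone needs {city_gbps / cables_gbps:.0f}x "
      f"today's international capacity")   # ~10x
```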
Getting gigabit internet into the home is not as big an issue. Most computers and routers already support gigabit ethernet. What’s more, Cat6 cable can carry gigabit transfer rates up to 100 metres, which is probably enough to reach a fibre node. So the consumer end isn’t really where the issues lie (once we move past ADSL).
The future
What will the future look like?
The first step is to get gigabit internet into people’s homes. I believe this is a significant step forward which will change the way we use the internet. Gigabit internet is in the same league as the original SATA interface (1.5Gbps). It’s a fifth the speed of USB 3.0, and a tenth the speed of HDMI 1.3.
As you can see we start to get to some interesting applications.
HDMI carries completely uncompressed video. Gigabit internet is a tenth of the bandwidth of an HDMI 1.3 link, and roughly a third of a raw uncompressed 1080p60 feed.
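As a rough check of that claim, here is the standard raw-bitrate calculation for 1080p video at 60 frames per second (HDMI 1.3’s full 10.2Gbps link leaves headroom for deeper colour and higher refresh rates):

```python
# Raw video bitrate = width x height x bits per pixel x frames per second.
width, height = 1920, 1080
bits_per_pixel = 24   # 8 bits each for red, green, blue
fps = 60

raw_bps = width * height * bits_per_pixel * fps
print(f"Raw 1080p60: {raw_bps / 1e9:.2f} Gbps")               # ~2.99 Gbps
print(f"Gigabit is {1e9 / raw_bps:.2f}x a raw 1080p60 feed")  # ~0.33x
```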
It opens up applications such as live game streaming, where you can watch someone else playing a game in real time, along with their video feeds.
I really don’t know how people are going to use faster speeds but we can think about the architecture.
For the first time the internet will be almost as fast as the RAM in my PowerBook G3. That’s quite fast. Sun’s Scott McNealy famously said, “the network is the computer”. At gigabit speeds the network will become part of the computer like never before. We need to start thinking about the internet as one large computer.
Instead of thinking of the internet as slow, expensive pipes, we will reach a point where bandwidth is cheap and plentiful, in the same way we can buy a $300 laptop which seemingly has every modern feature required.
What interesting new applications will this enable?
4 predictions
I see the evolution of the internet and computer hardware in four stages. It all begins with an increase in backhaul bandwidth:
- Increase in backhaul bandwidth both locally and internationally, perhaps to 1Pbps international links. (5 years)
- Increase in bandwidth to the home to 1Gbps. (6 years)
- Introduction of new applications that take advantage of the bandwidth. (7 years)
- A new focus on CPU performance. (8 years)
I’ve put a rough estimate of time beside each prediction: the first will take about five years, and each following stage will arrive a year after the previous one.
Note the last item? The CPU industry has largely been held back by slow internet performance. My prediction is that once the internet bottleneck is significantly reduced, such new and previously unimagined applications will emerge that the focus will go back onto CPUs. Some might think that CPU innovation is over, but I predict that in around 8 years’ time there will begin a heavy demand for very high performance CPUs which can make the most of the applications of the future.