Has computing reached peak complexity?

jolon
Jul 8, 2021

How many programming languages do you know? How many shell commands do you know? How many markup formats do you know? How many frameworks do you know?

Is there a point where the sheer complexity of modern computing outweighs its benefits?

Simplicity in the 90s

When I first got into computers in the 90s, things were much simpler. Most end-user programming was done in BASIC. Maybe some Pascal or assembler. And that was about it. Shell commands? Well, those were built into BASIC.

Hardware was simpler too. The Apple II's 6502 CPU had around 50 instructions. 50! If you were happy to learn assembler, you could make a computer do anything you liked while knowing only about 50 things. For comparison, modern x86 CPUs have around 3,000 instructions! And that’s just the CPU. You’ll need to learn Python, C, C++, JavaScript (name your flavour), shell (more flavours), Swift, Objective-C, Go, Rust, etc, etc, etc. And each of those languages has become progressively more complex: compare C++ or JavaScript today with the same languages 25 years ago. Add to that ten or so frameworks for each language.

Complexity Causes Problems

Obviously computers today are far more capable and user friendly. But I wonder whether we are running into problems because of all this complexity.

In the 90s a relatively competent developer might have needed to know no more than a few hundred things. Today it’s probably in the tens of thousands. In fact it’s probably in the millions, and there is no way anyone can know that much. So development these days means rapidly trying to grasp a new framework or language, Googling the best way to do something, implementing it, and then moving on to the next thing.

My concern with this complexity is whether people are really designing systems and solving problems properly.

A perfect example is Unix. Unix was released in the early 70s, well before the Apple II or DOS or the Mac or Windows. Yet it’s one of the most popular OSes today. Unix underpins iPhones and Android phones, it underpins macOS, and almost all web servers run on it. Even Windows now has a Unix compatibility layer (the Windows Subsystem for Linux).

Unix isn’t necessarily a bad OS, but it isn’t a great one either. In fact none of the current ‘modern’ OSes are particularly interesting, and all are pretty ancient. A much better OS would be something like Haiku, an open-source reimplementation of BeOS (and somewhat related to Google’s new Fuchsia OS). But unfortunately BeOS was never successful (Apple had the chance to buy BeOS but bought NeXT instead, whose NeXTSTEP became Mac OS X). Haiku (and BeOS) boot within a few seconds, which is an example of the benefits that can be attained when a system is completely redesigned. Although note that even BeOS is now over 25 years old.

The main reason we aren’t seeing much innovation in operating systems is because they have become so complex. Unix has become the standard, not because it is great, but because it is too much work to start from scratch. The fact that one of the oldest OSes is also the most popular today essentially proves the point that complexity has stopped us from moving forward.

Hardware is Complex

The most basic OS would have keyboard input and text output to a display. Ever tried writing a USB keyboard driver? A keyboard, a relatively simple input device that just sends key codes to the computer fairly slowly, is immensely difficult to code for. Not because keyboards are complex, but because the USB standard is.
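To give a sense of how many layers are involved, here is a rough Python sketch using the pyusb library to read a single key report from a USB keyboard. It is purely illustrative, not a real driver, and the vendor and product IDs are placeholders you would have to look up for your own keyboard.

    # A rough sketch (using the pyusb library) of reading one key report from a
    # USB keyboard. Purely illustrative, not a real driver. The vendor and
    # product IDs below are placeholders: look up your own keyboard's IDs.
    import usb.core
    import usb.util

    VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678  # placeholders, not a real keyboard

    dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
    if dev is None:
        raise RuntimeError("keyboard not found")

    # The OS has usually already claimed the keyboard, so take it back first.
    if dev.is_kernel_driver_active(0):
        dev.detach_kernel_driver(0)

    dev.set_configuration()  # walk descriptors: configuration, interface, endpoint
    cfg = dev.get_active_configuration()
    interface = cfg[(0, 0)]

    # Find the interrupt IN endpoint that carries the HID reports.
    endpoint = usb.util.find_descriptor(
        interface,
        custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress)
        == usb.util.ENDPOINT_IN,
    )

    # An 8-byte HID boot report: modifier byte, reserved byte, up to six key codes.
    # The key codes still have to be translated into characters by the caller.
    report = endpoint.read(8, timeout=5000)
    print(list(report))

And even after all of that, you have only read one raw report. A real driver still has to deal with hot-plugging, key repeat, keyboard layouts and so on.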

I think it is safe to say that software developers are responsible for the greater share of the complexity. However, even if developers decided to take a simpler approach, they would still have to start with:

  • Processors with 3000 instructions
  • Overly complex USB protocols
  • Complex GPUs, etc, etc.

Necessity of Complexity

Some complexity is no doubt necessary. However, I do wonder whether things have become so complex that we no longer have the capability to go back and simplify them.

Do processors need 3000 instructions? The emerging RISC-V architecture gets by with fewer than 50 instructions in its base instruction set, spread across around a dozen opcode groups.
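To give a sense of that regularity, here is a toy Python snippet (an illustration, not a real decoder): every 32-bit RV32I instruction keeps its major opcode in the low 7 bits, so a table of about a dozen entries classifies the entire base instruction set.

    # Toy classifier for 32-bit RV32I instructions. Illustrative only: a real
    # decoder would also inspect funct3/funct7 and decode the immediates.
    MAJOR_OPCODES = {
        0b0110111: "LUI",      0b0010111: "AUIPC",
        0b1101111: "JAL",      0b1100111: "JALR",
        0b1100011: "BRANCH",   0b0000011: "LOAD",
        0b0100011: "STORE",    0b0010011: "OP-IMM",
        0b0110011: "OP",       0b0001111: "MISC-MEM",
        0b1110011: "SYSTEM",
    }

    def classify(instruction: int) -> str:
        """Return the major opcode group of a 32-bit RV32I instruction word."""
        return MAJOR_OPCODES.get(instruction & 0x7F, "UNKNOWN")

    # 0x00A00513 encodes `addi a0, x0, 10`, an OP-IMM instruction.
    print(classify(0x00A00513))  # -> OP-IMM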

Do we need complex USB protocols? My guess is that these protocols were designed by standards committees and not developers. They were designed for features, not ease of implementation, and the members of these committees are typically representatives of large companies that have the resources to implement complex standards. I’m sure they could be greatly simplified.

Do we need complex GPUs? Potentially, but GPUs have evolved a lot: from simple direct memory access (DMA) to a framebuffer, to fixed-function 3D graphics, to high-performance programmable shaders, to general-purpose computing. Is it time to ‘reset’ the GPU and make it much simpler?

The sheer complexity of software and hardware today often results in things being hacked together. Everything is effectively a hack on top of the previous iteration because the whole system is too complex to redesign and rebuild. Even though rebuilding something from scratch may seem like an insurmountable task, the downsides of not reducing complexity could be far worse.

A simpler future

I do wonder whether there could be a benefit in starting with a clean slate. Start with new, simplified hardware and work up from there.

Perhaps a low-instruction-count RISC-V core, new non-USB peripherals with a simplified protocol, and simplified, generalised graphics.

And then a simplified software stack on top of that. Ditch LLVM. Perhaps start with a simple interpreted language (e.g. MicroPython) and build everything from scratch on top of it.
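As a toy illustration of the “language as the whole system” idea, in the spirit of an 8-bit BASIC prompt where the shell and the language were the same thing, here is a sketch in ordinary Python. The built-in “commands” (ls, cat, free) are hypothetical names made up for the example; anything else typed is evaluated as plain Python.

    # A toy "the interpreter is the system" prompt: shell commands are just
    # functions in the one language, as they were in BASIC. Sketch only.
    import os

    def ls(path="."):
        """List a directory: the 'shell command' is just a function."""
        return sorted(os.listdir(path))

    def cat(path):
        """Print a file's contents."""
        with open(path) as f:
            print(f.read(), end="")

    def free():
        """Stand-in for a memory report; real hardware might read a register."""
        return "memory stats would go here"

    BUILTINS = {"ls": ls, "cat": cat, "free": free}

    while True:
        try:
            line = input("> ").strip()
        except EOFError:
            break
        if not line:
            continue
        name, *args = line.split()
        if name in BUILTINS:
            result = BUILTINS[name](*args)
            if result is not None:
                print(result)
        else:
            # Fall back to evaluating the line as ordinary Python.
            try:
                exec(line)
            except Exception as err:
                print("error:", err)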

Without the existing complexity there is the possibility of producing a good design, instead of everything being a compromised hack.

The ultimate result is that developers could once again build things while knowing perhaps 90% of how the system works, without needing to look up manuals or Google solutions, only to dump it all from their brains and move on to the next thing.

Addendum — Parallels to embedded computing trends

A lot of what I have mentioned has parallels to current trends in embedded computing.

For example, RISC-V chips aren’t broadly available yet, but you can easily flash an FPGA with a RISC-V core, and FPGAs are much more likely to be used in embedded projects.

The new Raspberry Pi Pico is only $4 and is designed to work with MicroPython, which is effectively its own OS (not Unix).

Examples of simplified peripherals are also common in the embedded world. The Raspberry Pi, for example, has a DSI port, which is a simplified display interface (simpler than HDMI).

Buttons for embedded projects are generally connected directly to GPIO pins (general-purpose input/output), so programs can just read a memory address to detect whether they have been pressed. No complex USB protocol required.
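For example, on a Raspberry Pi Pico running MicroPython, polling a button takes just a few lines. The pin number below is arbitrary: use whichever GPIO the button is actually wired to.

    # MicroPython sketch for a Raspberry Pi Pico: a button wired between a GPIO
    # pin and ground. Under the hood, value() is a memory-mapped register read.
    from machine import Pin
    import time

    button = Pin(14, Pin.IN, Pin.PULL_UP)  # internal pull-up: pressed reads as 0

    while True:
        if button.value() == 0:
            print("button pressed")
        time.sleep_ms(50)  # crude debounce / polling interval

Compare that with the USB keyboard sketch earlier: same job (did a human press a key?), a fraction of the machinery.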

So there are a lot of parallels between what is happening in the embedded world and my thoughts above. So much so that I wonder whether one day our desktop computers will migrate over to designs from the embedded world.

To a small extent this is already happening. The ARM architecture is simpler than x86, which matters for low power usage on mobile devices, and it’s why ARM dominates mobile phones while x86 is essentially absent from them. ARM progressed from feature phones, to smartphones, to tablets, to laptops, and now to desktop computers (e.g. Apple’s M1 CPU). Unfortunately, from a software perspective, developers have just put Unix on top of ARM and brought the rest of the software stack with it, which also means supporting all the existing complex hardware protocols.
