
In 2005, I saw a talk by Alan Kay. His big reveal in the middle was that the talk was using Squeak (bytecode-interpreted Smalltalk) rather than PowerPoint for his slides. He showed this off by dropping into an inspector and doing some live coding in the middle. But a slide a couple before that reveal had contained full-motion video (which was still pretty rare in slides back then). The video was MPEG-1 (so not the latest CODEC: MPEG-2 was feasible to decode on the CPU then, and MPEG-4 non-AVC was feasible with an optimised implementation). The CODEC was, itself, written in Smalltalk.

Computers are ludicrously fast now. Even the 'slow' Java implementations from the late '90s were an order of magnitude faster than CPython, and they are not that slow on modern hardware. A modern optimising JIT gains you another order of magnitude or so.

CHERI's capability model is not quite the shape of hardware capability systems from the '60s (different things got faster at different rates: now compute is almost free but non-local memory accesses are very expensive, whereas the converse was true back then), but the entire field was discarded for 20-30 years because RISC showed that you could make simpler computers fast, and that doing things in software on a fast-and-simple core outperformed doing them in a more complex implementation. Right up until you start to build complex out-of-order pipelines, at which point you realise that you have a lot of fixed overhead per instruction, and doing more work per instruction is where the big performance wins come from.
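To make that last point concrete, here's a minimal C sketch (my example, nothing to do with the talk; SIMD is just one instance of the 'more work per instruction' idea). The scalar loop pays the pipeline's fixed per-instruction costs (fetch, decode, rename, retire) once per element; the AVX version pays them once per eight elements.

```c
#include <immintrin.h>
#include <stddef.h>

/* Scalar sum: one add per instruction, so the fixed per-instruction
 * pipeline overhead is paid for every element. */
float sum_scalar(const float *a, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* AVX sum: eight adds per instruction, amortising the same fixed
 * per-instruction overhead across eight elements. Assumes n is a
 * multiple of 8 and an x86-64 target; compile with -mavx. */
float sum_avx(const float *a, size_t n) {
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));
    /* Horizontal reduction of the eight lanes. */
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3]
         + lanes[4] + lanes[5] + lanes[6] + lanes[7];
}
```

Both loops retire roughly the same number of instructions per iteration, but the second one does eight times the useful work per instruction, which is exactly the trade that complex pipelines reward.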