... From a development point of view.
I’ll admit that I don’t work in the games industry and have no game development experience, but I do have professional experience in embedded electronics, and the skills you learn there cross over into game console development, such as:
- Barebones “close to metal” programming
- Restricted environments where you are expected to achieve great things (I had to work on a microcontroller with a “measly” 512 bytes of RAM and about 10 whole kilobytes of ROM)
- Worrying about the smallest details, like the timing of signals and how long your routines take to complete to squeeze out maximum performance.
So what makes me think the Nintendo 64 is one of the greatest gaming devices of all time, from a developer’s perspective? Because a lot of techniques and solutions incorporated into the Nintendo 64 have been the basis for modern 3D gaming. The hardware itself has a nice list of major features:
- Trilinear mipmapping, the most often touted one
- Edge-based anti-aliasing (an idea that lives on today in FXAA and MLAA)
- Basic real-time lighting (which implies the N64 had a hardware T&L GPU)
- The one thing that stands out in my mind, though, when it comes to the Reality Coprocessor is that it’s probably one of the first fully programmable GPUs, if not the first. The processor ran on microcode, which developers could tweak to suit their needs. The problem was that Nintendo didn’t release the tools for writing custom microcode until late in the N64’s life span. But once they did, a few companies (notably Rare and Factor 5) pushed the system to its limits.
But here are some things developers made on the software end that paved the way for modern game engines.
- Level of detail. This is a trick where a model that is sufficiently far away gets swapped for a low-poly version. You can see it in action in this YouTube video.
- Smart use of clipping. Nintendo’s choice of cartridges, with their fast access times, had at least some merit here: sections of the game world that are not visible can simply not be rendered until the player gets very close to them. This YouTube video (you’ll have to skip ahead a bit) shows this off.
- Banjo-Kazooie had a novel way to push out large textures for detailed environments. One of the issues was memory fragmentation: even though enough memory was technically free, there wasn’t enough contiguous memory to store something in. So they ran a real-time memory defragmenter during gameplay.
- Texture streaming. Factor 5 did this (though I’m sure others did something similar) for Indiana Jones and the Infernal Machine. Streaming texture data into texture memory as it was needed let them overcome the 4 KB texture memory limit.
- Frame-buffer effects. These are used for things like motion blur, shadow mapping, “cloaking”, and something that amazes me for some reason or another: render to texture (which allows for live in-game TVs showing footage from the game world). For more examples of what devs did, there’s this article:
However, a lot of these software techniques were also implemented on the Sega Saturn and PlayStation. Still, the N64 had a lot going for it once the gloves were off.