Team Nutshell

Mistakes will be made

This article won’t be as technical as the others, but it still covers something fairly general in programming, especially for big software like game engines.

When we think about “mistakes” in development, we mostly think about bugs, but this article won’t talk about bugs in software. Here, “mistakes” refers to questionable choices of algorithms (ones that are too slow or consume too much memory) and of architecture.

When designing software, thinking about its architecture is the major first step of development, as this phase will determine how easy the software will be to work on and maintain. The bigger your project is, the more important its architecture is.

Maintenance cost is not the only thing impacted by this: the software’s architecture also has a great impact on the algorithms you will be able to use, as it determines what kind of data the part of the program you are currently working on has access to.

But even if you spend years on your software’s architecture (which is probably not a good idea, at least for your own sake), mistakes will happen, and you will be forced to refactor a part of your program, as you will definitely reach a point where either:

- the architecture itself no longer fits your needs and must be reworked, or
- a system or algorithm must be replaced by a better one.

The first case is the most complicated, as the architecture is part of the software’s core, and replacing it is sometimes considered impossible, or becomes a long-running issue that drastically slows down the development of other features. Solving it requires some big choices. One option is to work on the new architecture as a “side project” while still adding features to the “old” architecture, then merging the features from the “old” architecture branch into the new one when it is done (which brings its own conflicts and a new set of issues). This solution works when multiple developers are working on the same project, as one or more of them can be assigned to this huge task while the rest of the team keeps working on the current program. I work alone on my projects, so this solution is not really doable for me. So when an architecture rework is way too expensive in time, my choice is simply to ditch the project and start a new one, which is why ONIEngine’s development stopped so I could start working on NeigeEngine.

The second case is probably “easier” to manage than a complete architecture change but depends a lot on the scope of the system you want to replace. Sometimes, you just want to replace one algorithm with another, better one. This new algorithm can require external data that the previous one didn’t, and bringing that data from another system can be an architectural challenge. Sometimes you will want to replace a big part of the software, like the graphics engine in a game engine, which can be a huge issue, depending on how well your systems are isolated from each other and, when needed, how a system accesses information from another. This case is really common when working on game engines, as there are many ways to work on rendering, physics, audio, etc. I like to try new things, and being able to replace one system with another at virtually zero cost was one of the reasons NutshellEngine was made, and why it uses dynamic libraries (one for each system) instead of a single executable containing everything. While dynamic libraries can be a solution to this issue, by making the implementation of a system completely external to the main executable, they are definitely not the solution for all kinds of projects, and sometimes, a well-thought-out architecture can dramatically lower the refactoring cost.
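To make this concrete, here is a minimal sketch of what loading a system from a dynamic library can look like on Linux, using dlopen/dlsym. The GraphicsModuleInterface type and the createGraphicsModule entry point are hypothetical names made up for this example, not NutshellEngine’s actual API.

```cpp
// Minimal sketch (Linux, dlopen/dlsym) of swapping a system at runtime.
// GraphicsModuleInterface and createGraphicsModule are hypothetical names
// used for illustration, not NutshellEngine's actual API.
#include <dlfcn.h>
#include <iostream>

struct GraphicsModuleInterface {
    virtual ~GraphicsModuleInterface() = default;
    virtual void init() = 0;
    virtual void update(double dt) = 0;
    virtual void destroy() = 0;
};

int main() {
    // Which implementation gets loaded is just a file name; the executable
    // only ever talks to the interface above.
    void* library = dlopen("./module_graphics_vulkan.so", RTLD_LAZY);
    if (!library) {
        std::cerr << "Could not load module: " << dlerror() << "\n";
        return 1;
    }

    // The library exports a single factory function returning the interface.
    using CreateFunc = GraphicsModuleInterface* (*)();
    auto createGraphicsModule = reinterpret_cast<CreateFunc>(dlsym(library, "createGraphicsModule"));
    if (!createGraphicsModule) {
        std::cerr << "Missing entry point: " << dlerror() << "\n";
        return 1;
    }

    GraphicsModuleInterface* graphics = createGraphicsModule();
    graphics->init();
    graphics->update(0.016);
    graphics->destroy();
    delete graphics;

    dlclose(library);
    return 0;
}
```

The key point is that the executable only depends on the interface; which library file actually gets loaded is a runtime detail, so an implementation can be swapped without touching the rest of the program.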

Making mistakes in your architecture and algorithms is normal and will happen at some point in your project. Sometimes, it won’t even be your fault. For example, when the modern graphics APIs (Vulkan, Direct3D 12) arrived, they were so different from the “old” ones (OpenGL, Direct3D 11) that a complete architecture change in the graphics engine was required to use them efficiently.

So how did game engines manage this transition? Sometimes they didn’t, and still haven’t. Some game engines try to map the new APIs onto their current architecture, which was made for the “old” APIs. This choice brings a lot of issues, especially in terms of performance, as, for example, Vulkan and Direct3D 12 use graphics pipelines and render passes that must be created before they can be used for rendering, while OpenGL and Direct3D 11 don’t. Why did some game engines choose not to build a new architecture? Because they have been in development for years, and changing the graphics engine’s architecture now is no small task, especially as they probably want to support both the old and the new APIs, so completely removing the old system to make a new one is out of the question.
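To illustrate why this mapping is awkward, here is a deliberately simplified and entirely hypothetical sketch of the two styles; none of these types or calls are real OpenGL, Direct3D, or Vulkan API. With the “old” APIs, state can be decided right before drawing, while the “new” ones expect a full pipeline object to be built ahead of time.

```cpp
// Hypothetical, deliberately simplified types for illustration only;
// this is not real OpenGL, Direct3D, or Vulkan code.
#include <iostream>
#include <string>

struct Mesh { std::string shader; bool blending; };

// "Old" style (OpenGL / Direct3D 11-like): state is set right before drawing,
// so the engine can decide everything at draw time.
struct OldStyleRenderer {
    void setShader(const std::string& shader) { std::cout << "set shader " << shader << "\n"; }
    void setBlending(bool enabled) { std::cout << "set blending " << enabled << "\n"; }
    void draw(const Mesh&) { std::cout << "draw\n"; }
};

// "New" style (Vulkan / Direct3D 12-like): a whole pipeline object describing
// shaders, blending, etc. must be created up front, before any drawing.
struct Pipeline { std::string shader; bool blending; };
struct NewStyleDevice {
    Pipeline createPipeline(const Mesh& mesh) { return Pipeline{ mesh.shader, mesh.blending }; } // expensive, done at load time
};
struct NewStyleCommandBuffer {
    void bindPipeline(const Pipeline& p) { std::cout << "bind pipeline " << p.shader << "\n"; }
    void draw(const Mesh&) { std::cout << "draw\n"; }
};

int main() {
    Mesh mesh{ "pbr", true };

    OldStyleRenderer oldRenderer;
    oldRenderer.setShader(mesh.shader);   // decided at draw time
    oldRenderer.setBlending(mesh.blending);
    oldRenderer.draw(mesh);

    NewStyleDevice device;
    Pipeline pipeline = device.createPipeline(mesh); // decided ahead of time
    NewStyleCommandBuffer cmd;
    cmd.bindPipeline(pipeline);
    cmd.draw(mesh);
    return 0;
}
```

An architecture built around the first style tends to discover the needed state only at draw time, which is exactly when the second style wants everything to already exist.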

Knowing that mistakes will be made won’t make you avoid them, but taking it into account when designing your software will help you face them more efficiently, as you will already have an idea of the kind of issues you will meet when refactoring your systems. Isolating systems while still providing a way to access their data (via an interface, for example) is one way to design software that lowers the cost of completely reworking a system: as long as the new system still follows the interface, the other systems using it won’t change at all, since they just call functions from this interface. This is how NutshellEngine has been designed, and it works well for me at the moment, allowing me to create completely different graphics modules (and more) without ever touching another part of the program.
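As a small illustration of this idea (with made-up names, not NutshellEngine’s actual interface), the sketch below shows two completely different graphics implementations behind the same interface; the calling code never changes.

```cpp
// Sketch of interface-based isolation; all names are made up for this
// example and are not NutshellEngine's actual API.
#include <iostream>
#include <memory>

// The only thing the rest of the engine knows about graphics.
class GraphicsModule {
public:
    virtual ~GraphicsModule() = default;
    virtual void renderFrame() = 0;
};

// Two completely different implementations behind the same interface.
class VulkanGraphicsModule : public GraphicsModule {
public:
    void renderFrame() override { std::cout << "Rendering with Vulkan\n"; }
};

class SoftwareGraphicsModule : public GraphicsModule {
public:
    void renderFrame() override { std::cout << "Rendering in software\n"; }
};

// The game loop only ever talks to the interface, so swapping the
// implementation requires no change here.
void runOneFrame(GraphicsModule& graphics) {
    graphics.renderFrame();
}

int main() {
    std::unique_ptr<GraphicsModule> graphics = std::make_unique<VulkanGraphicsModule>();
    runOneFrame(*graphics);

    graphics = std::make_unique<SoftwareGraphicsModule>(); // new system, same callers
    runOneFrame(*graphics);
    return 0;
}
```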

Conclusion

Predicting that mistakes will be made is an important part of developing software, especially big projects, as it will allow you to architect it in a way that makes refactoring easier than if you hadn’t taken it into account. This way of thinking can save you hours of (terribly boring and repetitive) work, and sometimes even save your software’s development process entirely.