Mesh decimation done right

Olivier Giniaux
10 min read · May 6, 2021


Not a mesh decimation done right ;)

When we talk about 3D optimization, reducing the number of triangles in a triangle mesh is often the first thing that comes to mind. Whether it is a game developer who wants to optimize their game or a 3D artist who wants lighter models, the most common approach today is called Mesh Decimation.

What is Mesh Decimation?

Since we are talking about 3D here, we’ll be talking about decimation algorithms in the context of 3D meshes. Also, I’ll be talking about triangles and not n-gons, since triangles are what all mesh decimation algorithms work with today.

A Mesh Decimation algorithm is a piece of software that automatically reduces the number of triangles of a mesh, given some parameters. Usually, the main parameter is the number of triangles in the resulting mesh (an absolute count or a percentage of the initial triangle count).

Such algorithms can be implemented in various ways. First, we need a way to actually reduce the number of triangles.

Removing triangles

There are multiple solutions:

  • By removing vertices (points that connect triangles together) one by one
  • By collapsing edges
  • By collapsing entire triangles

Removing vertices consists of progressively untessellating the mesh. It is rather easy to implement, but comes with a major downside: as a vertex is removed, we do not compensate for the change in topology. A solution would be to alter the vertices adjacent to the removed vertex, but that would come at the cost of much greater complexity.

Collapsing entire triangles is a solution that I’ve never seen implemented. I listed it because it could work in theory, but it would be very destructive and provide little control over polygon count, since the removal of a single triangle could potentially imply many other topological changes.

Collapsing edges (also known as the “Edge Collapse method”) is the approach used by today’s best mesh decimation algorithms. When collapsing an edge, we merge its two vertices into a single vertex, which lets us place that new vertex at a location that compensates for the change in topology. This method also goes well with a half-edge mesh data structure, a way of organizing meshes in memory that allows easy traversal and connectivity queries.
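
To make this concrete, here is a minimal sketch of a single edge collapse on a plain indexed triangle list (no half-edge structure; all names are illustrative assumptions rather than any particular library’s API):

```csharp
using System.Collections.Generic;
using System.Numerics;

public static class EdgeCollapse
{
    // Collapses the edge (a, b): vertex b is merged into vertex a, which is
    // moved to 'newPosition'. Triangles that referenced both a and b become
    // degenerate and are removed. A half-edge structure would make the
    // adjacency bookkeeping cheaper; this is the brute-force version.
    public static void Collapse(List<Vector3> positions,
                                List<int[]> triangles,   // each entry is { i0, i1, i2 }
                                int a, int b,
                                Vector3 newPosition)
    {
        positions[a] = newPosition;

        for (int t = triangles.Count - 1; t >= 0; t--)
        {
            int[] tri = triangles[t];

            // Redirect references from b to a.
            for (int k = 0; k < 3; k++)
                if (tri[k] == b) tri[k] = a;

            // Drop triangles that collapsed to a line or a point.
            if (tri[0] == tri[1] || tri[1] == tri[2] || tri[2] == tri[0])
                triangles.RemoveAt(t);
        }
        // Vertex b is now unused; a real implementation would compact the
        // vertex list and remap indices afterwards.
    }
}
```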

But now that we have a way to remove triangles, which triangles shall we remove?

Well, this is the most complex part. Let’s say we use edge collapse to remove triangles: the edge collapse method takes an edge (obviously). The general idea is to sort edges from lowest to highest contribution to the overall shape and color of the object, and collapse the cheapest ones first.
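
Before looking at estimators, here is what that greedy loop typically looks like: score every edge, then repeatedly collapse the cheapest one. The sketch below is a skeleton under assumptions: IMesh, Edge and estimateCost are placeholders I made up for the example, and it relies on .NET 6’s PriorityQueue:

```csharp
using System;
using System.Collections.Generic;

// Placeholder edge and mesh abstractions, just enough to express the loop.
public readonly record struct Edge(int A, int B);

public interface IMesh
{
    IEnumerable<Edge> Edges { get; }
    int TriangleCount { get; }
    bool IsValid(Edge e);                   // has the edge survived earlier collapses?
    void Collapse(Edge e);
    IEnumerable<Edge> EdgesAround(Edge e);  // edges affected by a collapse of e
}

public static class Decimator
{
    // Greedy loop: collapse the cheapest edge until the triangle budget is
    // reached. The estimator is passed in, because choosing it is the hard
    // part discussed in the rest of this article.
    public static void Decimate(IMesh mesh, int targetTriangleCount,
                                Func<IMesh, Edge, double> estimateCost)
    {
        var queue = new PriorityQueue<Edge, double>();
        foreach (Edge e in mesh.Edges)
            queue.Enqueue(e, estimateCost(mesh, e));

        while (mesh.TriangleCount > targetTriangleCount && queue.Count > 0)
        {
            Edge e = queue.Dequeue();
            if (!mesh.IsValid(e)) continue;   // stale entry, skip it

            mesh.Collapse(e);

            // The collapse changed the local shape: re-score nearby edges.
            foreach (Edge neighbor in mesh.EdgesAround(e))
                queue.Enqueue(neighbor, estimateCost(mesh, neighbor));
        }
    }
}
```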

The hard part is: how do we quantify the contribution of an edge to the shape and color of an object?

If you spend some time thinking about this, you’ll probably realize that it is complex and that a heuristic approach is inevitable: unlike B-rep/NURBS 3D models, triangle models are already a geometric approximation of what the artist had in mind (unless we talk about cubes or tessellation by design, but for such cases decimating doesn’t really make sense).

Over the last four decades (since 3D games emerged), people have thought about this and come up with different ideas for building such an estimator. Let me share the common ones with you:

Distance-based

It’s probably the simplest estimator. The idea is to compute the maximum distance between the local shapes before and after the collapse. This method is somewhat limited. Let me show you an example: say we want to collapse edge BC, which solution do you choose?

As you can see, there are many ways to collapse an edge based on distance, yet none of them is guaranteed to be optimal; it’s just a guessing game.
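
To make this concrete, here is one very crude sketch of such a distance proxy. Everything about it is an illustrative assumption (in particular the choice of only measuring how far the two endpoints move), not how any particular tool computes it:

```csharp
using System;
using System.Numerics;

public static class DistanceEstimator
{
    // Crude stand-in for the "distance between local shapes before and
    // after the collapse" idea: how far do the two endpoints of the edge
    // move if we merge them at 'candidate'? A serious implementation would
    // measure distances between the old and new surfaces over the whole
    // neighborhood, not just at the endpoints.
    public static float Cost(Vector3 a, Vector3 b, Vector3 candidate)
    {
        return MathF.Max(Vector3.Distance(a, candidate),
                         Vector3.Distance(b, candidate));
    }
}
```

In practice you would evaluate such a cost for a few candidate positions (endpoint A, endpoint B, their midpoint, …) and keep the cheapest, which is exactly the guessing game described above.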

Angle-based

Another somewhat simple estimator. In the case of edge collapse, the idea is to sort edges by their fold angle, i.e. the angle between the two faces the edge connects. It works great for removing near-coplanar triangles, but it quickly becomes tricky when dealing with vertex normals, which may mark an edge as hard or smooth independently of its fold angle.
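
To make the fold-angle idea concrete, here is a small sketch that measures the angle between the normals of the two triangles sharing an edge; the parameter layout (the two triangles being (a, b, c1) and (a, c2, b), consistently wound) is an assumption for the example:

```csharp
using System;
using System.Numerics;

public static class AngleEstimator
{
    // Fold (dihedral) angle across the edge (a, b) shared by triangles
    // (a, b, c1) and (a, c2, b). An angle near 0 means the faces are
    // almost coplanar, so collapsing the edge is cheap. Note that this
    // ignores authored vertex normals entirely, which is exactly the
    // limitation mentioned above.
    public static float FoldAngleRadians(Vector3 a, Vector3 b,
                                         Vector3 c1, Vector3 c2)
    {
        Vector3 n1 = Vector3.Normalize(Vector3.Cross(b - a, c1 - a));
        Vector3 n2 = Vector3.Normalize(Vector3.Cross(c2 - a, b - a));
        float dot = Math.Clamp(Vector3.Dot(n1, n2), -1f, 1f);
        return MathF.Acos(dot);
    }
}
```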

Quadric error based

The quadric error is an estimator based on a quadric representation of the local surface. It uses matrices and has proven to be a rather precise estimator. It also has the benefit of allowing the computation of an optimal vertex position after the collapse (unlike manually choosing between preset positions). You can find the details in the paper “Surface Simplification Using Quadric Error Metrics” by Michael Garland & Paul S. Heckbert.
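
Here is a minimal sketch of the geometric core of that metric, leaving out the optimal-position solve and all attribute handling. The structure and names below are mine; only the math (plane quadrics summed per vertex, error evaluated in homogeneous coordinates) comes from the paper:

```csharp
using System.Numerics;

// Each plane p = (a, b, c, d), with ax + by + cz + d = 0, contributes the
// 4x4 matrix p * p^T. Quadrics are summed per vertex, and the error of
// placing a vertex at v is vQv with homogeneous v = (x, y, z, 1).
public struct Quadric
{
    public double[,] M; // symmetric 4x4 matrix, stored in full for clarity

    public static Quadric FromTrianglePlane(Vector3 p0, Vector3 p1, Vector3 p2)
    {
        Vector3 n = Vector3.Normalize(Vector3.Cross(p1 - p0, p2 - p0));
        double a = n.X, b = n.Y, c = n.Z;
        double d = -(a * p0.X + b * p0.Y + c * p0.Z);
        double[] p = { a, b, c, d };

        var q = new Quadric { M = new double[4, 4] };
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q.M[i, j] = p[i] * p[j];
        return q;
    }

    public static Quadric Add(Quadric q1, Quadric q2)
    {
        var q = new Quadric { M = new double[4, 4] };
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q.M[i, j] = q1.M[i, j] + q2.M[i, j];
        return q;
    }

    // Error of placing a vertex at position 'v'.
    public double Error(Vector3 v)
    {
        double[] h = { v.X, v.Y, v.Z, 1.0 };
        double e = 0.0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                e += h[i] * M[i, j] * h[j];
        return e;
    }
}
```

To score a collapse, sum the quadrics accumulated at the edge’s two endpoints and evaluate the error at the candidate position; the paper additionally shows how to solve a small linear system on that summed quadric to find the position that minimizes the error, instead of testing a few fixed candidates.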

There have been several variants of this approach that you should be able to find quite easily on the internet.

Today’s available solutions

There are mesh decimation algorithms in almost every polygon-based 3D modeling/editing software. Almost all use the quadric-based edge collapse method, with a few tweaks for each.

I can mention 3DS Max, Blender, Houdini, Simplygon, InstaLOD, Pixyz, Meshlab… Each has its own downsides and benefits. The fastest is usually Blender’s implementation, but it’s also probably the worst in terms of results. Pixyz’s implementation is, in my opinion, the best in quality, with better-than-average performance.

Why should we improve it?

Year after year, game development is becoming more and more accessible, with more and more tools available. Modeling is becoming easier with the lightning-fast development of Blender, level design is becoming easier with Unity, Godot, or Unreal, and game programming is becoming more accessible with higher-level languages and no-code tools (e.g. Unreal’s Blueprints, Unity’s recent acquisition of Bolt, …).

There is, however, one thing that remains as difficult as it has ever been: game optimization. Optimizing a game today requires a deep understanding of CPU/GPU communication, the rendering pipeline, rasterization theory, and much more.

I won’t go through all means of optimization here, but a very common way of reducing GPU load is to reduce the number of triangles drawn on screen, since it lowers the number of computations to be done during rasterization. As said previously, decimating meshes is a way to achieve this goal, whether for creating better-adapted meshes or LODs (Levels of Detail).

This is why a good decimation algorithm is key to making optimization easier. It is not something you want to do or tweak manually: it should yield the best results on the first try.

I personally believe decimation is something we could use much, much more often. For example, decimation could be used instead of manual retopology. I believe it can do this job better than any human, but it needs to be better than what we have today (some CG artists will hate me for this, but man, I’m doing my best to ease your life).

Building a better mesh decimation algorithm

In order to make a great decimation algorithm, we need to understand the issues with existing ones. I won’t point at specific software, because there are so many technical subtleties that I’d inevitably get something wrong at some point. Also, software updates will come more often than I’d be able to update this post.

Let’s go through a list of things that, in my opinion, a perfect decimation algorithm should have. You’ll probably notice that some points are rarely, if ever, fulfilled by today’s implementations:

  • Handling of normals
    Taking normals into consideration in the decimation process is quite tricky, but unfortunately mandatory these days, since everybody uses vertex normals. Very few papers talk about this, and those that do only cover basic triangle removal.
  • Handling of textures
    As with normals, textures are applied to models 99% of the time in today’s games. This usually implies UV coordinates stored alongside the vertices.
  • Handling of animations
    This only concerns rigged models, but it has its own importance, since rigged models usually eat a lot of the triangle budget because of the detail put into character modeling, with all the intricacies it implies (hair, clothes, fingers, …).
    When decimating, we usually do not touch the animation itself (the motion applied to the rig) but rather which bone of the rig each vertex is assigned to. Even the best algorithms are often limited in this regard, since it’s all about making complex compromises, and a bad assignment can cost a lot in terms of visual error (e.g. a hand close to a leg, where during decimation a finger vertex gets reassigned to a leg bone… I’ll let you imagine the mess when the character starts walking 🤡)
  • Handling borders
    Almost all production-ready implementations take borders into consideration; however, not all of them do a great job of estimating the importance of a given border, mainly because there are several types of borders.
  • Support for additional vertex attributes
    We talked about normals, UVs, bone weights, … but there are in fact many more possible vertex attributes. Actually, the set is virtually infinite, since this is just user data. Yet all implementations are limited to a handful of possible attributes.
  • Managing instances
    When decimating a model that contains instanced meshes (a single mesh used multiple times in the model at different locations, e.g. a screw), you want to decimate that mesh only once, not once per instance, otherwise you will break instancing and prevent rendering optimizations such as GPU instancing.
  • Global estimator
    Probably the most overlooked feature, but still a very important one. When decimating a group of meshes to 50%, a naive implementation would decimate each mesh to 50%, which is rarely what you want!
    Imagine you decimate a sphere and a cube at the same time, to 50%. Removing triangles from the cube would completely destroy it, while we can easily remove triangles from the sphere. In this case, we want to remove 0% of the cube’s triangles and more than 50% of the sphere’s.
    To achieve this, we need to sort the edges to collapse globally instead of per mesh (see the sketch after this list).
  • Fine-tuning of the resulting topology
    Under certain circumstances, you want to enforce a minimum angle in triangles to avoid lighting artifacts, or constrain edge lengths to avoid problems while animating, or use decimation as a retopology tool.
  • Controlling with a quality criterion
    If you have ever decimated a model using any software, there is a high chance you were asked either for a number of triangles to remove (or to keep) or for a ratio of triangles to remove (or to keep). This is so widespread that what I will tell you may sound wrong, but please take some time to think about it:
    Decimating to a target is, most of the time, not what you want to do.
    Here’s why: you were never actually required to hit a fixed, precise number of triangles. Why would you need to go from exactly 4000 to 2000 triangles, for instance? What really matters is keeping enough quality for the model to look good in the context in which you’ll use it.
    Let’s imagine you want to decimate an architectural model made of nothing but straight walls: decimating to 50% of the triangles will completely kill the model. If you used a quality target instead, it would simply not decimate further, because the model actually can’t go lower without making your 3D scene ugly.
    This is why, in most cases, a decimation algorithm should be controlled with a quality criterion instead of a triangle count target (the sketch after this list stops on such a quality budget).
  • A good heuristic estimation
    Quadric error is one thing, but it is not enough to take all attribute types and topological subtleties into account. This is actually a broad topic, and there isn’t a single right way to do it. On my side, I think machine learning could really shine here.
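
As an illustration of the global estimator and the quality criterion, here is a sketch that shares one priority queue across all the meshes of a scene and stops on an error budget rather than on a triangle count. It reuses the placeholder IMesh and Edge types from the earlier loop sketch, and everything else (names, the exact stopping rule) is an assumption for the example, not a description of any existing tool:

```csharp
using System;
using System.Collections.Generic;

// One queue shared by every mesh in the scene, stopping on a quality
// budget: flat boxes stay untouched while dense spheres give up most of
// their triangles. Reuses the IMesh / Edge placeholders defined earlier.
public static class GlobalDecimator
{
    public static void Decimate(IReadOnlyList<IMesh> meshes, double maxError,
                                Func<IMesh, Edge, double> estimateCost)
    {
        var queue = new PriorityQueue<(IMesh Mesh, Edge Edge), double>();
        foreach (IMesh mesh in meshes)
            foreach (Edge e in mesh.Edges)
                queue.Enqueue((mesh, e), estimateCost(mesh, e));

        while (queue.TryDequeue(out var item, out double cost))
        {
            if (cost > maxError) break;                  // quality budget reached
            if (!item.Mesh.IsValid(item.Edge)) continue; // stale entry, skip it

            item.Mesh.Collapse(item.Edge);

            // Re-score the edges affected by the collapse.
            foreach (Edge neighbor in item.Mesh.EdgesAround(item.Edge))
                queue.Enqueue((item.Mesh, neighbor),
                              estimateCost(item.Mesh, neighbor));
        }
    }
}
```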

What about performance?

Performance is something I often hear about when discussing decimation. But let’s be honest, it is not really that important. Why? Because decimating a model is usually an offline process, not something done during gameplay, for example.

I simply think it should be quick enough not to break an artist’s or game developer’s workflow by forcing a context switch. Personally, I switch context after something like 10 seconds (I am not very patient 😅), which is already quite a lot for a decimation process to run.

Also, what matters is the algorithmic complexity of the implementation: it should not be something like O(n³), or it will simply never finish when decimating large models.

What costs the most in the decimation process is computing and re-computing the estimator, and keeping the list of edges to collapse sorted. If performance is a feature, trade-offs can be made by sorting less often, making the estimator less strict, or using more heuristics.

After the theory

So we went through a list of things that would make a decimation algorithm near perfect. What now?

Let’s build it! 🛠

I won’t go through every implementation detail, but I am writing another blog post that explains the data structure I chose.

Also, I chose C# for the implementation. Usually, people use C or C++ for this kind of thing, but I am used to C# and I think it is a much better language to work with. It also makes it possible to do quite a lot of low-level work with some of the features it offers. I also thought of using Rust, but I am not familiar enough with it. Anyway, any language should do; in the end it is just a matter of context and performance.

You’ll find the open-source mesh decimation project, where I tried to implement everything described in this article, here:
https://github.com/nanolabo/nanomesh

Thanks for reading! I hope you found this article interesting. Don’t hesitate to share your opinion below 🤗
