Appearance-Driven Automatic 3D Model Simplification

Created on 2023-02-05T02:26:37-06:00

This card pertains to a resource available on the internet.

Not a standard mesh optimization tool.

Requires a differentiable "reference renderer" that takes a model, a camera pivot, and a light position. For each sample, the provided model is rendered alongside the original target mesh with identical lighting and orientation. The difference between the two renders is the "image loss", and the model's parameters are optimized by gradient descent, as though training a pseudo neural network, to drive this "error" toward zero.
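
A minimal sketch of that loop, with a toy differentiable "renderer" standing in for the real one (the actual code builds on a differentiable rasterizer; every name and number below is illustrative, not the paper's):

```python
import torch

def render(verts: torch.Tensor, view: torch.Tensor, res: int = 32) -> torch.Tensor:
    # Toy stand-in for the differentiable reference renderer: rotate the
    # vertices into the view and splat them onto a res x res image with
    # Gaussian weights. Any differentiable image-former slots in the same way.
    cam = verts @ view                                     # (V, 3) view-space positions
    xy = cam[:, :2]                                        # orthographic projection
    ys, xs = torch.meshgrid(torch.linspace(-2, 2, res),
                            torch.linspace(-2, 2, res), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                   # (res, res, 2)
    d2 = ((grid[None] - xy[:, None, None]) ** 2).sum(-1)   # (V, res, res)
    return torch.exp(-d2 / 0.05).sum(0)                    # soft coverage "image"

torch.manual_seed(0)
target_verts = torch.randn(500, 3)                         # detailed original (fixed)
cand_verts = torch.randn(100, 3, requires_grad=True)       # simplified candidate
opt = torch.optim.Adam([cand_verts], lr=1e-2)

for step in range(200):
    view, _ = torch.linalg.qr(torch.randn(3, 3))           # random orientation,
    ref = render(target_verts, view)                       # shared by both renders
    img = render(cand_verts, view)
    loss = ((img - ref) ** 2).mean()                       # the "image loss"
    opt.zero_grad()
    loss.backward()
    opt.step()
```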

Features that the optimizer's shading model does not represent, such as ambient occlusion, show up as residual error, so it will try to bake those features into the model as well.

Source code follows http://www.mikktspace.com/ conventions for normal maps.
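
The load-bearing part of that convention is that the bitangent is never stored: it is reconstructed at shading time as sign * cross(normal, tangent), with the sign carried alongside the tangent. A small sketch of decoding a sampled tangent-space normal under that convention (the function and argument names are mine):

```python
import numpy as np

def decode_normal(n: np.ndarray, t: np.ndarray, sign: float,
                  sample: np.ndarray) -> np.ndarray:
    # n: interpolated vertex normal, t: interpolated tangent (both unit),
    # sign: the +1/-1 handedness stored with the tangent,
    # sample: normal-map texel already remapped from [0, 1] to [-1, 1].
    b = sign * np.cross(n, t)                  # reconstructed bitangent
    world = sample[0] * t + sample[1] * b + sample[2] * n
    return world / np.linalg.norm(world)
```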

Speed

Very slow. A single run takes "multiple hours" on an "NVidia V100", a roughly $20,000 video card. A cursory glance at the source code suggests it is research code, though: I didn't see any attempts to optimize the run times.

Not sure if this can be bailed out with the use of, e.g., butterfly networks. In theory some of these primitives should ultimately be linear maps, and so we should be able to evolve a butterfly network that implements those operations at O(n log n) cost instead of O(n^2).
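
For concreteness, a minimal butterfly layer of the kind such a network would stack: log2(n) sparse factors of learnable 2x2 blocks in place of one dense n x n matrix. This is a generic sketch of the structure, not anything from the paper; shapes and initialization are my own guesses:

```python
import torch
import torch.nn as nn

class Butterfly(nn.Module):
    # A butterfly factorization applies log2(n) sparse factors, each made
    # of independent learnable 2x2 blocks, giving an O(n log n) linear map.
    def __init__(self, n: int):
        super().__init__()
        assert n & (n - 1) == 0, "n must be a power of two"
        self.n = n
        self.log_n = n.bit_length() - 1
        # One 2x2 block per element pair per level.
        self.blocks = nn.Parameter(torch.randn(self.log_n, n // 2, 2, 2) * 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.reshape(-1, self.n)
        for level in range(self.log_n):
            stride = 1 << level                  # pair index i with i + stride
            groups = self.n // (2 * stride)
            w = self.blocks[level].view(groups, stride, 2, 2)
            y = y.view(-1, groups, 2, stride)
            a, b = y[:, :, 0, :], y[:, :, 1, :]  # the two halves of each pair
            y = torch.stack((w[..., 0, 0] * a + w[..., 0, 1] * b,
                             w[..., 1, 0] * a + w[..., 1, 1] * b), dim=2)
        return y.reshape(*x.shape[:-1], self.n)

layer = Butterfly(256)
out = layer(torch.randn(8, 256))                 # (8, 256), O(n log n) work
```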

Not sure if complementary sparsity would help here. There IS some inherent sparsity: whatever cannot be seen from the example shot receives no gradient from the image loss, so it does not need to take part in that optimization step. In theory only the visible subset needs to be updated at any given moment. Whether this is easy to do in practice remains to be seen.
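
A sketch of exploiting that, assuming the rasterizer can report which vertex indices a given view actually touched (the visible_ids input and everything else here is hypothetical):

```python
import torch

verts = torch.randn(10_000, 3, requires_grad=True)  # candidate mesh vertices
opt = torch.optim.Adam([verts], lr=1e-3)

def masked_step(loss: torch.Tensor, visible_ids: torch.Tensor) -> None:
    # Zero the gradient of every vertex the current view cannot see,
    # so only the visible subset moves in this update.
    opt.zero_grad()
    loss.backward()
    mask = torch.zeros(verts.shape[0], dtype=torch.bool)
    mask[visible_ids] = True
    verts.grad[~mask] = 0.0
    opt.step()
```

One caveat: with Adam, stale momentum still nudges hidden vertices a little even after their gradients are zeroed; plain SGD would make the masking exact.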