Now, there are some notable shortcomings in this example. The main one, as mentioned before, is that it takes quite a long time to execute. Because we loop through our entire list of Gaussian blobs once for every pixel being calculated, at every iteration, it takes about 40 minutes (for me, on a system with a six-year-old graphics card) for all 10,000 iterations to complete. And this is with a very small number of blobs; I limited the number of blobs used to generate the image to 200, because going beyond that point starts to hang my GPU. And because of the small number of blobs, you can see that the image is pretty fuzzy. We could counter this with more, smaller blobs, but doing that will require some clever changes to improve execution speed. Thankfully, this is exactly the sort of work that GPUs are good at! And now that we've got the hang of how gradient descent and Gaussian splatting work, we can dive into the optimization work in a follow-on blog post.
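To see why this scales so badly, here's a minimal sketch (in Python, not the actual renderer) of the per-pixel evaluation pattern described above; the blob layout and parameter names are made up purely for illustration.

```python
import numpy as np

def render_naive(blobs, width, height):
    """Illustrative sketch only: evaluate every Gaussian blob at every pixel.

    `blobs` is assumed to be a list of (cx, cy, sigma, color, alpha) tuples;
    the real implementation's data layout will differ.
    """
    image = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            for cx, cy, sigma, color, alpha in blobs:
                # Each pixel touches every blob, so one frame costs
                # O(width * height * num_blobs) Gaussian evaluations,
                # and training repeats this every iteration.
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                weight = alpha * np.exp(-d2 / (2.0 * sigma ** 2))
                image[y, x] += weight * np.asarray(color)
    return np.clip(image, 0.0, 1.0)
```

The fix isn't to make each Gaussian evaluation faster so much as to avoid doing most of them at all (only evaluating blobs near the pixels they actually affect), which is the kind of restructuring the follow-on post will dig into.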