Compressing gaussian splats #309
Conversation
As an FYI
Did some cleanup and put up a doc page for it (
tests/test_compression.py
| "quats": torch.randn(N, 4), | ||
| "opacities": torch.randn(N), | ||
| "sh0": torch.randn(N, 1, 3), | ||
| "sh1": torch.randn(N, 24, 3), |
Typo. shN?
fixed!
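For reference, the corrected fixture would look something like this (a sketch only; `N`, the `means`/`scales` entries, and the exact shapes are assumptions extrapolated from the reviewed diff):

```python
import torch

N = 1000  # number of test splats; value chosen only for illustration
splats = {
    "means": torch.randn(N, 3),
    "scales": torch.randn(N, 3),
    "quats": torch.randn(N, 4),
    "opacities": torch.randn(N),
    "sh0": torch.randn(N, 1, 3),
    "shN": torch.randn(N, 24, 3),  # was "sh1" in the reviewed diff; renamed per the review
}
```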
Great to see Self-Organizing-Gaussians being used!
FYI: I was messing around with the code with Claude, and it seems to have inadvertently caught and fixed a bug related to the compression in simple_trainer.py. It looks correct to me at a glance, but since I'm not well versed in the code I don't want to open an issue/PR for an accidental Claude change that I haven't looked into. I figured I'd just put it here instead so you can judge it yourself.
Hello, sorry for being late to the party. I have used this method; I must decompress the result so I can convert it to PLY, but unfortunately the size of the PLY remains the same. Is there any tutorial for getting a compressed PLY with this method?
Yeah, I got this error too. Somehow it does not error in my setup with 1 GPU, but with more than 1 GPU it causes an error. The correct code should be
Hi @jefequien, is there any way to load a checkpoint from Nerfstudio and run just the compression step, instead of training from scratch in gsplat?
@Ben-Mack Assuming Nerfstudio uses the same .pt file as gsplat, it should be simple. Just use simple_trainer.py with --ckpt
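Roughly, that could look like the sketch below, assuming the checkpoint stores its parameters under a `splats` key and that a `PngCompression` class with `compress`/`decompress` methods is importable from `gsplat.compression`; the paths are placeholders and Nerfstudio's checkpoint layout is not verified here:

```python
import torch
from gsplat.compression import PngCompression  # assumed import path

# Load an existing checkpoint instead of training from scratch.
ckpt = torch.load("path/to/ckpt.pt", map_location="cuda")
splats = ckpt["splats"]  # assumed: dict with means, quats, scales, opacities, sh0, shN

# Run only the compression step and write the compressed artifacts to disk.
compression = PngCompression()
compression.compress("path/to/compress_dir", splats)

# Decompressing returns the splats as a dict of tensors again.
splats_back = compression.decompress("path/to/compress_dir")
```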



Thanks largely to MCMC's improved densification strategy, a simple post-training compression scheme is enough to top the 3D Gaussian Splatting Compression Methods leaderboard.
This PR implements quantization, sorting, and K-means clustering of spherical harmonic coefficients for compression.
Relevant links:
https://github.com/fraunhoferhhi/Self-Organizing-Gaussians
https://aras-p.info/blog/2023/09/27/Making-Gaussian-Splats-more-smaller/
Results
Evaluated on the MipNeRF360 dataset.
Note: Using K-means centroids and labels to initialize a codebook sets the high score for a given file size, at the cost of another training run. This is not implemented in the PR.
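Not implemented here, but a sketch of what that codebook initialization could look like (the sizes, names, and random stand-in tensors below are assumptions for illustration):

```python
import torch
import torch.nn as nn

K = 2**16        # number of K-means clusters, as in the PR
D = 24 * 3       # flattened shN dimensionality (illustrative)
N = 100_000      # number of splats (illustrative)

centroids = torch.randn(K, D)        # stand-in for the K-means centroids
labels = torch.randint(0, K, (N,))   # stand-in for the per-splat labels

# Warm-start a learnable codebook from the K-means result, then fine-tune it
# together with the rest of the model in another training run.
codebook = nn.Embedding(K, D)
with torch.no_grad():
    codebook.weight.copy_(centroids)

shN = codebook(labels).reshape(N, 24, 3)  # differentiable lookup; gradients flow
                                          # back into the codebook entries
```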
Data format
This PR quantizes `means` to 16 bits; `opacities`, `quats`, `scales`, and `sh0` to 8 bits; and `shN` to 6 bits. Sorting these fields with PLAS and saving them as PNGs saves a few MBs and generates nice visuals. `means` are quantized in log space. `shN` coefficients are clustered with K-means into 2**16 clusters, and their centroids and labels are saved to a compressed npz file.
Bonsai scene's quantized and sorted `sh0` PNG:

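A minimal sketch of the quantization described above; the per-channel min-max normalization, the symmetric log transform, and the use of sklearn's K-means are assumptions for illustration and may differ from the PR's actual implementation:

```python
import torch
from sklearn.cluster import KMeans

def quantize(x: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Uniformly quantize to n_bits after per-channel min-max normalization."""
    mins = x.min(dim=0).values
    maxs = x.max(dim=0).values
    x01 = (x - mins) / (maxs - mins + 1e-12)   # map to [0, 1]
    return torch.round(x01 * (2**n_bits - 1))  # integer levels

N = 10_000
means = torch.randn(N, 3)
scales = torch.randn(N, 3)
shN = torch.randn(N, 24, 3)

# means: quantized to 16 bits in log space (a symmetric log keeps the sign);
# a 16-bit field can be stored as two 8-bit PNG channels (upper/lower bytes).
means_log = torch.sign(means) * torch.log1p(means.abs())
means_q = quantize(means_log, 16)

# opacities, quats, scales, sh0: 8 bits (scales shown as a representative field).
scales_q = quantize(scales, 8)

# shN: K-means over the flattened coefficients; centroids and labels go into a
# compressed .npz. The PR uses 2**16 clusters; a small count keeps this sketch fast.
flat = shN.reshape(N, -1).numpy()
km = KMeans(n_clusters=256).fit(flat)
centroids, labels = km.cluster_centers_, km.labels_
```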