I think I found a method to dramatically increase performance on low-end smartphones (like mine…): separate sampling from processing.
I think that currently part of the processing is performed in real time while capturing data, but on low-end smartphones this leads to dropped frames and lost sensor data. If there were an option to separate first-stage processing from scanning, the full processing power could be dedicated to recording frames, accelerometer data, and gyroscope data, preventing any data loss.
Once the data are collected, first-stage processing can start to build the point cloud.
Then second-stage processing can run to calculate the splats.
(I still have to understand whether third-stage processing is VR optimization or something else.)
This would also allow choosing the splat/mesh processing type after scanning, rather than in advance.
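To illustrate what I mean, here is a minimal sketch of the decoupled pipeline in Python. Everything here is an assumption for illustration: I don't know the app's real internals, and `build_point_cloud` / `fit_splats` are hypothetical placeholders standing in for the actual first- and second-stage processing. The point is only the structure: the capture stage does nothing but enqueue raw samples, and all reconstruction work runs afterwards.

```python
import queue
import threading

# Hypothetical placeholders for the app's real processing stages.
def build_point_cloud(recording):
    """Stage 1 (placeholder): turn raw frames + IMU data into points."""
    return [("point", frame["id"]) for frame in recording]

def fit_splats(point_cloud):
    """Stage 2 (placeholder): fit splats to the point cloud."""
    return [("splat", point_id) for _, point_id in point_cloud]

def capture(raw_queue, frames):
    """Capture stage: only record data, no reconstruction work,
    so the device's full budget goes to keeping every frame."""
    for frame in frames:
        raw_queue.put(frame)   # cheap append; on a phone, a disk write
    raw_queue.put(None)        # end-of-capture sentinel

def process(raw_queue):
    """Deferred processing: runs only after scanning has finished."""
    recording = []
    while (item := raw_queue.get()) is not None:
        recording.append(item)
    return fit_splats(build_point_cloud(recording))

if __name__ == "__main__":
    q = queue.Queue()
    # Fake recording: 5 frames, each with image bytes and an IMU sample.
    frames = [{"id": i, "image": b"...", "imu": (0.0, 0.0, 9.8)}
              for i in range(5)]
    t = threading.Thread(target=capture, args=(q, frames))
    t.start()
    t.join()          # scanning finished; nothing was dropped
    print(process(q)) # all 5 frames survive into the splat stage
```

The design choice is just producer/consumer with the consumer deferred: since `capture` never competes with reconstruction for CPU time, a slow device can still keep up with the sensor rate.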