This repository contains an implementation of the paper "Phase Transitions, Distance Functions, and Implicit Neural Representations". The project focuses on 3D mesh reconstruction from point clouds using neural networks. Unlike explicit geometric representations, Implicit Neural Representations (INRs) learn a continuous function that maps 3D coordinates to a scalar value (such as a signed distance), from which the surface is extracted as a level set.
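The coordinate-to-scalar mapping can be sketched as a small MLP. This is a minimal illustration in numpy with randomly initialised (untrained) weights, not the architecture used in this repository; the layer sizes and activation are assumptions.

```python
import numpy as np

def make_mlp(layer_sizes, seed=0):
    """Randomly initialise a small MLP; weights are placeholders, not trained."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def inr_forward(params, coords):
    """Map (N, 3) coordinates to (N,) scalar values, e.g. signed distances."""
    x = coords
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)        # smooth activation, common in SDF-style INRs
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)    # one scalar per input point

params = make_mlp([3, 64, 64, 1])
points = np.random.default_rng(1).uniform(-1, 1, size=(10, 3))
values = inr_forward(params, points)  # shape (10,)
```

The surface would then be the zero level set of this function, extracted with an algorithm such as marching cubes.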
The implementation was tested on standard 3D models, including the Stanford Armadillo, Bunny, and Dragon.
- Variation with Point Density: Experiments compared input sizes of 10,000 versus 50,000 points. Denser point clouds consistently produced better 3D meshes.
- Impact of Surface Normals: Including normal vectors gave the model additional geometric information, leading to better reconstruction quality, most noticeably on the Bunny model.
- Fourier Encoding: Adding Fourier encoding allowed the model to capture finer details, such as the eyes, nose, and inner ear cavities of the Armadillo model. However, this encoding introduced training instability in some cases.
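The Fourier encoding mentioned above can be sketched as follows: each input coordinate is mapped through sines and cosines at geometrically spaced frequencies before entering the network. The frequency schedule (`2^k * pi`) and the number of frequencies here are illustrative assumptions, not necessarily the settings used in this repository.

```python
import numpy as np

def fourier_encode(coords, num_freqs=6):
    """Encode (N, 3) coordinates with sin/cos features at 2^k * pi frequencies."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi          # (num_freqs,)
    scaled = coords[:, :, None] * freqs                  # (N, 3, num_freqs)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(coords.shape[0], -1)              # (N, 3 * 2 * num_freqs)

pts = np.zeros((4, 3))
feat = fourier_encode(pts)   # shape (4, 36)
```

Higher frequencies let the network represent sharper geometric detail, which is consistent with the finer features observed on the Armadillo, at the cost of a harder optimisation landscape.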
Reconstruction quality was measured using Chamfer Distance. Below is a summary of the results:
| Object | Configuration | Chamfer Distance |
|---|---|---|
| Bunny | 10,000 Points (w/ Normals) | 65.358 |
| Armadillo | 10,000 Points | 358.787 |
| Armadillo | 50,000 Points | 182.179 |
| Armadillo | 50,000 Points (w/ Normals) | 196.260 |
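For reference, a symmetric Chamfer distance between two point sets can be computed as below. The exact convention (squared vs. unsquared distances, sum vs. mean) is an assumption and may differ from the one used to produce the table above, so absolute values are not directly comparable.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3).

    Mean squared nearest-neighbour distance in both directions, summed.
    """
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, a))  # identical sets -> 0.0
```

Lower values indicate that the reconstructed surface samples lie closer to the ground-truth point cloud.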
During the implementation, several challenges were identified:
- Large Point Clouds: Mesh generation degraded for larger, more complex point clouds such as the Dragon, often failing to produce a usable mesh at all.
- Training Instability: Large variance in coordinate values caused instability during training. A proposed fix is to normalize coordinates to the [0, 1] range.
- Batch Size: Due to compute limitations, a batch size of 3,000 was used instead of the recommended 15,000. Increasing the batch size in future runs should improve generalization.
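The normalization proposed above can be sketched in a few lines. Using a single scale factor for all three axes (an assumption; per-axis scaling is also possible) preserves the shape's proportions while mapping it into the unit cube.

```python
import numpy as np

def normalize_unit_cube(points):
    """Rescale an (N, 3) point cloud into [0, 1]^3, preserving aspect ratio."""
    mins = points.min(axis=0)
    extent = (points.max(axis=0) - mins).max()  # one scale keeps proportions
    return (points - mins) / extent

pts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 1.0]])
norm = normalize_unit_cube(pts)   # all coordinates now lie in [0, 1]
```

Normalizing inputs this way bounds the coordinate variance the network sees, which should mitigate the training instability noted above.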


