This repository implements the method from our paper titled "Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds" by Yujia Liu, Anton Obukhov, Jan Dirk Wegner, and Konrad Schindler.
As shown in the figure above, it takes the raw point cloud of a CAD model scan and reconstructs its surfaces, edges, and corners.
Explore selected models from the ABC CAD dataset, showcasing reconstructions by our method and competing approaches, on the project page:
To reconstruct your own CAD model, use Colab or your local environment as described below.
To process the CAD models from the assets folder, clone the repository and run the command below in the repository root.
The process finishes in less than 5 minutes on a machine with a GPU; running without a GPU is also feasible.
Inspect results in the `out` directory.
```shell
docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" \
    -v .:/work/point2cad toshas/point2cad:v1 python -m point2cad.main
```
Colab eliminates the need to run the application locally and use Docker. However, it may be slower due to the time taken to build the dependencies. Unlike the dockerized environment, the Colab functionality is not guaranteed. Click the badge to start:
If you want to run the process on your own point clouds, add the `--help` option to learn how to specify the input file path and the output directory path.
Note that in the dockerized runtime, both paths must be located under the repository root, since only that directory is mounted into the container.
The code has many native dependencies, including PyMesh. To build from source and prepare a development environment, clone the repository and run the following command:
```shell
cd build && sh docker_build.sh
```
Then simply run from the repository root:
```shell
docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" \
    -v .:/work/point2cad point2cad python -m point2cad.main
```
If Docker is unavailable, refer to the PyMesh installation guide to build the environment from source, or follow the steps from the Dockerfile or the Colab installation script.
CAD model reconstruction from a point cloud consists of two steps: annotating the point cloud with surface clusters (achieved by ParseNet, HPNet, etc.), and reconstructing the surfaces and topology.
Pretrained ParseNet models can be found here: for input points with normals and for input points without normals. If these links do not work, please use the weights in `point2cad/logs`. To utilize ParseNet, place the script `point2cad/generate_segmentation.py` in the ParseNet repository and execute it there.
This code focuses on the second part (views 3, 4, 5 from the teaser figure above) and requires the input point cloud in the `(x, y, z, s)` format, where each 3D point with `x`, `y`, `z` coordinates is annotated with the surface id `s`, such as the example in the assets folder.
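For reference, here is a minimal sketch (not part of the repository) of how such an annotated point cloud could be assembled and split with NumPy, assuming a plain array with one `x y z s` row per point; the values below are made up for illustration:

```python
import numpy as np

# Hypothetical example: four points lying on two surfaces (ids 0 and 1).
points = np.array([
    [0.0, 0.0, 0.0, 0],
    [1.0, 0.0, 0.0, 0],
    [0.0, 1.0, 1.0, 1],
    [1.0, 1.0, 1.0, 1],
], dtype=np.float64)

xyz = points[:, :3]                # 3D coordinates of each point
labels = points[:, 3].astype(int)  # per-point surface id s

print(sorted(set(labels.tolist())))  # distinct surface ids: [0, 1]
```

Such an array can be saved to a whitespace-separated text file with `np.savetxt` and loaded back with `np.loadtxt`.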
The process stores the following artifacts in the output directory (`out` by default):
- `unclipped`: unclipped surfaces ready for pairwise intersection;
- `clipped`: reconstructed surfaces after clipping the margins;
- `topo`: topology, i.e., reconstructed edges and corners.
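These artifacts can also be inspected programmatically. A minimal sketch, assuming only the three subdirectory names above (the file names and formats inside them are not guaranteed here):

```python
from pathlib import Path

def list_artifacts(out_dir="out"):
    """Group the files found in each output subdirectory by pipeline stage.

    Returns a dict mapping stage name -> sorted list of file names,
    skipping stages whose subdirectory does not exist.
    """
    stages = ("unclipped", "clipped", "topo")
    return {
        stage: sorted(p.name for p in Path(out_dir, stage).glob("*"))
        for stage in stages
        if Path(out_dir, stage).is_dir()
    }
```

Calling `list_artifacts()` after a reconstruction run gives a quick overview of which surfaces and topology files were produced.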
This software is released under a CC-BY-NC 4.0 license, which allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary here.