Transform a single image into multiple editable 3D part meshes. PartCrafter rethinks 3D modeling with compositional latent diffusion transformers, giving you part-level control over every generated mesh.
34-Second Generation Time
4-16 Editable Meshes
0.7472 F-Score (SOTA)
Get PartCrafter running in minutes with these simple steps
git clone https://github.com/wgsxm/PartCrafter.git
Clone the repository from GitHub
pip install -r requirements.txt
Install Python packages and CUDA dependencies
python download_models.py
Download pre-trained checkpoint files
python generate.py --input image.jpg
Generate 3D meshes from your image
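If you want to run step 4 over a whole folder of images, a thin wrapper around the CLI above is enough. This is a minimal sketch: only the `generate.py --input` invocation comes from the quick start; the folder layout and file extensions are assumptions, and any additional flags (output directory, seeds) should be checked against the repository itself.

```python
# batch_generate.py -- minimal sketch: loop a folder of images through the
# generate.py CLI shown in step 4. Only the --input flag is taken from the
# quick start above; the "examples" folder and image extensions are assumptions.
import subprocess
import sys
from pathlib import Path

def batch_generate(image_dir: str) -> None:
    images = sorted(Path(image_dir).glob("*.jpg")) + sorted(Path(image_dir).glob("*.png"))
    if not images:
        sys.exit(f"No images found in {image_dir}")
    for image in images:
        print(f"Generating parts for {image.name} ...")
        # Same command as step 4 of the quick start.
        subprocess.run(
            [sys.executable, "generate.py", "--input", str(image)],
            check=True,
        )

if __name__ == "__main__":
    batch_generate(sys.argv[1] if len(sys.argv) > 1 else "examples")
```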
Watch our comprehensive demonstration showing how PartCrafter generates multiple editable 3D meshes from a single image
Official demonstration video showing the complete generation process and results
PartCrafter addresses critical challenges across multiple domains with AI-powered structured 3D generation, enabling professionals to achieve unprecedented productivity and creativity.
Transform concept art into editable 3D models with structured part separation for seamless animation workflows
Convert photos into printable STL files with automatic part separation, perfect for complex assemblies (see the export sketch below)
Generate structured object representations for advanced robotic manipulation and scene understanding
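For the 3D-printing use case above, the generated part meshes can be converted to STL with an off-the-shelf mesh library. The sketch below is illustrative: it assumes the parts have already been saved as individual mesh files (the `parts/*.glb` glob and output names are hypothetical, not PartCrafter's documented output layout) and uses `trimesh` for the conversion.

```python
# export_parts_to_stl.py -- illustrative sketch: convert generated part meshes
# to STL for printing. The input glob and filenames are assumptions; check the
# actual output layout of generate.py in the repository.
from pathlib import Path

import trimesh  # pip install trimesh

def export_parts(parts_dir: str = "parts", out_dir: str = "stl") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for mesh_path in sorted(Path(parts_dir).glob("*.glb")):
        mesh = trimesh.load(mesh_path, force="mesh")  # collapse a scene into a single mesh
        stl_path = out / (mesh_path.stem + ".stl")
        mesh.export(stl_path)  # output format is inferred from the .stl extension
        print(f"{mesh_path.name} -> {stl_path} "
              f"({len(mesh.vertices)} vertices, watertight={mesh.is_watertight})")

if __name__ == "__main__":
    export_parts()
```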
PartCrafter combines cutting-edge techniques to achieve state-of-the-art 3D generation quality and speed
Disentangles every part into its own latent token set, enabling independent editing and replacement while maintaining object coherence.
Local blocks preserve intra-part detail while global blocks enforce coherence across parts, preventing intersections and gaps (see the attention sketch below).
Generates 4-16 separate meshes directly from a single RGB image, unlike traditional two-stage 'segment-then-reconstruct' pipelines.
Builds on the pre-trained TripoSG backbone, yielding higher-fidelity generation with reduced computational requirements and faster training convergence.
Direct output of CAD-ready triangle meshes suitable for animation, physics simulation, and 3D printing without post-processing.
Trained on ~130k meshes from Objaverse, ShapeNet, and ABO with preserved part hierarchy metadata for structured learning.
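To make the per-part latent tokens and the local/global attention ideas concrete, here is a minimal PyTorch sketch of one block that adds a part-identity embedding, runs self-attention within each part's token set, and then runs self-attention over all tokens of all parts. Shapes, layer choices, and hyperparameters are illustrative assumptions, not the paper's actual architecture.

```python
# compositional_block.py -- illustrative sketch (not the paper's code) of a block
# that alternates part-local and object-global self-attention over per-part
# latent token sets, with a learned part-identity embedding.
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8, max_parts: int = 16):
        super().__init__()
        self.part_embed = nn.Embedding(max_parts, dim)   # distinguishes part token sets
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_parts, tokens_per_part, dim)
        b, p, t, d = tokens.shape
        tokens = tokens + self.part_embed(torch.arange(p, device=tokens.device))[None, :, None, :]

        # Local attention: each part attends only to its own tokens.
        local = tokens.reshape(b * p, t, d)
        q = self.norm1(local)
        local = local + self.local_attn(q, q, q)[0]

        # Global attention: all tokens of all parts attend to each other,
        # which is what keeps the generated parts mutually consistent.
        flat = local.reshape(b, p * t, d)
        q = self.norm2(flat)
        flat = flat + self.global_attn(q, q, q)[0]
        return flat.reshape(b, p, t, d)

if __name__ == "__main__":
    block = LocalGlobalBlock()
    x = torch.randn(2, 4, 64, 256)   # 2 images, 4 parts, 64 tokens per part
    print(block(x).shape)            # torch.Size([2, 4, 64, 256])
```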
Comprehensive technical details about PartCrafter's architecture, performance metrics, and system requirements
PartCrafter achieves state-of-the-art results in structured 3D generation while being significantly faster than competing methods
| Model | Chamfer Distance ↓ | F-Score ↑ | Generation Time | Requires Segmentation |
|---|---|---|---|---|
| PartCrafter | 0.1726 | 0.7472 | 34 seconds | No |
| HoloPart (2025) | 0.1916 | 0.6916 | 18 minutes | Yes |
| TripoSG (backbone) | 0.1821 | 0.7115 | 30 seconds | No |
Benchmarks performed on H20 GPU (40GB). Numbers from official paper and industry evaluations.
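For reference, the Chamfer Distance and F-Score columns can be computed from point clouds sampled on the predicted and ground-truth surfaces roughly as below. This is a sketch of the standard definitions, not the paper's exact evaluation protocol: the point count, the distance threshold, and whether Chamfer is reported as a sum or a mean of the two directions vary between benchmarks.

```python
# metrics.py -- sketch of Chamfer distance and F-score between two point clouds
# sampled from a predicted and a ground-truth mesh. The 0.1 threshold is an
# assumption; the paper's evaluation protocol defines the exact settings.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred: np.ndarray, gt: np.ndarray, threshold: float = 0.1):
    """pred, gt: (N, 3) arrays of points sampled on each surface."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # nearest-gt distance for each predicted point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # nearest-pred distance for each gt point

    # Chamfer: here the sum of the two mean nearest-neighbour distances.
    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()

    precision = (d_pred_to_gt < threshold).mean()   # predicted points close to the gt surface
    recall = (d_gt_to_pred < threshold).mean()      # gt points covered by the prediction
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer, fscore

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((2048, 3))
    b = a + rng.normal(scale=0.01, size=a.shape)
    print(chamfer_and_fscore(a, b))
```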
We present PartCrafter, a novel approach for generating structured 3D meshes from single RGB images. Unlike existing methods that treat 3D objects as monolithic entities, PartCrafter decomposes objects into semantically meaningful parts, enabling fine-grained editing and manipulation.
@article{partcrafter2025,
  title   = {PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers},
  author  = {Wang, Guangshun and Liu, Xiao and Chen, Wei},
  journal = {arXiv preprint arXiv:2506.05573},
  year    = {2025}
}