Fork of an implementation of the paper "Graspness Discovery in Clutters for Fast and Accurate Grasp Detection" (ICCV 2021), by Zibo Chen.

Requirements:
- Python 3
- PyTorch 1.8
- Open3d 0.8
- TensorBoard 2.3
- NumPy
- SciPy
- Pillow
- tqdm
- MinkowskiEngine
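If you need to reconstruct the requirements file yourself, a minimal one matching the list above might look like the following (the version pins are assumptions based on that list; MinkowskiEngine is usually built from source rather than pinned here):

```text
torch>=1.8
open3d>=0.8
tensorboard>=2.3
numpy
scipy
Pillow
tqdm
```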
Get the code.
```bash
git clone https://github.com/graspnet/graspness_unofficial.git
cd graspness_unofficial
```
Install packages via Pip.
```bash
pip install -r requirements.txt
```
Compile and install pointnet2 operators (code adapted from votenet).
```bash
cd pointnet2
python setup.py install
```
Compile and install the knn operator (code adapted from pytorch_knn_cuda).
```bash
cd knn
python setup.py install
```
Install graspnetAPI for evaluation.
```bash
git clone https://github.com/graspnet/graspnetAPI.git
cd graspnetAPI
pip install .
```
For MinkowskiEngine, please refer to https://github.com/NVIDIA/MinkowskiEngine.
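As a rough sketch, MinkowskiEngine can often be built from source along these lines (the exact steps and the BLAS option depend on your CUDA and BLAS setup; the MinkowskiEngine README is the authoritative guide):

```shell
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install --blas=openblas
```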
Point-level graspness labels are not included in the original dataset and need to be generated separately. Make sure you have downloaded the original dataset from GraspNet. The generation code is in dataset/generate_graspness.py.
```bash
cd dataset
python generate_graspness.py --dataset_root /data3/graspnet --camera_type kinect
```
The original dataset's grasp_label files contain redundant data; simplifying them significantly reduces memory cost. The code is in dataset/simplify_dataset.py.
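Once the labels are generated, downstream code can threshold the per-point graspness scores to select grasp candidates. Below is a minimal sketch using synthetic data; the idea of one scalar graspness score per point follows the paper, but the array shapes and file layout here are illustrative assumptions (see dataset/generate_graspness.py for the actual format):

```python
import numpy as np

# Synthetic stand-ins: a cloud of 3D points and one graspness score per point.
# In the real pipeline these would be loaded from the generated label files.
points = np.random.default_rng(0).uniform(-0.5, 0.5, size=(1024, 3))
graspness = np.random.default_rng(1).uniform(0.0, 1.0, size=(1024,))

# Keep only points whose graspness exceeds a threshold.
threshold = 0.8
mask = graspness > threshold
candidate_points = points[mask]

# Each surviving candidate is still a 3D point.
print(candidate_points.shape)
```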
```bash
cd dataset
python simplify_dataset.py --dataset_root /data3/graspnet
```
Training examples are shown in command_train.sh. --dataset_root, --camera and --log_dir should be specified according to your settings. You can use TensorBoard to visualize the training process.
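A concrete training invocation might look like the following; the script name train.py and the log path are assumptions for illustration, and command_train.sh is the authoritative reference:

```shell
python train.py --dataset_root /data3/graspnet --camera kinect --log_dir logs/kinect

# Monitor training curves with TensorBoard:
tensorboard --logdir logs/kinect
```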
Testing examples are shown in command_test.sh, which contains inference and result evaluation. --dataset_root, --camera, --checkpoint_path and --dump_dir should be specified according to your settings. Set --collision_thresh to -1 for fast inference.
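For example, a test run might be launched as follows; the script name test.py and the checkpoint filename are assumptions for illustration, and command_test.sh is the authoritative reference:

```shell
python test.py --dataset_root /data3/graspnet --camera kinect \
    --checkpoint_path logs/kinect/checkpoint.tar --dump_dir results/kinect \
    --collision_thresh -1  # -1 skips collision detection for fast inference
```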
We provide trained model weights. The model trained with RealSense data is available at Google drive (this model is recommended for real-world application). The model trained with Kinect data is available at Google drive.
The "In repo" rows report the performance of this implementation, evaluated without collision detection.
Evaluation results on Kinect camera:
| | Seen | | | Similar | | | Novel | | |
|---|---|---|---|---|---|---|---|---|---|
| | AP | AP0.8 | AP0.4 | AP | AP0.8 | AP0.4 | AP | AP0.8 | AP0.4 |
| In paper | 61.19 | 71.46 | 56.04 | 47.39 | 56.78 | 40.43 | 19.01 | 23.73 | 10.60 |
| In repo | 61.83 | 73.28 | 54.14 | 51.13 | 62.53 | 41.57 | 19.94 | 24.90 | 11.02 |
If you hit the torch.floor error in MinkowskiEngine, you can work around it by editing the MinkowskiEngine source: in MinkowskiEngine/utils/quantization.py, line 262, change `discrete_coordinates = _auto_floor(coordinates)` to `discrete_coordinates = coordinates`.
My code is mainly based on graspnet-baseline: https://github.com/graspnet/graspnet-baseline.