-- [2.5. Save and reload a map](#25-save-and-reload-a-map)
-- [2.6. Relocalization in a loaded map](#26-relocalization-in-a-loaded-map)
-- [2.7. Trajectory saving](#27-trajectory-saving)
-- [2.8. SLAM GUI](#28-slam-gui)
-- [2.9. Monitor the logs for tracking, local mapping, and loop closing simultaneously](#29-monitor-the-logs-for-tracking-local-mapping-and-loop-closing-simultaneously)
-- [3. Supported components and models](#3-supported-components-and-models)
-- [3.1. Supported local features](#31-supported-local-features)
-- [3.3. Supported global descriptors and local descriptor aggregation methods](#33-supported-global-descriptors-and-local-descriptor-aggregation-methods)
-  - [3.3.1. Local descriptor aggregation methods](#331-local-descriptor-aggregation-methods)
-  - [3.3.2. Global descriptors](#332-global-descriptors)
+- [Relocalization in a loaded map](#relocalization-in-a-loaded-map)
+- [Trajectory saving](#trajectory-saving)
+- [SLAM GUI](#slam-gui)
+- [Monitor the logs for tracking, local mapping, and loop closing simultaneously](#monitor-the-logs-for-tracking-local-mapping-and-loop-closing-simultaneously)
+- [Supported components and models](#supported-components-and-models)
+- [Supported local features](#supported-local-features)
+- [Supported matchers](#supported-matchers)
+- [Supported global descriptors and local descriptor aggregation methods](#supported-global-descriptors-and-local-descriptor-aggregation-methods)
+- [Comparison pySLAM vs ORB-SLAM3](#comparison-pyslam-vs-orb-slam3)
+- [Contributing to pySLAM](#contributing-to-pyslam)
+- [References](#references)
+- [Credits](#credits)
+- [TODOs](#todos)
<!-- /TOC -->
@@ -104,19 +104,19 @@ Then, use the available specific install procedure according to your OS. The pro
* Kornia 0.7.3
* Rerun

-If you encounter any issues or performance problems, refer to the [TROUBLESHOOTING](./TROUBLESHOOTING.md) file for assistance.
+If you encounter any issues or performance problems, refer to the [TROUBLESHOOTING](./docs/TROUBLESHOOTING.md) file for assistance.

### Ubuntu
-Follow the instructions reported [here](./PYTHON-VIRTUAL-ENVS.md) for creating a new virtual environment `pyslam` with **venv**. The procedure has been tested on *Ubuntu 18.04*, *20.04*, *22.04* and *24.04*.
+Follow the instructions reported [here](./docs/PYTHON-VIRTUAL-ENVS.md) for creating a new virtual environment `pyslam` with **venv**. The procedure has been tested on *Ubuntu 18.04*, *20.04*, *22.04* and *24.04*.

-If you prefer **conda**, run the scripts described in this other [file](./CONDA.md).
+If you prefer **conda**, run the scripts described in this other [file](./docs/CONDA.md).

### MacOS
-Follow the instructions in this [file](./MAC.md). The reported procedure was tested under *Sequoia 15.1.1* and *Xcode 16.1*.
+Follow the instructions in this [file](./docs/MAC.md). The reported procedure was tested under *Sequoia 15.1.1* and *Xcode 16.1*.

### Docker
@@ -130,7 +130,7 @@ If you prefer docker or you have an OS that is not supported yet, you can use [r
The provided install scripts will install a recent opencv version (>=**4.10**) with non-free modules enabled (see the provided scripts [install_pip3_packages.sh](./install_pip3_packages.sh) and [install_opencv_python.sh](./install_opencv_python.sh)). To quickly verify your installed opencv version run:
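The verification snippet itself is not included in the extracted hunks; a one-line sketch of such a check is:

```bash
$ python3 -c "import cv2; print(cv2.__version__)"
```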
How to check if you have non-free OpenCV module support (no errors imply success):
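Again as a sketch (the repo's own snippet is not part of this diff): instantiating a non-free feature such as SURF from the `xfeatures2d` contrib module raises an error unless non-free support is compiled in:

```bash
$ python3 -c "import cv2; cv2.xfeatures2d.SURF_create(400); print('non-free modules available')"
```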
@@ -139,7 +139,7 @@ How to check if you have non-free OpenCV module support (no errors imply success
### Troubleshooting and performance issues
-If you run into issues or errors during the installation process or at run-time, please, check the [TROUBLESHOOTING.md](./TROUBLESHOOTING.md) file.
+If you run into issues or errors during the installation process or at run-time, please, check the [docs/TROUBLESHOOTING.md](./docs/TROUBLESHOOTING.md) file.

---
## Usage
@@ -149,22 +149,22 @@ Once you have run the script `install_all_venv.sh` (follow the instructions abov
```bash
$ . pyenv-activate.sh   # Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_vo.py
```
-This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its groundtruth (available in the same `videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by focusing/clicking on one of them and pressing the key 'Q'.
+This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its groundtruth (available in the same `data/videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by focusing/clicking on one of them and pressing the key 'Q'.

**Note**: As explained above, the basic script `main_vo.py` **strictly requires a ground truth**.

In order to process a different **dataset**, you need to set the file `config.yaml`:

* Select your dataset `type` in the section `DATASET` (see the section *[Datasets](#datasets)* below for further details). This identifies a corresponding dataset section (e.g. `KITTI_DATASET`, `TUM_DATASET`, etc).
* Select the `sensor_type` (`mono`, `stereo`, `rgbd`) in the chosen dataset section.
* Select the camera `settings` file in the dataset section (further details in the section *[Camera Settings](#camera-settings)* below).
-* Set the `groudtruth_file` accordingly (further details in the section *[Datasets](#datasets)* below; also check the files `ground_truth.py` and `convert_groundtruth.py`).
+* Set the `groudtruth_file` accordingly (further details in the section *[Datasets](#datasets)* below; also check the files `io/ground_truth.py` and `io/convert_groundtruth.py`).
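For orientation only, a `DATASET` section of `config.yaml` might look like the following sketch. The key names mirror the bullets above; the specific values, the `base_path` key, and the settings file name are assumptions to be checked against the `config.yaml` shipped with the repository:

```yaml
DATASET:
  type: KITTI_DATASET            # selects the dataset section used below

KITTI_DATASET:
  sensor_type: mono              # one of: mono, stereo, rgbd
  base_path: /path/to/kitti      # assumed local dataset root (placeholder)
  settings: settings/KITTI04-12.yaml   # camera settings file (see Camera Settings)
  groudtruth_file: auto          # spelled as the config key referenced above
```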
Similarly, you can test `main_slam.py` by running:
```bash
$ . pyenv-activate.sh   # Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_slam.py
```
-This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `videos`) by using its corresponding camera calibration file (available in the folder `settings`). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'.
+This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'.

**Note**: Due to information loss in video compression, `main_slam.py` tracking may perform worse with the available KITTI videos than with the original KITTI image sequences. The available videos are intended to be used for a first quick test. Please, download and use the original KITTI image sequences as explained [below](#datasets).
1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (check this direct [link](http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/))
-2. Use the script `groundtruth/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class.
+2. Use the script `io/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class.
3. Select the corresponding calibration settings file (parameter `EUROC_DATASET: cam_settings:` in the file `config.yaml`).
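As a sketch only: whether the script needs arguments (e.g. the dataset root) should be checked in the script itself, and the root `~/datasets/euroc` with sequence `MH_01_easy` below are hypothetical placeholders. The second line just sanity-checks that a groundtruth file was produced at the path pattern quoted in step 2:

```bash
$ ./io/generate_euroc_groundtruths_as_tum.sh
$ ls ~/datasets/euroc/MH_01_easy/mav0/state_groundtruth_estimate0/data.tum
```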
Changes to `config_parameters.py` (+2 −2):
@@ -171,7 +171,7 @@ class Parameters:
    kGBAUseRobustKernel = True

    # Volume Integration
-    kUseVolumetricIntegration = False   # To enable/disable volumetric integration (dense mapping)
+    kUseVolumetricIntegration = True    # To enable/disable volumetric integration (dense mapping)
    kVolumetricIntegrationDebugAndPrintToFile = True
    kVolumetricIntegrationExtractMesh = False   # Extract mesh or point cloud as output
    kVolumetricIntegrationVoxelLength = 0.015   # [m]
@@ -180,7 +180,7 @@ class Parameters:
    kVolumetricIntegrationDepthTruncOutdoor = 10.0   # [m]
    kVolumetricIntegrationMinNumLBATimes = 1   # We integrate only the keyframes that have been processed by LBA at least kVolumetricIntegrationMinNumLBATimes times.
    kVolumetricIntegrationOutputTimeInterval = 1.0   # [s]
-    kVolumetricIntegrationUseDepthEstimator = False   # Use depth estimator for volumetric integration in the back-end.
+    kVolumetricIntegrationUseDepthEstimator = True    # Use depth estimator for volumetric integration in the back-end.
    # Since the depth inference time is above 1 second, this is very slow.
    # NOTE: the depth estimator estimates a metric depth (with an absolute scale). You can't combine it with a MONOCULAR SLAM since the SLAM map scale will not be consistent.
    kVolumetricIntegrationDepthEstimatorType = "DEPTH_RAFT_STEREO"   # "DEPTH_PRO", "DEPTH_ANYTHING_V2", "DEPTH_SGBM", "DEPTH_RAFT_STEREO", "DEPTH_CRESTEREO_PYTORCH" (see depth_estimator_factory.py)
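Since these are plain class attributes on `Parameters`, an alternative to editing `config_parameters.py` is overriding them at import time. The following is a minimal sketch, not code from the repo: it assumes it runs from the repository root (so `config_parameters` is importable) and that it executes before the SLAM pipeline reads the parameters.

```python
from config_parameters import Parameters

# Enable volumetric integration (dense mapping), mirroring the change above.
Parameters.kUseVolumetricIntegration = True

# Keep the back-end depth estimator off for monocular runs: its metric depth
# scale would not be consistent with a monocular SLAM map scale (see the NOTE above).
Parameters.kVolumetricIntegrationUseDepthEstimator = False

print(Parameters.kVolumetricIntegrationDepthEstimatorType)  # e.g. "DEPTH_RAFT_STEREO"
```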