
Commit e4737c5

pySLAM v2.3.0. Big file reorganization and refactoring.

Parent: a5d2b4f

File tree: 157 files changed, +234 −642 lines


.gitignore

Lines changed: 1 addition & 1 deletion

```diff
@@ -22,7 +22,7 @@ matches.txt
 map.png
 .vscode
 .project
-videos/webcam
+data/videos/webcam
 
 kf_info.log
 local_mapping.log
```

README.md

Lines changed: 49 additions & 49 deletions

````diff
@@ -1,47 +1,47 @@
-# pySLAM v2.2.6
+# pySLAM v2.3.0
 
 Author: **[Luigi Freda](https://www.luigifreda.com)**
 
 <!-- TOC -->
 
-- [pySLAM v2.2.6](#pyslam-v226)
-- [1. Install](#1-install)
-- [1.1. Main requirements](#11-main-requirements)
-- [1.2. Ubuntu](#12-ubuntu)
-- [1.3. MacOS](#13-macos)
-- [1.4. Docker](#14-docker)
-- [1.5. How to install non-free OpenCV modules](#15-how-to-install-non-free-opencv-modules)
-- [1.6. Troubleshooting and performance issues](#16-troubleshooting-and-performance-issues)
-- [2. Usage](#2-usage)
-- [2.1. Feature tracking](#21-feature-tracking)
-- [2.2. Loop closing](#22-loop-closing)
-- [2.2.1. Vocabulary management](#221-vocabulary-management)
-- [2.2.2. Vocabulary-free loop closing](#222-vocabulary-free-loop-closing)
-- [2.3. Volumetric reconstruction pipeline](#23-volumetric-reconstruction-pipeline)
-- [2.4. Depth prediction](#24-depth-prediction)
-- [2.5. Save and reload a map](#25-save-and-reload-a-map)
-- [2.6. Relocalization in a loaded map](#26-relocalization-in-a-loaded-map)
-- [2.7. Trajectory saving](#27-trajectory-saving)
-- [2.8. SLAM GUI](#28-slam-gui)
-- [2.9. Monitor the logs for tracking, local mapping, and loop closing simultaneously](#29-monitor-the-logs-for-tracking-local-mapping-and-loop-closing-simultaneously)
-- [3. Supported components and models](#3-supported-components-and-models)
-- [3.1. Supported local features](#31-supported-local-features)
-- [3.2. Supported matchers](#32-supported-matchers)
-- [3.3. Supported global descriptors and local descriptor aggregation methods](#33-supported-global-descriptors-and-local-descriptor-aggregation-methods)
-- [3.3.1. Local descriptor aggregation methods](#331-local-descriptor-aggregation-methods)
-- [3.3.2. Global descriptors](#332-global-descriptors)
-- [3.4. Supported depth prediction models](#34-supported-depth-prediction-models)
-- [4. Datasets](#4-datasets)
-- [4.1. KITTI Datasets](#41-kitti-datasets)
-- [4.2. TUM Datasets](#42-tum-datasets)
-- [4.3. EuRoC Datasets](#43-euroc-datasets)
-- [4.4. Replica Datasets](#44-replica-datasets)
-- [5. Camera Settings](#5-camera-settings)
-- [6. Comparison pySLAM vs ORB-SLAM3](#6-comparison-pyslam-vs-orb-slam3)
-- [7. Contributing to pySLAM](#7-contributing-to-pyslam)
-- [8. References](#8-references)
-- [9. Credits](#9-credits)
-- [10. TODOs](#10-todos)
+- [pySLAM v2.3.0](#pyslam-v230)
+- [Install](#install)
+- [Main requirements](#main-requirements)
+- [Ubuntu](#ubuntu)
+- [MacOS](#macos)
+- [Docker](#docker)
+- [How to install non-free OpenCV modules](#how-to-install-non-free-opencv-modules)
+- [Troubleshooting and performance issues](#troubleshooting-and-performance-issues)
+- [Usage](#usage)
+- [Feature tracking](#feature-tracking)
+- [Loop closing](#loop-closing)
+- [Vocabulary management](#vocabulary-management)
+- [Vocabulary-free loop closing](#vocabulary-free-loop-closing)
+- [Volumetric reconstruction pipeline](#volumetric-reconstruction-pipeline)
+- [Depth prediction](#depth-prediction)
+- [Save and reload a map](#save-and-reload-a-map)
+- [Relocalization in a loaded map](#relocalization-in-a-loaded-map)
+- [Trajectory saving](#trajectory-saving)
+- [SLAM GUI](#slam-gui)
+- [Monitor the logs for tracking, local mapping, and loop closing simultaneously](#monitor-the-logs-for-tracking-local-mapping-and-loop-closing-simultaneously)
+- [Supported components and models](#supported-components-and-models)
+- [Supported local features](#supported-local-features)
+- [Supported matchers](#supported-matchers)
+- [Supported global descriptors and local descriptor aggregation methods](#supported-global-descriptors-and-local-descriptor-aggregation-methods)
+- [Local descriptor aggregation methods](#local-descriptor-aggregation-methods)
+- [Global descriptors](#global-descriptors)
+- [Supported depth prediction models](#supported-depth-prediction-models)
+- [Datasets](#datasets)
+- [KITTI Datasets](#kitti-datasets)
+- [TUM Datasets](#tum-datasets)
+- [EuRoC Datasets](#euroc-datasets)
+- [Replica Datasets](#replica-datasets)
+- [Camera Settings](#camera-settings)
+- [Comparison pySLAM vs ORB-SLAM3](#comparison-pyslam-vs-orb-slam3)
+- [Contributing to pySLAM](#contributing-to-pyslam)
+- [References](#references)
+- [Credits](#credits)
+- [TODOs](#todos)
 
 <!-- /TOC -->
 
@@ -104,19 +104,19 @@ Then, use the available specific install procedure according to your OS. The pro
 * Kornia 0.7.3
 * Rerun
 
-If you encounter any issues or performance problems, refer to the [TROUBLESHOOTING](./TROUBLESHOOTING.md) file for assistance.
+If you encounter any issues or performance problems, refer to the [TROUBLESHOOTING](./docs/TROUBLESHOOTING.md) file for assistance.
 
 
 ### Ubuntu
 
-Follow the instructions reported [here](./PYTHON-VIRTUAL-ENVS.md) for creating a new virtual environment `pyslam` with **venv**. The procedure has been tested on *Ubuntu 18.04*, *20.04*, *22.04* and *24.04*.
+Follow the instructions reported [here](./docs/PYTHON-VIRTUAL-ENVS.md) for creating a new virtual environment `pyslam` with **venv**. The procedure has been tested on *Ubuntu 18.04*, *20.04*, *22.04* and *24.04*.
 
-If you prefer **conda**, run the scripts described in this other [file](./CONDA.md).
+If you prefer **conda**, run the scripts described in this other [file](./docs/CONDA.md).
 
 
 ### MacOS
 
-Follow the instructions in this [file](./MAC.md). The reported procedure was tested under *Sequoia 15.1.1* and *Xcode 16.1*.
+Follow the instructions in this [file](./docs/MAC.md). The reported procedure was tested under *Sequoia 15.1.1* and *Xcode 16.1*.
 
 
 ### Docker
@@ -130,7 +130,7 @@ If you prefer docker or you have an OS that is not supported yet, you can use [r
 
 The provided install scripts will install a recent opencv version (>=**4.10**) with non-free modules enabled (see the provided scripts [install_pip3_packages.sh](./install_pip3_packages.sh) and [install_opencv_python.sh](./install_opencv_python.sh)). To quickly verify your installed opencv version run:
 `$ . pyenv-activate.sh `
-`$ ./opencv_check.py`
+`$ ./scripts/opencv_check.py`
 or use the following command:
 `$ python3 -c "import cv2; print(cv2.__version__)"`
 How to check if you have non-free OpenCV module support (no errors imply success):
@@ -139,7 +139,7 @@ How to check if you have non-free OpenCV module support (no errors imply success
 
 ### Troubleshooting and performance issues
 
-If you run into issues or errors during the installation process or at run-time, please, check the [TROUBLESHOOTING.md](./TROUBLESHOOTING.md) file.
+If you run into issues or errors during the installation process or at run-time, please, check the [docs/TROUBLESHOOTING.md](./docs/TROUBLESHOOTING.md) file.
 
 ---
 ## Usage
@@ -149,22 +149,22 @@ Once you have run the script `install_all_venv.sh` (follow the instructions abov
 $ . pyenv-activate.sh   # Activate pyslam python virtual environment. This is only needed once in a new terminal.
 $ ./main_vo.py
 ```
-This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its groundtruth (available in the same `videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by focusing/clicking on one of them and pressing the key 'Q'.
+This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its groundtruth (available in the same `data/videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by focusing/clicking on one of them and pressing the key 'Q'.
 **Note**: As explained above, the basic script `main_vo.py` **strictly requires a ground truth**.
 
 In order to process a different **dataset**, you need to set the file `config.yaml`:
 * Select your dataset `type` in the section `DATASET` (further details in the section *[Datasets](#datasets)* below for further details). This identifies a corresponding dataset section (e.g. `KITTI_DATASET`, `TUM_DATASET`, etc).
 * Select the `sensor_type` (`mono`, `stereo`, `rgbd`) in the chosen dataset section.
 * Select the camera `settings` file in the dataset section (further details in the section *[Camera Settings](#camera-settings)* below).
-* The `groudtruth_file` accordingly (further details in the section *[Datasets](#datasets)* below and check the files `ground_truth.py` and `convert_groundtruth.py`).
+* The `groudtruth_file` accordingly (further details in the section *[Datasets](#datasets)* below and check the files `io/ground_truth.py` and `io/convert_groundtruth.py`).
 
 Similarly, you can test `main_slam.py` by running:
 ```bash
 $ . pyenv-activate.sh   # Activate pyslam python virtual environment. This is only needed once in a new terminal.
 $ ./main_slam.py
 ```
 
-This will process a default [KITTI]((http://www.cvlibs.net/datasets/kitti/eval_odometry.php)) video (available in the folder `videos`) by using its corresponding camera calibration file (available in the folder `settings`). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'.
+This will process a default [KITTI]((http://www.cvlibs.net/datasets/kitti/eval_odometry.php)) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'.
 **Note**: Due to information loss in video compression, `main_slam.py` tracking may peform worse with the available KITTI videos than with the original KITTI image sequences. The available videos are intended to be used for a first quick test. Please, download and use the original KITTI image sequences as explained [below](#datasets).
 
 ### Feature tracking
@@ -441,7 +441,7 @@ $ python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > asso
 ### EuRoC Datasets
 
 1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (check this direct [link](http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/))
-2. Use the script `groundtruth/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class.
+2. Use the script `io/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class.
 3. Select the corresponding calibration settings file (parameter `EUROC_DATASET: cam_settings:` in the file `config.yaml`).
````

config.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@
 import os
 import yaml
 import numpy as np
-from utils_sys import Printer, locally_configure_qt_environment
+from utilities.utils_sys import Printer, locally_configure_qt_environment
 import math
```
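Moving `utils_sys` under `utilities/` breaks any downstream script written against the old flat layout. One way to stay compatible with both layouts is a small importlib helper; this is an illustrative sketch, not part of the commit:

```python
import importlib

def import_first(*candidates):
    """Return the first importable module among dotted names.
    Useful across a rename such as utils_sys -> utilities.utils_sys."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {candidates!r} could be imported")

# Hypothetical usage for the rename in this commit:
# utils_sys = import_first("utilities.utils_sys", "utils_sys")
```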

config.yaml

Lines changed: 13 additions & 5 deletions

```diff
@@ -7,6 +7,14 @@ CORE_LIB_PATHS:
   orb_features: thirdparty/orbslam2_features/lib
   pyslam_utils: cpp/utils/lib
   thirdparty: thirdparty # considering the folders in thirdparty as modules
+  utilities: utilities
+  depth_estimation: depth_estimation
+  local_features: local_features
+  loop_closing: loop_closing
+  slam: slam
+  viz: viz
+  io: io
+  dense: dense
 
 LIB_PATHS:
   # The following libs are explicitely imported on demand by using, for instance:
@@ -47,10 +55,10 @@ LIB_PATHS:
 DATASET:
   # select your dataset (decomment only one of the following lines)
   #type: EUROC_DATASET
-  #type: KITTI_DATASET
+  type: KITTI_DATASET
   #type: TUM_DATASET
   #type: REPLICA_DATASET
-  type: VIDEO_DATASET
+  #type: VIDEO_DATASET
   #type: FOLDER_DATASET
   #type: LIVE_DATASET # Not recommended for current development stage
 
@@ -121,15 +129,15 @@ VIDEO_DATASET:
   type: video
   sensor_type: mono # Here, 'sensor_type' can be only 'mono'
   #
-  #base_path: ./videos/kitti00
+  #base_path: ./data/videos/kitti00
   #settings: settings/KITTI00-02.yaml
   #name: video.mp4
   #
-  base_path: ./videos/kitti06
+  base_path: ./data/videos/kitti06
   settings: settings/KITTI04-12.yaml
   name: video_color.mp4
   #
-  #base_path: ./videos/webcam
+  #base_path: ./data/videos/webcam
   #settings: settings/WEBCAM.yaml
   #name: video.mp4
   #
```
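The new `CORE_LIB_PATHS` entries register the reorganized top-level folders (`utilities`, `slam`, `io`, ...) as importable modules. A minimal sketch of how such a mapping can be pushed onto `sys.path` (pySLAM's actual loader in `config.py` may differ):

```python
import os
import sys

# Subset of the CORE_LIB_PATHS folders added by this commit.
CORE_LIB_PATHS = {
    "utilities": "utilities",
    "slam": "slam",
    "io": "io",
    "dense": "dense",
}

def register_core_paths(root, paths):
    """Prepend each configured folder (relative to the repo root) to sys.path."""
    added = []
    for rel in paths.values():
        full = os.path.join(root, rel)
        if full not in sys.path:
            sys.path.insert(0, full)
            added.append(full)
    return added
```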

parameters.py renamed to config_parameters.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -171,7 +171,7 @@ class Parameters:
     kGBAUseRobustKernel = True
 
     # Volume Integration
-    kUseVolumetricIntegration = False # To enable/disable volumetric integration (dense mapping)
+    kUseVolumetricIntegration = True # To enable/disable volumetric integration (dense mapping)
     kVolumetricIntegrationDebugAndPrintToFile = True
     kVolumetricIntegrationExtractMesh = False # Extract mesh or point cloud as output
     kVolumetricIntegrationVoxelLength = 0.015 # [m]
@@ -180,7 +180,7 @@ class Parameters:
     kVolumetricIntegrationDepthTruncOutdoor = 10.0 # [m]
     kVolumetricIntegrationMinNumLBATimes = 1 # We integrate only the keyframes that have been processed by LBA at least kVolumetricIntegrationMinNumLBATimes times.
     kVolumetricIntegrationOutputTimeInterval = 1.0 # [s]
-    kVolumetricIntegrationUseDepthEstimator = False # Use depth estimator for volumetric integration in the back-end.
+    kVolumetricIntegrationUseDepthEstimator = True # Use depth estimator for volumetric integration in the back-end.
     # Since the depth inference time is above 1 second, this is very slow.
     # NOTE: the depth estimator estimates a metric depth (with an absolute scale). You can't combine it with a MONOCULAR SLAM since the SLAM map scale will be not consistent.
     kVolumetricIntegrationDepthEstimatorType = "DEPTH_RAFT_STEREO" # "DEPTH_PRO","DEPTH_ANYTHING_V2, "DEPTH_SGBM", "DEPTH_RAFT_STEREO", "DEPTH_CRESTEREO_PYTORCH" (see depth_estimator_factory.py)
```
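This commit flips `kUseVolumetricIntegration` and `kVolumetricIntegrationUseDepthEstimator` to `True`, and the inline NOTE warns that a metric depth estimator cannot be combined with monocular SLAM (the map scale would be inconsistent). A hedged sketch of that constraint as an explicit check (the names are illustrative, not pySLAM API):

```python
from dataclasses import dataclass

@dataclass
class VolumetricConfig:
    # Mirrors the two toggles changed in this commit (illustrative names).
    use_volumetric_integration: bool = True
    use_depth_estimator: bool = True
    depth_estimator_type: str = "DEPTH_RAFT_STEREO"

def validate(cfg: VolumetricConfig, sensor_type: str) -> list:
    """Collect configuration conflicts; the NOTE in the diff rules out
    combining a metric depth estimator with a monocular map scale."""
    issues = []
    if cfg.use_volumetric_integration and cfg.use_depth_estimator and sensor_type == "mono":
        issues.append("metric depth estimator is inconsistent with a monocular map scale")
    return issues
```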
5 files renamed without changes.
