
Multisensor-PIT

Continuous Person Identification and Tracking in Healthcare by Integrating Accelerometer Data and 3D Skeletons
Explore the docs »

Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project

We present here the Multisensor Person Identification and Tracking (PIT) algorithm proposed in "Continuous Person Identification and Tracking in Healthcare by Integrating Accelerometer Data and Deep Learning Filled 3D Skeletons", which associates and tracks 3D skeleton data and on-wrist accelerometer measurements.

Citation

Our paper is available in the IEEE Sensors Journal and can be cited as:

  @ARTICLE{9813452,
  author={Bastico, Matteo and Belmonte-Hernández, Alberto and García, Federico Álvarez},
  journal={IEEE Sensors Journal}, 
  title={Continuous Person Identification and Tracking in Healthcare by Integrating Accelerometer Data and Deep Learning Filled 3D Skeletons}, 
  year={2022},
  volume={22},
  number={15},
  pages={15402-15409},
  doi={10.1109/JSEN.2022.3186499}}

(back to top)

Built With

Our released implementation is tested on:

  • Ubuntu 20.04 / macOS 11.5
  • Python 3.9.7

(back to top)

Getting Started

Prerequisites

  • Create and launch conda environment
    conda create -n mpit python=3.9
    conda activate mpit

Installation

  • Clone project
    git clone https://github.com/matteo-bastico/Multisensor-PIT.git
    cd Multisensor-PIT
  • Install package for multisensor PIT
    pip install --upgrade --use-feature=in-tree-build .

(back to top)

Usage

Dataset

The complete dataset for PIT can be downloaded from https://drive.upm.es/s/3zgeHKhlbWYcow1. After downloading and extracting the file into the Data folder, you will get a data structure as follows:

  Data
  ├── skeleton_prediction	# Dataset for skeleton prediction training and testing
  │   ├── test   		# Test Dataset
  │   │   ├── examples.npy	# 259 sequences of skeletons with missing points
  │   │   └── labels.npy	# Ground-Truth
  │   └── train			# Train Dataset
  │       └── examples.npy	# 1035 sequences of complete skeletons            
  └── reidentification		# Dataset for PIT
      ├── AE_A			# All subfolders have the same structure
      │   ├── acceleration.txt	# File with bracelets data
      │   ├── skeleton.txt	# File with skeleton data
      │   └── video 		# Video as ground-truth
      ├── CR	
      ├── LE_A	
      ├── S_AE2	
      ├── case2_1		# All "case.." subfolders have the same structure
      │   ├── accel.json	# File with bracelets data
      │   ├── skeleton.json	# File with skeleton data
      │   └── video 		# Video as ground-truth
      ├── case3_2	
      ├── case5_1	
      ├── case7
      ├── AE_AE	
      ├── CR_E
      ├── SIT	
      ├── case1_1	
      ├── case2_2	
      ├── case4_1	
      ├── case5_2	
      ├── case8
      ├── AE_A_B	
      ├── DE_B	
      ├── S_AE	
      ├── case1_2	
      ├── case3_1	
      ├── case4_2	
      └── case6

The skeleton_prediction folder contains the training and testing data for our SkeletonRNN. PIT data are provided in .json or .txt files, divided into case studies. Both file types contain the same data structure; to open them, use:

  import json

  with open("skeleton.json", "r") as fs:
      skeleton_list = json.load(fs)
  with open("accel.json", "r") as fs:
      accel_list = json.load(fs)

or

  import pickle

  with open("skeleton.txt", "rb") as fs:
      skeleton_list = pickle.load(fs)
  with open("acceleration.txt", "rb") as fs:
      accel_list = pickle.load(fs)

The accelerations are stored in a list of Dicts, each structured as {'_id', 'x', 'y', 'z', 'timestamp', 'id'}, where x, y, and z are the accelerations in m/s^2 along the corresponding axes and id is the smart band identifier.
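For example, samples can be grouped by smart band and their magnitudes computed as follows (the data below is synthetic and purely illustrative; field names follow the structure described above):

```python
import math

# Synthetic acceleration samples in the dataset's format (illustrative values only)
accel_list = [
    {'_id': 0, 'x': 0.1, 'y': 9.8, 'z': 0.0, 'timestamp': 1000, 'id': 'band_A'},
    {'_id': 1, 'x': 0.2, 'y': 9.7, 'z': 0.1, 'timestamp': 1040, 'id': 'band_A'},
    {'_id': 2, 'x': 1.5, 'y': 9.0, 'z': 0.3, 'timestamp': 1000, 'id': 'band_B'},
]

# Group the samples by smart band identifier
by_band = {}
for sample in accel_list:
    by_band.setdefault(sample['id'], []).append(sample)

# Acceleration magnitude in m/s^2 for each sample of band_A
magnitudes = [math.sqrt(s['x']**2 + s['y']**2 + s['z']**2) for s in by_band['band_A']]
print(sorted(by_band), magnitudes)
```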

The skeletons are stored in a list of Dicts, each structured as {'_id', 'skeletons': {'id': {'confidences', 'joints', 'joints3D'}, ...}, 'timestamp'}. The 'skeletons' entry is itself a Dict in which each key is the identifier of a skeleton and the corresponding value holds its joints, joints3D, and their confidences. The timestamp is common to all the skeletons of one frame.
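A frame in this format can be traversed like this (the frame below is synthetic, with only two joints per skeleton, for illustration):

```python
# Synthetic skeleton frame in the dataset's format (illustrative values only)
skeleton_list = [
    {'_id': 0,
     'skeletons': {
         '1': {'confidences': [0.9, 0.8], 'joints': [[10, 20], [11, 21]],
               'joints3D': [[0.1, 0.2, 1.0], [0.1, 0.3, 1.0]]},
         '2': {'confidences': [0.7, 0.6], 'joints': [[50, 60], [51, 61]],
               'joints3D': [[0.5, 0.2, 2.0], [0.5, 0.3, 2.0]]},
     },
     'timestamp': 1000},
]

# For each frame, list the skeleton identifiers and their 3D joints
for frame in skeleton_list:
    ts = frame['timestamp']  # common to every skeleton of the frame
    for skel_id, skel in frame['skeletons'].items():
        print(ts, skel_id, len(skel['joints3D']))
```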

Testing

To test the Multisensor-PIT algorithm run

  python tests/test_mpit.py -s Data/reidentification/case1_1/skeleton.json -a Data/reidentification/case1_1/accel.json

The desired output (with default parameters) is the following image:

Parameters:

  • -s : Skeletons data path in the format of our dataset
  • -a : Acceleration data path in the format of our dataset
  • -w : Chunk size in seconds to split the entire data sequence
  • -c : Camera used to record: "Intel" or "Kinect" (Default for our data is Intel)
  • -asw : Acceleration smoothing window for noise removal (Default: 35)
  • -asp : Acceleration smoothing poly for noise removal (Default: 1)
  • -smd : Minimum duration in seconds for a skeleton to be considered valid (Default: 5, NOTE: we suggest using at least half of the chunk size)
  • -ssf : Skeleton smoothing filter for noise removal, "savgol" or "weiner" (Default: "savgol")
  • -ssw : Skeleton smoothing window for noise removal (Default: 7)
  • -ssp : Skeleton smoothing poly for noise removal (Default: 1)
  • -dsf : Direction smoothing filter for noise removal, "savgol" or "weiner" (Default: "savgol")
  • -dsw : Direction smoothing window for noise removal (Default: 5)
  • -dsp : Direction smoothing poly for noise removal (Default: 1)
  • -csw : Smoothing window for conversion of skeletons positions to accelerations (Default: 3)
  • -csp : Smoothing poly for conversion of skeletons positions to accelerations (Default: 1)
  • -ca : Camera rotation angle on the y-axis (Default: 0)
  • -sw : Weight balancing the pure and derivative similarity measures (see paper; Default: 0.7)
  • -v : Verbose for console logs if >=1 (Default: 0)
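The window/poly pairs above follow the convention of Savitzky-Golay smoothing as in SciPy's savgol_filter. As an illustration of the idea behind the position-to-acceleration conversion (not the repository's implementation; the signal, sampling period, and window values below are assumed for the sketch), a noisy 1D position track can be smoothed and differentiated twice:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
dt = 0.05                                   # assumed sampling period in seconds
t = np.arange(0, 5, dt)
# Noisy position track with a constant true acceleration of 1 m/s^2
position = 0.5 * t**2 + rng.normal(0, 0.01, t.size)

# Smooth with a Savitzky-Golay filter (window and poly as in the -ssw/-ssp flags)
smoothed = savgol_filter(position, window_length=7, polyorder=1)

# Differentiating twice recovers an acceleration estimate
accel = np.gradient(np.gradient(smoothed, dt), dt)
print(np.mean(accel[10:-10]))               # close to the true 1 m/s^2
```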

Implementation

To include our algorithm on your code

  from mpit.algorithms import identify_and_track

Function header:

  identify_and_track(skeletons_frames, accelerations_dict, camera="Intel",
                     acceleration_smooth_window=35, acceleration_smooth_poly=1,
                     skeleton_min_duration=5, skeleton_smooth_filter="savgol",
                     skeleton_smooth_window=7, skeleton_smooth_poly=1,
                     direction_smooth_filter="savgol",
                     direction_smooth_window=5, direction_smooth_poly=1,
                     conversion_smooth_window=3, conversion_smooth_poly=1,
                     camera_angle=0, similarity_weight=0.7, verbose=0)

Parameters:

  • skeletons_frames: Skeletons data in the format of our dataset
  • accelerations_dict: Acceleration data in the format of our dataset
  • camera: Camera used to record: "Intel" or "Kinect" (Default for our data is Intel)
  • acceleration_smooth_window: Acceleration smoothing window for noise removal (Default: 35)
  • acceleration_smooth_poly: Acceleration smoothing poly for noise removal (Default: 1)
  • skeleton_min_duration: Minimum duration in seconds for a skeleton to be considered valid (Default: 5, NOTE: we suggest using at least half of the chunk size)
  • skeleton_smooth_filter: Skeleton smoothing filter for noise removal, "savgol" or "weiner" (Default: "savgol")
  • skeleton_smooth_window: Skeleton smoothing window for noise removal (Default: 7)
  • skeleton_smooth_poly: Skeleton smoothing poly for noise removal (Default: 1)
  • direction_smooth_filter: Direction smoothing filter for noise removal, "savgol" or "weiner" (Default: "savgol")
  • direction_smooth_window: Direction smoothing window for noise removal (Default: 5)
  • direction_smooth_poly: Direction smoothing poly for noise removal (Default: 1)
  • conversion_smooth_window: Smoothing window for conversion of skeletons positions to accelerations (Default: 3)
  • conversion_smooth_poly: Smoothing poly for conversion of skeletons positions to accelerations (Default: 1)
  • camera_angle: Camera rotation angle on the y-axis (Default: 0)
  • similarity_weight: Weight balancing the pure and derivative similarity measures (see paper; Default: 0.7)
  • verbose: Verbose for console logs if >=1 (Default: 0)

Return: a list of association Dicts, each structured as {'ts_start': initial timestamp of the association, 'ts_end': final timestamp of the association, 'skeleton_id': skeleton identifier, 'bracelet_id': accelerometer identifier}
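The returned list can be post-processed directly; for instance, to total how long each bracelet was associated with some skeleton (the associations below are synthetic, and timestamps are assumed to be in seconds):

```python
# Synthetic association list in the format returned by identify_and_track
# (illustrative values only; timestamps assumed to be in seconds)
associations = [
    {'ts_start': 0.0, 'ts_end': 12.5, 'skeleton_id': '3', 'bracelet_id': 'band_A'},
    {'ts_start': 12.5, 'ts_end': 30.0, 'skeleton_id': '7', 'bracelet_id': 'band_A'},
    {'ts_start': 5.0, 'ts_end': 20.0, 'skeleton_id': '4', 'bracelet_id': 'band_B'},
]

# Total associated time per bracelet
durations = {}
for a in associations:
    durations[a['bracelet_id']] = durations.get(a['bracelet_id'], 0.0) \
        + (a['ts_end'] - a['ts_start'])

print(durations)  # → {'band_A': 30.0, 'band_B': 15.0}
```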

(back to top)

Roadmap

  • Skeletons graphical visualization
  • Testing with other cameras
  • Implementation of direct test on .txt data

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/my_feature)
  3. Commit your Changes (git commit -m 'Add my_feature')
  4. Push to the Branch (git push origin feature/my_feature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Matteo Bastico - @matteobastico - matteo.bastico@gmail.com

Project Link: https://github.com/matteo-bastico/Multisensor-PIT

(back to top)

Acknowledgments

This work was supported by the H2020 European Project Procare4Life (https://procare4life.eu/), Grant no. 875221. The authors are with the Escuela Técnica Superior de Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain (e-mail: mab@gatv.ssr.upm.es, abh@gatv.ssr.upm.es, fag@gatv.ssr.upm.es).

(back to top)
