Conversation

@JuanaDd JuanaDd commented Sep 10, 2025

Description

This is an implementation of TacSL integrated with Isaac Lab, which demonstrates how to properly configure and use tactile sensors to obtain realistic sensor outputs including tactile RGB images, force fields, and other relevant tactile measurements.

Fixes # (issue)

Type of change

  • New feature (non-breaking change which adds functionality)

Screenshots

Screenshots of the added documentation and simulation outputs.

Checklist

  • I have run the pre-commit checks with ./isaaclab.sh --format
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • I have updated the changelog and the corresponding version in the extension's config/extension.toml file
  • I have added my name to the CONTRIBUTORS.md or my name already exists there

clean up license and configs; use full import path;

add doc for visuo_tactile_sensor

Contributor Author

@iakinola23 Could you help view this documentation here? Thanks!

Contributor Author

Hi @iakinola23 updated the documentation with your edits.

Contributor Author

@JuanaDd JuanaDd Sep 10, 2025

@kellyguo11 Hi Kelly, I will upload the npz binaries, and the USD files to nucleus once the licensing concerns are resolved. Thanks!

@Mayankm96 Mayankm96 changed the title visual-based tactile sensor impl. and shape sensing example Adds visual-based tactile sensor with shape sensing example Sep 10, 2025
@Mayankm96 Mayankm96 added the enhancement New feature or request label Sep 10, 2025
@github-actions github-actions bot added documentation Improvements or additions to documentation isaac-lab Related to Isaac Lab team labels Sep 11, 2025
Contributor

@kellyguo11 kellyguo11 left a comment

just a few high-level comments, haven't gone through the code in detail yet

taxim_gelsight = gelsightRender("gelsight_r15")
import ipdb

ipdb.set_trace()
Contributor

why is there a breakpoint here?

Contributor Author

deleted in the newest commit.

self._camera_sensor = TiledCamera(self.cfg.camera_cfg)

# Initialize camera if not already done
# TODO: Juana: this is a hack to initialize the camera sensor. Should camera_sensor be managed by TacSL sensor or InteractiveScene?
Contributor

maybe we can do something similar to the RayCasterCamera?

Contributor Author

Hi Kelly, I checked the RayCasterCamera implementation, and it differs slightly from the tacsl sensor logic. The tacsl_sensor includes a camera sensor that can be enabled or disabled via the cfg, while RayCasterCamera only uses CameraData without having a camera or other sensor as a member. I’ve kept the current implementation and cleaned up the comments. Please let me know if you have any further concerns or suggestions.
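The cfg-gated camera pattern described in this reply can be sketched as follows. This is a minimal stand-in, not the PR's code: the `enable_camera` flag and the stub camera class are hypothetical names, while `camera_cfg` and `_camera_sensor` mirror the snippet quoted earlier in the thread.

```python
from dataclasses import dataclass, field


class TiledCameraStub:
    """Stand-in for isaaclab.sensors.TiledCamera, used only to keep this sketch self-contained."""

    def __init__(self, cfg: dict):
        self.cfg = cfg


@dataclass
class VisuoTactileSensorCfg:
    """Hypothetical config: the camera is an optional sub-sensor toggled by a flag."""

    enable_camera: bool = True
    camera_cfg: dict = field(default_factory=dict)


class VisuoTactileSensor:
    def __init__(self, cfg: VisuoTactileSensorCfg):
        self.cfg = cfg
        # Construct the camera sub-sensor only when the config enables it;
        # RayCasterCamera, by contrast, owns no sub-sensor at all.
        self._camera_sensor = TiledCameraStub(cfg.camera_cfg) if cfg.enable_camera else None
```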

Contributor

images should be .jpg

Contributor Author

all changed to jpg format and documentation is updated accordingly.

.. code-block:: bash

   conda activate env_isaaclab
   pip install opencv-python==4.11.0 trimesh==4.5.1 imageio==2.37.0
Contributor

do we need to add these to setup.py?

Contributor Author

Added opencv-python to setup.py; trimesh is already included, and imageio's dependency is not required in the newest commit.

license agreement from NVIDIA CORPORATION is strictly prohibited.
----------------
Tensorized implementation of RGB rendering of gelsight-style visuo-tactile sensors
Contributor

can you also add any licenses required for gelsight under docs/licenses?

Contributor Author

Updated the license header to be consistent with other files.

@kellyguo11
Contributor

please also make sure to run ./isaaclab.sh -f for the linter checks

return dzdx, dzdy


def generate_normals(height_map):
Contributor

generate_normals is not needed and deprecated; only the tensorized version is used.

self.background_tensor = torch.tensor(self.background, device=self.device)
print("Gelsight initialization done!")

def render(self, heightMap):
Contributor

Delete render as it is not used. Only the tensorized version (render_tensorized) is used.

Contributor Author

Deleted render, generate_normals, and compute_image_gradient, as well as padding.
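For reference, the tensorized style of computation this thread refers to can be sketched as below. This is an illustrative batched central-difference gradient over height maps, not the PR's actual render_tensorized implementation; the 0.0877 default simply echoes the pixmm value quoted later for the r15 config.

```python
import torch


def compute_height_map_gradients(height_maps: torch.Tensor, pix_mm: float = 0.0877):
    """Compute per-pixel height-map gradients for a batch of tactile images.

    A hypothetical tensorized sketch: central differences over a (N, H, W)
    batch computed in one shot, instead of looping over pixels or envs.

    Args:
        height_maps: Batch of height maps. Shape is (N, H, W).
        pix_mm: Millimeters per pixel.

    Returns:
        Tuple (dzdx, dzdy), each of shape (N, H, W).
    """
    # Pad with edge replication so the output keeps the (H, W) shape.
    padded = torch.nn.functional.pad(
        height_maps.unsqueeze(1), (1, 1, 1, 1), mode="replicate"
    ).squeeze(1)
    dzdx = (padded[:, 1:-1, 2:] - padded[:, 1:-1, :-2]) / (2.0 * pix_mm)
    dzdy = (padded[:, 2:, 1:-1] - padded[:, :-2, 1:-1]) / (2.0 * pix_mm)
    return dzdx, dzdy
```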

@zxhuang97

zxhuang97 commented Sep 30, 2025

Thank you for making TacSL available on IsaacLab! I have three quick questions:

  1. Have you evaluated the physical plausibility of the GelSight finger? For example, would it allow stable grasps of objects like pegs or bolts, similar to what TacSL demonstrated in Isaac Gym?
  2. I noticed that only the USD for gelsight_r15 is currently available. Do you plan to migrate gelsight_mini as well? Alternatively, could you share guidance on converting the URDF to USD while preserving realistic physics?
  3. Is there a way to change the resolution of the tactile image? Currently it's hard-coded in the configuration to 320 x 240.

Thanks!

@Mayankm96 Mayankm96 moved this to In review in Isaac Lab Oct 10, 2025
@kellyguo11
Contributor

@ooctipus @Mayankm96 could you please help do a review of this one?

@JuanaDd
Contributor Author

JuanaDd commented Oct 18, 2025

@zxhuang97 Hi, thanks for your questions and for checking out TacSL on IsaacLab.

  1. Yes. A full task environment (like object grasping) will be supported later after this PR. This PR mainly serves as a reference for how to implement a custom visual-based sensor in the IsaacLab framework.

  2. The GelSight Mini will likely be supported later, as long as there aren't any license issues. In the meantime, you can try using the URDF importer tool in IsaacSim or the convert_urdf.py script in IsaacLab to convert it yourself. You can tweak the physical properties, such as compliant_contact_stiffness and compliant_contact_damping, in the tacsl sensor config if needed.

  3. The tactile image resolution can be modified through the camera config. Right now, it’s using the values defined in the camera prim included in the asset.

Hope this helps!
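For question 3, a hedged config fragment showing the kind of override meant by "through the camera config". TiledCameraCfg is Isaac Lab's public tiled-camera config (with width and height fields); the prim path here is a placeholder, and whether the asset's built-in camera prim honors the override is an assumption to verify against the PR.

```python
from isaaclab.sensors import TiledCameraCfg

# Hypothetical override: raise the tactile image resolution from the
# asset's default 320x240 to 640x480 before the sensor is created.
camera_cfg = TiledCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/elastomer/camera",  # placeholder path
    width=640,
    height=480,
    data_types=["rgb"],
)
```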

Contributor

Only this diagram seems to be getting used. Please remove any unused images.

Comment on lines 15 to 34
conf_r15 = {
"data_dir": "gelsight_r15_data",
"background_path": "bg.jpg",
"calib_path": "polycalib.npz",
"real_bg": "real_bg.npy",
"h": 320,
"w": 240,
"numBins": 120,
"pixmm": 0.0877,
}
conf_gs_mini = {
"data_dir": "gs_mini_data",
"background_path": "bg.jpg",
"calib_path": "polycalib.npz",
"real_bg": "real_bg.npy",
"h": 240,
"w": 320,
"numBins": 120,
"pixmm": 0.065,
}
Contributor

Following all other sensors, please make these into configclass objects with the correct doc-strings. We usually don't want to keep sensor-specific settings in the core codebase.

Sensors configuration live here: https://github.com/isaac-sim/IsaacLab/tree/main/source/isaaclab_assets/isaaclab_assets/sensors

Contributor

Please choose variable names that are intuitive to read and understand. "h" and "w" are insufficient.

Contributor Author

Thanks for the suggestion. A config class has been added here, and the gelsight configs are placed under isaaclab_assets/sensors.
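The requested conversion of the conf_r15 dict into a documented config class can be sketched roughly as below. A plain dataclass stands in for Isaac Lab's @configclass decorator so the sketch is self-contained, and the long field names (image_height, mm_per_pixel, etc.) are suggested renamings of the original dict keys, not necessarily the names used in the PR.

```python
from dataclasses import dataclass


@dataclass
class GelSightRenderCfg:
    """Calibration settings for a gelsight-style tactile renderer (sketch)."""

    data_dir: str = "gelsight_r15_data"
    """Directory holding the per-sensor calibration data."""

    background_path: str = "bg.jpg"
    """Background image rendered with no contact."""

    calib_path: str = "polycalib.npz"
    """Polynomial lookup-table calibration file."""

    real_bg: str = "real_bg.npy"
    """Background image captured from the real sensor."""

    image_height: int = 320
    """Tactile image height in pixels (was "h")."""

    image_width: int = 240
    """Tactile image width in pixels (was "w")."""

    num_bins: int = 120
    """Number of bins in the calibration table (was "numBins")."""

    mm_per_pixel: float = 0.0877
    """Millimeters per pixel (was "pixmm")."""
```

The gs_mini variant would then be a second instance with its own defaults rather than a second dict in the core codebase.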

)


def mkdir_helper(dir_path):
Contributor

Please add type annotations correctly along with docstrings to the code here.

"""Visualize the tactile shear field.
Args:
tactile_normal_force: Array of tactile normal forces.
Contributor

Missing shape information.
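The requested shape information might be added along these lines (a sketch; the (num_envs, height, width) shape is an assumption based on the batched sensor design, not taken from the PR):

```python
def visualize_tactile_shear_field(tactile_normal_force):
    """Visualize the tactile shear field.

    Args:
        tactile_normal_force: Array of tactile normal forces.
            Shape is (num_envs, height, width).
    """
```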

"""Visualize the penetration depth.
Args:
penetration_depth_img: Image of penetration depth.
Contributor

Missing shape information. How are the defaults chosen? Would be good to have some reference.

"""Generate the gradient magnitude and direction of the height map.
Args:
img: Input height map tensor.
Contributor

Missing shape information.

Contributor Author

Done

kernel_sz: Size of the kernel. Defaults to 5.
Returns:
Filtering kernel.
Contributor

Missing shape information.

Contributor Author

Done

"""Apply Gaussian filtering to the input image tensor.
Args:
img: Input image tensor.
Contributor

Missing shape information.

Contributor Author

Done
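The Gaussian filtering step discussed above can be sketched with a batched torch implementation. This is illustrative only, not the PR's code; kernel-size and sigma defaults are arbitrary choices for the sketch.

```python
import torch


def gaussian_kernel(kernel_sz: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2D Gaussian kernel. Shape is (kernel_sz, kernel_sz)."""
    coords = torch.arange(kernel_sz, dtype=torch.float32) - (kernel_sz - 1) / 2.0
    gauss_1d = torch.exp(-(coords**2) / (2.0 * sigma**2))
    kernel_2d = torch.outer(gauss_1d, gauss_1d)
    return kernel_2d / kernel_2d.sum()


def gaussian_filter(img: torch.Tensor, kernel_sz: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Apply Gaussian filtering to a batch of images.

    Args:
        img: Input image tensor. Shape is (N, H, W).

    Returns:
        Filtered tensor of the same shape (N, H, W).
    """
    kernel = gaussian_kernel(kernel_sz, sigma).to(img.device).view(1, 1, kernel_sz, kernel_sz)
    # Zero padding keeps the spatial shape; edges are slightly darkened.
    return torch.nn.functional.conv2d(img.unsqueeze(1), kernel, padding=kernel_sz // 2).squeeze(1)
```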

if self.cfg.debug_vis:
self._initialize_visualization()

def get_initial_render(self) -> dict | None:
Contributor

Is this value used anywhere downstream? How does it handle the nominal value when environments are getting reset? I was wondering if this call should happen inside the sensor reset.

Contributor Author

Actually, the sensor uses this stored "no-contact" image to calculate deformation (diff = nominal - current) every step. Capturing it once is sufficient, and re-capturing on reset would be unsafe because a reset might happen while the robot is holding an object.

I've also added a warning to the docstring. It now explicitly states that the user must ensure the sensor is in a "no contact" state when calling this method to avoid incorrect baselines. Thanks for pointing this out; your feedback helped clarify this important detail.
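The "capture once, diff every step" baseline logic described in this exchange can be sketched as follows (hypothetical names; the PR's get_initial_render returns a dict rather than a single tensor):

```python
import torch


class TactileBaseline:
    """Sketch of a no-contact baseline: captured once, diffed every step."""

    def capture_initial_render(self, height_map: torch.Tensor) -> None:
        # Caller must guarantee the sensor is in a no-contact state here,
        # otherwise every subsequent deformation estimate is biased.
        self._nominal_height_map = height_map.clone()

    def deformation(self, current_height_map: torch.Tensor) -> torch.Tensor:
        # Per-step deformation relative to the stored no-contact baseline.
        return self._nominal_height_map - current_height_map
```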


import isaacsim.core.utils.torch as torch_utils
import omni.log
from isaacsim.core.prims import SdfShapePrim
Contributor

As we are working towards getting rid of isaacsim dependencies, would it be possible to use PhysX views directly and not rely on Isaac Sim classes for SDF? I don't think they are doing much besides creating the PhysX views internally.

Contributor Author

Changed to call create_sdf_shape_view from physx view directly.

from scipy.spatial.transform import Rotation as R
from typing import TYPE_CHECKING, Any

import isaacsim.core.utils.torch as torch_utils
Contributor

Is it possible to use the math utils provided with Isaac Lab: isaaclab.utils.math?

Contributor Author

Done
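The switch away from scipy.spatial.transform can be illustrated with a pure-torch quaternion rotation. Isaac Lab's isaaclab.utils.math provides batched helpers such as quat_apply using (w, x, y, z) ordering; the standalone version below follows that convention but is my own sketch, written only so the example is self-contained.

```python
import torch


def quat_apply(quat: torch.Tensor, vec: torch.Tensor) -> torch.Tensor:
    """Rotate vectors by unit quaternions.

    Args:
        quat: Quaternions in (w, x, y, z) order. Shape is (..., 4).
        vec: Vectors to rotate. Shape is (..., 3).

    Returns:
        Rotated vectors. Shape is (..., 3).
    """
    w, xyz = quat[..., :1], quat[..., 1:]
    # v' = v + 2w (q_v x v) + 2 q_v x (q_v x v), with t = 2 (q_v x v)
    t = 2.0 * torch.cross(xyz, vec, dim=-1)
    return vec + w * t + torch.cross(xyz, t, dim=-1)
```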

# Timing statistics
#########################################################################################

def get_timing_summary(self) -> dict:
Contributor

To stay consistent with other sensor classes, this shouldn't belong to the sensor class itself. All benchmarking should be done outside in user code.

Contributor Author

Removed benchmarking code.

Comment on lines 143 to 144
compliant_contact_stiffness: float | None = None
compliant_contact_damping: float | None = None
Contributor

Missing doc strings.

Contributor Author

Done.

func: Callable = from_files.spawn_from_usd_with_physics_material_on_prim
compliant_contact_stiffness: float | None = None
compliant_contact_damping: float | None = None
apply_physics_material_prim_path: str | None = None
Contributor

"apply" sounds more like a function name, which makes it confusing. My expectation would be that this is a bool variable that says whether to apply or not, but it is a string. Is there a better name we can use here instead that is less ambiguous?

Contributor Author

Updated to "physics_material_prim_path", hope it's less confusing :)

func: Callable = from_files.spawn_from_usd_with_physics_material_on_prim
compliant_contact_stiffness: float | None = None
compliant_contact_damping: float | None = None
apply_physics_material_prim_path: str | None = None
Collaborator

does this only work for one rigidbody in the usd?

Contributor Author

Added support for multiple rigidbodies here, thanks.

Comment on lines 786 to 792
contact_object_velocities = self._contact_object_body_view.get_velocities()
contact_object_linvel_w = contact_object_velocities[env_ids, :3]
contact_object_angvel_w = contact_object_velocities[env_ids, 3:]

elastomer_velocities = self._elastomer_body_view.get_velocities()
elastomer_linvel_w = elastomer_velocities[env_ids, :3]
elastomer_angvel_w = elastomer_velocities[env_ids, 3:]
Collaborator

One thing to be careful with here is that the velocities returned from the body view are the velocities of the COM of that body. If your *_body_view.get_coms() has non-zero transforms, your velocity values will be incorrect. I would suggest getting the COMs of the contact body and elastomer body at init and then doing the linear velocity transforms (see the IMU update call).

Contributor Author

Thanks for pointing that out and for the detailed explanation. I’ll update the implementation here to use the COMs of the contact body and elastomer body for the velocity transforms as you suggested.
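The correction the reviewer describes is standard rigid-body kinematics: the link-frame velocity is the COM velocity plus the cross product of the angular velocity with the COM-to-link offset. A sketch (hypothetical function name; all quantities in the world frame, mirroring the pattern the IMU sensor uses):

```python
import torch


def link_frame_linvel(
    com_linvel_w: torch.Tensor,
    com_angvel_w: torch.Tensor,
    link_pos_w: torch.Tensor,
    com_pos_w: torch.Tensor,
) -> torch.Tensor:
    """Shift COM linear velocities to the link frame origin.

    v_link = v_com + omega x (r_link - r_com). All tensors have shape (N, 3).
    """
    return com_linvel_w + torch.cross(com_angvel_w, link_pos_w - com_pos_w, dim=-1)
```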

JuanaDd and others added 12 commits December 24, 2025 18:34

Labels

asset New asset feature or request documentation Improvements or additions to documentation enhancement New feature or request isaac-lab Related to Isaac Lab team

Projects

Status: In review


6 participants