---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- LIBERO
- v3.0
configs:
- config_name: default
  data_files: data/*/*.parquet
size_categories:
- 100M<n<1B
---
This dataset was created using LeRobot.
## Dataset Description
This dataset contains 130 Lightwheel-Libero tasks, collected with the double-piper robot in environments provided by LW-BenchHub.
The robot configuration used during data collection is DoublePiper-Abs, defined by `DoublePiperAbsEnvCfg`, which inherits from `DoublePiperEnvCfg`.
### Robot State
The robot state recorded in the environment is stored in `observation.state`.
- Dimension: 16
- Description: joint positions of the robot
- Joint mapping: the exact joint ordering and semantics are listed in the Dataset Structure section below.
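Based on the joint list given in the Dataset Structure section below, the 16-dim state vector can be indexed as in this minimal sketch (the helper names are illustrative, not part of the dataset API):

```python
# Channel names of observation.state, in order (from the metadata below).
STATE_NAMES = [
    "joint1_r", "joint1_l", "joint2_r", "joint2_l",
    "joint3_r", "joint3_l", "joint4_r", "joint4_l",
    "joint5_r", "joint5_l", "joint6_r", "joint6_l",
    "finger_joint_left_r", "finger_joint_right_r",
    "finger_joint_left_l", "finger_joint_right_l",
]

# Map each channel name to its index in the 16-dim vector.
STATE_INDEX = {name: i for i, name in enumerate(STATE_NAMES)}

def split_state(state):
    """Split a 16-dim state vector into arm joints and finger joints."""
    assert len(state) == 16
    arm = state[:12]      # arm joints, interleaved right/left
    fingers = state[12:]  # two finger joints per gripper
    return arm, fingers
```

Note that the state interleaves right and left arm joints (`joint1_r`, `joint1_l`, ...), unlike the action vector, which groups the left arm first.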
### Robot Action
The robot actions are recorded in `action`.
- Dimension: 12

The action vector is ordered as: left arm (5), right arm (5), left gripper (1), right gripper (1).

| Component | Dimension | Description |
|-----------|-----------|-------------|
| Arm action | 10 | 5-DOF left arm + 5-DOF right arm |
| Gripper action | 2 | Binary gripper control |
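The ordering above can be expressed as a small sketch (the function name is illustrative, not part of the dataset API):

```python
def split_action(action):
    """Split a 12-dim action vector into its components, following the
    documented ordering: left arm (5), right arm (5), two grippers."""
    assert len(action) == 12
    left_arm = action[0:5]     # joint1_l, joint2_l, joint3_l, joint5_l, joint6_l
    right_arm = action[5:10]   # joint1_r, joint2_r, joint3_r, joint5_r, joint6_r
    left_gripper = action[10]  # binary: -1 close, +1 open
    right_gripper = action[11]
    return left_arm, right_arm, left_gripper, right_gripper
```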
#### Arm Action
The arm actions are implemented using `JointPositionActionCfg`.
- Controlled joints:
  - `joint1_l` (index 0), `joint1_r` (index 5)
  - `joint2_l` (index 1), `joint2_r` (index 6)
  - `joint3_l` (index 2), `joint3_r` (index 7)
  - `joint5_l` (index 3), `joint5_r` (index 8)
  - `joint6_l` (index 4), `joint6_r` (index 9)
```python
self.action_config.left_arm_action = mdp.JointPositionActionCfg(
    asset_name="robot",
    joint_names=["joint1_l", "joint2_l", "joint3_l", "joint5_l", "joint6_l"],
    scale=1,
    use_default_offset=True,
)
self.action_config.right_arm_action = mdp.JointPositionActionCfg(
    asset_name="robot",
    joint_names=["joint1_r", "joint2_r", "joint3_r", "joint5_r", "joint6_r"],
    scale=1,
    use_default_offset=True,
)
```
#### Gripper Action
Grippers are controlled using `BinaryJointPositionActionCfg`. In the action data, `-1` closes the gripper and `+1` opens it.
The action commands for the left and right grippers are defined as follows:
- Left gripper (index 10)
- Right gripper (index 11)
- Action command:
  - `-1`: close gripper
  - `+1`: open gripper
- Each gripper is actuated by two joints together, and the open/close commands are mapped explicitly to joint positions.
```python
# left gripper
self.action_config.left_gripper_action = mdp.BinaryJointPositionActionCfg(
    asset_name="robot",
    joint_names=["finger_joint.*_l"],
    open_command_expr={"finger_joint_left_l": 0.035, "finger_joint_right_l": -0.035},
    close_command_expr={"finger_joint.*_l": 0.0},
)
# right gripper
self.action_config.right_gripper_action = mdp.BinaryJointPositionActionCfg(
    asset_name="robot",
    joint_names=["finger_joint.*_r"],
    open_command_expr={"finger_joint_left_r": 0.035, "finger_joint_right_r": -0.035},
    close_command_expr={"finger_joint.*_r": 0.0},
)
```
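The binary mapping can be summarized in plain Python, assuming a positive command means "open" (the function and threshold are illustrative; the actual mapping is performed by the config above):

```python
def left_gripper_targets(cmd: float) -> dict:
    """Map a binary gripper command to the two finger joint targets,
    mirroring open_command_expr / close_command_expr above."""
    if cmd > 0:  # +1: open
        return {"finger_joint_left_l": 0.035, "finger_joint_right_l": -0.035}
    else:        # -1: close
        return {"finger_joint_left_l": 0.0, "finger_joint_right_l": 0.0}
```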
## Dataset Structure
```json
{
    "codebase_version": "v3.0",
    "robot_type": "double_piper",
    "total_episodes": 6500,
    "total_frames": 6010166,
    "total_tasks": 130,
    "chunks_size": 1000,
    "fps": 50,
    "splits": {
        "train": "0:6500"
    },
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
    "features": {
        "observation.images.left_hand": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "video_info": {
                "video.width": 640,
                "video.height": 480,
                "video.fps": 50.0,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.channels": 3,
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.first_person": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "video_info": {
                "video.width": 640,
                "video.height": 480,
                "video.fps": 50.0,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.channels": 3,
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.right_hand": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "video_info": {
                "video.width": 640,
                "video.height": 480,
                "video.fps": 50.0,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.channels": 3,
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [16],
            "names": [
                "joint1_r",
                "joint1_l",
                "joint2_r",
                "joint2_l",
                "joint3_r",
                "joint3_l",
                "joint4_r",
                "joint4_l",
                "joint5_r",
                "joint5_l",
                "joint6_r",
                "joint6_l",
                "finger_joint_left_r",
                "finger_joint_right_r",
                "finger_joint_left_l",
                "finger_joint_right_l"
            ],
            "fps": 50
        },
        "action": {
            "dtype": "float32",
            "shape": [12],
            "names": [
                "joint1_l",
                "joint2_l",
                "joint3_l",
                "joint5_l",
                "joint6_l",
                "joint1_r",
                "joint2_r",
                "joint3_r",
                "joint5_r",
                "joint6_r",
                "left_gripper",
                "right_gripper"
            ],
            "fps": 50
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null,
            "fps": 50
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null,
            "fps": 50
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null,
            "fps": 50
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null,
            "fps": 50
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null,
            "fps": 50
        }
    },
    "data_files_size_in_mb": 100,
    "video_files_size_in_mb": 500
}
```
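The `data_path` and `video_path` templates resolve with standard Python string formatting, and the episode statistics above imply an average episode length, as this small sketch shows (the chunk/file indices are illustrative):

```python
# Resolve a shard path from the data_path template in the metadata.
data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
shard = data_path.format(chunk_index=0, file_index=3)
# -> "data/chunk-000/file-003.parquet"

# Average episode length from the metadata: total_frames / total_episodes.
total_frames = 6010166
total_episodes = 6500
fps = 50
frames_per_episode = total_frames / total_episodes  # ~925 frames
seconds_per_episode = frames_per_episode / fps      # ~18.5 s at 50 fps
```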