Connect Your Robot or Open Simulation

If you are using a real robot, connect it now and confirm it is recognized by your system before running the record command. If you are working in simulation, the record command can drive a virtual robot in the gym_pusht or gym_aloha environments using a keyboard or scripted policy as the teleoperation source.

# Real robot — verify connection (replace so100 with your robot type)
python -m lerobot.scripts.control_robot \
  --robot-path lerobot/configs/robot/so100.yaml \
  --control-mode teleoperate \
  --teleop-time-s 5

# Simulation — no hardware needed
python -m lerobot.scripts.control_robot \
  --robot-path lerobot/configs/robot/so100_sim.yaml \
  --control-mode teleoperate \
  --teleop-time-s 5

You should see joint states streaming in the terminal output and (for real robots) the arm responding to input. If not, resolve the connection issue before proceeding — recording with a disconnected robot silently produces corrupted data.
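If the robot is not recognized, the first thing to rule out is the USB connection itself. A minimal, stdlib-only sketch (not part of LeRobot; the actual port your robot uses is set in its config YAML) that lists candidate serial devices the OS can see:

```python
import glob

def find_serial_ports():
    """Return candidate serial devices where a USB-attached arm usually appears.

    Hypothetical helper: the real port for your robot comes from its LeRobot
    config YAML; this only confirms the OS sees *some* serial device.
    """
    patterns = ["/dev/ttyACM*", "/dev/ttyUSB*", "/dev/tty.usbmodem*"]  # Linux + macOS
    ports = []
    for pattern in patterns:
        ports.extend(sorted(glob.glob(pattern)))
    return ports

if __name__ == "__main__":
    ports = find_serial_ports()
    if ports:
        print("Candidate robot ports:", ports)
    else:
        print("No serial devices found — check the USB cable and power supply.")
```

An empty result with the robot plugged in usually means a cable, power, or driver problem, which no amount of LeRobot configuration will fix.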

The Record Command

Here is the core recording command; adapt the flags to your setup:

python -m lerobot.scripts.control_robot \
  --robot-path lerobot/configs/robot/so100.yaml \
  --control-mode record \
  --dataset.repo-id $HF_USER/pick-place-v1 \
  --dataset.num-episodes 50 \
  --dataset.single-task "Pick up the red cube and place it in the bowl" \
  --dataset.fps 30 \
  --dataset.push-to-hub 1 \
  --display-cameras 1

# $HF_USER is your HuggingFace username (set via: export HF_USER=your_username)
# --dataset.push-to-hub 1 uploads automatically after each episode
# --display-cameras 1 shows live camera feeds during recording
Simulation recording: Replace --robot-path with your sim config. Add --env-name gym_pusht/PushT-v0 and --policy-path lerobot/act_pusht_keypoints to record scripted (not human) demonstrations for a baseline dataset.

How Many Demonstrations?

The right number depends on your setup:

  • Simulation: 50 scripted episodes is sufficient for a baseline policy. The environment is deterministic, so variance is low and 50 is enough for ACT to converge.
  • Real robot, simple task: 50–80 human demonstrations. A pick-and-place with a fixed object position can train well at the low end of this range if your demos are consistent.
  • Real robot, variable task: 100–200 demonstrations. If object positions vary, or if the task requires multiple sub-steps, you need more coverage.

For this path, target 50 demonstrations minimum. Quality beats quantity — 50 consistent demonstrations outperform 150 sloppy ones every time.

Good Demonstration Practices

Consistent workspace setup

Reset objects to the same position before each episode. Use tape on the table to mark starting positions. The policy will learn from the distribution of positions in your demos — if they are all in the same spot, the policy will be calibrated for that spot.

Full, complete episodes

Every episode should start from the same home pose and end with the task fully completed. Do not stop recording mid-task. An incomplete episode where the gripper is halfway through a grasp teaches the model a broken behavior.
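A cheap post-session sanity check for incomplete episodes (a sketch, not LeRobot tooling): an episode that was cut off mid-task is usually much shorter than the rest, so comparing each episode's frame count against the median flags likely culprits.

```python
from statistics import median

def flag_short_episodes(lengths, ratio=0.5):
    """Return indices of episodes shorter than `ratio` x the median length.

    `lengths` is frames-per-episode. The 0.5 cutoff is an arbitrary
    assumption — tune it to your task's typical duration.
    """
    if not lengths:
        return []
    med = median(lengths)
    return [i for i, n in enumerate(lengths) if n < ratio * med]

# Example: episode 3 was aborted partway through
print(flag_short_episodes([620, 598, 640, 210, 605]))  # -> [3]
```

Flagged episodes are worth replaying in the visualizer before you commit the dataset to training.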

Deliberate, smooth motions

Move at 40–60% of maximum speed. Slow enough to be smooth, fast enough not to be jittery. The model learns timing from your demonstrations — erratic speed produces erratic policies.
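Smoothness can also be checked numerically. A minimal sketch (assuming you have joint positions as per-frame lists; a real dataset would give you these as tensors) that reports the largest per-frame joint change, which spikes on jerky corrections:

```python
def max_joint_step(trajectory):
    """Largest single-frame change across all joints, in the joints' units.

    `trajectory` is a list of per-frame joint-position lists. Reasonable
    thresholds are task- and fps-specific — calibrate on a demo you trust.
    """
    worst = 0.0
    for prev, cur in zip(trajectory, trajectory[1:]):
        for a, b in zip(prev, cur):
            worst = max(worst, abs(b - a))
    return worst

smooth = [[0.0, 0.0], [0.01, 0.02], [0.02, 0.04]]   # small, steady steps
jerky = [[0.0, 0.0], [0.30, 0.02], [0.31, 0.04]]    # one abrupt jump on joint 0
print(max_joint_step(smooth))  # small value
print(max_joint_step(jerky))   # an order of magnitude larger
```

Episodes whose metric stands well above the rest are good candidates for re-recording.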

Partial or aborted demos

If you drop the object, collide with the workspace, or trigger an error, press Ctrl+C to abort the episode. The episode will be discarded. Never push an aborted episode — it poisons the dataset.

Inconsistent strategies

Do not mix strategies: do not grasp from the left in some demos and from the right in others. Pick one approach and use it for every episode. ACT's CVAE learns a single "style" — inconsistency forces it to average, producing neither strategy reliably.

Push to HuggingFace Hub

If you did not set --dataset.push-to-hub 1 during recording, push manually after the session:

# Push your completed dataset to HuggingFace Hub
python -m lerobot.scripts.push_dataset_to_hub \
  --dataset-dir ~/lerobot-datasets/pick-place-v1 \
  --repo-id $HF_USER/pick-place-v1

# Verify it is live at:
# https://huggingface.co/datasets/$HF_USER/pick-place-v1
Dataset visibility: New datasets default to public on HuggingFace Hub. If your workspace or task is sensitive, add --private 1 to the push command. Public datasets contribute to the robotics community and may be featured in the SVRC dataset library.

Unit 3 Complete When...

You have at least 50 complete, unaborted demonstrations in a LeRobot dataset on HuggingFace Hub. You can load your dataset with LeRobotDataset("your-username/pick-place-v1") and see the expected number of episodes. You have visualized at least 5 of your own episodes using lerobot.scripts.visualize_dataset and confirmed the joint trajectories look smooth and the gripper state changes are clean. You are ready to train in Unit 4.
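The episode-count check can also be done against the local copy of the dataset, without loading it through LeRobot. A sketch assuming the dataset layout stores metadata in meta/info.json with a "total_episodes" field — verify the exact layout against your LeRobot version. The demo runs against a synthetic directory standing in for your real dataset folder:

```python
import json
import tempfile
from pathlib import Path

def count_episodes(dataset_dir):
    """Read the episode count from a local LeRobot-style dataset directory.

    Assumes metadata lives at meta/info.json with a "total_episodes" key —
    an assumption to check against your installed LeRobot version.
    """
    info = json.loads((Path(dataset_dir) / "meta" / "info.json").read_text())
    return info["total_episodes"]

# Demo on a synthetic stand-in for your real dataset directory
root = Path(tempfile.mkdtemp())
(root / "meta").mkdir()
(root / "meta" / "info.json").write_text(json.dumps({"total_episodes": 50}))
print("episodes recorded:", count_episodes(root))  # -> episodes recorded: 50
```

If the count comes up short of 50, record the missing episodes before moving on to training.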