mimicgen.datagen package#

Submodules#

mimicgen.datagen.data_generator module#

Base class for data generator.

class mimicgen.datagen.data_generator.DataGenerator(task_spec, dataset_path, demo_keys=None)#

Bases: object

The main data generator object that loads a source dataset, parses it, and generates new trajectories.

generate(env, env_interface, select_src_per_subtask=False, transform_first_robot_pose=False, interpolate_from_last_target_pose=True, render=False, video_writer=None, video_skip=5, camera_names=None, pause_subtask=False)#

Attempt to generate a new demonstration.

Parameters
  • env (robomimic EnvBase instance) – environment to use for data collection

  • env_interface (MG_EnvInterface instance) – environment interface for some data generation operations

  • select_src_per_subtask (bool) – if True, select a different source demonstration for each subtask during data generation, else keep the same one for the entire episode

  • transform_first_robot_pose (bool) – if True, each subtask segment will consist of the first robot pose and the target poses instead of just the target poses. Can sometimes help improve data generation quality as the interpolation segment will interpolate to where the robot started in the source segment instead of the first target pose. Note that the first subtask segment of each episode will always include the first robot pose, regardless of this argument.

  • interpolate_from_last_target_pose (bool) – if True, each interpolation segment will start from the last target pose in the previous subtask segment, instead of the current robot pose. Can sometimes improve data generation quality.

  • render (bool) – if True, render on-screen

  • video_writer (imageio writer) – video writer

  • video_skip (int) – determines rate at which environment frames are written to video

  • camera_names (list) – determines which camera(s) are used for rendering. Pass more than one to output a video with multiple camera views concatenated horizontally.

  • pause_subtask (bool) – if True, pause after every subtask during generation, for debugging.

Returns

dictionary with the following items:

  • initial_state (dict): initial simulator state for the executed trajectory

  • states (list): simulator state at each timestep

  • observations (list): observation dictionary at each timestep

  • datagen_infos (list): datagen_info at each timestep

  • actions (np.array): action executed at each timestep

  • success (bool): whether the trajectory successfully solved the task or not

  • src_demo_inds (list): list of selected source demonstration indices for each subtask

  • src_demo_labels (np.array): same as @src_demo_inds, but repeated to have a label for each timestep of the trajectory

Return type

results (dict)

randomize_subtask_boundaries()#

Apply random offsets to sample subtask boundaries according to the task spec. Recall that each demonstration is segmented into a set of subtask segments, and the end index of each subtask can have a random offset.
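As an illustrative sketch only (not the library's implementation), sampling a random offset for each subtask's end index might look like the following. The function name and the `(low, high)` offset-range representation are assumptions for the example:

```python
import numpy as np

def randomize_boundaries(subtask_end_inds, offset_ranges, rng=None):
    """Apply a random offset to each subtask end index.

    subtask_end_inds: nominal end index per subtask, length S
    offset_ranges: list of (low, high) inclusive offset ranges, one per subtask
    """
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for end, (low, high) in zip(subtask_end_inds, offset_ranges):
        # rng.integers samples from [low, high + 1), i.e. inclusive of high
        out.append(end + rng.integers(low, high + 1))
    return np.array(out)
```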

select_source_demo(eef_pose, object_pose, subtask_ind, src_subtask_inds, subtask_object_name, selection_strategy_name, selection_strategy_kwargs=None)#

Helper method to run source subtask segment selection.

Parameters
  • eef_pose (np.array) – current end effector pose

  • object_pose (np.array) – current object pose for this subtask

  • subtask_ind (int) – index of subtask

  • src_subtask_inds (np.array) – start and end indices for subtask segment in source demonstrations of shape (N, 2)

  • subtask_object_name (str) – name of reference object for this subtask

  • selection_strategy_name (str) – name of selection strategy

  • selection_strategy_kwargs (dict) – extra kwargs for running selection strategy

Returns

selected source demo index

Return type

selected_src_demo_ind (int)

mimicgen.datagen.datagen_info module#

Defines structure of information that is needed from an environment for data generation.

class mimicgen.datagen.datagen_info.DatagenInfo(eef_pose=None, object_poses=None, subtask_term_signals=None, target_pose=None, gripper_action=None)#

Bases: object

Structure of information needed from an environment for data generation. To allow for flexibility, not all information must be present.

to_dict()#

Convert this instance to a dictionary containing the same information.
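A minimal sketch of such a flexible container, mirroring the constructor signature shown above (the class name `DatagenInfoSketch` and the behavior of omitting unset fields from the dictionary are assumptions, not the library's actual implementation):

```python
class DatagenInfoSketch:
    """Illustrative container; all fields are optional, matching the signature above."""

    def __init__(self, eef_pose=None, object_poses=None, subtask_term_signals=None,
                 target_pose=None, gripper_action=None):
        self.eef_pose = eef_pose
        self.object_poses = object_poses
        self.subtask_term_signals = subtask_term_signals
        self.target_pose = target_pose
        self.gripper_action = gripper_action

    def to_dict(self):
        # Only include fields that were actually provided.
        fields = ("eef_pose", "object_poses", "subtask_term_signals",
                  "target_pose", "gripper_action")
        return {k: getattr(self, k) for k in fields if getattr(self, k) is not None}
```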

mimicgen.datagen.selection_strategy module#

Selection strategies used by MimicGen to select subtask segments from source human demonstrations.

class mimicgen.datagen.selection_strategy.MG_SelectionStrategy#

Bases: object

Defines methods and functions for selection strategies to implement.

property NAME#

Name of the selection strategy. Used to register the strategy class in the global registry and to look it up later (e.g. via make_selection_strategy). Concrete subclasses must define this (e.g. NAME = 'random').

abstract select_source_demo(eef_pose, object_pose, src_subtask_datagen_infos)#

Selects source demonstration index using the current robot pose, relevant object pose for the current subtask, and relevant information from the source demonstrations for the current subtask.

Parameters
  • eef_pose (np.array) – current 4x4 eef pose

  • object_pose (np.array) – current 4x4 object pose, for the object in this subtask

  • src_subtask_datagen_infos (list) – list of DatagenInfo instances, one per source demonstration, for the relevant subtask segment

Returns

index of source demonstration - indicates which source subtask segment to use

Return type

source_demo_ind (int)

class mimicgen.datagen.selection_strategy.MG_SelectionStrategyMeta(name, bases, class_dict)#

Bases: type

This metaclass adds selection strategy classes into the global registry.
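The general pattern can be sketched as follows; this is a generic metaclass-registry example under assumed names (`REGISTRY`, `RegistryMeta`), not MimicGen's actual code:

```python
REGISTRY = {}

class RegistryMeta(type):
    """Illustrative metaclass that auto-registers concrete subclasses by their NAME."""

    def __init__(cls, name, bases, class_dict):
        super().__init__(name, bases, class_dict)
        # Skip abstract bases that have not set NAME to a concrete string.
        if isinstance(getattr(cls, "NAME", None), str):
            REGISTRY[cls.NAME] = cls

class StrategyBase(metaclass=RegistryMeta):
    NAME = None  # abstract; subclasses must override

class RandomLike(StrategyBase):
    NAME = "random"  # registered automatically at class-creation time
```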

class mimicgen.datagen.selection_strategy.NearestNeighborObjectStrategy#

Bases: mimicgen.datagen.selection_strategy.MG_SelectionStrategy

Pick source demonstration to be the one with the closest object pose to the object in the current scene.

NAME = 'nearest_neighbor_object'#
select_source_demo(eef_pose, object_pose, src_subtask_datagen_infos, pos_weight=1.0, rot_weight=1.0, nn_k=3)#

Selects source demonstration index using the current robot pose, relevant object pose for the current subtask, and relevant information from the source demonstrations for the current subtask.

Parameters
  • eef_pose (np.array) – current 4x4 eef pose

  • object_pose (np.array) – current 4x4 object pose, for the object in this subtask

  • src_subtask_datagen_infos (list) – list of DatagenInfo instances, one per source demonstration, for the relevant subtask segment

  • pos_weight (float) – weight on position for minimizing pose distance

  • rot_weight (float) – weight on rotation for minimizing pose distance

  • nn_k (int) – pick source demo index uniformly at random from the top @nn_k nearest neighbors

Returns

index of source demonstration - indicates which source subtask segment to use

Return type

source_demo_ind (int)
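The selection criterion can be sketched as a weighted position/rotation distance followed by a uniform pick among the `nn_k` nearest candidates. This is an illustrative standalone version (function name and the `(N, 4, 4)` source-pose input are assumptions), not the library's implementation:

```python
import numpy as np

def select_nearest_object_demo(object_pose, src_object_poses,
                               pos_weight=1.0, rot_weight=1.0, nn_k=3, rng=None):
    """Pick a source demo index near the current object pose.

    object_pose: (4, 4) current object pose
    src_object_poses: (N, 4, 4) object pose for each source segment
    Returns an index chosen uniformly from the nn_k smallest weighted distances.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Translation distance between current pose and each source pose.
    pos_d = np.linalg.norm(src_object_poses[:, :3, 3] - object_pose[:3, 3], axis=-1)
    # Geodesic rotation distance via the trace of the relative rotation.
    rel = np.matmul(src_object_poses[:, :3, :3].transpose(0, 2, 1), object_pose[:3, :3])
    cos = np.clip((np.trace(rel, axis1=1, axis2=2) - 1.0) / 2.0, -1.0, 1.0)
    rot_d = np.arccos(cos)
    dists = pos_weight * pos_d + rot_weight * rot_d
    top_k = np.argsort(dists)[:nn_k]
    return int(rng.choice(top_k))
```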

class mimicgen.datagen.selection_strategy.NearestNeighborRobotDistanceStrategy#

Bases: mimicgen.datagen.selection_strategy.MG_SelectionStrategy

Pick source demonstration to be the one that minimizes the distance the robot end effector will need to travel from the current pose to the first pose in the transformed segment.

NAME = 'nearest_neighbor_robot_distance'#
select_source_demo(eef_pose, object_pose, src_subtask_datagen_infos, pos_weight=1.0, rot_weight=1.0, nn_k=3)#

Selects source demonstration index using the current robot pose, relevant object pose for the current subtask, and relevant information from the source demonstrations for the current subtask.

Parameters
  • eef_pose (np.array) – current 4x4 eef pose

  • object_pose (np.array) – current 4x4 object pose, for the object in this subtask

  • src_subtask_datagen_infos (list) – list of DatagenInfo instances, one per source demonstration, for the relevant subtask segment

  • pos_weight (float) – weight on position for minimizing pose distance

  • rot_weight (float) – weight on rotation for minimizing pose distance

  • nn_k (int) – pick source demo index uniformly at random from the top @nn_k nearest neighbors

Returns

index of source demonstration - indicates which source subtask segment to use

Return type

source_demo_ind (int)

class mimicgen.datagen.selection_strategy.RandomStrategy#

Bases: mimicgen.datagen.selection_strategy.MG_SelectionStrategy

Pick source demonstration randomly.

NAME = 'random'#
select_source_demo(eef_pose, object_pose, src_subtask_datagen_infos)#

Selects source demonstration index using the current robot pose, relevant object pose for the current subtask, and relevant information from the source demonstrations for the current subtask.

Parameters
  • eef_pose (np.array) – current 4x4 eef pose

  • object_pose (np.array) – current 4x4 object pose, for the object in this subtask

  • src_subtask_datagen_infos (list) – list of DatagenInfo instances, one per source demonstration, for the relevant subtask segment

Returns

index of source demonstration - indicates which source subtask segment to use

Return type

source_demo_ind (int)

mimicgen.datagen.selection_strategy.assert_selection_strategy_exists(name)#

Allow easy way to check if selection strategy exists.

mimicgen.datagen.selection_strategy.make_selection_strategy(name, *args, **kwargs)#

Creates an instance of the selection strategy class specified by @name, which is used to look the class up in the global registry.

mimicgen.datagen.selection_strategy.register_selection_strategy(cls)#

Register selection strategy class into global registry.
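Together, these three functions form a standard name-keyed factory pattern. A minimal standalone sketch of that pattern (with assumed names `_REGISTRY`, `register_strategy`, `make_strategy`; not MimicGen's code):

```python
_REGISTRY = {}

def register_strategy(cls):
    """Register a strategy class under its NAME; usable as a decorator."""
    _REGISTRY[cls.NAME] = cls
    return cls

def make_strategy(name, *args, **kwargs):
    """Instantiate the strategy class registered under @name."""
    if name not in _REGISTRY:
        raise ValueError(f"unknown selection strategy: {name}")
    return _REGISTRY[name](*args, **kwargs)

@register_strategy
class DummyStrategy:
    NAME = "dummy"
    def __init__(self, k=3):
        self.k = k
```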

mimicgen.datagen.waypoint module#

A collection of classes used to represent waypoints and trajectories.

class mimicgen.datagen.waypoint.Waypoint(pose, gripper_action, noise=None)#

Bases: object

Represents a single desired 6-DoF waypoint, along with corresponding gripper actuation for this point.

class mimicgen.datagen.waypoint.WaypointSequence(sequence=None)#

Bases: object

Represents a sequence of Waypoint objects.

classmethod from_poses(poses, gripper_actions, action_noise)#

Instantiate a WaypointSequence object given a sequence of poses, gripper actions, and action noise.

Parameters
  • poses (np.array) – sequence of pose matrices of shape (T, 4, 4)

  • gripper_actions (np.array) – sequence of gripper actions that should be applied at each timestep of shape (T, D).

  • action_noise (float or np.array) – sequence of action noise magnitudes that should be applied at each timestep. If a single float is provided, the noise magnitude will be constant over the trajectory.
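The scalar-vs-array noise handling described above can be sketched as follows; the function name and the per-waypoint dict representation are illustrative assumptions, not the actual WaypointSequence internals:

```python
import numpy as np

def waypoints_from_poses(poses, gripper_actions, action_noise):
    """Build per-timestep waypoint records, broadcasting scalar noise to all steps."""
    T = len(poses)
    # A single float becomes a constant noise magnitude over the trajectory.
    noise = np.full(T, action_noise) if np.isscalar(action_noise) else np.asarray(action_noise)
    assert len(noise) == T and len(gripper_actions) == T
    return [
        {"pose": poses[t], "gripper_action": gripper_actions[t], "noise": noise[t]}
        for t in range(T)
    ]
```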

property last_waypoint#

Return last waypoint in sequence.

Returns

waypoint (Waypoint instance)

split(ind)#

Splits this sequence into two pieces: the part up to time index @ind, and the rest. Returns two WaypointSequence objects.

class mimicgen.datagen.waypoint.WaypointTrajectory#

Bases: object

A sequence of WaypointSequence objects that corresponds to a full 6-DoF trajectory.

add_waypoint_sequence(sequence)#

Directly append sequence to list (no interpolation).

Parameters

sequence (WaypointSequence instance) – sequence to add

add_waypoint_sequence_for_target_pose(pose, gripper_action, num_steps, skip_interpolation=False, action_noise=0.0)#

Adds a new waypoint sequence corresponding to a desired target pose. A new WaypointSequence will be constructed consisting of @num_steps intermediate Waypoint objects. These can either be constructed with linear interpolation from the last waypoint (default) or be a constant set of target poses (set @skip_interpolation to True).

Parameters
  • pose (np.array) – 4x4 target pose

  • gripper_action (np.array) – value for gripper action

  • num_steps (int) – number of action steps when trying to reach this waypoint. Will add intermediate linearly interpolated points between the last pose on this trajectory and the target pose, so that the total number of steps is @num_steps.

  • skip_interpolation (bool) – if True, keep the target pose fixed and repeat it @num_steps times instead of using linearly interpolated targets.

  • action_noise (float) – scale of random gaussian noise to add during action execution (e.g. when @execute is called)
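The interpolated-vs-fixed target behavior can be sketched as below. This simplified version interpolates positions only and copies the target rotation throughout (the real implementation also interpolates rotations); the function name is an assumption:

```python
import numpy as np

def target_pose_waypoints(last_pose, target_pose, num_steps, skip_interpolation=False):
    """Expand a target pose into num_steps (4, 4) poses ending at target_pose."""
    if skip_interpolation:
        # Repeat the fixed target pose num_steps times.
        return [target_pose.copy() for _ in range(num_steps)]
    poses = []
    for t in range(1, num_steps + 1):
        frac = t / num_steps
        p = target_pose.copy()  # rotation taken from target (simplification)
        p[:3, 3] = (1.0 - frac) * last_pose[:3, 3] + frac * target_pose[:3, 3]
        poses.append(p)
    return poses
```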

execute(env, env_interface, render=False, video_writer=None, video_skip=5, camera_names=None)#

Main function to execute the trajectory. Will use env_interface.target_pose_to_action to convert each target pose at each waypoint to an action command, and pass that along to env.step.

Parameters
  • env (robomimic EnvBase instance) – environment to use for executing trajectory

  • env_interface (MG_EnvInterface instance) – environment interface for executing trajectory

  • render (bool) – if True, render on-screen

  • video_writer (imageio writer) – video writer

  • video_skip (int) – determines rate at which environment frames are written to video

  • camera_names (list) – determines which camera(s) are used for rendering. Pass more than one to output a video with multiple camera views concatenated horizontally.

Returns

dictionary with the following items for the executed trajectory:

  • states (list): simulator state at each timestep

  • observations (list): observation dictionary at each timestep

  • datagen_infos (list): datagen_info at each timestep

  • actions (list): action executed at each timestep

  • success (bool): whether the trajectory successfully solved the task or not

Return type

results (dict)

property last_waypoint#

Return last waypoint in sequence.

Returns

waypoint (Waypoint instance)

merge(other, num_steps_interp=None, num_steps_fixed=None, action_noise=0.0)#

Merge this trajectory with another (@other).

Parameters
  • other (WaypointTrajectory object) – the other trajectory to merge into this one

  • num_steps_interp (int or None) – if not None, add a waypoint sequence that interpolates between the end of the current trajectory and the start of @other

  • num_steps_fixed (int or None) – if not None, add a waypoint sequence that has constant target poses corresponding to the first target pose in @other

  • action_noise (float) – noise to use during the interpolation segment

pop_first()#

Removes first waypoint in first waypoint sequence and returns it. If the first waypoint sequence is now empty, it is also removed.

Returns

waypoint (Waypoint instance)

Module contents#