mimicgen.env_interfaces package#

Submodules#

mimicgen.env_interfaces.base module#

Base class for environment interfaces used by MimicGen. Defines a set of functions that should be implemented for every set of environments, and a global registry.

class mimicgen.env_interfaces.base.MG_EnvInterface(env)#

Bases: object

Environment interface API that MimicGen environment interfaces should conform to.

property INTERFACE_TYPE#

Returns a string identifying the type of simulation environment this interface corresponds to (for example, "robosuite" for RobosuiteInterface). The value is used, together with the class name, as a key into the global interface registry (see @make_interface). Subclasses must define this.

abstract action_to_gripper_action(action)#

Extracts the gripper actuation part of an action (compatible with env.step).

Parameters

action (np.array) – environment action

Returns

subset of environment action for gripper actuation

Return type

gripper_action (np.array)
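As a hedged illustration of this method, consider a typical single-arm setup with a pose controller whose action is laid out as `[dx, dy, dz, dax, day, daz, gripper]`; the gripper actuation is then the final entry. The 7-dim layout is an assumption for the example, not something this module guarantees for every environment.

```python
def action_to_gripper_action(action):
    """Return the gripper portion of an environment action.

    Assumes the gripper actuation occupies the last entry of the
    action vector, which holds for a common 7-dim pose-plus-gripper
    action layout.
    """
    return action[-1:]

action = [0.1, 0.0, -0.2, 0.0, 0.0, 0.05, 1.0]
gripper = action_to_gripper_action(action)  # [1.0]
```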

abstract action_to_target_pose(action, relative=True)#

Converts action (compatible with env.step) to a target pose for the end effector controller. Inverse of @target_pose_to_action. Usually used to infer a sequence of target controller poses from a demonstration trajectory using the recorded actions.

Parameters
  • action (np.array) – environment action

  • relative (bool) – if True, use relative pose actions, else absolute pose actions

Returns

4x4 target eef pose that @action corresponds to

Return type

target_pose (np.array)

get_datagen_info(action=None)#

Get information needed for data generation at the current timestep of simulation. If @action is provided, it will be used to compute the target eef pose for the controller; otherwise the target pose will be excluded from the returned information.

Returns

datagen_info (DatagenInfo instance)

abstract get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

abstract get_robot_eef_pose()#

Get current robot end effector pose. Should be expressed in the same frame used by the robot end-effector controller.

Returns

4x4 eef pose matrix

Return type

pose (np.array)

abstract get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)
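An illustrative sketch (not mimicgen code) of what such a dictionary might look like for a hypothetical two-subtask pick-and-place task; the subtask names and scene checks are made up for the example.

```python
def get_subtask_term_signals(grasping_cube, cube_in_bin):
    """Binary termination flags for a hypothetical two-subtask task."""
    return {
        "grasp": int(grasping_cube),  # 1 once the cube is grasped
        "place": int(cube_in_bin),    # 1 once the cube rests in the bin
    }

# Early in a demo neither subtask is done; after the grasp the first
# flag flips 0 -> 1, which is the transition MimicGen uses to mark
# the end of that subtask.
signals = get_subtask_term_signals(grasping_cube=True, cube_in_bin=False)
```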

abstract target_pose_to_action(target_pose, relative=True)#

Takes a target pose for the end effector controller and returns an action (usually a normalized delta pose action) to try to achieve that target pose.

Parameters
  • target_pose (np.array) – 4x4 target eef pose

  • relative (bool) – if True, use relative pose actions, else absolute pose actions

Returns

action compatible with env.step (minus gripper actuation)

Return type

action (np.array)
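The inverse relationship between @target_pose_to_action and @action_to_target_pose can be sketched in one dimension. The linear mapping and the controller scale `MAX_DELTA` are assumptions for illustration; real controllers operate on full 4x4 poses and typically also handle orientation and clipping.

```python
MAX_DELTA = 0.05  # assumed metres of eef motion per unit of action

def target_pose_to_action(current_pos, target_pos):
    # Normalized relative action that would reach target_pos.
    return (target_pos - current_pos) / MAX_DELTA

def action_to_target_pose(current_pos, action):
    # Inverse mapping: recover the target that the action encodes.
    return current_pos + action * MAX_DELTA

current, target = 0.30, 0.32
action = target_pose_to_action(current, target)
recovered = action_to_target_pose(current, action)  # equals target
```

Applying one mapping and then the other recovers the original target, which is exactly the property the docstrings above describe: the recorded actions of a demonstration can be replayed through @action_to_target_pose to get the sequence of controller targets.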

class mimicgen.env_interfaces.base.MG_EnvInterfaceMeta(name, bases, class_dict)#

Bases: type

This metaclass adds environment interface classes to the global registry.

mimicgen.env_interfaces.base.make_interface(name, interface_type, *args, **kwargs)#

Creates an instance of an environment interface. Make sure to pass any other arguments that the interface constructor needs.

mimicgen.env_interfaces.base.register_env_interface(cls)#

Registers an environment interface class in the global registry.
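The registry-plus-factory pattern described above can be sketched as follows. The names here (`REGISTRY`, `InterfaceMeta`, `EnvInterface`, `MyTask`) are illustrative stand-ins, not mimicgen's actual internals.

```python
REGISTRY = {}

class InterfaceMeta(type):
    """Metaclass that records every interface subclass in a registry."""
    def __init__(cls, name, bases, class_dict):
        super().__init__(name, bases, class_dict)
        if bases:  # register subclasses, but not the abstract base itself
            REGISTRY[(name, cls.INTERFACE_TYPE)] = cls

class EnvInterface(metaclass=InterfaceMeta):
    INTERFACE_TYPE = None

    def __init__(self, env):
        self.env = env

class MyTask(EnvInterface):
    INTERFACE_TYPE = "robosuite"

def make_interface(name, interface_type, *args, **kwargs):
    """Look up the registered class and instantiate it."""
    return REGISTRY[(name, interface_type)](*args, **kwargs)

iface = make_interface("MyTask", "robosuite", env="dummy-env")
```

Because registration happens in the metaclass, simply defining a subclass is enough to make it constructible by name; no explicit registration call is needed at the definition site.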

mimicgen.env_interfaces.robosuite module#

MimicGen environment interface classes for basic robosuite environments.

class mimicgen.env_interfaces.robosuite.MG_Coffee(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite Coffee task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_CoffeePreparation(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite CoffeePreparation task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_HammerCleanup(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite HammerCleanup task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_Kitchen(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite Kitchen task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_MugCleanup(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite MugCleanup task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_NutAssembly(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite NutAssembly task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_PickPlace(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite PickPlace task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_Square(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite Square task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_Stack(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite Stack task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_StackThree(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite StackThree task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_Threading(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite Threading task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.MG_ThreePieceAssembly(env)#

Bases: mimicgen.env_interfaces.robosuite.RobosuiteInterface

Corresponds to robosuite ThreePieceAssembly task and variants.

get_object_poses()#

Gets the pose of each object relevant to MimicGen data generation in the current scene.

Returns

dictionary that maps object name (str) to object pose matrix (4x4 np.array)

Return type

object_poses (dict)

get_subtask_term_signals()#

Gets a dictionary of binary flags for each subtask in a task. The flag is 1 when the subtask has been completed and 0 otherwise. MimicGen only uses this when parsing source demonstrations at the start of data generation, and it only uses the first 0 -> 1 transition in this signal to detect the end of a subtask.

Returns

dictionary that maps subtask name to termination flag (0 or 1)

Return type

subtask_term_signals (dict)

class mimicgen.env_interfaces.robosuite.RobosuiteInterface(env)#

Bases: mimicgen.env_interfaces.base.MG_EnvInterface

MimicGen environment interface base class for basic robosuite environments.

INTERFACE_TYPE = 'robosuite'#

action_to_gripper_action(action)#

Extracts the gripper actuation part of an action (compatible with env.step).

Parameters

action (np.array) – environment action

Returns

subset of environment action for gripper actuation

Return type

gripper_action (np.array)

action_to_target_pose(action, relative=True)#

Converts action (compatible with env.step) to a target pose for the end effector controller. Inverse of @target_pose_to_action. Usually used to infer a sequence of target controller poses from a demonstration trajectory using the recorded actions.

Parameters
  • action (np.array) – environment action

  • relative (bool) – if True, use relative pose actions, else absolute pose actions

Returns

4x4 target eef pose that @action corresponds to

Return type

target_pose (np.array)

get_object_pose(obj_name, obj_type)#

Returns 4x4 object pose given the name of the object and the type.

Parameters
  • obj_name (str) – name of object

  • obj_type (str) – type of object - either “body”, “geom”, or “site”

Returns

4x4 object pose

Return type

obj_pose (np.array)
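A hypothetical helper (not part of this module) showing the 4x4 homogeneous layout that such a pose uses: the rotation occupies the upper-left 3x3 block and the position occupies the last column. Plain nested lists stand in for np.array to keep the sketch dependency-free.

```python
def make_pose(position, rotation):
    """Assemble a 4x4 homogeneous pose from a 3-vector and a 3x3 rotation."""
    pose = [row[:] + [p] for row, p in zip(rotation, position)]
    pose.append([0.0, 0.0, 0.0, 1.0])
    return pose

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pose = make_pose([0.1, 0.2, 0.3], identity)
# pose[i][3] holds the position; the bottom row is [0, 0, 0, 1]
```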

get_robot_eef_pose()#

Get current robot end effector pose. Should be expressed in the same frame used by the robot end-effector controller.

Returns

4x4 eef pose matrix

Return type

pose (np.array)

target_pose_to_action(target_pose, relative=True)#

Takes a target pose for the end effector controller and returns an action (usually a normalized delta pose action) to try to achieve that target pose.

Parameters
  • target_pose (np.array) – 4x4 target eef pose

  • relative (bool) – if True, use relative pose actions, else absolute pose actions

Returns

action compatible with env.step (minus gripper actuation)

Return type

action (np.array)

Module contents#