torchdrivesim.behavior.replay
Module Contents

Classes

ReplayWrapper — Performs log replay for a subset of agents based on their recorded trajectories.

Functions

interaction_replay
 torchdrivesim.behavior.replay.interaction_replay(location, dataset_path, initial_frame=1, segment_length=40, recording=0)[source]
 class torchdrivesim.behavior.replay.ReplayWrapper(simulator: torchdrivesim.simulator.SimulatorInterface, npc_mask: torchdrivesim.simulator.TensorPerAgentType, agent_states: torchdrivesim.simulator.TensorPerAgentType, present_masks: torchdrivesim.simulator.TensorPerAgentType = None, time: int = 0)[source]
Bases: torchdrivesim.simulator.NPCWrapper
Performs log replay for a subset of agents based on their recorded trajectories. The log has a finite length T, after which the replay will loop back from the beginning.
 Parameters:
simulator – existing simulator to wrap
npc_mask – a functor of tensors with a single dimension of size A, indicating which agents to replay; the tensors cannot have batch dimensions
agent_states – a functor of BxAxTxSt tensors with states to replay across time, padded with arbitrary values for non-replay agents
present_masks – indicates when replay agents appear and disappear; by default they’re all present at all times
time – initial index into the time dimension for replay, incremented at every step
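As the class docstring notes, the log has finite length T and replay loops back to the beginning once the time index runs past it. That wrap-around can be sketched in plain Python; `replay_state` and the single-agent list of states below are hypothetical stand-ins, not part of the torchdrivesim API:

```python
# Minimal sketch of the looping time index used for replay, assuming a
# log of finite length T. The real class indexes a BxAxTxSt tensor;
# here a plain list of per-step states for one agent stands in for the
# time dimension, so the wrap-around logic is explicit.
def replay_state(agent_log, time):
    """Return the recorded state for the given step, looping past the end."""
    T = len(agent_log)
    return agent_log[time % T]

log = ["s0", "s1", "s2"]  # T = 3 recorded states
states = [replay_state(log, t) for t in range(5)]
# states == ["s0", "s1", "s2", "s0", "s1"]
```

After step 2 the index wraps, so steps 3 and 4 replay the first two recorded states again.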
 _npc_teleport_to()[source]
Provides the states to which the NPCs should be set after each step, with arbitrary padding for the remaining agents. By default, no teleportation is performed, but subclasses may use it instead of, or on top of, defining the NPC action.
 Returns:
a functor of BxAxSt tensors, where A is the number of agents in the inner simulator, or None if no teleportation is required
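A minimal sketch of how such teleport targets could be assembled from the NPC mask and the recorded log; `npc_teleport_targets` is a hypothetical helper, and plain lists stand in for the tensor functors:

```python
# Sketch of selecting teleport targets for NPCs only. Padding values
# for non-replay agents are arbitrary, since the wrapper ignores them;
# None is used here for visibility.
def npc_teleport_targets(npc_mask, recorded_states, time, pad=None):
    T = len(recorded_states[0])  # log length; replay loops past it
    return [
        agent_log[time % T] if is_npc else pad
        for is_npc, agent_log in zip(npc_mask, recorded_states)
    ]

mask = [True, False, True]  # agents 0 and 2 are replayed
logs = [["a0", "a1"], ["b0", "b1"], ["c0", "c1"]]
targets = npc_teleport_targets(mask, logs, time=1)
# targets == ["a1", None, "c1"]
```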
 _update_npc_present_mask()[source]
Computes updated present masks for NPCs, with arbitrary padding for the remaining agents. By default, leaves present masks unchanged.
 Returns:
a functor of BxA boolean tensors, where A is the number of agents in the inner simulator
 step(action)[source]
Runs the simulation for one step with given agent actions. Input is a functor of BxAxAc tensors, where Ac is determined by the kinematic model.
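The overall step flow — the inner simulator advances every agent with its action, then the replayed agents are overwritten with their recorded states — can be illustrated with a toy stand-in. The class and its fields are hypothetical, using scalar states instead of tensor functors:

```python
# Toy sketch of the NPC-wrapper step: advance all agents, then teleport
# the replayed ones back onto the log, looping over its length T.
class ToyReplayWrapper:
    def __init__(self, states, npc_mask, recorded, time=0):
        self.states = states      # current per-agent states (scalars here)
        self.npc_mask = npc_mask  # which agents are replayed
        self.recorded = recorded  # per-agent list of recorded states
        self.time = time

    def step(self, actions):
        # Inner simulator: every agent moves according to its action.
        self.states = [s + a for s, a in zip(self.states, actions)]
        self.time += 1
        # Teleport: replayed agents snap to their recorded state.
        T = len(self.recorded[0])
        self.states = [
            self.recorded[i][self.time % T] if self.npc_mask[i] else s
            for i, s in enumerate(self.states)
        ]

sim = ToyReplayWrapper(states=[0.0, 0.0], npc_mask=[False, True],
                       recorded=[[9.9, 9.9], [5.0, 6.0]])
sim.step(actions=[1.0, 1.0])
# sim.states == [1.0, 6.0]: the controlled agent moved by its action,
# while the NPC was teleported to its recorded state at time 1.
```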
 to(device) → typing_extensions.Self[source]
Modifies the simulator in place, putting all tensors on the given device.
 copy()[source]
Duplicates this simulator, allowing for independent subsequent execution. The copy is relatively shallow, in that the tensors are the same objects but dictionaries referring to them are shallowly copied.
 extend(n, in_place=True)[source]
Multiplies the first batch dimension by the given number. Like in pytorch3d, this is equivalent to introducing an extra batch dimension on the right and then flattening.
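The "extra batch dimension on the right, then flatten" convention means each batch element is repeated n consecutive times (repeat-interleave order, not tiling). A sketch with plain lists, where `extend_batch` is a hypothetical stand-in for the tensor operation:

```python
# Sketch of extend(n) semantics on the first batch dimension: inserting
# a size-n dimension to the right of the batch dimension and flattening
# repeats each batch element n consecutive times.
def extend_batch(batch, n):
    return [elem for elem in batch for _ in range(n)]

print(extend_batch(["b0", "b1"], 3))
# ['b0', 'b0', 'b0', 'b1', 'b1', 'b1']
```

Note the order: each element's copies are contiguous, unlike tiling, which would give `['b0', 'b1', 'b0', 'b1', 'b0', 'b1']`.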