-
Note: I've tried `ti.kernel` and `ti.func`, but, somewhat expectedly, they failed. Code that did not work:
```python
import torch
import taichi as ti
import genesis as gs
# Note: the import path for RigidEntity is an assumption and may differ by Genesis version.
from genesis.engine.entities import RigidEntity


def execute_straight_line_trajectory(
    franka: RigidEntity,
    scene: gs.Scene,
    target_pos: torch.Tensor,
    target_quat: torch.Tensor,
    gripper_force: torch.Tensor,
    render: bool,
    keypoint_distance=0.1,
    num_steps_between_keypoints=10,
):
    """
    Execute a straight-line trajectory for a robot arm in Cartesian space.

    This implements a simple motion-planning approach:
    1. Creates a straight line from the current position to the target in Cartesian space
    2. Generates evenly spaced keypoints along that line
    3. Computes IK for each keypoint
    4. Interpolates in joint space between keypoints

    Args:
        franka: The Franka robot entity
        scene: The simulation scene
        target_pos: Target position tensor with shape (num_envs, 3)
        target_quat: Target orientation quaternion tensor with shape (num_envs, 4)
        gripper_force: Force in newtons applied to the gripper fingers
        render: Whether to render the trajectory for visualization. Does not affect
            training, but slows it down.
        keypoint_distance: Distance between keypoints in meters (default: 0.1 m, i.e. 10 cm)
        num_steps_between_keypoints: Number of interpolation steps between keypoints

    Returns:
        None: The function directly executes the trajectory in the simulation
    """
    assert gripper_force.shape[-1] == 2, (
        f"Gripper force shape must be (num_envs, 2) or (2,). Currently: {gripper_force.shape}"
    )
    assert target_pos.shape[-1] == 3, (
        f"Target position shape must be (num_envs, 3) or (3,). Currently: {target_pos.shape}"
    )
    assert target_quat.shape[-1] == 4, (
        f"Target quaternion shape must be (num_envs, 4) or (4,). Currently: {target_quat.shape}"
    )

    device = target_pos.device
    current_end_effector_pos = torch.tensor(franka.get_link("hand").pos, device=device)
    current_end_effector_quat = torch.tensor(
        franka.get_link("hand").quat, device=device
    )

    alpha = torch.linspace(0, 1, num_steps_between_keypoints + 1)

    # Precompute interpolated positions and quaternions for each alpha step
    precomputed_pos = torch.stack(
        [
            current_end_effector_pos + (target_pos[0] - current_end_effector_pos) * a
            for a in alpha
        ]
    )  # shape: (num_steps_between_keypoints + 1, 3)
    precomputed_quat = torch.stack(
        [
            current_end_effector_quat + (target_quat[0] - current_end_effector_quat) * a
            for a in alpha
        ]
    )  # shape: (num_steps_between_keypoints + 1, 4)

    precomputed_pos_field = ti.Vector.field(
        3, dtype=ti.f32, shape=(num_steps_between_keypoints + 1,)
    )
    precomputed_quat_field = ti.Vector.field(
        4, dtype=ti.f32, shape=(num_steps_between_keypoints + 1,)
    )
    precomputed_pos_field.from_torch(precomputed_pos)
    precomputed_quat_field.from_torch(precomputed_quat)

    @ti.func
    def ik_step(precomputed_pos: ti.template(), precomputed_quat: ti.template()):
        keypoint_ik = franka.inverse_kinematics(
            link=franka.get_link("hand"), pos=precomputed_pos, quat=precomputed_quat
        )
        franka.control_dofs_position(keypoint_ik)
        franka.control_dofs_force(gripper_force, dofs_idx_local=[7, 8])
        scene.step(update_visualizer=render, refresh_visualizer=render)
        if render:
            cameras = scene.visualizer.cameras
            for camera in cameras:
                camera.render()

    @ti.func
    def dry_run_step(target_pos: ti.template(), target_quat: ti.template()):
        scene.step(
            update_visualizer=render, refresh_visualizer=render
        )  # let it actually run
        franka.control_dofs_position(target_pos)  # keep commanding the last point
        franka.control_dofs_force(gripper_force, dofs_idx_local=[7, 8])
        if render:
            cameras = scene.visualizer.cameras
            for camera in cameras:
                camera.render()

    @ti.kernel
    def execute_ik(
        precomputed_pos_field: ti.template(), precomputed_quat_field: ti.template()
    ):
        for i in range(num_steps_between_keypoints + 1):
            ik_step(precomputed_pos_field[i], precomputed_quat_field[i])
        # let it dry-run for 100 steps at the final keypoint
        for _ in range(100):
            dry_run_step(precomputed_pos_field[-1], precomputed_quat_field[-1])

    execute_ik(precomputed_pos_field, precomputed_quat_field)
```
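For reference, the same loop written as plain Python, without `ti.kernel`/`ti.func`, looks roughly like the sketch below. It reuses only the Genesis calls already shown above (`inverse_kinematics`, `control_dofs_position`, `control_dofs_force`, `scene.step`); the function name and signature are mine, so treat it as an illustration of the uncompiled structure rather than a verified implementation. (Taichi kernels are compiled and cannot call back into Python-scope methods such as `franka.inverse_kinematics` or `scene.step`, which is why the decorated version above fails.)

```python
# Minimal uncompiled sketch of the same trajectory loop (no Taichi decorators).
# Only the Genesis calls shown in the snippet above are used; everything else
# (name, signature, settle_steps) is an assumption for illustration.
def execute_ik_plain(franka, scene, precomputed_pos, precomputed_quat,
                     gripper_force, render, num_steps_between_keypoints=10,
                     settle_steps=100):
    hand = franka.get_link("hand")
    for i in range(num_steps_between_keypoints + 1):
        # IK for the i-th interpolated Cartesian keypoint, then one physics step.
        keypoint_ik = franka.inverse_kinematics(
            link=hand, pos=precomputed_pos[i], quat=precomputed_quat[i]
        )
        franka.control_dofs_position(keypoint_ik)
        franka.control_dofs_force(gripper_force, dofs_idx_local=[7, 8])
        scene.step(update_visualizer=render, refresh_visualizer=render)
    # Keep commanding the final keypoint for a fixed number of settle steps.
    final_ik = franka.inverse_kinematics(
        link=hand, pos=precomputed_pos[-1], quat=precomputed_quat[-1]
    )
    for _ in range(settle_steps):
        franka.control_dofs_position(final_ik)
        franka.control_dofs_force(gripper_force, dofs_idx_local=[7, 8])
        scene.step(update_visualizer=render, refresh_visualizer=render)
```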
-
Are there any solutions to speed this up? Or could I have just written plain bad code (working from basic tutorials)?
-
Hello Genesis,
I'm publishing a paper with Genesis, but I would like to make it faster.
This code, which I've implemented as a stub without batching on IK (as per the tutorial), takes approximately 4.5~5 seconds to run with a batch dimension of 128 on an RTX 5070, which is too slow for RL.
If it is possible, I would like to (at least partially) compile it, to bring runtimes down to maybe 1 second.
I understand I can't compile it with either torch or taichi, because Genesis uses both here.
Is there a workaround? Is the code sound at all? Perhaps there is a function that runs the `step` function many times? Thank you.
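Before trying to compile anything, it may help to confirm where the 4.5~5 seconds actually goes (IK vs. `scene.step` vs. rendering). Here is a minimal timing sketch; the commented usage lines assume the `franka`/`scene` objects and the trajectory function from the snippet above, so they are illustrative only:

```python
import time
import torch

def timed(label, fn, *args, **kwargs):
    # Synchronize so asynchronous CUDA work is included in the wall-clock time.
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    out = fn(*args, **kwargs)
    torch.cuda.synchronize()
    print(f"{label}: {time.perf_counter() - t0:.3f} s")
    return out

# Hypothetical usage with the objects from the snippet above (names are assumptions):
# timed("IK only", franka.inverse_kinematics,
#       link=franka.get_link("hand"), pos=target_pos, quat=target_quat)
# timed("one scene.step", scene.step)
# timed("full trajectory", execute_straight_line_trajectory,
#       franka, scene, target_pos, target_quat, gripper_force, render=False)
```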