
Trajectory Generation

Functions and classes for generating robot trajectories.

Overview

Trajectory generation converts 2D sketches into 3D drawing paths and solves inverse kinematics to obtain robot joint positions.

graph LR
    A[Sketch] --> B[Map to 3D]
    B --> C[Solve IK]
    C --> D[Interpolate]
    D --> E[Trajectory]
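A minimal sketch of the "Map to 3D" step, assuming normalized (u, v) stroke coordinates in [0, 1] and the paper fields that appear in the trajectory metadata below (start_x, center_y, height_z, size, drawing_scale). The actual `_map_to_3d` helper is internal and may differ:

```python
def map_to_3d(u, v, start_x, center_y, height_z, size, drawing_scale):
    """Map a normalized sketch point (u, v) in [0, 1] onto the paper plane.

    Hypothetical helper for illustration: scales the drawing about the
    paper center and returns a 3D point at the fixed paper height.
    """
    half = size / 2.0
    # Scale about the paper center so drawing_scale < 1 leaves a margin
    x = start_x + half + (u - 0.5) * size * drawing_scale
    y = center_y + (v - 0.5) * size * drawing_scale
    return x, y, height_z

# The sketch center lands at the paper center, at paper height
x, y, z = map_to_3d(0.5, 0.5, 0.2, -0.34, 0.8, 0.15, 0.9)
assert abs(x - 0.275) < 1e-9 and y == -0.34 and z == 0.8
```

The "Solve IK" and "Interpolate" stages then turn each 3D point into a joint configuration, as shown in `sketch_to_trajectory` below.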

Main Function

generate_trajectory


generate_trajectory(
    image, output_path=None, config=None, initial_q=None
)

Convenience function: Convert image directly to robot trajectory.

This is a one-shot function that combines image_to_sketch() and sketch_to_trajectory() for simple use cases.

Parameters:

- image (Union[str, Path, ndarray, Image], required):
  Input image (file path, numpy array, or PIL Image).
- output_path (Optional[Union[str, Path]], default None):
  If provided, save trajectory JSON to this path.
- config (Optional[TrajectoryConfig], default None):
  Full configuration (uses sensible defaults if None).
- initial_q (Optional[Union[ndarray, dict, Trajectory]], default None):
  Initial joint configuration to start IK solving from. Can be one of:
  - numpy array of shape (n_joints,) with joint positions in radians
  - dict mapping joint names to positions in radians
  - Trajectory object (uses last waypoint)
  - None (default): use fixed initial pose for drawing

  This enables sequential trajectory execution where each new trajectory starts from the robot's current position.

Returns:

- Trajectory: Trajectory object ready for execution.

Example

>>> import pib3
>>> trajectory = pib3.generate_trajectory("my_drawing.png")
>>> trajectory.to_json("output.json")

>>> # With custom config
>>> from pib3 import TrajectoryConfig, PaperConfig
>>> config = TrajectoryConfig(
...     paper=PaperConfig(size=0.20, drawing_scale=0.9),
... )
>>> trajectory = pib3.generate_trajectory("drawing.png", config=config)

>>> # Sequential trajectories (robot draws multiple images)
>>> traj1 = pib3.generate_trajectory("image1.png")
>>> traj2 = pib3.generate_trajectory("image2.png", initial_q=traj1)
>>> traj3 = pib3.generate_trajectory("image3.png", initial_q=traj2)

Source code in pib3/__init__.py
def generate_trajectory(
    image: Union[str, Path, "np.ndarray", "PIL.Image.Image"],
    output_path: Optional[Union[str, Path]] = None,
    config: Optional[TrajectoryConfig] = None,
    initial_q: Optional[Union["np.ndarray", dict, Trajectory]] = None,
) -> Trajectory:
    """
    Convenience function: Convert image directly to robot trajectory.

    This is a one-shot function that combines image_to_sketch() and
    sketch_to_trajectory() for simple use cases.

    Args:
        image: Input image (file path, numpy array, or PIL Image).
        output_path: If provided, save trajectory JSON to this path.
        config: Full configuration (uses sensible defaults if None).
        initial_q: Initial joint configuration to start IK solving from.
            Can be one of:
            - numpy array of shape (n_joints,) with joint positions in radians
            - dict mapping joint names to positions in radians
            - Trajectory object (uses last waypoint)
            - None (default): use fixed initial pose for drawing

            This enables sequential trajectory execution where each new
            trajectory starts from the robot's current position.

    Returns:
        Trajectory object ready for execution.

    Example:
        >>> import pib3
        >>> trajectory = pib3.generate_trajectory("my_drawing.png")
        >>> trajectory.to_json("output.json")

        >>> # With custom config
        >>> from pib3 import TrajectoryConfig, PaperConfig
        >>> config = TrajectoryConfig(
        ...     paper=PaperConfig(size=0.20, drawing_scale=0.9),
        ... )
        >>> trajectory = pib3.generate_trajectory("drawing.png", config=config)

        >>> # Sequential trajectories (robot draws multiple images)
        >>> traj1 = pib3.generate_trajectory("image1.png")
        >>> traj2 = pib3.generate_trajectory("image2.png", initial_q=traj1)
        >>> traj3 = pib3.generate_trajectory("image3.png", initial_q=traj2)
    """
    if config is None:
        config = TrajectoryConfig()

    # Step 1: Image to sketch
    sketch = image_to_sketch(image, config.image)

    # Step 2: Sketch to trajectory
    trajectory = sketch_to_trajectory(
        sketch, config, initial_q=initial_q
    )

    # Optional: Save to file
    if output_path is not None:
        trajectory.to_json(output_path)

    return trajectory

Usage

import pib3

# Basic usage
trajectory = pib3.generate_trajectory("drawing.png")
trajectory.to_json("output.json")

# With configuration
from pib3 import TrajectoryConfig, PaperConfig
config = TrajectoryConfig(
    paper=PaperConfig(size=0.15, drawing_scale=0.9)
)
trajectory = pib3.generate_trajectory("drawing.png", config=config)

# Note: the former visualize option (Swift viewer) has been removed;
# generate_trajectory no longer accepts it as a parameter.

sketch_to_trajectory


sketch_to_trajectory(
    sketch,
    config=None,
    progress_callback=None,
    initial_q=None,
)

Convert a Sketch to a robot Trajectory using inverse kinematics.

Parameters:

- sketch (Sketch, required):
  Sketch object containing strokes to draw.
- config (Optional[TrajectoryConfig], default None):
  Trajectory configuration. Uses defaults if None.
- progress_callback (Optional[Callable[[int, int, bool], None]], default None):
  Optional callback(current_point, total_points, success).
- initial_q (Optional[Union[ndarray, Dict[str, float], Trajectory]], default None):
  Initial joint configuration to start IK solving from. Can be one of:
  - numpy array of shape (n_joints,) with joint positions in radians
  - dict mapping joint names to positions in radians
  - Trajectory object (uses last waypoint)
  - None (default): use fixed initial pose for drawing

  This allows sequential trajectory execution where each new trajectory starts from the end position of the previous one.

Returns:

- Trajectory: Trajectory object with joint positions for each waypoint.

Raises:

- ImportError: If roboticstoolbox is not installed.
- RuntimeError: If no IK solutions are found.

Example

>>> from pib3 import image_to_sketch, sketch_to_trajectory
>>> sketch = image_to_sketch("drawing.png")
>>> trajectory = sketch_to_trajectory(sketch)
>>> trajectory.to_json("output.json")

>>> # Sequential trajectories
>>> traj1 = sketch_to_trajectory(sketch1)
>>> traj2 = sketch_to_trajectory(sketch2, initial_q=traj1)  # Start from traj1 end

Source code in pib3/trajectory.py
def sketch_to_trajectory(
    sketch: Sketch,
    config: Optional[TrajectoryConfig] = None,
    progress_callback: Optional[Callable[[int, int, bool], None]] = None,
    initial_q: Optional[Union[np.ndarray, Dict[str, float], "Trajectory"]] = None,
) -> Trajectory:
    """
    Convert a Sketch to a robot Trajectory using inverse kinematics.

    Args:
        sketch: Sketch object containing strokes to draw.
        config: Trajectory configuration. Uses defaults if None.
        progress_callback: Optional callback(current_point, total_points, success).
        initial_q: Initial joint configuration to start IK solving from.
            Can be one of:
            - numpy array of shape (n_joints,) with joint positions in radians
            - dict mapping joint names to positions in radians
            - Trajectory object (uses last waypoint)
            - None (default): use fixed initial pose for drawing

            This allows sequential trajectory execution where each new
            trajectory starts from the end position of the previous one.

    Returns:
        Trajectory object with joint positions for each waypoint.

    Raises:
        ImportError: If roboticstoolbox is not installed.
        RuntimeError: If no IK solutions are found.

    Example:
        >>> from pib3 import image_to_sketch, sketch_to_trajectory
        >>> sketch = image_to_sketch("drawing.png")
        >>> trajectory = sketch_to_trajectory(sketch)
        >>> trajectory.to_json("output.json")

        >>> # Sequential trajectories
        >>> traj1 = sketch_to_trajectory(sketch1)
        >>> traj2 = sketch_to_trajectory(sketch2, initial_q=traj1)  # Start from traj1 end
    """
    import spatialmath as sm

    if config is None:
        config = TrajectoryConfig()

    # Load robot
    robot = _load_robot()
    joint_limits = _get_joint_limits(robot)

    # Determine arm configuration
    arm = config.ik.arm
    grip_style = config.ik.grip_style

    if arm == "left":
        arm_joints = LEFT_ARM_JOINTS
        tcp_link_finger = LEFT_TCP_LINK
        tcp_link_pencil = LEFT_TCP_LINK_PENCIL
    else:
        arm_joints = RIGHT_ARM_JOINTS
        tcp_link_finger = RIGHT_TCP_LINK
        tcp_link_pencil = RIGHT_TCP_LINK_PENCIL

    arm_joint_indices = list(arm_joints.values())

    # Configure TCP and tool offset based on grip style.
    # Only the 6 arm joints are IK degrees of freedom; finger joints stay
    # fixed at the pose set by _set_initial_arm_pose.  Forward kinematics
    # still runs through the finger chain to reach the TCP.
    if grip_style == "pencil_grip":
        tcp_link = tcp_link_pencil
        tool_offset = sm.SE3(0.04, -0.06, 0)  # 40mm forward, 60mm toward pinky
    else:
        # Index finger drawing: use finger tip as TCP
        tcp_link = tcp_link_finger
        tool_offset = sm.SE3(0, 0.027, 0)  # 27mm in Y (finger tip direction)

    # Initialize configuration
    if initial_q is not None:
        # Use provided initial configuration
        if isinstance(initial_q, Trajectory):
            # Trajectory may be filtered (only drawing arm joints).
            # Expand back to full robot configuration using direct index
            # mapping (avoids ambiguity from duplicate motor names in
            # URDF_TO_MOTOR_NAME, e.g. joints 11 & 12 both named
            # 'index_left_stretch').
            q_current = np.zeros(robot.n)
            q_current = _set_initial_arm_pose(q_current, arm, grip_style)
            last_wp = initial_q.waypoints[-1]
            traj_arm = initial_q.metadata.get('arm', arm)
            if traj_arm == 'left':
                traj_joint_indices = list(range(2, 19))
            else:
                traj_joint_indices = list(range(19, 36))
            for j, urdf_idx in enumerate(traj_joint_indices):
                if j < len(last_wp):
                    q_current[urdf_idx] = last_wp[j]
        elif isinstance(initial_q, dict):
            # Convert dict of joint names to array
            q_current = np.zeros(robot.n)
            q_current = _set_initial_arm_pose(q_current, arm, grip_style)  # Start with defaults
            joint_name_to_idx = {name: idx for idx, name in URDF_TO_MOTOR_NAME.items()}
            for name, value in initial_q.items():
                if name in joint_name_to_idx:
                    q_current[joint_name_to_idx[name]] = value
        else:
            # Assume numpy array (full robot configuration)
            q_current = np.asarray(initial_q, dtype=np.float64).copy()
            if q_current.shape[0] != robot.n:
                raise ValueError(
                    f"initial_q array has {q_current.shape[0]} elements, "
                    f"expected {robot.n}"
                )
    else:
        # Default: start from fixed initial pose
        q_current = np.zeros(robot.n)
        q_current = _set_initial_arm_pose(q_current, arm, grip_style)
    robot.q = q_current

    # Get initial TCP position
    T_start = robot.fkine(q_current, end=tcp_link)
    if isinstance(T_start, np.ndarray):
        T_start = sm.SE3(T_start)
    T_start = T_start * tool_offset
    start_pos = T_start.t

    # Adjust paper position based on TCP position
    paper = config.paper
    if paper.center_y is None:
        # Use consistent center_y regardless of grip style
        # This allows both grip styles to use the same paper position
        paper.center_y = -0.34
    paper.start_x = float(start_pos[0] - paper.size / 2.0)

    # Generate 3D trajectory points
    trajectory_3d = []
    for stroke in sketch.strokes:
        dense_points = _interpolate_stroke_points(stroke, config.point_density)
        if not dense_points:
            continue

        # Lift before stroke
        u0, v0 = dense_points[0]
        x, y, z = _map_to_3d(u0, v0, paper)
        trajectory_3d.append((x, y, z + paper.lift_height, True))

        # Draw stroke
        for u, v in dense_points:
            x, y, z = _map_to_3d(u, v, paper)
            trajectory_3d.append((x, y, z, False))

        # Lift after stroke
        u_end, v_end = dense_points[-1]
        x, y, z = _map_to_3d(u_end, v_end, paper)
        trajectory_3d.append((x, y, z + paper.lift_height, True))

    # Solve IK
    q_traj = []
    success_flags = []
    success_count = 0
    fail_count = 0
    total_points = len(trajectory_3d)

    # Orientation constraints
    # For drawing on paper, position accuracy is most important
    # Both grip styles use position-only IK for maximum robustness
    target_orientation = None
    orientation_weight = 0.0

    for i, (x, y, z, is_lift) in enumerate(trajectory_3d):
        q_sol, success = _solve_ik_point(
            robot,
            np.array([x, y, z]),
            q_current,
            tcp_link,
            tool_offset,
            arm_joint_indices,
            joint_limits,
            config.ik,
            target_orientation=target_orientation,
            orientation_weight=orientation_weight,
        )

        if success:
            q_current = q_sol
            success_count += 1
        else:
            fail_count += 1

        q_traj.append(q_current.copy())
        success_flags.append(success)

        if progress_callback:
            progress_callback(i + 1, total_points, success)

    if success_count == 0:
        raise RuntimeError("No successful IK solutions found. Check paper position and robot reach.")

    # Interpolate failed points
    if fail_count > 0:
        q_array = _interpolate_failed_points(q_traj, success_flags)
    else:
        q_array = np.array(q_traj)

    # Filter trajectory to only include the drawing arm + hand joints.
    # This prevents the non-drawing arm from being commanded during playback.
    if arm == "left":
        drawing_joint_indices = list(range(2, 19))   # left arm (2-7) + hand (8-18)
    else:
        drawing_joint_indices = list(range(19, 36))  # right arm (19-24) + hand (25-35)

    joint_names = [URDF_TO_MOTOR_NAME.get(i, f"joint_{i}") for i in drawing_joint_indices]
    q_array = q_array[:, drawing_joint_indices]

    # Create trajectory
    trajectory = Trajectory(
        joint_names=joint_names,
        waypoints=q_array,
        metadata={
            "source": "pib3",
            "robot_model": "pib",
            "tcp_link": tcp_link,
            "arm": arm,
            "grip_style": grip_style,
            "total_points": len(q_array),
            "success_count": success_count,
            "fail_count": fail_count,
            "success_rate": success_count / total_points if total_points > 0 else 0,
            "paper_config": {
                "start_x": paper.start_x,
                "size": paper.size,
                "center_y": paper.center_y,
                "height_z": paper.height_z,
                "drawing_scale": paper.drawing_scale,
            },
        },
    )

    return trajectory

Usage

import pib3

# Step-by-step approach
sketch = pib3.image_to_sketch("drawing.png")
trajectory = pib3.sketch_to_trajectory(sketch)

# With progress callback
def on_progress(current, total, success):
    print(f"Point {current}/{total}: {'OK' if success else 'FAIL'}")

trajectory = pib3.sketch_to_trajectory(
    sketch,
    progress_callback=on_progress
)

# With custom config
from pib3 import TrajectoryConfig
config = TrajectoryConfig(...)
trajectory = pib3.sketch_to_trajectory(sketch, config)

Trajectory Class

Trajectory dataclass

Trajectory(joint_names, waypoints, metadata=dict())

Robot joint trajectory for executing a drawing.

Stores joint positions in canonical Webots motor radians.

Attributes:

- joint_names (List[str]): Names of joints in order (36 for PIB).
- waypoints (ndarray): Array of shape (N, num_joints) with joint positions in radians.
- metadata (dict): Additional info (paper position, IK stats, etc.).

__len__

__len__()

Return number of waypoints.

Source code in pib3/trajectory.py
def __len__(self) -> int:
    """Return number of waypoints."""
    return len(self.waypoints)

to_json

to_json(path)

Save trajectory to JSON file with unit metadata.

Source code in pib3/trajectory.py
def to_json(self, path: Union[str, Path]) -> None:
    """Save trajectory to JSON file with unit metadata."""
    data = {
        "format_version": self.FORMAT_VERSION,
        "unit": self.UNIT,
        "coordinate_frame": self.COORDINATE_FRAME,
        "joint_names": self.joint_names,
        "waypoints": self.waypoints.tolist(),
        "metadata": {
            **self.metadata,
            "created_at": datetime.now(timezone.utc).isoformat(),
            "offsets": {
                "description": "Canonical format is absolute radians. "
                "Per-joint offsets from Webots starting position are "
                "applied by the backend at playback time.",
            },
        },
    }
    with open(path, 'w') as f:
        json.dump(data, f, indent=2)

from_json classmethod

from_json(path)

Load trajectory from JSON file.

Source code in pib3/trajectory.py
@classmethod
def from_json(cls, path: Union[str, Path]) -> "Trajectory":
    """Load trajectory from JSON file."""
    with open(path) as f:
        data = json.load(f)

    unit = data.get("unit")
    waypoints = np.array(data.get("waypoints", data.get("points", [])))

    if unit is not None and unit != "radians":
        raise ValueError(f"Unsupported trajectory unit: {unit}")

    return cls(
        joint_names=data.get("joint_names", data.get("link_names", [])),
        waypoints=waypoints,
        metadata=data.get("metadata", {}),
    )

to_webots_format

to_webots_format()

Convert waypoints to Webots motor positions (identity).

Waypoints are already in absolute radians. Per-joint offsets from the Webots starting position are applied by the backend, not here.

Source code in pib3/trajectory.py
def to_webots_format(self) -> np.ndarray:
    """Convert waypoints to Webots motor positions (identity).

    Waypoints are already in absolute radians.  Per-joint offsets from
    the Webots starting position are applied by the backend, not here.
    """
    return self.waypoints

to_robot_format

to_robot_format()

Convert waypoints to real robot format (centidegrees).

Source code in pib3/trajectory.py
def to_robot_format(self) -> np.ndarray:
    """Convert waypoints to real robot format (centidegrees)."""
    return np.round(np.degrees(self.waypoints) * 100).astype(int)

Creating Trajectories

import numpy as np
from pib3 import Trajectory

# From arrays
joint_names = ["joint_0", "joint_1", "joint_2"]
waypoints = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.2, 0.3],
    [0.2, 0.4, 0.6],
])

trajectory = Trajectory(
    joint_names=joint_names,
    waypoints=waypoints,
    metadata={"source": "custom"}
)

Saving and Loading

from pib3 import Trajectory

# Save to JSON
trajectory.to_json("my_trajectory.json")

# Load from JSON
loaded = Trajectory.from_json("my_trajectory.json")

# Access data
print(f"Waypoints: {len(loaded)}")
print(f"Joints: {loaded.joint_names}")
print(f"Metadata: {loaded.metadata}")

Format Conversion

# Get waypoints in Webots format (no offset, canonical format)
webots_waypoints = trajectory.to_webots_format()



# Get waypoints in robot format (centidegrees)
robot_waypoints = trajectory.to_robot_format()
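Since to_robot_format is a pure unit conversion (radians to rounded integer centidegrees), its effect can be checked directly with numpy; the waypoint values below are illustrative:

```python
import numpy as np

# A single waypoint with three joint values, in radians (illustrative)
waypoints = np.array([[0.0, 0.5, -1.5708]])

# Same conversion as to_robot_format: degrees * 100, rounded to int
centideg = np.round(np.degrees(waypoints) * 100).astype(int)

# 0.5 rad ≈ 28.6479°, i.e. 2865 centidegrees after rounding;
# -1.5708 rad ≈ -90.0°, i.e. -9000 centidegrees
assert centideg[0, 1] == 2865
assert centideg[0, 2] == -9000
```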

JSON Format

{
  "format_version": "1.0",
  "unit": "radians",
  "coordinate_frame": "webots",
  "joint_names": ["turn_head_motor", "tilt_forward_motor", ...],
  "waypoints": [
    [0.1, 0.2, 0.3, ...],
    [0.15, 0.25, 0.35, ...]
  ],
  "metadata": {
    "source": "pib3",
    "robot_model": "pib",
    "success_rate": 0.95,
    "created_at": "2024-01-01T12:00:00Z"
  }
}
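Because the schema is plain JSON, a file written by to_json can be sanity-checked with the standard json module alone. A minimal sketch over an illustrative in-memory document:

```python
import json

# Build a document matching the schema above (values illustrative)
doc = {
    "format_version": "1.0",
    "unit": "radians",
    "coordinate_frame": "webots",
    "joint_names": ["turn_head_motor", "tilt_forward_motor"],
    "waypoints": [[0.1, 0.2], [0.15, 0.25]],
    "metadata": {"source": "pib3", "success_rate": 0.95},
}

loaded = json.loads(json.dumps(doc, indent=2))

# from_json rejects any unit other than "radians"
assert loaded["unit"] == "radians"
# Each waypoint carries one value per joint name
assert all(len(wp) == len(loaded["joint_names"]) for wp in loaded["waypoints"])
```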

IK Solver Details

The inverse kinematics solver uses:

  • Algorithm: Damped Least Squares (DLS) gradient descent
  • Convergence: Position error below tolerance
  • Limits: Joint limits enforced during solving
  • Fallback: Linear interpolation for failed points
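The interpolation fallback can be sketched with numpy. This is a simplified stand-in for the internal `_interpolate_failed_points` helper, not its actual implementation:

```python
import numpy as np

def interpolate_failed(q_traj, success):
    """Replace failed waypoints by linearly interpolating each joint
    between the nearest successful neighbors (simplified sketch)."""
    q = np.array(q_traj, dtype=float)
    ok = np.flatnonzero(success)  # indices of successful IK solves
    for col in range(q.shape[1]):
        # np.interp holds endpoint values flat outside the known range,
        # which also covers leading/trailing failures
        q[:, col] = np.interp(np.arange(len(q)), ok, q[ok, col])
    return q

q_traj = [[0.0, 0.0], [9.9, 9.9], [1.0, 2.0]]  # middle point failed
success = [True, False, True]
q = interpolate_failed(q_traj, success)
assert np.allclose(q[1], [0.5, 1.0])  # midpoint between its neighbors
```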

Solver Parameters

- max_iterations: more iterations improve accuracy but slow solving
- tolerance: smaller values are more precise but harder to converge
- step_size: larger values converge faster but risk oscillation
- damping: higher values are more stable near singularities
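To illustrate how these parameters interact, here is a damped-least-squares, position-only solve on a toy 2-link planar arm. This is not the pib3 solver; the link lengths, target, and gains are made up:

```python
import numpy as np

L1, L2 = 0.3, 0.25  # link lengths (made up)

def fk(q):
    """End-effector position of a 2-link planar arm."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q):
    """Analytic position Jacobian of the same arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

def solve_dls(target, q0, step_size=0.5, damping=0.05,
              tolerance=1e-4, max_iterations=200):
    """Damped least squares: dq = J^T (J J^T + damping^2 I)^-1 e."""
    q = np.asarray(q0, dtype=float).copy()
    for _ in range(max_iterations):
        e = target - fk(q)                 # position error
        if np.linalg.norm(e) < tolerance:  # converged
            return q, True
        J = jacobian(q)
        JJt = J @ J.T + damping**2 * np.eye(2)  # damped normal matrix
        q += step_size * (J.T @ np.linalg.solve(JJt, e))
    return q, False

q, ok = solve_dls(np.array([0.35, 0.2]), q0=[0.1, 0.5])
assert ok and np.linalg.norm(fk(q) - [0.35, 0.2]) < 1e-3
```

Raising damping slows convergence but keeps the step well-conditioned near singular poses, which is the trade-off the table above describes.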