Quick Guide: Simple 2D Robot Simulator Using Python

A lightweight 2D robot simulator is an excellent tool for learning robotics concepts, testing algorithms (like localization, mapping, and path planning), and prototyping autonomous behaviors without hardware. This guide walks you through building a simple, extensible 2D robot simulator in Python: the simulation model, a minimal physics loop, visualization, and examples (differential-drive robot, sensors, and control). By the end you’ll have runnable code and clear paths to extend the simulator for your projects or classes.
What you’ll build
- A 2D environment (bounded arena) with static obstacles.
- A differential-drive robot model (pose: x, y, theta).
- Motion integration using simple kinematics.
- Sensor models: rangefinder (simulated LiDAR) and odometry.
- Visualization using Pygame or Matplotlib for animation.
- Example tasks: waypoint navigation and obstacle avoidance.
Requirements and setup
- Python 3.8+
- Recommended packages:
  - numpy
  - pygame (for real-time visualization) or matplotlib (for simple plots)
  - optionally shapely (for robust geometry) and scipy
Install with pip:
pip install numpy pygame shapely
Simulator architecture (high level)
- World: holds map boundaries and obstacles.
- Robot: state (x, y, theta), kinematic model, sensor interfaces.
- Physics/Integrator: updates robot pose based on commands and time step dt.
- Sensor models: produce noisy measurements from the world and robot pose.
- Controller: converts goals into wheel velocities or steering commands.
- Visualizer: draws the world, robot, sensor rays, and trajectories.
This separation makes the simulator modular: swap sensor types, change robot models, or replace the visualizer.
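To make the separation concrete, here is a minimal sketch of how the pieces could fit together. Every name in it is an illustrative placeholder, not the API of the full example later in this guide.

class RangeSensor:
    def read(self, world, pose):
        """Return a (possibly noisy) measurement of `world` seen from `pose`."""

class Controller:
    def compute(self, pose, goal):
        """Return wheel commands (v_left, v_right) for the current pose and goal."""

def tick(world, robot, sensor, controller, goal, dt):
    # One simulation step: control -> integrate -> sense.
    vl, vr = controller.compute(robot.pose(), goal)
    robot.set_wheel_vel(vl, vr)
    robot.step(dt)                      # kinematic integration over dt
    return sensor.read(world, robot.pose())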
Robot kinematics: differential-drive
A differential-drive robot has two wheels with velocities v_l and v_r. Linear velocity v and angular velocity ω are:

v = (v_r + v_l) / 2
ω = (v_r - v_l) / L

where L is the track width.

Discrete pose update over a time step dt:

x += v * cos(theta) * dt
y += v * sin(theta) * dt
theta += ω * dt
If you need better accuracy at larger dt, use a more accurate update, e.g., exact integration along the circular arc the robot traces while v and ω are held constant.
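For reference, a sketch of that exact-arc update (it mirrors the step used in the full example later in this guide):

import math

def step_exact(x, y, theta, v, omega, dt):
    # Exact pose update assuming v and omega are constant over dt.
    if abs(omega) < 1e-6:               # straight-line motion
        return x + v*math.cos(theta)*dt, y + v*math.sin(theta)*dt, theta
    R = v / omega                       # turning radius
    cx = x - R*math.sin(theta)          # center of the arc
    cy = y + R*math.cos(theta)
    theta_new = theta + omega*dt
    return cx + R*math.sin(theta_new), cy - R*math.cos(theta_new), theta_new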
Collision representation
Represent obstacles as polygons or circles. For simplicity, circles are easy to test:
- The robot as a circle with radius r_robot.
- Each obstacle as a circle with radius r_obs. A collision occurs if distance(center_robot, center_obs) < r_robot + r_obs.
For more complex maps, use shapely for polygon collision tests.
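For the circle-circle case the test is a one-liner; a minimal sketch (the helper name is ours):

import math

def circles_collide(cx1, cy1, r1, cx2, cy2, r2):
    # True if the two circles overlap.
    return math.hypot(cx2 - cx1, cy2 - cy1) < r1 + r2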
Simple rangefinder (2D LiDAR) model
Simulate N rays evenly distributed over a field of view (e.g., 360° or ±120°). For each ray:
- Cast a segment from robot pose along angle theta + ray_angle up to max_range.
- Find nearest intersection with obstacles or boundary.
- Return range with optional Gaussian noise.
Raycasting can be implemented by testing intersection with line segments of polygon obstacles, or approximated by sampling points (less accurate).
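A sketch of a single-ray cast built on that idea; it reuses the line_intersect helper defined in the full example below, and noise_std is an illustrative parameter:

import math, random

def cast_ray(x, y, angle, segments, max_range, noise_std=1.0):
    # Cast one ray from (x, y) along `angle` against a list of (a, b) segments
    # and return a noisy range reading. line_intersect is defined in simple_sim.py below.
    ex = x + max_range * math.cos(angle)
    ey = y + max_range * math.sin(angle)
    dist = max_range
    for a, b in segments:
        hit = line_intersect((x, y), (ex, ey), a, b)
        if hit is not None:
            dist = min(dist, math.hypot(hit[0] - x, hit[1] - y))
    # Additive Gaussian noise, clamped to the sensor's valid interval.
    return min(max(random.gauss(dist, noise_std), 0.0), max_range)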
Minimal runnable example
Below is a compact implementation using pygame for visualization. It implements a circular robot, simple obstacles, odometry, and a basic raycaster.
# simple_sim.py
import math
import random
import pygame
import numpy as np

# ---------- Config ----------
WIDTH, HEIGHT = 800, 600
DT = 0.02
ROBOT_R = 12
TRACK_WIDTH = 30.0
MAX_RANGE = 200
NUM_RAYS = 36

# ---------- Utilities ----------
def wrap_angle(a):
    return (a + math.pi) % (2*math.pi) - math.pi

def line_intersect(p1, p2, q1, q2):
    # Return intersection point of segment p1-p2 with q1-q2, or None
    x1, y1 = p1; x2, y2 = p2; x3, y3 = q1; x4, y4 = q2
    denom = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
    if abs(denom) < 1e-6:
        return None
    t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / denom
    u = -((x1-x2)*(y1-y3) - (y1-y2)*(x1-x3)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        xi = x1 + t*(x2-x1)
        yi = y1 + t*(y2-y1)
        return (xi, yi)
    return None

# ---------- World ----------
class World:
    def __init__(self, w, h):
        self.w, self.h = w, h
        # obstacles as list of polygons (list of points)
        self.obstacles = []

    def add_box(self, x, y, wid, hei):
        hw, hh = wid/2, hei/2
        pts = [(x-hw, y-hh), (x+hw, y-hh), (x+hw, y+hh), (x-hw, y+hh)]
        self.obstacles.append(pts)

# ---------- Robot ----------
class Robot:
    def __init__(self, x, y, theta):
        self.x, self.y, self.theta = x, y, theta
        self.r = ROBOT_R
        self.track = TRACK_WIDTH
        self.vl = 0.0
        self.vr = 0.0
        self.odom = (x, y, theta)
        self.trajectory = []

    def set_wheel_vel(self, vl, vr):
        self.vl, self.vr = vl, vr

    def step(self, dt):
        v = (self.vr + self.vl) / 2.0
        omega = (self.vr - self.vl) / self.track
        if abs(omega) < 1e-6:
            self.x += v * math.cos(self.theta) * dt
            self.y += v * math.sin(self.theta) * dt
        else:
            # exact integration on circular arc
            R = v / omega
            cx = self.x - R * math.sin(self.theta)
            cy = self.y + R * math.cos(self.theta)
            self.theta += omega * dt
            self.x = cx + R * math.sin(self.theta)
            self.y = cy - R * math.cos(self.theta)
        self.theta = wrap_angle(self.theta)
        # simple odometry (with noise)
        self.odom = (self.x + random.gauss(0, 0.5),
                     self.y + random.gauss(0, 0.5),
                     wrap_angle(self.theta + random.gauss(0, 0.01)))
        self.trajectory.append((self.x, self.y))

    def sense_ranges(self, world):
        ranges = []
        angles = np.linspace(-math.pi, math.pi, NUM_RAYS, endpoint=False)
        for a in angles:
            ang = wrap_angle(self.theta + a)
            rx, ry = self.x, self.y
            ex = rx + MAX_RANGE * math.cos(ang)
            ey = ry + MAX_RANGE * math.sin(ang)
            closest = None
            closest_dist = MAX_RANGE
            # check world boundaries as segments
            walls = [
                ((0, 0), (world.w, 0)), ((world.w, 0), (world.w, world.h)),
                ((world.w, world.h), (0, world.h)), ((0, world.h), (0, 0))
            ]
            segs = walls[:]
            # add obstacle edges
            for poly in world.obstacles:
                for i in range(len(poly)):
                    segs.append((poly[i], poly[(i+1) % len(poly)]))
            for s in segs:
                p = line_intersect((rx, ry), (ex, ey), s[0], s[1])
                if p:
                    d = math.hypot(p[0]-rx, p[1]-ry)
                    if d < closest_dist:
                        closest_dist = d
                        closest = p
            ranges.append(closest_dist)
        return ranges

# ---------- Simple controller ----------
def go_to_goal_controller(robot, goal, vmax=60.0):
    dx = goal[0] - robot.x
    dy = goal[1] - robot.y
    rho = math.hypot(dx, dy)
    if rho < 6.0:
        return 0.0, 0.0
    angle_to_goal = math.atan2(dy, dx)
    alpha = wrap_angle(angle_to_goal - robot.theta)
    # Proportional steering
    k_r = 0.5
    k_a = 1.5
    v = min(vmax, k_r * rho)
    omega = k_a * alpha
    # convert to wheel speeds
    vl = v - 0.5 * robot.track * omega
    vr = v + 0.5 * robot.track * omega
    return vl, vr

# ---------- Visualization ----------
def draw_world(screen, world, robot, ranges):
    screen.fill((30, 30, 30))
    # obstacles
    for poly in world.obstacles:
        pygame.draw.polygon(screen, (200, 80, 80), poly)
    # trajectory
    if len(robot.trajectory) > 1:
        pygame.draw.lines(screen, (100, 200, 100), False, robot.trajectory, 2)
    # robot
    pygame.draw.circle(screen, (80, 160, 240), (int(robot.x), int(robot.y)), robot.r)
    # heading
    hx = robot.x + robot.r * math.cos(robot.theta)
    hy = robot.y + robot.r * math.sin(robot.theta)
    pygame.draw.line(screen, (20, 20, 20), (robot.x, robot.y), (hx, hy), 3)
    # lidar rays
    angles = np.linspace(-math.pi, math.pi, NUM_RAYS, endpoint=False)
    for i, a in enumerate(angles):
        ang = wrap_angle(robot.theta + a)
        d = ranges[i]
        ex = robot.x + d * math.cos(ang)
        ey = robot.y + d * math.sin(ang)
        color = (120, 120, 120) if d < MAX_RANGE - 1 else (60, 60, 60)
        pygame.draw.line(screen, color, (robot.x, robot.y), (ex, ey), 1)

# ---------- Main ----------
def main():
    pygame.init()
    screen = pygame.display.set_mode((WIDTH, HEIGHT))
    clock = pygame.time.Clock()

    world = World(WIDTH, HEIGHT)
    # add boxes
    world.add_box(400, 150, 200, 60)
    world.add_box(200, 420, 120, 200)
    world.add_box(600, 420, 200, 120)

    robot = Robot(100, 100, 0.2)
    goal = (700, 500)

    running = True
    while running:
        dt = DT
        for ev in pygame.event.get():
            if ev.type == pygame.QUIT:
                running = False
            elif ev.type == pygame.MOUSEBUTTONDOWN:
                goal = ev.pos

        vl, vr = go_to_goal_controller(robot, goal)
        robot.set_wheel_vel(vl, vr)
        robot.step(dt)
        ranges = robot.sense_ranges(world)

        draw_world(screen, world, robot, ranges)
        # draw goal
        pygame.draw.circle(screen, (255, 200, 0), (int(goal[0]), int(goal[1])), 6)
        pygame.display.flip()
        clock.tick(int(1.0/dt))

    pygame.quit()

if __name__ == "__main__":
    main()
Save the code as simple_sim.py and run it with python simple_sim.py. Click in the window to change the goal.
Extensions and improvements
- Add sensor noise models and more realistic odometry drift.
- Speed up raycasting on large maps with spatial acceleration structures (uniform grid, quadtree, BSP).
- Add dynamic obstacles and a simple friction model.
- Implement SLAM (e.g., particle filter or EKF) using the odometry and range measurements.
- Support different robot types (Ackermann steering, omnidirectional).
- Use shapely for exact polygon intersection and map loading from SVG or JSON (see the sketch after this list).
- Add unit tests for kinematics and collision detection.
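As one example from the list above, here is a hedged sketch of loading polygon obstacles from a JSON file into shapely and testing the robot circle against them; the file format is made up for illustration:

import json
from shapely.geometry import Point, Polygon

def load_map(path):
    # Hypothetical format: {"obstacles": [[[x, y], [x, y], ...], ...]}
    with open(path) as f:
        data = json.load(f)
    return [Polygon(pts) for pts in data["obstacles"]]

def robot_collides(polygons, x, y, r_robot):
    # Exact circle-vs-polygon test: center-to-polygon distance below the radius.
    p = Point(x, y)
    return any(poly.distance(p) < r_robot for poly in polygons)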
Tips for experiments
- Start with low max speeds and small dt for stability.
- Visualize sensor readings as heatmaps or occupancy grids when implementing mapping.
- Keep physics and control loops decoupled: run physics at fixed dt, allow controllers to run at lower rates.
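A minimal sketch of that decoupling, assuming a Robot like the one in the example above and a controller with the go_to_goal_controller signature:

PHYSICS_DT = 0.02        # fixed 50 Hz physics step
CONTROL_PERIOD = 0.1     # controller runs at 10 Hz

def run(robot, controller, goal, sim_time=10.0):
    t = 0.0
    next_control = 0.0
    vl = vr = 0.0
    while t < sim_time:
        if t >= next_control:            # control loop: lower, independent rate
            vl, vr = controller(robot, goal)
            next_control += CONTROL_PERIOD
        robot.set_wheel_vel(vl, vr)
        robot.step(PHYSICS_DT)           # physics loop: always the fixed dt
        t += PHYSICS_DT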
Conclusion
This guide provides a compact but functional 2D robot simulator in Python that you can run, study, and extend. It balances simplicity with enough realism (kinematics, collision checking, ray-based sensors) to support learning tasks: navigation, obstacle avoidance, and basic mapping. Add complexity progressively: sensors, noise, SLAM, path planning — and the simulator will remain a valuable sandbox for robotics experiments.