Full robotic autonomy, from mechanical design to artificial intelligence
Project overview
"Niryo plays checkers" is a core robotics engineering project built during my degree program.
The objective was to make the robot play full games of checkers autonomously while respecting the official rules of the FFJD (Fédération Française du Jeu de Dames).
Instead of scripted moves, we developed a complete system able to perceive the board in real time through an onboard camera,
make strategic decisions, and physically interact with a human player.
General overview of the demonstrator.
The challenge: starting from a robot only
At project start, we only had the robot (Niryo Ned 2, later replaced by Ned 3). The full gameplay environment had to be designed from scratch.
Initial setup with the Niryo Ned 2 robot.
The project relied on three major technical pillars:
Mechanical design: custom board, pieces and storage systems adapted to suction-based grasping.
Computer vision: real-time perception of the board state through the onboard camera.
Game intelligence: rule-compliant decision making with coherent strategic behavior.
Mechanical design and environment
The physical environment was modeled in SolidWorks, taking into account the robot's working radius (490 mm) and the suction tool constraints (22.5 mm).
Board design adapted to robot kinematics and reach constraints.
Custom board: 10x10 checkerboard split into two clip-on parts for easier 3D printing and transport.
Storage racks: dedicated paths for captured pieces and promoted queens to keep the game autonomous.
Robot optimization: custom tool extension to avoid singularities near the robot base.
Rigid base: stable, repeatable robot positioning relative to the board.
An integrated multidisciplinary solution
The final system combines several engineering blocks implemented in Python.
Deep-learning vision: CNN trained on 20,000+ images for robust classification of pieces and queens.
Geometric calibration: 3-point alignment and homography to convert camera images into a logical board state.
Decision engine: prioritized strategy (Capture > Defense > Advancement > Random move) for fluid gameplay.
User interface: Tkinter GUI to visualize board reconstruction and validate moves on touchscreen.
Result and portability
The final system was successfully deployed on an embedded platform: Raspberry Pi 5 (8 GB RAM) integrated into a touchscreen tablet.
This allows fully autonomous operation without an external computer.
Starting from a bare robot, we delivered a complete demonstrator able to initialize a game, detect mandatory multi-captures,
and manage automatic promotion to queens, providing a robust and educational autonomous gameplay experience.
Marc GAUTHIER
Technical architecture: an integrated robotic system
The project combines four tightly coupled pillars: constrained mechanical design, deep-learning vision,
game intelligence, and embedded deployment.
1. Mechanical design and robotics
The whole physical environment was modeled in SolidWorks to match the intrinsic limits of the Niryo Ned 2.
Mechanical behavior and trajectory simulation.
Workspace optimization: board dimensions fit the 490 mm working radius.
Singularity mitigation: tool extension moved critical joint configurations away from first-row positions.
Grasping and evacuation: piece dimensions and rack slopes were tuned experimentally for smooth, jam-free handling.
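The reach constraint above can be checked with simple geometry. A minimal sketch with hypothetical board dimensions (the real values came from the SolidWorks model):

```python
import math

ROBOT_REACH_MM = 490.0  # Ned 2 working radius, as stated in the text

def farthest_corner_distance(board_offset_mm: float, board_side_mm: float) -> float:
    """Distance from the robot base to the farthest corner of a square board
    whose near edge is centered board_offset_mm in front of the base."""
    dx = board_side_mm / 2.0              # lateral half-width
    dy = board_offset_mm + board_side_mm  # distance to the far edge
    return math.hypot(dx, dy)

def board_fits(board_offset_mm: float, board_side_mm: float) -> bool:
    """True if every cell of the board stays inside the working radius."""
    return farthest_corner_distance(board_offset_mm, board_side_mm) <= ROBOT_REACH_MM

# Hypothetical layout: a 300 mm board placed 120 mm from the base.
print(board_fits(120.0, 300.0))
```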
2. Robot control choices
The software stack was fully developed in Python, mainly because of its flexibility and the dedicated pyniryo library.
This enabled high-level robot control with both joint and Cartesian commands.
The main advantage was a unified software architecture where vision, game logic and robot motion coexist in the same program.
This significantly simplified integration and validation of advanced rules such as mandatory multi-captures and promotion.
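As an illustration of this unified architecture, here is a minimal sketch of the geometry layer that maps a logical board cell to a Cartesian target. The board origin and cell pitch are hypothetical (the real values come from calibration), and the pyniryo call is only indicated in a comment:

```python
# Hypothetical board geometry; the real project calibrates these values.
BOARD_ORIGIN_XY_M = (0.15, -0.10)  # Cartesian position of cell (0, 0), in metres
CELL_PITCH_M = 0.03                # centre-to-centre distance between cells

def cell_to_xy(row: int, col: int) -> tuple[float, float]:
    """Map a logical board cell to a Cartesian (x, y) target for the robot."""
    x0, y0 = BOARD_ORIGIN_XY_M
    return (x0 + row * CELL_PITCH_M, y0 + col * CELL_PITCH_M)

# The resulting pose would then be sent to the robot through pyniryo,
# e.g. robot.move_pose(x, y, z, roll, pitch, yaw) on a connected NiryoRobot.
print(cell_to_xy(9, 9))
```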
3. Vision evolution: from classical methods to AI
Perception was the main challenge: turning a raw distorted image into a reliable 10x10 logical board matrix.
Initial approaches (V1 and V2): speed versus robustness
Version 1 (HSV mask): fast but highly sensitive to illumination changes and shadows.
Version 2 (Hough circles): better geometric localization but fragile with reflections and shape deviations.
Version 1 (HSV-based) and its real-world limitations.
Final approach: CNN-based vision
The final pipeline classifies each of the 100 cells independently using a compact deep-learning model.
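A minimal sketch of this per-cell pipeline, with a stand-in predictor in place of the real CNN (the class labels are assumed, not taken from the project):

```python
import random

CLASSES = ["empty", "red_man", "blue_man", "red_queen", "blue_queen"]  # assumed labels

def stub_predict(cell_image) -> list:
    """Stand-in for the CNN: returns one score per class for a cell crop."""
    random.seed(hash(cell_image) % 2**32)  # deterministic per "image"
    return [random.random() for _ in CLASSES]

def argmax(scores: list) -> int:
    return max(range(len(scores)), key=scores.__getitem__)

def classify_board(cell_images: list) -> list:
    """Classify each of the 100 cells independently, then rebuild the
    10x10 logical board matrix in row-major order."""
    assert len(cell_images) == 100
    labels = [CLASSES[argmax(stub_predict(img))] for img in cell_images]
    return [labels[r * 10:(r + 1) * 10] for r in range(10)]

board = classify_board(list(range(100)))  # dummy "images"
print(len(board), len(board[0]))
```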
4. Deep-learning pipeline: dataset and training
Robustness came from both architecture and data quality.
Dataset creation and annotation
We built a semi-automatic annotation script: capture board, apply homography, crop 100 cells, then validate class via keyboard.
This produced a clean initial dataset of about 1,000 images per category.
Dataset organization before augmentation.
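The cropping step of the annotation script can be sketched with NumPy, assuming the board image has already been rectified by the homography:

```python
import numpy as np

def crop_cells(warped: np.ndarray, n: int = 10) -> list:
    """Split a rectified (homography-corrected) board image into n*n cell
    crops, row-major, ready for annotation or CNN input."""
    h, w = warped.shape[:2]
    ch, cw = h // n, w // n
    return [warped[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            for r in range(n) for c in range(n)]

# Dummy 400x400 RGB board image stands in for a real capture.
cells = crop_cells(np.zeros((400, 400, 3), dtype=np.uint8))
print(len(cells), cells[0].shape)
```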
Data augmentation
The dataset was expanded to over 20,000 images with synthetic variations:
extreme brightness and contrast changes,
blue channel shifts (critical for red/blue discrimination),
Gaussian noise and motion blur to emulate robot dynamics.
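A minimal NumPy sketch of these augmentations (the parameter ranges are illustrative, not the project's actual values; motion blur is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """One random synthetic variant: contrast/brightness jitter, a blue-channel
    shift, and additive Gaussian noise."""
    out = img.astype(np.float32)
    out = out * rng.uniform(0.5, 1.5) + rng.uniform(-40, 40)  # contrast/brightness
    out[..., 2] += rng.uniform(-25, 25)                       # blue channel shift
    out += rng.normal(0.0, 8.0, size=out.shape)               # Gaussian noise
    return np.clip(out, 0, 255).astype(np.uint8)

# Expand one dummy cell image into 20 variants.
base = np.full((32, 32, 3), 128, dtype=np.uint8)
variants = [augment(base) for _ in range(20)]
print(len(variants), variants[0].shape, variants[0].dtype)
```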
Training and inference
Training under TensorFlow/Keras with an 80/20 train/validation split takes around 30 minutes.
Final inference for the full board (100 predictions) is below 5 seconds.
Model training results.
5. Calibration and homography: maintenance mode
To ensure correct CNN inputs, the camera image must be rectified accurately.
The robot moves to a dedicated viewing pose.
The user selects board corners manually on screen.
A homography matrix maps perspective view to an orthogonal board view.
Coordinates are saved for repeatable startup without full recalibration.
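The rectification step can be sketched without OpenCV by solving the standard 8x8 DLT linear system for a four-corner homography (the clicked corner coordinates below are hypothetical):

```python
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 homography mapping 4 source points to 4 destination
    points by solving the standard 8x8 DLT system (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to one pixel (homogeneous coordinates)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical clicked corners -> orthogonal 400x400 board view.
corners = [(102, 80), (510, 95), (522, 470), (90, 455)]
targets = [(0, 0), (400, 0), (400, 400), (0, 400)]
H = homography_from_corners(corners, targets)
print(np.allclose(warp_point(H, *corners[1]), targets[1]))
```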
6. Game intelligence and robot execution
The decision core orchestrates rule logic, perception and robot motion.
Modular functions enforce FFJD-compliant behavior.
Mandatory captures: enforced whenever available.
Chains and promotion: multi-captures and automatic queen promotion are handled natively.
State recovery: board scan can resynchronize game state after interruptions.
After each human turn, the system compares previous and current board states to infer move origin and destination.
Invalid states trigger user correction through the GUI.
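A minimal sketch of this state-diff inference, assuming "." marks an empty cell and letters mark piece colors (the encoding is illustrative):

```python
def infer_move(prev, curr):
    """Compare two 10x10 board matrices and return (origin, destination) of
    the human move; returns None when the diff is not a valid single move."""
    vanished = [(r, c) for r in range(10) for c in range(10)
                if prev[r][c] != "." and curr[r][c] == "."]
    appeared = [(r, c) for r in range(10) for c in range(10)
                if prev[r][c] == "." and curr[r][c] != "."]
    if len(appeared) != 1 or not vanished:
        return None  # invalid state -> ask the user to correct it in the GUI
    # With a capture, several cells vanish; the origin held the mover's color.
    dest = appeared[0]
    origin = next((cell for cell in vanished
                   if prev[cell[0]][cell[1]] == curr[dest[0]][dest[1]]), None)
    return (origin, dest) if origin else None

prev = [["." for _ in range(10)] for _ in range(10)]
curr = [row[:] for row in prev]
prev[6][3] = "b"; curr[5][4] = "b"  # simple diagonal step
print(infer_move(prev, curr))
```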
Decision hierarchy:
Priority 1: mandatory captures.
Priority 2: active defense of threatened pieces.
Priority 3: advancement toward promotion.
Priority 4: random safe move when needed.
Decision pipeline used by the bot.
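The hierarchy can be sketched as a simple tiered selection (the move representation is illustrative):

```python
import random

def choose_move(captures, defenses, advances, fallbacks):
    """Pick a move following the bot's fixed priority order:
    mandatory capture > defend a threatened piece > advance > random safe move."""
    for tier in (captures, defenses, advances):
        if tier:
            return tier[0]  # first move of the highest non-empty tier
    return random.choice(fallbacks) if fallbacks else None

# No capture available: the defensive move wins over the advance.
print(choose_move([], [((2, 1), (3, 2))], [((4, 3), (5, 4))], []))
```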
Each move is executed as safe sub-steps: approach, descent, grasp, transit and release.
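These sub-steps can be sketched against a hypothetical driver interface (the real system would issue the equivalent pyniryo commands; the heights are illustrative):

```python
SAFE_Z_M, GRASP_Z_M = 0.15, 0.02  # hypothetical approach and grasp heights

def execute_move(robot, src_xy, dst_xy):
    """Execute a piece move as safe sub-steps: approach above the source,
    descend, grasp, lift, transit above the destination, descend, release."""
    sx, sy = src_xy
    dx, dy = dst_xy
    robot.move_to(sx, sy, SAFE_Z_M)   # approach
    robot.move_to(sx, sy, GRASP_Z_M)  # descent
    robot.grasp()                     # suction on
    robot.move_to(sx, sy, SAFE_Z_M)   # lift
    robot.move_to(dx, dy, SAFE_Z_M)   # transit at safe height
    robot.move_to(dx, dy, GRASP_Z_M)  # descent
    robot.release()                   # suction off

class LogDriver:
    """Stand-in for the real robot driver: records every command issued."""
    def __init__(self): self.log = []
    def move_to(self, x, y, z): self.log.append(("move", round(z, 3)))
    def grasp(self): self.log.append(("grasp",))
    def release(self): self.log.append(("release",))

driver = LogDriver()
execute_move(driver, (0.20, 0.05), (0.26, 0.11))
print(driver.log)
```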
7. Embedded software integration
The complete stack was deployed on Raspberry Pi 5 (8 GB) to make the system fully standalone.
Startup sequence of the embedded demonstrator.
Multithreading: GUI stays responsive while vision and robot logic run in parallel.
Field deployment: remote access via Tailscale (SSH/VNC) and touchscreen integration for plug-and-play demos.
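The worker/GUI split can be sketched with a thread and a queue; a simulated worker stands in for the vision and robot logic, and the Tkinter polling is only indicated in a comment:

```python
import queue
import threading
import time

results = queue.Queue()

def vision_worker():
    """Background thread standing in for vision + robot logic; the GUI thread
    only polls the queue, so it never blocks on the heavy work."""
    time.sleep(0.1)  # pretend to run the CNN over the 100 cells
    results.put("board_updated")

t = threading.Thread(target=vision_worker, daemon=True)
t.start()
t.join()
# In the Tkinter GUI this poll would be scheduled with root.after(...),
# never with join(), so the interface stays responsive.
print(results.get_nowait())
```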