Yuvi Chess Trainer

Location: Carnegie Mellon University
Date: October 2021 - December 2021
An open-source, interactive Python chess training application. It uses a minimax algorithm with alpha-beta pruning and a custom chess evaluation engine, trained on 500,000 chess positions, to power its chess AI and produce in-game analysis.
What is the minimax algorithm?
It is a recursive algorithm that helps decide the optimal move to play given some position or representation of the game. This works by assigning a value to each possible move and assuming that White will try to maximize their chances of winning while Black will try to minimize White's chances of winning.
The minimax algorithm is often visualized as a decision tree of each side's possible moves and evaluations. A positive evaluation (between 0 and +Infinity) means White is better or winning; a negative evaluation (between 0 and -Infinity) means Black is better or winning.
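The tree-walking idea can be sketched in a few lines of Python. This is a toy illustration, not the project's actual code: the game tree is hard-coded as nested lists whose leaves are evaluation scores, standing in for real chess positions.

```python
def minimax(node, maximizing):
    # Leaves are numeric evaluations (positive favors White).
    if isinstance(node, (int, float)):
        return node
    # Recurse into each child, alternating between the two players.
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# White to move at the root of a depth-2 tree:
tree = [[3, 5], [-2, 9]]
# Black (the minimizer) would pick 3 in the first branch and -2 in the
# second, so White (the maximizer) chooses the branch worth 3.
print(minimax(tree, True))  # 3
```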
Why Use Alpha-Beta Pruning?
With the seemingly endless possibilities in chess, the computational time in evaluating all move sequences would be too costly. To help alleviate this, alpha-beta pruning introduces a more efficient search algorithm that decreases the number of move sequences that need to be considered.
The essential concept behind the algorithm is as follows: once a single refutation shows that a move is worse than another move already examined, its remaining replies need not be searched at all.
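Extending the toy minimax sketch above, alpha-beta pruning tracks the best score each side is already guaranteed (alpha for the maximizer, beta for the minimizer) and cuts off a branch as soon as those bounds cross. Again, this operates on nested lists of scores rather than real chess positions:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # Leaves are numeric evaluations (positive favors White).
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer already has a better option elsewhere
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the maximizer already has a better option elsewhere
        return value

# Same tree, same answer as plain minimax, but in the second branch the
# leaf 9 is never visited: after seeing 2, the minimizer is already
# guaranteed something worse (for White) than the 3 from the first branch.
print(alphabeta([[3, 5], [2, 9]], True))  # 3
```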

How do we determine the value of a position?
Without an accurate means of evaluating a position, no minimax algorithm with alpha-beta pruning will play well. Thus, I built a custom chess evaluation engine that uses neural networks to predict an evaluation score for a given position.
First, the given position is encoded as a string in Forsyth–Edwards Notation (FEN) to be passed as input to the model. FEN captures piece placement, the side to move, castling rights, the en passant target square, and the move counters.
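A FEN string is just six space-separated fields, so the pieces of information listed above are easy to pull apart. For example, with the standard starting position:

```python
# The six space-separated fields of a FEN string, for the starting position.
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
placement, turn, castling, en_passant, halfmove, fullmove = fen.split()

print(turn)        # "w": White to move
print(castling)    # "KQkq": both sides may still castle on either wing
print(en_passant)  # "-": no en passant capture is available
```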
Trained on over 500,000 FEN encoded positions and their corresponding Stockfish chess engine evaluation scores, the network learned to weight critical factors in chess positions like material, piece placement, and more.
The model outputs a decimal representing the evaluation score. For instance, -1.0 refers to Black being ahead by 1 pawn.
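To see what those pawn units mean concretely, here is a hand-rolled baseline (not the trained model) that computes plain material balance from the FEN piece-placement field using the conventional piece values. The learned evaluation goes further by also weighting placement, king safety, and other positional factors.

```python
# Conventional piece values in pawn units; the king is not counted.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_balance(fen):
    """Material difference in pawns, positive meaning White is ahead."""
    placement = fen.split()[0]  # first FEN field: piece placement
    score = 0.0
    for ch in placement:
        if ch.isalpha():
            value = PIECE_VALUES[ch.lower()]
            # Uppercase letters are White's pieces, lowercase are Black's.
            score += value if ch.isupper() else -value
    return score

# The starting position is balanced:
print(material_balance("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))  # 0.0
# With Black's b8 knight removed, White is up a knight:
print(material_balance("r1bqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))  # 3.0
```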

What are some other interesting approaches?
Reinforcement learning is an emerging and powerful technique being implemented in modern chess engines. Essentially, neural networks train themselves by simulating different moves in an environment with the goal of maximizing some objective function (winning the game). Through trial and error, these chess agents improve.
AlphaZero, developed by Google DeepMind and widely considered among the strongest chess engines in the world, uses reinforcement learning to reach a superhuman level of play.

Check Out Demo Video Here:

Takeaways from my experience building Yuvi Chess Trainer?
As someone with 11+ years of competitive chess experience, I've relied on chess computers to help me train and prepare for tournaments. To that end, I wanted to build my own chess computer to help others learn and develop as chess players. In the process, I learned more about the algorithms chess computers use to produce the insights they share with me and other players. I also gained experience managing a large codebase of thousands of lines while writing clean, modular Python code.
