Towards NNUE

A good part of the development effort in the previous year was spent optimizing evaluation weights using logistic regression, first using the Texel Tuning Method (simple and slow), and later using a more standard gradient descent approach (faster, but more complicated). The ultimate goal, however, has been to develop a solution that doesn’t require domain knowledge at all. What if there were a way to just let the machine figure out for itself what is “good” about a chess position by examining enough training positions? A supervised machine learning approach such as a neural network would be a perfect fit here.
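To make the idea concrete, here is a minimal sketch of the objective both tuning methods minimize: the engine's evaluation is squashed through a sigmoid to predict the game result, and the weights are adjusted to reduce the mean squared error over a set of training positions. The scaling convention (base 10 with a divisor of 400) is the one commonly described for Texel tuning; the `evaluate` callback is a placeholder, not actual tuner code.

```python
import math

def sigmoid(eval_cp, k=1.0):
    # Map a centipawn evaluation to a predicted game result in [0, 1].
    # The 10**(-k*s/400) form is the usual Texel-tuning convention.
    return 1.0 / (1.0 + 10.0 ** (-k * eval_cp / 400.0))

def mean_squared_error(positions, evaluate, k=1.0):
    # positions: list of (features, result) pairs, where result is
    # 1.0 for a white win, 0.5 for a draw, 0.0 for a black win.
    total = 0.0
    for features, result in positions:
        predicted = sigmoid(evaluate(features), k)
        total += (result - predicted) ** 2
    return total / len(positions)
```

Texel tuning nudges each weight up or down and keeps the change if this error drops; gradient descent computes the derivative of the same error directly, which is why it converges faster.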

One of the key challenges in using a neural network based evaluation within an alpha-beta search is the loss of speed. There is no getting around the fact that a neural network based evaluator is going to be slower than a traditional “hand crafted” evaluation. The key question is: can the evaluation be performed efficiently enough that the speed loss is offset by the gain in evaluation precision? Enter “efficiently updatable neural networks (NNUE).”
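The core trick behind the “efficiently updatable” part is that the network's input layer is huge but sparse, and a chess move flips only a few input features. So instead of recomputing the first-layer sums from scratch at every node, the engine keeps an accumulator and just adds or subtracts the few affected weight rows. Here is a toy sketch of that idea, with made-up dimensions and random weights; real NNUE implementations use much larger layers and integer arithmetic:

```python
import random

HIDDEN = 8           # tiny for illustration; real nets use hundreds of neurons
NUM_FEATURES = 768   # e.g. 12 piece types x 64 squares (one simple scheme)

random.seed(42)
# Hypothetical first-layer weights: one row of HIDDEN values per input feature.
W1 = [[random.uniform(-0.1, 0.1) for _ in range(HIDDEN)] for _ in range(NUM_FEATURES)]
b1 = [0.0] * HIDDEN

def full_refresh(active_features):
    # Recompute the accumulator from scratch: sum the weight rows of
    # every active (piece, square) feature. Done rarely, e.g. after a king move.
    acc = list(b1)
    for f in active_features:
        for i in range(HIDDEN):
            acc[i] += W1[f][i]
    return acc

def incremental_update(acc, added, removed):
    # A move changes only a few features, so adjust just those rows
    # instead of re-summing everything.
    acc = list(acc)
    for f in added:
        for i in range(HIDDEN):
            acc[i] += W1[f][i]
    for f in removed:
        for i in range(HIDDEN):
            acc[i] -= W1[f][i]
    return acc
```

The incremental path touches a handful of rows per move rather than one row per piece on the board, which is what makes a network evaluation affordable inside a search that visits millions of nodes.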

To begin the journey towards NNUE, I first had to learn about neural networks in general; the plan was to master the fundamentals before worrying about the “efficiently updatable” part, which is specific to computer chess. To that end I’ve spent the last several months reading, taking some online courses, and even writing my own simple machine learning library. Though I’m still learning and experimenting with neural nets in general, I’m at the point where I’m thinking about how to apply them to computer chess and have begun that work.

My path towards NNUE looks like this:

  • Learning the fundamentals (neural networks) without worrying about NNUE. I’ve learned a lot from Andrew Ng’s Deep Learning courses on Coursera. Michael Nielsen’s fantastic (and free!) online book has also been a great resource.
  • Write some code! The “hello world” of neural networks seems to be writing an image classifier, so I did that.
  • How would it look to train a network that replaces a traditional chess evaluator? This is the stage I’m at now, and the work is already underway. I’ve again decided not to worry about the “efficiently updatable” stuff here, though I have done some studying around NNUE. I’ll focus on fixed depth searches in this stage, so I can concentrate on the mechanics of the network itself. How do I encode a chess position? How should the network be structured? What hyperparameters seem to work better?
  • Finally, how do I do all this efficiently to minimize speed loss?
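One simple answer to the position-encoding question, though not necessarily the one I’ll settle on, is a one-hot vector with one 64-square plane per piece type: 12 piece types times 64 squares gives 768 inputs, each 0 or 1. A sketch, where the board is represented as a hypothetical dict from square index to piece letter:

```python
# One plane of 64 squares per piece type, white pieces uppercase.
PIECES = ["P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]

def encode_position(piece_on_square):
    # piece_on_square: dict mapping square index (0-63) to a piece letter.
    # Returns a 768-element 0/1 vector suitable as network input.
    x = [0.0] * (len(PIECES) * 64)
    for square, piece in piece_on_square.items():
        x[PIECES.index(piece) * 64 + square] = 1.0
    return x
```

This ignores side to move, castling rights, and en passant, which would need extra inputs; richer schemes (like the king-relative features used in real NNUE nets) build on the same idea.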

There are shorter paths than the DIY approach I’m taking. I could probably rip some code or use an existing library to get something working much faster, but I’m OK with the long road. To me, the journey is as important as the end result. So far it’s been fun.