Beyond the Usual Alpha-Beta Search: “Deep Thinking - Where Machine Intelligence Ends and Human Creativity Begins” by Garry Kasparov and Mig Greengard
“In 2016, nineteen years after my loss to Deep Blue, the Google-backed AI project DeepMind and its Go-playing offshoot AlphaGo defeated the world’s top Go player, Lee Sedol. More importantly, as also predicted, the methods used to create AlphaGo were more interesting as an AI project than anything that had produced the top chess machines. It uses machine learning and neural networks to teach itself how to play better, as well as other sophisticated techniques beyond the usual alpha-beta search. Deep Blue was the end; AlphaGo is a beginning.”
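For readers who haven't met it, the "usual alpha-beta search" Kasparov refers to fits in a few lines. Here is a minimal sketch over a hypothetical toy game tree (the tree and the dictionary layout are my own illustrative assumptions, not code from any real engine; a real engine replaces the literal tree with move generation and the leaf values with an evaluation function):

```python
# Minimax with alpha-beta pruning: skip branches that cannot change
# the final decision. `node` is a dict with either a static "value"
# (leaf) or a list of "children" (internal position).

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning hopeless branches."""
    if depth == 0 or not node.get("children"):
        return node["value"]  # leaf: static evaluation of the position
    if maximizing:
        best = float("-inf")
        for child in node["children"]:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff: opponent would avoid this line
                break
        return best
    else:
        best = float("inf")
        for child in node["children"]:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:  # alpha cutoff: we already have better elsewhere
                break
        return best

# Toy tree: the minimizing replies make the branches worth 3 and 2,
# so the maximizer's best achievable value is 3.
tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},
    {"children": [{"value": 2}, {"value": 9}]},
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # → 3
```

The pruning is what made brute-force chess tractable; AlphaGo's departure was to guide and truncate the search with learned neural networks instead of hand-crafted evaluation alone.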
My personal experience with Go dates back at least a decade. I remember getting slaughtered every time by the free GNU Go software, just as I had been by every human opponent for the previous twenty years. I never got the hang of it, though I was school chess captain back in the day. Totally different mindset.

I first came across the game in a little-remembered crime series called 'The Man in Room 17', with Richard Vernon and Denholm Elliott solving crimes without leaving their office, where they were always playing Go. I also remember a funny little story from my time at the British Council. Back in the 80s, a Korean guy gave me a game. After every move I played, he stifled a laugh and started a rapid fire of, "No! Cos you purrin ['put in', I presume] there, then I purrin here, after you purrin there an' I purrin here, you lose these piece." None of which made anything clearer. At chess, the first (okay, tenth) time I got mated on the back row by a rook, I learned not to leave the king behind a wall of pawns. At Go, I never got my head round even the simplest joseki (corner opening). Beautifully elegant game, though.
If you're into chess, and computer science of the AI variety, read on.