The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game through brute-force search speed and clever hand-crafted ‘shortcuts’. It looked ahead through possible sequences of moves to work out which would be best.
The AlphaGo artificial intelligence, by contrast, was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns. This means that similar techniques could be applied to other AI domains (read: jobs) that require the recognition of complex patterns, long-term planning and decision-making.
In China, Japan and South Korea, Go is hugely popular and is even played by celebrity professionals. But the game has long interested AI researchers because of its complexity. The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19 × 19 grid. But the average 150-move game contains more possible board configurations (about 10^170) than there are atoms in the Universe, so it can’t be solved by algorithms that exhaustively search for the best move, as Deep Blue did with chess.
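The order of magnitude is easy to sanity-check. Here is a minimal sketch in Python counting raw arrangements of the 361 board points; the 10^170 figure in the article refers to the smaller set of legal positions, and the ~10^80 atom count used below is the usual rough estimate, assumed here for comparison:

```python
# Rough combinatorics behind the "more configurations than atoms" claim.
# Each of the 361 points on the 19 x 19 board is empty, black, or white,
# which bounds the number of raw arrangements at 3^361.
raw_arrangements = 3 ** 361
print(f"3^361 ~ 10^{len(str(raw_arrangements)) - 1}")   # 3^361 ~ 10^172

# Only a fraction of raw arrangements are legal Go positions; that legal
# subset is the ~10^170 figure. Either number dwarfs the commonly cited
# ~10^80 atoms in the observable Universe (an assumed estimate here).
atoms = 10 ** 80
print(raw_arrangements > atoms ** 2)                    # True
```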
Chess is less complex than Go, but it still has too many possible configurations to solve by brute force alone. Instead, programs cut down their searches by looking a few turns ahead and judging which player would have the upper hand. In Go, recognizing winning and losing positions is much harder: stones have equal values and can have subtle impacts far across the board.
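A minimal sketch of that kind of depth-limited lookahead, assuming hypothetical `moves` and `evaluate` callables rather than any real engine’s API:

```python
# A minimal sketch of depth-limited minimax lookahead: search a few plies
# ahead, then score the leaf positions with a heuristic instead of playing
# the game out to the end. The `moves` and `evaluate` callables are
# hypothetical stand-ins, not any real chess or Go engine's API.
from typing import Callable, Iterable, TypeVar

S = TypeVar("S")  # a board position

def minimax(pos: S, depth: int, maximizing: bool,
            moves: Callable[[S], Iterable[S]],
            evaluate: Callable[[S], float]) -> float:
    """Return a heuristic value for `pos`, looking `depth` plies ahead."""
    children = list(moves(pos))
    if depth == 0 or not children:
        return evaluate(pos)  # the hard part: who has the upper hand?
    values = (minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children)
    return max(values) if maximizing else min(values)
```

The whole approach hinges on the `evaluate` step: in chess, material counts and piece values give a workable heuristic, but in Go every stone has the same nominal value and its influence reaches across the whole board, so no comparably simple scoring function exists.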