HAMPDEN, Maine — David Bishop spends the school day as a mild-mannered custodian, but before the final bell rings, he grabs his chess boards and pieces and begins his second role.
“The Queen’s Gambit” is playing out in real life in Maine, where this custodian is coaching his schools’ chess teams to acclaim.
Bishop, a part-time chess coach and full-time custodian, led his elementary and middle school teams to state championship titles this year, drawing comparisons to the Netflix series about a chess prodigy inspired by a janitor.

Robert F. Bukaty, Associated Press
Custodian and chess coach David Bishop challenges 6th-grader Owen Isenhour during after-school practice April 25, 2023, in Hampden, Maine. Under Bishop's tutelage, the Reeds Brook Middle School chess team has gained national recognition.
Some of his players are good enough to beat their coach, proudly declaring “checkmate!”
“Initially, it was humiliating and demoralizing, but it didn’t take long for me to realize that’s a good thing. They’re getting stronger,” the 61-year-old said.
Nationwide, chess is riding a new wave of popularity, and it’s not just because of the popular Netflix miniseries based on the 1983 novel by Walter Tevis.
During the pandemic, a growing number of kids forced to stay at home for extended periods turned to Chess.com to relieve their boredom. The website and app allow visitors to learn the game, to play against each other or against a computer, and to get chess news.
Chess fans are also watching videos of grandmasters teaching strategies and livestreams of high-profile chess players facing off.
“What we are seeing is an unprecedented period of boom, like nothing before,” said Leon Watson, spokesperson for Chess.com. “It definitely feels like chess is having a moment.”

Robert F. Bukaty, Associated Press
Reeds Brook Middle School chess team member Lucien Paradis shakes hands after defeating his opponent during an after-school chess practice April 25, 2023, in Hampden, Maine.
In Hampden, Bishop’s coaching success followed a happy twist of fate.
He was burned out from his job in the telecommunications industry and took an early retirement package at age 50. He was exploring new opportunities in the field — and not having much luck — when someone told him about a school custodial job. He figured it would mean less stress.
He didn’t even know there was a chess club until after he’d begun work in 2013. He began volunteering with the chess club at Reeds Brook Middle School, and later at George B. Weatherbee Elementary School, as well.
Bishop learned chess the old-fashioned way, with a family chessboard and by experimenting with the board pieces. At age 10, he followed with keen interest the match in which American grandmaster Bobby Fischer defeated the Soviet Union’s Boris Spassky in 1972.
While Bishop enjoyed chess and was good at it, he didn’t join his high school chess club, worrying he would be typecast as a nerd. He regrets that now.
These days, thanks to its growing appeal, those stereotypes no longer apply.

Robert F. Bukaty, Associated Press
Owen Isenhour whispers advice to Eli Marquis during a Reeds Brook Middle School after-school chess practice April 25, 2023, in Hampden, Maine.
On a recent day, there was a buzz in the air at the Reeds Brook Middle School library where the chess club meets. Bishop’s team had just represented Maine at the national championships in Texas, where it came in eighth place out of 52 teams. The elementary school team competes this weekend in its national championships in Maryland.
The students quickly tossed their backpacks aside, sat down at library tables and launched into matches. Those who weren’t actively playing watched others’ moves intently.
Eli Marquis, 12, said the chess players are constantly learning new skills and tactics — like opening and closing moves — allowing them to improve and ensuring they don’t get bored.

Robert F. Bukaty, Associated Press
Members of the Reeds Brook Middle School chess team practice at the Hampden Academy library April 25, 2023, in Hampden, Maine.
“You can never run out of things to learn and to practice and to do, and you can just keep on getting better as long as you practice. There’s no end to it. Really,” he said.

Robert F. Bukaty, Associated Press
Eli Marquis, a Reeds Brook Middle School student, reacts during an after-school chess team match in April in Hampden, Maine. Part-time chess coach and full-time custodian David Bishop led his elementary and middle school teams to state championship titles this year.
Eddie LaRochelle, 13, compared chess to other competitive team sports. A strong work ethic and practice improve individual skills, and those individuals work together to achieve victory.
“You don’t need to work out every single day in the gym. To get stronger, you can exercise your brain with puzzles, chess and other things,” he said.
Team members said chess has taught them to think ahead, be strategic and consider the ramifications of decisions. And it helps with keeping on task and staying organized.
“Chess is so good for them, and most of them don’t know it,” their coach said. “They’re just playing chess, but it’s like a workout for the brain.”
Bishop understands comparisons to the janitor in “The Queen’s Gambit” — William Shaibel, played by actor Bill Camp — and he thinks it’s an entertaining series. The chess play is accurate and exciting, he said.

Robert F. Bukaty, Associated Press
Custodian David Bishop holds the trophy recently won by the Reeds Brook Middle School chess team in a national competition April 25, 2023, in Hampden, Maine. Bishop coached the middle school team to an eighth-place finish out of 52 schools in the 2023 National Middle School Championships in Round Rock, Texas.
Camp, the actor, has heard of the team’s success and hopes to pay a visit to the school to offer his congratulations. He had high praise for Bishop.
“What he’s doing is about as noble as one can do – he’s a teacher,” Camp said. “He’s doing the greatest service.”
Unlike the Netflix series’s janitor, Bishop is helping not just one girl in an orphanage, but dozens of kids of all skill levels and socioeconomic backgrounds.
For now, Bishop looks forward to seeing how far his teams can go. As the teams get better, he’s getting used to losing chess matches more frequently.
Riley Richardson, who placed 14th out of 386 competitors at the nationals, said the first time he beat his coach, he thought Bishop was letting him win. But now, he has beaten his coach a few times.
“A while ago, I actually beat him because I just started learning his weaknesses,” Richardson said. That weakness? He smiled and said: “Sometimes, he’s overthinking.”
How computers have (mostly) conquered chess, poker and more
Leon Neal // Getty Images
Computers have raced toward the future for decades, starting with manual punch cards and now reshaping how all of humanity operates.
Artificial intelligence is just one field in computing, referring not just to the mechanical guts of our machines but how we can teach machines to reason, strategize, and even delay their actions to seem and behave more, well, human. As experts have made more and more powerful AIs, they’ve sought ways to demonstrate how good those AIs are, which can be challenging to do in a relatable, quantifiable way.
Enter the classic game format. Games are tailor-made for AI demonstrations for multiple reasons, as many games are “solvable” (meaning AI can truly master them, mathematically speaking) and their contexts (fast, multifaceted, strategic) can allow programmers to show off truly multidimensional reasoning approaches.
To illustrate this, PokerListings assembled a list of breakthrough gaming wins for AI, from traditional board games to imperfect-information games to video games. The games listed here sometimes have little in common apart from the fact that AI can now beat human players at them all. They range from classic analog games like chess and Go to Texas Hold 'Em poker and today’s most popular multiplayer esports video games.
Keep reading to learn more about the eight significant instances when AIs beat human players at their own game.

TOM MIHALEK/AFP // Getty Images
- Year AI had a benchmark win: 1997
Chess has always led the way in technology, from games conducted by letter or telegram to early internet servers used to host primitive chess clients.
The game of chess is deceptively simple: Each player has 16 pieces, including eight identical pawns and eight other pieces that move in different patterns, and the goal is to trap, or checkmate, the opponent’s king. The mathematics of chess is technically finite, unless both players intentionally prolong the game forever as a thought experiment. That means computers have always been in the game, so to speak, learning chess moves and processing long lists of possibilities at faster and faster speeds.
In 1997, Russian chess grandmaster Garry Kasparov lost to IBM’s supercomputer Deep Blue, which could churn through millions of possible positions far faster than any human brain could.
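To give a flavor of that brute-force approach, here is a minimal sketch of a depth-limited game-tree search in Python. It assumes the open-source python-chess library and a toy material-count evaluation; Deep Blue’s real search and custom hardware were vastly more sophisticated, but the shape of the idea is the same.

```python
# A minimal sketch of brute-force game-tree search (negamax), assuming the
# third-party python-chess library. Deep Blue's real evaluation and custom
# hardware were far more sophisticated; this only illustrates the idea.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Toy evaluation: material balance from the side to move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Exhaustively search `depth` half-moves ahead and return the best score."""
    if depth == 0 or board.is_game_over():
        return material(board)
    best = -10**9
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the move whose subtree looks best for the side to move."""
    best, best_score = None, -10**9
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

print(best_move(chess.Board()))  # searches a few thousand positions for one move
```

Even at a shallow depth of three half-moves, this toy examines thousands of positions per decision, which is why raw computing speed mattered so much in 1997.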
Ben Hider // Getty Images
- Year AI had a benchmark win: 2011
'Jeopardy!' is a long-running American quiz show where contestants deliver their answers in the form of a question. In 2011, the show was the site of an astonishing victory when IBM’s artificial intelligence, Watson, won handily over two human contestants. And those two humans were nothing to sneeze at, either, as both were iconic former winners: Ken Jennings, who held the longest winning streak ever, and Brad Rutter, who had won more total prize money than any other contestant.
Watson was the collective name for a set of 10 racks of 10 powerful processors each. Watson also had to be trained not just for knowledge but for the style and structure of 'Jeopardy!' questions, making the victory all the more impressive.
Andreas Rentz // Getty Images
- Year AI had a benchmark win: 2013
For people over a certain age, Atari knowledge is practically guaranteed. The iconic video game and console manufacturer captured the public imagination, and its influence on pop culture was so great that even nongamers had to take notice. The Atari 2600 console was released in 1977 and represented a major step forward in home gameplay, with a joystick controller that combined directional input with a button. That means any computer attempting to do well at Atari games must both comprehend what is happening on screen and turn that information into actions involving direction as well as button presses.
In 2013, researchers at DeepMind published a study in which a computer that taught itself to play directly from the screen’s pixels outperformed a human expert at the games “Breakout,” “Enduro,” and “Pong.”
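That 2013 system was a deep Q-network: a neural network trained by reinforcement learning on raw screen pixels. The update rule it builds on is plain Q-learning, sketched below on a made-up one-dimensional “game.” Everything here, from the corridor environment to the constants, is a toy stand-in; the real system replaces the lookup table with a convolutional neural network and several training tricks.

```python
# A toy sketch of the Q-learning update underlying the 2013 Atari result.
# The real deep Q-network replaces this lookup table with a neural network fed
# raw screen pixels; the environment here is a made-up 1-D corridor "game".
import random
from collections import defaultdict

ACTIONS = [-1, +1]                  # move left or right along the corridor
GOAL, START = 5, 0
ALPHA, GAMMA, STEPS = 0.1, 0.95, 10_000

Q = defaultdict(float)              # Q[(state, action)] -> estimated future reward

def step(state, action):
    """Toy game: reward 1.0 for reaching the goal cell, 0.0 otherwise."""
    next_state = max(0, min(GOAL, state + action))
    return next_state, (1.0 if next_state == GOAL else 0.0), next_state == GOAL

state = START
for _ in range(STEPS):
    # Explore with random actions; because Q-learning is off-policy, it still
    # learns the value of acting greedily (the real DQN uses epsilon-greedy).
    action = random.choice(ACTIONS)
    next_state, reward, done = step(state, action)
    # Core update: nudge the estimate toward the reward plus the discounted
    # value of the best action available from the next state.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = START if done else next_state

# The learned greedy policy should point every cell toward the goal (+1).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```

The final printout shows the action the agent has learned to prefer in each cell, which should steer it toward the rewarding goal at the end of the corridor.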
Barry Chin/The Boston Globe // Getty Images
- Year AI had a benchmark win: 2015
Poker is the collective name for a family of card games of different styles, typically played for tokens or money. One of the major challenges with poker is that the computer — or any player — has imperfect information, meaning each player holds information the others cannot see. Unlike perfect-information games like Connect Four and checkers, poker doesn’t make all the pieces in play visible at once.
In computing terms, imperfect information translates to an amorphous black box with mystery contents. But in 2015, a computer algorithm called CFR+ broke the black box. Texas Hold 'Em is one of the most popular poker games, and a variation called heads-up limit has only two players, which makes “solving” the game with a computer simpler than when there are more players in the mix. The CFR+ algorithm essentially “solved” heads-up limit hold 'em, meaning it plays a strategy that no opponent, human or machine, can expect to beat over a lifetime of play.
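CFR+ is a refinement of counterfactual regret minimization, which repeatedly nudges each player’s strategy toward the actions they “regret” not having played more often and averages the result over many rounds of self-play. The Python sketch below shows that regret-matching core on rock-paper-scissors, a stand-in toy game chosen for brevity; the actual solver applied the same idea, heavily engineered, across billions of poker betting situations with hidden cards.

```python
# A toy sketch of regret matching, the self-play building block behind CFR/CFR+.
# Rock-paper-scissors stands in for poker; real solvers apply the same idea
# across billions of betting situations with hidden cards.
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0.0
    return 1.0 if BEATS[mine] == theirs else -1.0

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = {p: [0.0] * 3 for p in (0, 1)}        # accumulated regret per player
strategy_sums = {p: [0.0] * 3 for p in (0, 1)}  # running sum of played strategies

for _ in range(50_000):
    strategies = {p: strategy_from_regrets(regrets[p]) for p in (0, 1)}
    picks = {p: random.choices(range(3), weights=strategies[p])[0] for p in (0, 1)}
    for p, q in ((0, 1), (1, 0)):
        actual = payoff(ACTIONS[picks[p]], ACTIONS[picks[q]])
        for i in range(3):
            # Regret: how much better action i would have done than the action played.
            regrets[p][i] += payoff(ACTIONS[i], ACTIONS[picks[q]]) - actual
        strategy_sums[p] = [s + w for s, w in zip(strategy_sums[p], strategies[p])]

average = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print(dict(zip(ACTIONS, average)))  # approaches the unexploitable 1/3, 1/3, 1/3 mix
```

Because both copies keep minimizing regret against each other, their average strategies drift toward the unexploitable one-third mix of each move, which is the toy analogue of “solving” the game.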
Google // Getty Images
- Year AI had a benchmark win: 2016
“Go” is one of the oldest games still played by humans and involves a far more complex playing space than even chess. It’s played on a 19-by-19 grid that can hold hundreds of stones, making its strategies and number of possible moves exponentially greater than those in chess. For that reason, it was long believed that computers would not be able to master it.
In 2016, however, Google DeepMind’s AlphaGo program finally bested Lee Sedol, one of the world’s best human players, at “Go.” To achieve this, the system first employed deep learning, studying enormous numbers of human games to build a library of strong moves. Then two copies of the program played against each other, developing strategies beyond what the best human-conceived moves could offer.
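The “learn from humans, then improve through self-play” recipe is hard to show at Go scale, but the self-play half of it is simple enough to sketch. The hypothetical Python toy below learns the game of Nim (take one to three sticks from a pile; whoever takes the last stick wins) purely by playing against itself and remembering how each move turned out. AlphaGo’s real pipeline put deep neural networks and large-scale tree search on top of the same basic loop.

```python
# A toy self-play learner for Nim, standing in for Go. Two copies of the same
# agent play each other; each move's value estimate is nudged toward the final
# result from that mover's point of view (a simple Monte Carlo update).
import random
from collections import defaultdict

PILE, ACTIONS = 21, (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.2, 0.2, 50_000
Q = defaultdict(float)   # Q[(sticks_left, sticks_taken)] -> expected result for the mover

def choose(pile, explore=True):
    """Mostly take the best-looking move, sometimes a random one to keep exploring."""
    legal = [a for a in ACTIONS if a <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

for _ in range(EPISODES):
    pile, history = PILE, []              # history of (state, move), alternating movers
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    result = 1.0                          # the player who took the last stick wins
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (result - Q[(state, move)])
        result = -result                  # flip perspective for the other player

# The learned policy should take (pile % 4) sticks whenever that is a legal move,
# leaving the opponent a multiple of four.
print([choose(p, explore=False) for p in range(1, 9)])
```

After enough self-play games, the policy typically rediscovers the classic winning rule of leaving the opponent a multiple of four sticks, entirely without human examples.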
AAron Ontiveroz/The Denver Post // Getty Images
- Year AI had a benchmark win: 2017
“Pac-Man” is a classic arcade game in which players control a hungry yellow circle chowing down on white dots while being chased by colorful pixelated ghosts determined to catch them. “Ms. Pac-Man” is a notoriously tough sequel that proponents argue is even better than the original. To conquer it, an AI group called Maluuba, which had been acquired by Microsoft, designed a system fittingly described as “divide and conquer.” They divided all the actions in the game into “chunks,” like escaping ghosts or seeking out a particular dot, and then assigned a “manager” role to decide, in the moment, which move was the best strategic choice.
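Descriptions of that system have many narrow sub-agents each scoring the possible moves against its own goal, with a manager blending the scores into one decision. The Python sketch below is a hypothetical, stripped-down version of that aggregation step; the sub-agents, weights and toy game state are illustrative inventions, not Maluuba’s actual code.

```python
# A hypothetical sketch of the "divide and conquer" aggregation described above:
# each sub-agent scores the possible moves against its own narrow goal, and a
# manager combines the scores into one decision. The goals, weights and scoring
# rules here are illustrative, not the actual Ms. Pac-Man system.
from typing import Callable, Dict, List

ACTIONS = ["up", "down", "left", "right"]

def ghost_avoider(state: Dict) -> Dict[str, float]:
    """Prefer moves that leave more distance to the nearest ghost."""
    return {a: float(state["ghost_distance_after"][a]) for a in ACTIONS}

def pellet_seeker(state: Dict) -> Dict[str, float]:
    """Prefer moves that shrink the distance to the nearest remaining dot."""
    return {a: -float(state["pellet_distance_after"][a]) for a in ACTIONS}

SUB_AGENTS: List[Callable[[Dict], Dict[str, float]]] = [ghost_avoider, pellet_seeker]
WEIGHTS = [2.0, 1.0]   # the manager cares more about survival than about eating

def manager(state: Dict) -> str:
    """Blend every sub-agent's preferences and pick the best overall move."""
    totals = {a: 0.0 for a in ACTIONS}
    for weight, agent in zip(WEIGHTS, SUB_AGENTS):
        for action, score in agent(state).items():
            totals[action] += weight * score
    return max(totals, key=totals.get)

# Toy situation: a ghost is closing in from the left, a dot sits just above.
state = {
    "ghost_distance_after": {"up": 4, "down": 3, "left": 1, "right": 5},
    "pellet_distance_after": {"up": 1, "down": 3, "left": 2, "right": 2},
}
print(manager(state))   # -> "right": escaping the ghost outweighs the nearby dot
```

In the toy situation above, fleeing the ghost outweighs grabbing the nearby dot, so the manager chooses “right.”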
Leon Neal // Getty Images
- Year AI had a benchmark win: 2019
“StarCraft II” is a 2010 multiplayer, real-time strategy game that has been free-to-play since 2017. In real-time strategy (RTS) games, multiple players act simultaneously, so there is a huge competitive emphasis on making as many actions per minute (APM) as possible. The game has a huge esports scene, with professionals who can somehow reach or top 300 APM.
Google’s DeepMind set its sights on “StarCraft II” as a worthy challenge after its efforts with “Go,” Atari games and others. In 2019, its AlphaStar system reached the top 0.15% of the roughly 90,000 players on the game’s European servers at the time.
Suzi Pratt/FilmMagic // Getty Images
- Year AI had a benchmark win: 2019
“Dota 2” is a multiplayer online battle arena game, meaning teams of players — in “Dota 2,” two teams of five — face off on a premade battle map and fight to see who wins.
First released in 2013, “Dota 2” features more than 100 playable characters with different strengths and weaknesses. The game is massively popular and has a huge esports scene. Part of the appeal of trying to beat humans at a game like “Dota 2” is that these games make great, relatable examples of how AIs can think on their feet, so to speak, in complicated, ever-changing environments. OpenAI’s team of agents, OpenAI Five, so named because two teams of five players compete in “Dota 2,” defeated the world’s top-ranked “Dota 2” esports team in 2019.
This story originally appeared on PokerListings and was produced and distributed in partnership with Stacker Studio.