The odd thing about the much publicized Jeopardy match between humans and IBM's Watson is how differently the two sides are challenged by the game. Arguably the hardest part for the human players, acquiring and retaining information, is trivial for the computer, while certainly the hardest part for Watson, understanding everyday human language, is something almost all of us master as young children.
Natural language processing continues to chug along at a respectable pace. Things like Watson and even Google Translate represent remarkable advances. Still, they hardly seem like amazing advances in artificial intelligence. I'm not going to worry about the rise of the machines until they start beating us at games like Robert Abbott's Eleusis.
Abbott's game (old Eleusis -- you can buy a booklet of rules for the updated game from Mr. Abbott himself) made its national début in the Second Scientific American Book of Mathematical Puzzles and Diversions by Martin Gardner. It's easy to play but a bit complicated to score (not unnecessarily complicated -- there's a real flash of insight behind the process).
The dealer (sometimes referred to as 'Nature' or 'God' for what will be obvious reasons) writes a rule like "If the card on top is red, play a black card. If the card on top is black, play a red card." on a piece of paper, then folds it and puts it away. The dealer then shuffles the deck, randomly selects a card, puts it face up in the center of the table, then deals out the rest evenly to the players (the dealer doesn't get a hand). If the number of cards isn't divisible by the number of players, the extra cards are put aside.
The first player selects any card from his or her hand and puts it on top of the starter card. Based on the hidden rule, the dealer says 'right' and the card stays on the pile, or says 'wrong' and the card (called a mistake card) goes face up in front of the player. The players continue in turn.
The object for players is to have as few mistake cards as possible. The object for the dealer is to have the largest possible range in players' scores.
At the end of the first hand, the score is calculated for the dealer. The scoring method is clever but a bit complicated. For n players (excluding the dealer), have each player count his or her mistake cards, then multiply the smallest number by n-1 and subtract the product from the total number of mistake cards in front of the other players. For example, if there were four players with 7, 2, 9 and 8 mistake cards, you would multiply 2 (the lowest) times 3 (n-1) and subtract that from 24 (the sum of the rest), giving the dealer 18 points.
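The dealer's scoring rule is easy to get wrong on a first read, so here is a minimal sketch in Python (the function name is mine, and it assumes only the rule as stated above):

```python
def dealer_score(mistakes):
    """Dealer's score: sum of the other players' mistake cards,
    minus (lowest mistake count) * (n - 1)."""
    n = len(mistakes)
    lowest = min(mistakes)
    rest = sum(mistakes) - lowest  # mistake cards in front of the other players
    return rest - lowest * (n - 1)

print(dealer_score([7, 2, 9, 8]))  # the example from the text: 24 - 2*3 = 18
```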
In the second stage, the players take turns selecting cards from their mistake pile (leaving them face up so that other players can see what has been rejected). Play continues until someone goes out or until the dealer sees that no more cards can be accepted. At that point the rule is revealed.
Players' scores are then calculated with a formula similar to the one used for the dealer: each player multiplies his or her number of mistake cards by n-1, then subtracts the product from the total of the other players' mistake cards. If the difference is negative, the score is zero. The player who goes out first, or who has the fewest cards if no one goes out, gets an additional six points.
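The players' formula can be sketched the same way (again, the function name is mine; the six-point bonus for going out first is left out since it depends on the order of play, not just the counts):

```python
def player_scores(mistakes):
    """Each player's base score: the other players' total mistake cards,
    minus that player's own count times (n - 1), floored at zero."""
    n = len(mistakes)
    total = sum(mistakes)
    return [max(0, (total - m) - m * (n - 1)) for m in mistakes]

print(player_scores([7, 2, 9, 8]))  # [0, 18, 0, 0]
```

Note that with these formulas the player with the fewest mistake cards ends up with the same base score as the dealer, which is exactly the incentive alignment the scoring is designed to produce.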
While most 'new' games are actually collections of old ideas with new packaging, Abbott managed to come up with two genuinely innovative ideas for Eleusis: the use of induction and the scoring of the dealer. As someone who has spent a lot of time studying games, I may be even more impressed with the second. One of the fundamental challenges of game design is coming up with rules that encourage strategies that make the game more enjoyable for all the players. In this case, that means giving the dealer an incentive to come up with rules that are both challenging enough to stump some of the players and simple enough that someone will spot the pattern.
Eleusis has often been used as a tool for teaching the scientific method. You recognize a pattern, form a hypothesis, and test it. Gardner discusses this analogy at length. At one point, he even brings William James and John Dewey into the conversation.
The New York Times said that Robert Abbott's games were "for lovers of the unfamiliar challenge." Any AI people out there up to that challenge?