What Can We Learn from Games?

This holiday season I enjoyed giving, receiving, and playing several new card and board games with friends and family. These included classics such as cribbage, strategy games like Dominion and Power Grid, and the whimsical Munchkin.

Can video and board games teach us more than just strategy? What if games could teach us not to be better thinkers, but just to be… better? A while ago we discussed how Monopoly was originally designed as a learning experience to promote cooperation. Lately I have learned of two other such games in a growing genre and wanted to share them here.

The first is Depression Quest by Zoe Quinn (via Jeff Atwood):

Depression Quest is an interactive fiction game where you play as someone living with depression. You are given a series of everyday life events and have to attempt to manage your illness, relationships, job, and possible treatment. This game aims to show other sufferers of depression that they are not alone in their feelings, and to illustrate to people who may not understand the illness the depths of what it can do to people.

The second is Train by Brenda Romero (via Marcus Montano) described here with spoilers:

In the game, the players read typewritten instructions. The game board is a set of train tracks with box cars, sitting on top of a window pane with broken glass. There are little yellow pegs that represent people, and the player’s job is to efficiently load those people onto the trains. A typewriter sits on one side of the board.

The game takes anywhere from a minute to two hours to play, depending on when the players make a very important discovery. At some point, they turn over a card that has a destination for the train. It says Auschwitz. At that point, for anyone who knows their history, it dawns on the player that they have been loading Jews onto box cars so they can be shipped to a World War II concentration camp and be killed in the gas showers or burned in the ovens.

The key emotion that Romero said she wanted the player to feel was “complicity.”

“People blindly follow rules,” she said. “Will they blindly follow rules that come out of a Nazi typewriter?”

I have tried creating my own board games in the past, and this gives me renewed interest and a higher standard. What is the most thought-provoking moment you have experienced playing games?

The Political Economy of Scrabble: Currency, Innovation, and Norms

Scrabble Christmas ornaments made by Jennifer Bormann, 2011

In Scrabble, there is a finite amount of resources (letter tiles) that players use to create value (points) for themselves. Similarly, in the real world matter cannot be created, so much of human effort goes into rearranging the particles that exist into more optimal combinations. The way that we keep track of how desirable those new combinations are in the economy is with money. Fiat currency has no intrinsic value–it is just said to be worth a certain amount. Sometimes this value changes in response to other currencies. Other times, governments try to hold it fixed. The “law of Scrabble” has remained unchanged since the game was introduced in 1938–but that may be about to change.

Like any well-intentioned dictator, Scrabble inventor Alfred Butts tried to base the value of his fiat money–er, tiles–on a reasonable system:  the frequency of their appearance on the front page of the New York Times. As the English language and the paper of record have evolved over the years, though, the tiles’ stated value has remained static. This has opened the door for arbitrage opportunities, although some players try to enforce norms to discourage this type of play:

What has changed in the intervening years is the set of acceptable words, the corpus, for competitive play. As an enthusiastic amateur player I’ve annoyed several relatives with words like QI and ZA, and I think the annoyance is justified: the values for Scrabble tiles were set when such words weren’t acceptable, and they make challenging letters much easier to play.

That is a quote from Joshua Lewis, who has proposed updating Scrabble scoring using his open source software package Valett. He goes on to say:

For Scrabble, Valett provides three advantages over Butts’ original methodology. First, it bases letter frequency on the exact frequency in the corpus, rather than on an estimate. Second, it allows one to selectively weight frequency based on word length. This is desirable because in a game like Scrabble, the presence of a letter in two- or three-letter words is valuable for playability (one can more easily play alongside tiles on the board), and the presence of a letter in seven- or eight-letter words is valuable for bingos. Finally, by calculating the transition probabilities into and out of letters it quantifies the likelihood of a letter fitting well with other tiles in a rack. So, for example, the probability distribution out of Q is steeply peaked at U, and thus the entropy of Q’s outgoing distribution is quite low.
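Lewis's transition-probability point can be illustrated with a toy version: tally which letters follow each letter in a word list, then compute the entropy of each letter's outgoing distribution. This is a simplification of what Valett actually computes, and the function name is my own:

```python
import math
from collections import Counter, defaultdict

def outgoing_entropy(words):
    """For each letter, estimate the entropy (in bits) of the distribution
    of letters that follow it in the given word list. Low entropy means the
    letter almost always transitions to the same neighbor (Q -> U)."""
    following = defaultdict(Counter)
    for word in words:
        word = word.upper()
        for a, b in zip(word, word[1:]):
            following[a][b] += 1
    entropy = {}
    for letter, counts in following.items():
        total = sum(counts.values())
        entropy[letter] = -sum(
            (c / total) * math.log2(c / total) for c in counts.values()
        )
    return entropy
```

On a real lexicon, Q's outgoing distribution is steeply peaked at U, so its entropy comes out near zero, matching Lewis's example of a letter that fits poorly with most racks.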

Lewis’s idea seems to fit with a recent finding by Peter Norvig of Google. Norvig was contacted last month by Mark Mayzner, who studied the same kind of information as the Valett package but did it back in the early 1960s. Mayzner asked Norvig whether his group at Google would be interested in updating those results from five decades ago using the Google Corpus Data. Here’s what Norvig has to say about the process:

The answer is: yes indeed, I (Norvig) am interested! And it will be a lot easier for me than it was for Mayzner. Working 60s-style, Mayzner had to gather his collection of text sources, then go through them and select individual words, punch them on Hollerith cards, and use a card-sorting machine.

Here’s what we can do with today’s computing power (using publicly available data and the processing power of my own personal computer; I’m not relying on access to corporate computing power):

1. I consulted the Google Books Ngrams raw data set, which gives the number of times each word is mentioned (broken down by year of publication) in the books that have been scanned by Google.

2. I downloaded the English Version 20120701 “1-grams” (that is, word counts) from that data set given as the files “a” to “z” (that is, http://storage.googleapis.com/books/ngrams/books/googlebooks-eng-all-1gram-20120701-a.gz to http://storage.googleapis.com/books/ngrams/books/googlebooks-eng-all-1gram-20120701-z.gz). I unzipped each file; the result is 23 GB of text (so don’t try to download them on your phone).

3. I then condensed these entries, combining the counts for all years, and for different capitalizations: “word”, “Word” and “WORD” were all recorded under “WORD”. I discarded any entry that used a character other than the 26 letters A-Z. I also discarded any word with fewer than 100,000 mentions. (If you want you can download the word count file; note that it is 1.5 MB.)

4. I generated tables of counts, first for words, then for letters and letter sequences, keyed off of the positions and word lengths.
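The condensing pass in step 3 can be sketched in a few lines of Python. This assumes the tab-separated layout of the raw 1-gram files (word, year, match count, volume count) and is only a simplified stand-in for Norvig's actual processing:

```python
import re
from collections import Counter

def condense(lines, min_count=100_000):
    """Combine raw 1-gram rows (word, year, match count, volume count,
    tab-separated) across years and capitalizations, keep only words made
    of the 26 letters A-Z, and drop words below the mention threshold."""
    totals = Counter()
    for line in lines:
        word, _year, count, _volumes = line.split("\t")
        word = word.upper()  # "word", "Word", "WORD" all recorded as "WORD"
        if re.fullmatch(r"[A-Z]+", word):
            totals[word] += int(count)
    return {w: n for w, n in totals.items() if n >= min_count}
```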

Here is the breakdown of word lengths that resulted (average=4.79):


Sam Eifling then took Norvig’s results and translated them into updated Scrabble values:

While ETAOINSR are all, appropriately, 1-point letters, the rest of Norvig’s list doesn’t align with Scrabble’s point values….

This potentially opens a whole new system of weighing the value of your letters….  H, which appeared as 5.1 percent of the letters used in Norvig’s survey, is worth 4 points in Scrabble, quadruple what the game assigns to the R (6.3 percent) and the L (4.1 percent) even though they’re all used with similar frequency. And U, which is worth a single point, was 2.7 percent of the uses—about one-fifth of E, at 12.5 percent, but worth the same score. This confirms what every Scrabble player intuitively knows: unless you need it to unload a Q, your U is a bore and a dullard and should be shunned.

However, Norvig included repeats like “THE”–not much fun to play in Scrabble, and certainly not with the same frequency it appears in the text corpus (1 out of 14 words). With the help of his friend Kyle Rimkus, Eifling conducted a letter-frequency survey of words from the Scrabble dictionary and came up with these revisions to the scoring system:

Image from Slate

Eifling points out that Q and J seem quite undervalued in the present scoring system. So what is an entrepreneurial player to do? “Get rid of your J and your Q as quickly as possible, because they’re just damn hard to play and will clog your rack. The Q, in fact, is the worst offender,” he says.

Now, as with any proposed policy update that challenges long-standing norms, there has been some pushback against these recent developments. Stefan Fatsis at Slate quotes the old guard of Scrabble saying that the new values “take the fun out” of the game. Fatsis seems to hope that the imbalance between stated and practical values will persist:

Quackle co-writer John O’Laughlin, a software engineer at Google, said the existing inequities also confer advantages on better players, who understand the “equity value” of each tile—that is, its “worth” in points compared with the average tile. That gives them an edge in balancing scoring versus saving letters for future turns, and in knowing which letters play well with others. “If we tried to equalize the letters, this part of the game wouldn’t be eliminated, but it would definitely be muted,” O’Laughlin said. “Simply playing the highest score available every turn would be a much more fruitful strategy than it currently is.”

In political economy this is known as rent-seeking behavior. John Chew, a doctoral student in mathematics at the University of Toronto and co-president of the North American Scrabble Players Association, went so far as to call Valett a “catastrophic outrage.”

Who knew that the much beloved board game could provoke such strong feelings? With a fifth edition of the Scrabble dictionary due in 2014, it seems possible but highly unlikely that there will be an official response to these new findings. A more probable outcome is that we begin to see “black market” Scrabble valuations that incorporate the new data, much like underground economies emerge in states with strict official control over the value of their money. Yet again, evidence for politics in everyday life.

For more fun with letter games, data, and coding, check out Jeff Knupp’s guide to “Creating and Optimizing a Letterpress Cheating Program in Python.”

The Politics of Monopoly

Earliest known rendering of The Landlord’s Game, 1904

The official history of Monopoly, as told by Hasbro, which owns the brand, states that the board game was invented in 1933 by an unemployed steam-radiator repairman and part-time dog walker from Philadelphia named Charles Darrow. Darrow had dreamed up what he described as a real estate trading game whose property names were taken from Atlantic City, the resort town where he’d summered as a child….

The game’s true origins, however, go unmentioned in the official literature. Three decades before Darrow’s patent, in 1903, a Maryland actress named Lizzie Magie created a proto-Monopoly as a tool for teaching the philosophy of Henry George, a nineteenth-century writer who had popularized the notion that no single person could claim to “own” land. In his book Progress and Poverty (1879), George called private land ownership an “erroneous and destructive principle” and argued that land should be held in common, with members of society acting collectively as “the general landlord.”

Magie called her invention The Landlord’s Game, and when it was released in 1906 it looked remarkably similar to what we know today as Monopoly. It featured a continuous track along each side of a square board; the track was divided into blocks, each marked with the name of a property, its purchase price, and its rental value…. The Landlord’s Game’s chief entertainment was the same as in Monopoly: competitors were to be saddled with debt and ultimately reduced to financial ruin, and only one person, the supermonopolist, would stand tall in the end. The players could, however, vote to do something not officially allowed in Monopoly: cooperate. Under this alternative rule set, they would pay land rent not to a property’s title holder but into a common pot—the rent effectively socialized so that, as Magie later wrote, “Prosperity is achieved.”

From Harper’s, the entire thing is worth a read.

As an aside, John von Neumann and Oskar Morgenstern’s classic Theory of Games and Economic Behavior was based on earlier research by von Neumann entitled “On the Theory of Parlor Games.” They likely had in mind games of the type and complexity that humans actually play and find interesting, rather than the artificially simplified games that now fall under the purview of game theory.

Wednesday Nerd Fun: Sudoku on the Richter Scale

Sudoku Richter Scale from MIT

I have wanted to write a post on Sudoku for a while now–especially computer programs that can solve puzzles or evaluate solutions. This week’s Nerd Fun post gives me a chance to bring up the topic, thanks to a recent post at Technology Review.

Sudoku puzzles are generally classified as easy, medium, or hard; puzzles with more starting clues are usually, but not always, easier to solve. Quantifying the difficulty mathematically, though, is hard.

Now Ercsey-Ravasz and Toroczkai say they’ve worked out a way to do it using algorithmic complexity theory. They point out that it’s easy to design an algorithm that solves Sudoku by testing every combination of digits to find the one that works. That kind of brute force solution guarantees you an answer but not very quickly.

Instead, algorithm designers look for cleverer ways of finding solutions that exploit the structure and constraints of the problem. These algorithms and their behaviour are more complex, but they get an answer more quickly.
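To make the contrast concrete, here is a minimal backtracking solver that exploits the row, column, and box constraints to prune digits rather than testing every combination. This is an illustrative sketch, not the algorithm the researchers analyzed:

```python
def solve(grid):
    """Backtracking Sudoku solver. grid is a list of 81 ints (0 = empty);
    returns a solved copy, or None if the puzzle has no solution. Instead
    of enumerating all digit combinations, it fills one cell at a time and
    skips any digit already present in that cell's row, column, or box."""
    g = grid[:]

    def ok(i, d):
        r, c = divmod(i, 9)
        br, bc = 3 * (r // 3), 3 * (c // 3)
        for j in range(9):
            if g[r * 9 + j] == d or g[j * 9 + c] == d:
                return False
            if g[(br + j // 3) * 9 + (bc + j % 3)] == d:
                return False
        return True

    def backtrack():
        try:
            i = g.index(0)  # first empty cell
        except ValueError:
            return True  # no empty cells left: solved
        for d in range(1, 10):
            if ok(i, d):
                g[i] = d
                if backtrack():
                    return True
                g[i] = 0  # undo and try the next digit
        return False

    return g if backtrack() else None
```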

The central point of Ercsey-Ravasz and Toroczkai’s argument is that because an algorithm reflects the structure of the problem, its behaviour–the twists and turns that it follows through state space–is a good measure of the difficulty of the problem.

To quantify the difficulty of Sudoku puzzles, Ercsey-Ravasz and Toroczkai evaluate the complexity of the problem as the solution progresses. A puzzle does not have a static state of difficulty: the solution gets chaotic before it starts to coalesce. The end result is a type of “Richter scale” for puzzles, although it doesn’t go all the way to 10–or even 4, at least not yet.

They say this scale correlates surprisingly well with subjective human ratings, with 1 corresponding to easy puzzles, 2 to medium puzzles, and 3 to hard puzzles. The puzzle known as “platinum blond” has a difficulty of 3.5789.

An interesting corollary is that no Sudoku puzzle is known with a difficulty of 4. And the number of clues is not always a good measure of difficulty either. Ercsey-Ravasz and Toroczkai say they tested many puzzles, including several with 17 clues, the minimum number, and a few with 18 clues.

These were all easier to solve than the platinum blond, which has 21 clues. That’s because the hardness of the puzzle depends not only on the number of clues but also on their position.

Wednesday Nerd Fun: How to Win at Jeopardy

Alex: “You know Roger, you could set a new one day record.”

Roger: “What’s the old one?”

Roger Craig should have known the answer, because he held the old record.

Craig says it works like Moneyball — a reference to the book and movie about the statistical techniques used by legendary Oakland Athletics general manager Billy Beane to build a winning baseball team. Craig’s system also relied heavily on statistics.

“I actually downloaded this site called the Jeopardy! Archive, which is a fan-created site of all the questions and answers that are on the show.”

“Something like 211,000 questions and answers that have appeared on Jeopardy!,” says Esquire writer Chris Jones, a self-proclaimed “game-show nerd” who’s familiar with Craig’s tactics.

Using data-mining and text-clustering techniques, Craig grouped questions by category to figure out which topics were statistically common — and which weren’t.

“Obviously it’s impossible to know everything,” Jones says. “So he was trying to decide: What things did he need to know? He prepared himself in a way that I think is probably more rigorous than any other contestant.”
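Craig's first step, finding which topics recur, can be sketched with a toy dataset. The clues below are invented for illustration; the real archive has hundreds of thousands of entries and Craig's actual pipeline involved much more sophisticated text clustering:

```python
from collections import Counter

# Hypothetical miniature of the archive: (category, clue) pairs.
clues = [
    ("U.S. PRESIDENTS", "This president appears on the $5 bill."),
    ("U.S. PRESIDENTS", "He was the only president to resign."),
    ("WORLD CAPITALS", "This city on the Danube is Hungary's capital."),
    ("OPERA", "This Puccini opera is set in Nagasaki."),
]

def common_topics(clue_list, top=2):
    """Count how often each category appears: the crude first step toward
    deciding which topics are statistically worth studying."""
    return Counter(cat for cat, _ in clue_list).most_common(top)
```

Run on the full archive, a count like this tells you that studying presidents pays off far more often than studying opera.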

You can find the full article here; the link comes to us via Tyler Cowen. To see Roger at work, check out these clips:

Wednesday Nerd Fun: Games (and More) in Stata

Stata is a software program for running statistical analyses, as readers who have been to grad school in the social sciences in the last couple of decades will know. Compared to R, Stata is like an old TI-83 calculator, but it remains popular with those who spent the best years of their lives typing commands into its green-on-black interface. I recently discovered that Stata shares one important feature with the TI-83: the ability to play games. (For TI-83 games, see here and here.)

Eric Booth of Texas A&M shares this implementation of Blackjack in Stata:

The game is played by typing -blackjack- into the command window. The game then prompts the user for the amount she wants to bet (the default is $500, which replenishes after you lose it all or exit Stata) and whether to hit or stay. It doesn’t accurately represent all the rules and scenarios of a real game of blackjack (e.g., no doubling down), so don’t use it to prep for your run at taking down a Vegas casino.
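For flavor, here is a toy Python analogue of that core loop. The rules are deliberately simplified (a fixed stand-on-17 strategy instead of the interactive hit/stay prompt, no doubling down, no proper ace handling), and it is not a port of Booth's Stata code:

```python
import random

def play_hand(bet, bankroll=500, seed=None):
    """Toy blackjack hand: player and dealer each draw until reaching 17
    or more, busts lose, and ties go to the dealer. Returns the bankroll
    after settling the bet."""
    rng = random.Random(seed)

    def draw_until_17():
        total = 0
        while total < 17:
            # Ten-valued cards (10, J, Q, K) are four of the thirteen ranks.
            total += rng.choice([2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11])
        return total

    player, dealer = draw_until_17(), draw_until_17()
    if player <= 21 and (dealer > 21 or player > dealer):
        return bankroll + bet
    return bankroll - bet
```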

Booth’s blog provides other fun, unconventional uses of Stata as well. There’s a script that lets you Google from the Stata interface, one that lets you control iTunes, and even one for running commands from an iPhone.

This post is probably less “general interest” than most of the nerd fun posts, but I hope you enjoyed it.

Wednesday Nerd Fun: Create Your Own Crossword Puzzles

Whether you want to make your own crossword puzzles, or just wonder how they are created, this post is for you. A user over at StackExchange asked how to create a puzzle in LaTeX. Another user named Thorsten gave a very comprehensive answer, which forms the basis for this post.

The LaTeX package to use is cwpuzzle. It isn’t quite as easy as I had envisioned, but it is still relatively simple. The key part of Thorsten’s code looks like this:

|*    |[1]O |[2]P |E  |R     |A  |T     |I  |O    |N  |*     |*    |[3]B |*  |*  |*  |. 
|*    |*    |L    |*  |*     |*  |*     |*  |*    |*  |*     |[4]R |A    |N  |G  |E  |. 
|[5]E |*    |A    |*  |[6]M  |*  |*     |*  |*    |*  |*     |*    |R    |*  |*  |*  |. 
|S    |*    |[7]C |O  |O     |R  |D     |I  |N    |A  |T     |E    |G    |R  |I  |D  |. 
|T    |*    |E    |*  |D     |*  |*     |*  |*    |*  |*     |*    |R    |*  |*  |*  |. 
|I    |*    |V    |*  |E     |*  |*     |*  |[8]V |A  |R     |I    |A    |B  |L  |E  |. 
|[9]M |E    |A    |N  |*     |*  |*     |*  |*    |*  |*     |*    |P    |*  |*  |*  |. 
|A    |*    |L    |*  |[10]L |I  |N     |E  |G    |R  |[11]A |P    |H    |*  |*  |*  |. 
|T    |*    |U    |*  |*     |*  |*     |*  |*    |*  |X     |*    |*    |*  |*  |*  |. 
|I    |*    |E    |*  |*     |*  |[12]S |C  |A    |L  |E     |M    |O    |D  |E  |L  |. 
|O    |*    |*    |*  |*     |*  |*     |*  |*    |*  |S     |*    |*    |*  |*  |*  |. 
|N    |*    |*    |*  |*     |*  |*     |*  |*    |*  |*     |*    |*    |*  |*  |*  |. 

And here is the result:

The Puzzle

The Answers

A pretty neat tool, overall.

Crossword fans might also like this video, with remarks from a classic crossword puzzle “grid man”:

Wednesday Nerd Fun: Dictators and Sit-Coms

While it isn’t the Dictator Game familiar to students of game theory, smalltime industries’ Guess the Dictator/Sit-Com Character game promises some genuine, nerdy fun. From their intro:

Have you always thought of yourself as a sitcom character? Or maybe a world-class dictator? No, me either. But now you can. Pretend to be the bumbling sidekick or the heartless autocrat and our state-of-last-century’s-art algorithm will guess who you are. Can’t think of a sitcom character or dictator? Then answer the questions as yourself, and you’ll find out whom you most resemble.

So, whether you’re Gilligan or Fidel stuck on that island, answer the Yes/No questions as best you can and the computer will try to guess who you are.

The game is an example of Bayesian thinking on two levels. By asking yes/no questions, it narrows you into smaller and smaller categories, eventually arriving at a single answer, as in the children’s game “Guess Who.” On another level, if you go through the whole series of questions and the game arrives at the wrong answer, it asks you to provide a single additional question that would differentiate between who you were pretending to be and who the game guessed you were. Once a sufficient number of people have played, the game should theoretically guess correctly every time. Can you stump it?
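The first level, narrowing by yes/no questions, is just a binary decision tree, and the second level, grafting in a new question after a wrong guess, is a small mutation of that tree. A sketch of the idea, with my own names and structure rather than the site's actual code:

```python
class Node:
    """A yes/no question (internal node) or a final guess (leaf)."""
    def __init__(self, text, yes=None, no=None):
        self.text, self.yes, self.no = text, yes, no

    def is_leaf(self):
        return self.yes is None

def play(root, oracle):
    """Walk the tree, asking oracle(question) -> True/False at each
    internal node, and return the leaf holding the game's guess."""
    node = root
    while not node.is_leaf():
        node = node.yes if oracle(node.text) else node.no
    return node

def learn(leaf, question, new_character, answer_for_new):
    """On a wrong guess, graft the player's differentiating question into
    the tree: the old guess and the new character become its branches."""
    old_guess = Node(leaf.text)
    newcomer = Node(new_character)
    leaf.text = question
    leaf.yes, leaf.no = (
        (newcomer, old_guess) if answer_for_new else (old_guess, newcomer)
    )
```

Starting from one question with two leaves, every wrong guess adds exactly one question, so the tree grows smarter with each game played.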

Wednesday Nerd Fun: The Game of 99

This week’s entertainment is an original creation–my first computer game. The game is based on the board game “99” (French: “Le Jeu de 99”) that my girlfriend’s family introduced me to a couple of summers ago.

I made the game for several reasons. The first was to improve my very modest Python programming skills–this was my first program to include a GUI. An earlier version was text-based, and it was very ugly–email me for the source code if you want to see just how ugly. The second reason was the irony of turning an old-fashioned Mennonite game into a computer game. Perhaps I will try to preserve other antique games this way in the future. My third motivation was to create a multi-player online version, but that will have to wait.

Game play proceeds as follows. On each turn, a player (the game supports two or three) can draw a card, discard, skip their turn, or play. Each card is numbered with a range: the widest is 0 to 99, the narrowest a single spot, 99 to 99. This range specifies the spots on the board onto which the player can place a marker. You may have up to five cards in your hand at a time. The game allows players to hide or show their hands so that their competitors cannot see them. When you choose to play, you may place your marker on any open spot within the range specified by the card that you play. The game is won when a player places five of their markers in a row horizontally, vertically, or diagonally. (Full instructions.)

Opening screenshot

The game does not currently have a function to tell when a player has won, so you’ll have to check that visually for now. This is the first update that I hope to add, along with a “replay” button. As I said above, I’d also like to make the game playable online, which will probably mean learning Django.
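For what it's worth, that win check is a short function if markers are kept as (row, column) positions. This sketch assumes the 99 board is a 10x10 grid of spots and that markers live in a set of coordinate pairs, which may not match the game's actual internal representation:

```python
def has_won(markers):
    """Return True if any five of the given (row, col) marker positions
    form a horizontal, vertical, or diagonal line of five."""
    cells = set(markers)
    # Right, down, down-right, and down-left cover all four line directions.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r, c in cells:
        for dr, dc in directions:
            if all((r + k * dr, c + k * dc) in cells for k in range(5)):
                return True
    return False
```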

Thanks to Jennifer for her help as a beta tester. You can download the source code for the game here or the Mac OS X app version here. Please notify me of any problems with the game–developers are welcome to modify the source themselves under a Creative Commons license.