A New Wiki for Computer Science Symbols

Computer science is increasingly relevant to a wide range of professional fields, yet many working programmers today do not have a formal CS education. This makes it difficult for the uninitiated to read academic research in computer science and related fields. Keeping up with the latest research is not a job requirement for most programmers, but understanding fundamental papers (such as the ones listed on Papers We Love) is important for building on established knowledge.

However, jargon and unfamiliar symbols present a non-trivial barrier to entry. This came up in the discussion on a recent episode of the Turing Incomplete podcast. A few existing resources were mentioned such as Wikipedia’s math symbols page and Volume I of The Art of Computer Programming. None of these is ideal for new programmers who may not know the names of the symbols, though.

That’s why I started a CS notation wiki. There are currently four pages, one each for computational symbols, linguistic symbols, logical symbols, and mathematical operators. Each page currently has only a few entries, but requests for additional ones can be filed as GitHub issues. New contributions are certainly welcome, and should be submitted as pull requests. Contribution guidelines can be found on the wiki’s home page. Other suggestions can be submitted as comments here, via email, or on Twitter. Let me know how this could be more useful to you!

Wednesday Nerd Fun: Sudoku on the Richter Scale

Sudoku Richter Scale from MIT

I have wanted to write a post on Sudoku for a while now–especially computer programs that can solve puzzles or evaluate solutions. This week’s Nerd Fun post gives me a chance to bring up the topic, thanks to a recent post at Technology Review.

Sudoku puzzles are generally classified as easy, medium, or hard; puzzles with more starting clues are generally, but not always, easier to solve. Quantifying the difficulty mathematically, however, is hard.

Now Ercsey-Ravasz and Toroczkai say they’ve worked out a way to do it using algorithmic complexity theory. They point out that it’s easy to design an algorithm that solves Sudoku by testing every combination of digits to find the one that works. That kind of brute-force search guarantees an answer, but not quickly.

Instead, algorithm designers look for cleverer ways of finding solutions that exploit the structure and constraints of the problem. These algorithms and their behaviour are more complex, but they get an answer more quickly.
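To make the contrast concrete, here is a small sketch of a constraint-aware backtracking solver — the standard textbook approach, not the specific algorithm Ercsey-Ravasz and Toroczkai studied. It still searches, but it prunes any branch that violates a row, column, or box constraint instead of testing every combination of digits:

```python
def valid(grid, r, c, d):
    """True if digit d can be placed at (r, c) without breaking Sudoku rules."""
    if any(grid[r][j] == d for j in range(9)):   # row constraint
        return False
    if any(grid[i][c] == d for i in range(9)):   # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)          # top-left of the 3x3 box
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    """Fill a 9x9 grid (lists of ints, 0 = empty) in place by backtracking:
    try each legal digit in the first empty cell, recurse, undo on failure."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # backtrack: this digit led nowhere
                return False             # no digit fits this cell: dead end
    return True                          # no empty cells left: solved
```

The “twists and turns through state space” the researchers measure correspond roughly to the backtracking this kind of solver does: easy puzzles force digits almost immediately, while hard ones send the search deep down wrong branches before it recovers.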

The central point of Ercsey-Ravasz and Toroczkai’s argument is that because an algorithm reflects the structure of the problem, its behaviour–the twists and turns that it follows through state space–is a good measure of the difficulty of the problem.

To quantify the difficulty of Sudoku puzzles, Ercsey-Ravasz and Toroczkai evaluate the complexity of the problem as the solution progresses. A puzzle does not have a static state of difficulty: the solution gets chaotic before it starts to coalesce. The end result is a type of “Richter scale” for puzzles, although it doesn’t go all the way to 10–or even 4, at least not yet.

They say this scale correlates surprisingly well with subjective human ratings, with 1 corresponding to easy puzzles, 2 to medium puzzles, and 3 to hard puzzles. The platinum blond has a difficulty of 3.5789.

An interesting corollary is that no Sudoku puzzle is known with a difficulty of 4. And the number of clues is not always a good measure of difficulty either. Ercsey-Ravasz and Toroczkai say they tested many puzzles, including several with 17 clues, the minimum number, and a few with 18 clues.

These were all easier to solve than the platinum blond, which has 21 clues. That’s because the hardness of the puzzle depends not only on the number of clues but also on their position.

Wednesday Nerd Fun: Build Your Own Turing Machine

I sent this around to a few folks last week, but thought I would share it here as well. If you are not quite nerdy enough to know what a Turing Machine is (hint–you have already used one), check out the Wikipedia page on it.
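The core idea is simple enough to fit in a few lines of code. Here is a minimal sketch of a single-tape Turing machine simulator — the transition-table format and the example machine are my own illustration, not anything from Mike’s build:

```python
def run(transitions, tape, state="start", head=0, max_steps=1000):
    """Simulate a single-tape Turing machine.

    `transitions` maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Unwritten cells read as the
    blank symbol '_'. The machine halts when no transition matches.
    """
    cells = {i: s for i, s in enumerate(tape)}
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in transitions:
            break  # halt: no rule for this (state, symbol) pair
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells.get(i, "_") for i in range(min(cells), max(cells) + 1))

# Example machine: walk right, flipping every bit, and halt on a blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```

Running `run(flip, "0110")` walks the head across the tape and returns the flipped string. Everything your laptop does reduces, in principle, to shuffling symbols on a tape like this — which is what makes Mike’s physical version so satisfying.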

The version of the machine shown below was built by Mike Davey, who offers this description of himself and the project:

I live in northeast Wisconsin and love to build things. I’ve always liked to make things, take things apart, and see how stuff works. I’ve made all sorts of things, from a CNC router to a Gingery metal lathe, from a greenhouse to furniture; it doesn’t really matter, I find it all enjoyable. I’m also fortunate that, while they may not always understand what I’m building, my family has always been supportive.

The Turing machine came about from a long interest in the history of computers. It’s amazing how groundbreaking computer concepts that were developed during the ’40s and ’50s are now often taken for granted. Concepts that today seem as basic as the flip-flop or the stack were hard-won ideas in their day. The Turing machine is that type of concept; although it seems almost trivial today, it is still conceptually so powerful.

While thinking about Turing machines I found that no one had ever actually built one, at least not one that looked like Turing’s original concept (if someone does know of one, please let me know). There have been a few other physical Turing machines like the Lego of Doom, but none were immediately recognizable as Turing machines. As I am always looking for a new challenge, I set out to build what you see here.

Here is Mike’s Turing Machine:

And in true WNF spirit, here’s the Lego of Doom project that Mike mentions, set to the theme of the A-Team: