Etiquette in the Digital Age

It happens whenever new communication technology comes into widespread use. Standard forms of behavior that worked well in the past are less suitable for the new medium. When the telephone was invented, people were unsure how to greet the caller. Thankfully Alexander Graham Bell’s proposed “Ahoy!” was not adopted. Similarly, recent technologies such as text messaging and smartphone internet access are challenging existing norms and creating new ones. This post describes some of those changes, but should not be interpreted as taking a position on which are appropriate.

One taboo is asking someone a question when the information is readily available on the internet. If you want to chide the questioner you might use lmgtfy.com, which stands for Let Me Google That for You.

Voicemails–a relatively new technology themselves–are on the way out, replaced by a follow-up text message if necessary. Caity Weaver has a list of when she considers voicemails OK and when they are unwarranted.

Personally I use e-mail sign-offs as if I were writing a short letter, but Matthew Malady wants to kill this bit of formality:

[E]veryone has a breaking point. For me, it was the ridiculous variations on “Regards” that I received over the past holiday season. My transition from signoff submissive to signoff subversive began when a former colleague ended an email to me with “Warmest regards.”

Were these scalding hot regards superior to the ordinary “Regards” I had been receiving on a near-daily basis? Obviously they were better than the merely “Warm Regards” I got from a co-worker the following week. Then I received “Best Regards” in a solicitation email from the New Republic. Apparently when urging me to attend a panel discussion, the good people at the New Republic were regarding me in a way that simply could not be topped.

After 10 or 15 more “Regards” of varying magnitudes, I could take no more. I finally realized the ridiculousness of spending even one second thinking about the totally unnecessary words that we tack on to the end of emails. And I came to the following conclusion: It’s time to eliminate email signoffs completely. Henceforth, I do not want—nay, I will not accept—any manner of regards. Nor will I offer any. And I urge you to do the same.

The difficulty with these emerging norms is the disparity in how different people use the technologies. My siblings and I text more than we talk on the phone and are OK with short informal messages, but when our grandmother texts us it is more like an email. Some workers use e-mail for regular communication in their office and may send and receive 100 or more messages a day, while for others it is a much less commonly used tool. It seems likely that different norms could emerge in these various settings, but this will require attention when you are talking/writing to someone outside your usual network. As these norms emerge it will give us a chance to observe the development of micro-institutions in real time.

Off to ISA

The International Studies Association is meeting this week in San Francisco. This will be my first time attending, so I found Megan MacKenzie’s survival guide helpful. Here are some relevant Do’s:

  • Do remember that a full-on formal business suit isn’t necessarily the standard for men or women–especially if you are under 25.
  • Do keep a stash of protein/granola bars, fruit, yogurts in your room and in your conference bag.

And some Don’ts:

  • Don’t follow the advice “ask a question at every panel, but start by talking about your research first.”
  • Don’t take it personally if the presentation you have been feverishly preparing has less than 3 attendees or it turns out you’re on a smashemup panel with 4 other folks who do completely different things. (added by commenter Daniel Levine)

Safe travels to all who are attending, and maybe I will see some of you at the blogging reception.

Ruby’s Benevolent Dictator

The Ruby Logo

The first version of the Ruby programming language was developed by Yukihiro Matsumoto, better known as “Matz,” in 1995. Since then it has become especially popular for web development thanks to the advent of Rails by DHH. A variety of Ruby implementations have also sprung up, optimized for various uses. You may recall our recent discussion of RubyMotion as a way to develop iOS apps in Ruby. As with human languages, the spread and evolution of computer languages raises an interesting question: how different can two things be and still be the same?

To run with the human language example for a bit, consider the following. My native language is American English. (There are a number of regional variants within the US, so even the fact that American English is a useful category is telling.) I would recognize a British citizen with a cockney accent as a speaker of the same language, even though I would have trouble understanding him or her. I would not, however, recognize a French speaker as someone with whom I shared a language. The latter distinction exists despite the relative similarity between the languages–a shared alphabet, shared roots in Latin, and so on. So who decides whether two languages are the same?

In the case of human languages this is very much an emergent decision, worked out through the behavior of numerous individuals with little conscious thought for their coordination. This is where the human/computer language analogy fails us. The differences between computer languages are discrete, not continuous–there are measurable differences and similarities between any two language implementations, and intermediate steps between one implementation and another might not be viable. So who decides what is Ruby and what is not?

That is the question Brian Shirai raised in a series of posts and a conference talk. As of right now there is no clear process by which the community decides the future of Ruby, or what counts as a legitimate Ruby implementation. Matz is a benevolent dictator–but maybe not for life. His implementation is known to some as MRI–“Matz’s Ruby Implementation,” with the implication that this is just one of many.

Shirai is proposing a process by which the Ruby community could depersonalize such decisions by moving to a decision-making council. This depersonalization of power relations is at the heart of what it means to institutionalize. Shirai’s process consists of seven steps:

  1. Ruby Design Council made up of representatives from any significant Ruby implementation, where significant means able to run a base level of RubySpec (which is to be determined).
  2. A proposal for a Ruby change can be submitted by any member of the Ruby Design Council. If a member of the larger Ruby community wishes to submit a proposal, they must work with a member of the Council.
  3. The proposal must meet the following criteria:
    1. An explanation, written in English, of the change, what use cases or problems motivate the change, and how existing libraries, frameworks, or applications may be affected.
    2. Complete documentation, written in English, describing all relevant aspects of the change, including documentation for any specific methods whose behavior changes or behavior of new methods that are added.
    3. RubySpecs that completely describe the behavior of the change.
  4. When the Council is presented with a proposal that meets the above criteria, any member can decide that the proposal fails to make a case that justifies the effort to implement the feature. Such veto must explain in depth why the proposed change is unsuitable for Ruby. The member submitting the proposal can address the deficiencies and resubmit.
  5. If a proposal is accepted for consideration, all Council members must implement the feature so that it passes the RubySpecs provided.
  6. Once all Council members have implemented the feature, the feature can be discussed in concrete terms. Any implementation, platform, or performance concerns can be addressed. Negative or positive impact on existing libraries, frameworks or applications can be clearly and precisely evaluated.
  7. Finally, a vote on the proposed change is taken. Each implementation gets one vote. Only changes that receive approval from all Council members become the definition of Ruby.

Step 3.2 is a particularly interesting one for students of politics. As you may have guessed, Matz is Japanese. (This is somewhat ironic since Ruby is currently the most readable language for English speakers–see this example if you don’t believe me.) Many discussions about Ruby take place on Japanese message boards, and some non-Japanese developers have even learned Japanese so that they can participate in these discussions. English is the lingua franca of the international software development community, so Shirai’s proposal makes sense but it is not uncontroversial.
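To give a flavor of the readability claim (this is my own snippet, not the example linked above, and the variable names are purely illustrative), idiomatic Ruby often reads close to plain English:

```ruby
# Illustrative sketch: Ruby's block syntax and string interpolation
# let simple data transformations read almost like prose.
groceries = ["milk", "eggs", "bread"]

# Build a reminder sentence for each item.
reminders = groceries.map { |item| "Remember to buy #{item}." }

reminders.each { |line| puts line }
```

Even someone who has never written Ruby can usually guess what this prints, which is a large part of the language’s appeal.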

In Shirai’s own words this proposal would provide the Ruby community with a “technology for change.” That is exactly what political institutions are for–organizing the decision-making capacity of a community. This proposal and its eventual acceptance, rejection, or modification by the Ruby community will be interesting for students of politics to keep an eye on, and may be the topic of future posts.

Coughing at Classical Concerts

Not being an opera fan myself I will take their word for it:

Classical concerts come with a set of very strict rules for the public: you cannot applaud while the music plays (the only exception being after opera arias), you are supposed to dress up, and there should be complete silence from the audience during the performance. And that urge to cough should be repressed until the applause. Yet, it turns out that coughing is more frequent during the performance.

Here’s the abstract from Andreas Wagener’s paper on the topic:

Concert etiquette demands that audiences of classical concerts avoid inept noises such as coughs. Yet, coughing in concerts occurs more frequently than elsewhere, implying a widespread and intentional breach of concert etiquette. Using the toolbox of (behavioral) economics, we study the social costs and benefits of concert etiquette and the motives and implications of individually disobeying such social norms. Both etiquette and its breach arise from the fact that music and its “proper” perception form parts of individual and group identities, convey prestige and status, allow for demarcation and inclusion, produce conformity, and affirm individual and social values.

Micro-institutions indeed.

See also: Miller and Page on the “Standing Ovation Problem”

Phony Rules of English Grammar

The phrase “to boldly go where no man has gone before,” popularized by Star Trek, includes a split infinitive–but the grounding for this prohibition is shakier than you may think.

You have heard the rules before: Don’t end a sentence with a preposition. Don’t split an infinitive. Don’t start with a conjunction. But who makes these rules? How did they become incorporated into English grammar?

One culprit is Robert Lowth, who advised against ending English sentences with prepositions based on an earlier Latin rule. Similarly, according to Smithsonian Magazine, Henry Alford popularized the prohibition against splitting infinitives in A Plea for the Queen’s English.

In Latin, sentences don’t end in prepositions, and an infinitive is one word that can’t be divided. But in a Germanic language like English, as linguists have pointed out, it’s perfectly normal to end a sentence with a preposition and has been since Anglo-Saxon times. And in English, an infinitive is also one word. The “to” is merely a prepositional marker. That’s why it’s so natural to let English adverbs fall where they may, sometimes between “to” and a verb.

We can’t blame Latinists, however, for the false prohibition against beginning a sentence with a conjunction, since the Romans did it too (Et tu, Brute?). The linguist Arnold Zwicky has speculated that well-meaning English teachers may have come up with this one to break students of incessantly starting every sentence with “and.” The truth is that conjunctions are legitimately used to join words, phrases, clauses, sentences—and even paragraphs.

This is a case where a little learning is a dangerous thing. Because the rules are easy to remember, snobs can readily point them out in writing or speech. There is also a desire for social acceptability: no one wants to look stupid, even if the reasons for the rule make no sense. Writers trying to stick to the letter of the law often contort their sentences, while the better practice is often simply to say what sounds natural.

Micro-institutions can seem so ingrained that we fail to question them. Just going with the flow can sometimes make sense, but looking a little deeper can help to expose senseless rules or useless norms. The key is to understand which rules fall into which category. I do not have an answer now. But it’s something I would like to know more about.

Hackers vs. Diplomats

XKCD’s Map of the Internet, 2006

Katherine Maher’s Foreign Policy piece got a lot of (deserved) attention last week. If the topic interests you, go read the whole thing. I’ll highlight the parts that are most relevant to our recent conversations on internet politics.

On the web as geography:

Like all new frontiers, cyberspace’s early settlers declared themselves independent — most famously in 1996, in cyberlibertarian John Perry Barlow’s “A Declaration of the Independence of Cyberspace.” Barlow asserted a realm beyond borders or government, rejecting the systems we use to run the physical universe. “Governments of the Industrial World,” he reproached, “You have no sovereignty where we gather.… Cyberspace does not lie within your borders.” …

Barlow was right, in part. Independence was a structural fact of cyberspace, and free expression and communication were baked into the network. The standards and protocols on which the Internet runs are agnostic: They don’t care whether you were in Bangkok, Buenos Aires, or Boise. If they run into an attempt to block traffic, they merely reroute along a seemingly infinite network of decentralized nodes, inspiring technologist John Gilmore’s maxim: “The Net interprets censorship as damage and routes around it.”

On the promise of the internet for promoting freedom:

Information has always been power, and governments have long sought to control it. So for countries where power is a tightly controlled narrative, parsed by state television and radio stations, the Internet has been catastrophic. Its global, decentralized networks of information-sharing have routed around censorship — just as Gilmore promised they would. It gives people an outlet to publish what the media cannot, organize where organizing is forbidden, and revolt where protest is unknown.

On the changing reality–increasingly state-based control:

Recently, the network research and analytics company Renesys tried to assess how hard it would be to take the world offline. They assessed disconnection risk based on the number of national service providers in every country, finding that 61 countries are at severe risk for disconnection, with another 72 at significant risk. That makes 133 countries where network control is so centralized that the Internet could be turned off with not much more than a phone call.

It seems our global Internet is not so global.

From my perspective I can only hope that we will find the equivalent of “internet mountains” that will remain hard to govern. It is possible that some nation states will even facilitate this. (I am thinking here of The Pirate Bay’s move from a US-based .com domain to a Swedish .se address.) The emperor may still be far away, but he’s getting closer.

James C. Scott on the Politics of Everyday Life

Scott photographed at home for an interview with NYT

We talk a lot on this blog about micro-institutions. I initially used the term in October, 2011, and did not know of anyone else using it at the time. Since then I have found a paper from 2011 that uses the term, but I do not have any more specific date information. In the coming weeks I plan to flesh out more of what I mean by a micro-institution and review what we have learned about them so far here on YSPR.

Undoubtedly the work of James C. Scott influenced my thinking on the politics of everyday life. That’s why I was excited to see this passage in his new book, Two Cheers for Anarchism:

For the peasantry and much of the early working class historically, we may look in vain for formal organizations and public manifestations. There is a whole realm of what I (JCS) have called “infrapolitics” because it is practiced outside the visible spectrum of what usually passes for political activity. The state has historically thwarted lower-class organization, let alone public defiance….

By infrapolitics I have in mind such acts as foot-dragging, poaching, pilfering, dissimulation, sabotage, desertion, absenteeism, squatting, and flight. Why risk getting shot for a failed mutiny when desertion will do just as well?… The large-mesh net political scientists and most historians use to troll for political activity utterly misses the fact that most subordinate classes have historically not had the luxury of open political organization. That has not prevented them from working microscopically, cooperatively, complicitly, and massively at political change from below.

Certainly the overlap between infra-politics and micro-institutions is not one-for-one. Scott focuses on resistance and subversion, whereas I tend to emphasize cooperation and the role of norms in clarifying expectations of social behavior. Nevertheless Scott’s pioneering work has been hugely influential on my thinking thus far, and all of his books come highly recommended.

See also: 

Scott speaks at Cornell on The Art of Not Being Governed

Does the Internet Have a Political Disposition?

Was the Civil War a Constitutional Fork?

Shortly after Aaron Swartz’s untimely suicide, O’Reilly posted their book Open Government for free on Github as a tribute. The book covers a number of topics from civil liberties and privacy on the web to how technology can improve government, with each chapter written by a different author. My favorite was the fifth chapter by Howard Dierking. From the intro:

In many ways, the framers of the Constitution were like the software designers of today. Modern software design deals with the complexities of creating systems composed of innumerable components that must be stable, reliable, efficient, and adaptable over time. A language has emerged over the past several years to capture and describe both practices to follow and practices to avoid when designing software. These are known as patterns and antipatterns.

The chapter goes on to discuss the Constitution and the Articles of Confederation as pattern and antipattern, respectively. In the author’s own words he hopes to “encourage further application of software design principles as a metaphor for describing and modeling the complex dynamics of government in the future.”

In the spirit of Dierking’s effort, I will offer an analogy of my own: civil war as fork. In open source software a “fork” occurs when a subset of individuals involved with the project take an existing copy of the code in a new direction. Their contributions are not combined into the main version of the project, but instead to their new code base which develops independently.

This comparison seems to hold for the US Civil War. According to Wikipedia,

In regard to most articles of the Constitution, the document is a word-for-word duplicate of the United States Constitution. However, there are crucial differences between the two documents, in tone and legal content, and having to do with the topics of states’ rights and slavery.

Sounds like a fork to me. There’s a full list of the “diffs” (changes from one body of text or code to another) on the same wiki page. But to see for myself, I also put the text of the US Constitution on Github, then changed the file to the text of the CSA Constitution. Here’s what it looks like visually:

usa-csa-diffs

As the top of the image says, there are 130 additions and 119 deletions required to change the US Constitution into that of the Confederacy. Many of these are double-counts since, as you can see, replacing “United States” with “Confederate States” counts as both a deletion of one line and an addition of a new one.

I did not change trivial differences like punctuation or capitalization, nor did I follow the secessionists’ bright idea to number all subsections (which would have overstated the diffs). Wikipedia was correct that most of the differences involve slavery and states’ rights. Another important difference is that the text of the Bill of Rights is included–verbatim–as Section 9 of Article 1 rather than as amendments.
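The double-counting behavior can be sketched in a few lines of Ruby. This is a naive set-difference, not the longest-common-subsequence algorithm real diff tools (and GitHub) use, and the preamble lines are paraphrased for illustration; but it shows how a single changed line registers as one deletion plus one addition:

```ruby
# Naive sketch of line-level diff counting. Real diff tools use an
# LCS-based algorithm; this only conveys why one changed line counts
# as both a deletion and an addition. Text is paraphrased.
old_lines = [
  "We the People of the United States,",
  "do ordain and establish this Constitution",
]
new_lines = [
  "We, the people of the Confederate States,",
  "do ordain and establish this Constitution",
]

deletions = old_lines - new_lines  # lines only in the old version
additions = new_lines - old_lines  # lines only in the new version

puts "#{additions.length} additions, #{deletions.length} deletions"
```

One edited preamble line yields a count of one addition and one deletion, which is exactly the double-counting at work in the 130/119 totals above.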

In other words, the constitution of the CSA was a blatant fork of the earlier US version. Are there other cases like this?

How to Be a Dictator in the Age of the Internet

Having taken a look at electronic voting in Friday’s post, today we look at the other side of the coin: how can dictators use the internet to stay in power? Laurier Rochon has a few answers in a free e-book, The Dictator’s Practical Internet Guide to Power Retention.

A dictator’s goals for the internet are to destroy security and anonymity. The three essential conditions for achieving these goals are:

  1. Relative political stability (no protests in the streets)
  2. Centralized telecommunications infrastructure (one ISP)
  3. Non-democratic selection of officials

Once these conditions are in place, you can begin to exert control over the populace and will be well on your way to lifelong rule. As dictator, you will get to make the following decisions:

  1. What is the right trade-off between economic prosperity and tight regulation? (the Dictator’s Dilemma)
  2. How much entertainment will you allow? (more cat videos, less protest)
  3. What will be the punishment for violating your rules? (breaking kneecaps of violators, taking it out on the populace at large)

Being a dictator is not easy, but with a few key decisions on internet policy your life can be a lot simpler. Laurier also shares his tips in a recorded talk.

This short, partly tongue-in-cheek book is worth a read if you like the talk. I also look forward to some winter break reading on this topic with The Digital Origins of Dictatorship and Democracy and possibly Consent of the Networked.

The Politics of Monopoly

Earliest known rendering of The Landlord’s Game, 1904

The official history of Monopoly, as told by Hasbro, which owns the brand, states that the board game was invented in 1933 by an unemployed steam-radiator repairman and part-time dog walker from Philadelphia named Charles Darrow. Darrow had dreamed up what he described as a real estate trading game whose property names were taken from Atlantic City, the resort town where he’d summered as a child….

The game’s true origins, however, go unmentioned in the official literature. Three decades before Darrow’s patent, in 1903, a Maryland actress named Lizzie Magie created a proto-Monopoly as a tool for teaching the philosophy of Henry George, a nineteenth-century writer who had popularized the notion that no single person could claim to “own” land. In his book Progress and Poverty (1879), George called private land ownership an “erroneous and destructive principle” and argued that land should be held in common, with members of society acting collectively as “the general landlord.”

Magie called her invention The Landlord’s Game, and when it was released in 1906 it looked remarkably similar to what we know today as Monopoly. It featured a continuous track along each side of a square board; the track was divided into blocks, each marked with the name of a property, its purchase price, and its rental value…. The Landlord’s Game’s chief entertainment was the same as in Monopoly: competitors were to be saddled with debt and ultimately reduced to financial ruin, and only one person, the supermonopolist, would stand tall in the end. The players could, however, vote to do something not officially allowed in Monopoly: cooperate. Under this alternative rule set, they would pay land rent not to a property’s title holder but into a common pot—the rent effectively socialized so that, as Magie later wrote, “Prosperity is achieved.”

From Harper’s, the entire thing is worth a read.

As an aside, John von Neumann and Oskar Morgenstern’s classic Theory of Games and Economic Behavior was based on earlier research by von Neumann entitled “On the Theory of Parlor Games.” They likely had in mind games of the type and complexity that humans actually play and find interesting, rather than the artificially simplified games that now fall under the purview of game theory.