Sunday, June 17, 2012

Language Quiz!

I took this photograph this weekend in Xenia, Ohio.  Identify the alphabet, the language, and where the photo was taken.

Wednesday, March 21, 2012

Ambiguous constructors considered harmful

Eclipse is more than just a code editor: it is a very general and flexible framework for building tools for programmers. I've been using it in my own research to build a tool for debugging cognitive models. But I'm finding that the learning curve for writing good tools in Eclipse is incredibly steep. I don't want to be too harsh about it, because it's an incredible effort involving a lot of people over a lot of years, and maybe a project of that complexity necessarily trades ease of use for power. But there are at least some incremental ways the situation could be improved. Here are a couple of examples I've run into in recent months:

I've been working on a debugging plugin for Eclipse, and learning SWT in the process.  SWT is the library Eclipse uses for drawing buttons and windows on the screen. Today I discovered something horrible in their API. I needed to pick colors for a graph, and I wanted them to be a little darker if they were selected.  The right way to do this is to describe the color in terms of three values: hue, saturation, and brightness, but the Color object in SWT thinks of colors in terms of red, green, and blue.


So how do you convert between the two ways of describing color?  Easy: there's a class called RGB. You load it up with the color you want, then ask it for the hue, saturation, and brightness values you need.  Here are its two constructors:

RGB(float hue, float saturation, float brightness)
          Constructs an instance of this class with the given hue, saturation, and brightness.
RGB(int red, int green, int blue)
          Constructs an instance of this class with the given red, green and blue values.

If you already know H/S/B and want R/G/B, you pass it floating-point numbers, then query its member fields; if you already know R/G/B, you pass it integers and then call its getHSB method. As a result, the following code will do something completely insane:

int red = 100;
int green = 100;
int blue = 50;
float pastel = 0.85f;  // must be 0.85f: a plain 0.85 is a double and won't compile
RGB normal = new RGB(red, green, blue);                      // int constructor: red/green/blue
RGB bland = new RGB(red*pastel, green*pastel, blue*pastel);  // float constructor: hue/saturation/brightness!

I've seen this in more than one place in SWT: the same class has multiple constructors whose parameters have different types and entirely different meanings you'd never guess at, and the types are ones the compiler is apt to convert between automatically for you.
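The disease is easy to reproduce outside SWT. Here's a minimal, self-contained sketch (a hypothetical Channel class, not the real RGB) showing how the compiler silently switches constructors the moment any arithmetic promotes an int argument to a float:

```java
// Hypothetical class illustrating int-vs-float constructor overloads.
// The two constructors record which interpretation was chosen.
class Channel {
    final String interpretation;

    Channel(int r, int g, int b) {
        interpretation = "RGB";  // integer arguments: treated as red/green/blue
    }

    Channel(float h, float s, float b) {
        interpretation = "HSB";  // float arguments: treated as hue/saturation/brightness
    }
}
```

With this class, `new Channel(100, 100, 50)` picks the "RGB" constructor, but multiply any argument by a float variable, as in `new Channel(100*pastel, 100*pastel, 50*pastel)`, and Java promotes all three to float and quietly picks the "HSB" constructor instead, with no warning from the compiler.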

Another case of this same disease, this time in the JFace library: the story begins with the task of creating three nested widgets: a "Composite" (which is usually a window pane you see as part of Eclipse's interface), a "Tree", which is kind of a spreadsheet-looking control placed in that pane, and a "TreeViewer", an invisible thing between the Composite and the Tree that does some convenient behind-the-scenes accounting work. 

Here's the documentation for a TreeViewer's constructors:

TreeViewer(Composite parent)
          Creates a tree viewer on a newly-created tree control under the given parent.
TreeViewer(Composite parent, int style)
          Creates a tree viewer on a newly-created tree control under the given parent.
TreeViewer(Tree tree)
          Creates a tree viewer on the given tree control.

The first two constructors put a TreeViewer into a Composite, and create the Tree for you automatically.  The third constructor assumes you've already got a Tree inside a Composite, and it sticks the TreeViewer in the middle for you.

The irksome thing here is that Tree is a subclass of Composite.  In my case, I misread the documentation and tried to pass both a Tree and a "style" argument to the constructor. That matched the second constructor, not the third: it treated my Tree as if it were a window pane and cheerfully put another tree inside my tree (I heard you liked Trees, dawg!). The end result was a TreeViewer managing a different tree than the one I was writing data to, and I spent about a day stepping through SWT code trying to work out exactly where it was going wrong.
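The overload resolution at work here can be sketched with simplified stand-ins (hypothetical classes, not the real JFace ones). Because Tree is a subclass of Composite, passing a Tree plus a style int upcasts the Tree and matches the two-argument constructor instead of the Tree-wrapping one:

```java
// Simplified stand-ins for the JFace class hierarchy.
class Composite {}
class Tree extends Composite {}

// Hypothetical viewer recording which constructor the compiler chose.
class TreeViewerSketch {
    final String picked;

    TreeViewerSketch(Composite parent) {
        picked = "create a new Tree under parent";
    }

    TreeViewerSketch(Composite parent, int style) {
        picked = "create a new Tree under parent, styled";
    }

    TreeViewerSketch(Tree tree) {
        picked = "wrap the existing tree";
    }
}
```

Calling `new TreeViewerSketch(myTree)` does match the most specific constructor and wraps the existing tree, but `new TreeViewerSketch(myTree, style)` can only match the (Composite, int) constructor, which is exactly the "second tree inside my tree" trap described above.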

There's a growing body of research on how to make APIs usable for programmers. Jeff Stylos and Steven Clarke's research on constructors, in particular, concluded that it is better (for other reasons, too) to have constructors without arguments, and have users set up the objects after creation with method calls.
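A sketch of that create-then-configure style (a hypothetical API, not the real JFace one) shows why it sidesteps the problem: with a single no-argument constructor there is nothing for overload resolution to get wrong, and every setup choice is spelled out by a named method rather than a positional argument:

```java
// Simplified stand-ins for illustration only.
class Pane {}                       // stands in for a Composite
class TreeControl extends Pane {}   // stands in for a Tree

// Hypothetical viewer with one no-argument constructor and explicit setters.
class ViewerBuilderSketch {
    private TreeControl tree;
    private int style;

    ViewerBuilderSketch() {}  // one constructor, no arguments, no ambiguity

    void setTree(TreeControl t) { tree = t; }   // explicitly wrap an existing tree
    void setStyle(int s)        { style = s; }  // style is named, not positional

    boolean wrapsExistingTree() { return tree != null; }
}
```

A caller who wants to wrap an existing tree must say `viewer.setTree(myTree)`; there's no way to accidentally have the tree treated as a parent pane.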

In general, making Eclipse easier to write plugins for is a really interesting problem, and I hope after I get over my dissertation hump I'll have time to pitch in and help with it.

Monday, November 28, 2011

Darius and the Dragons

I love the writing system designed for the dragons in Bethesda's new game, Skyrim.

According to a Game Informer interview with the language's designers, the symbols are designed to look like they were gouged into stone by dragons with three talons and a dewclaw (that odd extra toe partway up your cat's leg that seems to serve no purpose).


Dragon writing from Skyrim

Bethesda (the company that makes Skyrim) was not the first bunch to encounter this graphic design problem. Cyrus the Great, an ancient Persian king of the 6th century BCE, had the same idea: he wanted to create monumental inscriptions to celebrate his grandiose awesomeness, he appears to have had a similar aesthetic, and his stoneworking tools were apparently not unlike Skyrim dragon talons:

"Newer" cuneiform of Cyrus and Darius, late 6th century BCE (from Wikipedia)

That's from the Behistun inscription in Iran, which is a really good sample of this writing by a slightly later king, Darius the Great. It is a solemn record of how awesome he is, and what he's conquered: it begins, "I (am) Darius, the great king, the king of kings, the king in Persia, the king of countries."

This is technically cuneiform, "wedge-shaped" writing. Cuneiform is the oldest kind of writing we know of, originally used by the Sumerians. But Sumerian cuneiform was invented for a different medium: they wrote it by pressing a wedge-shaped tool into clay, not by whacking spikes into cliffs.

The Sumerian writing system was already at least two thousand years old and on the decline when Cyrus and Darius came along, but Cyrus's people revived it and adapted it to his language (Old Persian) to make this type of monumental inscription. I don't know why, but I wonder if they had the same feelings about it as the Skyrim artists: it evokes dragon's claws and ancient secret meanings. Maybe Cyrus hoped to co-opt the mystique of ancient Sumerian and Akkadian writing, the same way we invent Latin mottos for modern institutions.

Older cuneiform, from the Code of Hammurabi (~1700 BCE)
But I think Cyrus improved on the state of the art. Compare the Behistun inscription with real old skool cuneiform from a thousand years earlier, when it was still in regular use: on the right here is a sample from the Code of Hammurabi. Cyrus's relatively simple script is almost a true alphabet (it represents syllables with about 50 symbols), but Hammurabi was writing the Akkadian language with more complicated cuneiform: more varied shapes and angles, uneven spacing, and symbols representing, like modern Japanese, a mix of sounds and meanings.

To me, Hammurabi's writing has a sloppy, random look, and even older inscriptions look sloppier still. I think the simpler systems of Cyrus the Great and Skyrim's dragon writing make more imposing inscriptions than Hammurabi's did. Each served its purpose well enough: Hammurabi was trying to codify a stable system of law in a writing system people of the time understood, while Cyrus was trying to make people's hair stand on end.

It's time for a cuneiform revival: let's come up with a system for writing English using this style of writing, and start marking gravestones and monuments this way. The Roman alphabet has been fun for these last couple thousand years, and it's perfectly adequate for business expense reimbursement vouchers and vampire romance novels, but clearly it lacks the gravitas of stonecarved cuneiform.

Monday, November 16, 2009

Code bloat

Really, Microsoft? It takes 200.2 MB of code to convert an XML file? Really?

Thursday, June 11, 2009

Use case stick figures

The stick figures in use case diagrams seem simple and featureless, but they undoubtedly have a rich inner life.

Thursday, April 09, 2009

chi2009 -- day 3

CHI day 3. Everyone here has done a formative study to show how users go about some task on the computer or in their lives, generated a list of design criteria from it, mocked up a prototype, tested it in a lab study, and shown with bar graphs that people do whatever it is 22% more efficiently.

If I add up all these 22% efficiencies across every area of my life, I somehow don't see myself as 22% better off. Lots of things seem kind of cool in isolation, and I wouldn't mind being able to use them. But there's some deeper "why" question I don't really see being addressed in the formative studies. Maybe it comes down to an ROI analysis: just because something is more efficient doesn't mean it's worth your time to switch to it. Consider the costs versus benefits of switching the world from QWERTY to Dvorak: Dvorak is probably slightly more efficient, but the amount of campaigning it would take to make the change would be better spent campaigning for something else.

So, when studying people doing stuff with an eye towards making that stuff easier, there ought to be a rule of thumb you can apply to find out when to drop the whole thing and say "this person is actually doing OK; they don't really need something new". I don't know what that rule might be; it would have to be pretty subtle. The most comically inappropriate example was the guy presenting on Monday who was studying love with the idea of how a device might "improve" it. Sometimes people miss each other: this emotion serves a purpose, and it's not at all clear to me that we would be better off if we had technological gimmicks to trick ourselves out of it.

That's obviously a more extreme example than someone trying to help you search for restaurant reviews faster, or keep "to-do" notes all in one place, but maybe there's some ineffable factor in all of these things that's being missed.

Tuesday, April 07, 2009

chi2009 -- day 2

Some interesting stuff at CHI today:

- "Difficulties in establishing common ground in multiparty groups using machine translation": Naomi Yamashita talked about a study where they had a Chinese speaker, a Korean speaker, and a Japanese speaker all working together on a task over machine-translated instant messaging. Apparently this kind of works with two languages, but it's a disaster with three. She did a detailed analysis of their chat logs to figure out where things went wrong: because you can't see the translation flaws between the other two speakers, you don't have enough information to understand why they make the adjustments they do in trying to communicate. So things spin out of control. They had some elaborate suggestions for fixing this by loading up everyone with more information. I didn't really care for their solutions; I think doing three-way translations probably requires a specialized translation engine that maps words consistently among the three languages, even at the cost of some accuracy. I bet people could adapt better to some translation weirdness if at least everything was consistent and predictable.

- "Spectator Understanding of Error in Performance": this was a poster in the poster room, and I chatted with one of the authors. They had a psychological theory about how people interpret mistakes in musical performances, especially for unfamiliar kinds of music: did the musician hit the wrong note, or did the listener misunderstand what the musician was trying to do? The guy kindly engaged me in philosophical musing about it as I tried to think how it might relate to a programmer's perception of program errors when debugging.

- "Resilience through technology adoption: merging the old and the new in Iraq": Gloria Mark talked about internet, cell phone, and satellite TV use in Iraq. Fascinating stuff, but the biggest takeaway for me was this: bring a printout of your slides, so if the projectors or computers go horribly, horribly awry, you can give your talk without a computer in front of you. I gather that an Iraqi would have known that!

- Another poster, by Anja Austermann and Seiji Yamada: a very clever study about how people try to train robots. They gave people the task of training a robot, but the robot in fact already knew the task and was programmed to behave or misbehave at certain times, in order to collect data about how the people tried to train it. Her idea was that a robot for the home might go through a pre-training phase like this to learn its owner's training style; then later the owners could train it to do real tasks in a way that felt natural to them.

- alt.chi is a set of sessions with stuff that doesn't get accepted into the regular conference. So the talks are a little weird or speculative. There was one about how computers can help "improve love"; one on "thanatosensitivity" in design (product designers should plan for the death of the user and how the family is going to deal with the deceased's online identity, cell phone contacts, etc); and a study about people who do medical research online before they go see their doctor.

- Jet Townsend from CMU had a gizmo you can wear on your arm, that tells you where there are strong electromagnetic signals in the room around you. I wish I had the skills to do hardware hacking like that.

- Todd's been all a-twitter, and I've found out about cool stuff 2nd hand through him, like the MIT tour last night, and a Stockholm University shindig tonight, at which I met many cool people.

Monday, April 06, 2009

CHI2009 -- day 1

I'm in Boston for CHI, a conference about human-computer interaction. It's interesting, although some of the language in the talks is a little fluffy: lots of people doing research through design to explore a process of discovery that reveals empowering affordances.

Some of the actual stuff people are coming up with is pretty cool though, so I don't mean to knock the work itself. These designers have a genuine interest in making technology work better for people, so they're trying new interface techniques all the time and reporting on how well they work. For example, today I saw some multi-touch, pressure-sensitive mousepads (only with no mouse). It's a simple idea, but it works very nicely, and is probably a lot cheaper than multi-touch-sensitive screens.

There are also some wacky muppet-labs kind of inventions, like the "crowdsourced haptic interface" where you let anonymous strangers give you backrubs over the internet. I'm not sure what problem that solves, but I guess it was worth trying.

After a reception tonight, some nice MIT grad students took a bunch of us over on a tour of the CSAIL building, where we saw roombas watering plants, Richard Stallman's office, discarded dusty robots lurking in corners, and we walked the whole length of an infinite hallway. CSAIL is housed in a Frank Gehry building, and all the plants died when they finally plugged up all the leaks that had been watering them.