Thursday, April 09, 2009

chi2009 -- day 3

CHI day 3. Everyone here has done a formative study to show how users go about some task on the computer or in their lives, generated a list of design criteria from that, mocked up a prototype, tested it in a lab study, and shown with bar graphs that people do whatever it is 22% more efficiently.

If I add up all these 22% efficiency gains in every area of my life, I somehow don't see myself as 22% better off. Lots of things seem kind of cool in isolation, and I wouldn't mind being able to use them. But there's some kind of deeper "why" question I don't really see being addressed in the formative studies. Maybe it comes down to an ROI analysis: just because something is more efficient doesn't mean it's worth your time to switch to it. Consider the costs versus benefits of switching the world from QWERTY to Dvorak: Dvorak is probably slightly more efficient, but the campaigning it would take to make that change would be better spent campaigning for something else.

So, when studying people doing stuff with an eye towards making that stuff easier, there ought to be a rule of thumb you can apply to find out when to drop the whole thing and say "this person is actually doing OK; they don't really need something new". I don't know what that rule might be; it would have to be pretty subtle. The most comically inappropriate example was the guy presenting on Monday who was studying love with the idea of how a device might "improve" it. Sometimes people miss each other: this emotion serves a purpose, and it's not at all clear to me that we would be better off if we had technological gimmicks to trick ourselves out of it.

That's obviously a more extreme example than someone trying to help you search for restaurant reviews faster, or keep "to-do" notes all in one place, but maybe there's some ineffable factor in all of these things that's being missed.

Tuesday, April 07, 2009

chi2009 -- day 2

Some interesting stuff at CHI today:

- "Difficulties in establishing common ground in multiparty groups using machine translation": Naomi Yamashita talked about a study where they had a Chinese speaker, a Korean speaker, and a Japanese speaker all working together on a task over machine-translated instant messaging. Apparently this kind of works with two languages, but it's a disaster with three. She did a detailed analysis of their chat logs to figure out where things went wrong: because you can't see the translation flaws between the other two speakers, you don't have enough information to understand why they make the adjustments they do in trying to communicate. So things spin out of control. They had some elaborate suggestions for fixing this by loading everyone up with more information. I didn't really care for their solutions; I think doing three-way translations probably requires a specialized translation engine that maps words consistently among the three languages, even at the cost of some inaccuracy. I bet people could adapt better to some translation weirdness if at least everything was consistent and predictable.

- "Spectator Understanding of Error in Performance": this was a poster in the poster room, and I chatted with one of the authors. They had a psychological theory about how people interpret mistakes in musical performances, especially for unfamiliar kinds of music: did the musician hit the wrong note, or did the listener misunderstand what the musician was trying to do? The guy kindly engaged me in philosophical musing about it as I tried to think how it might relate to a programmer's perception of program errors when debugging.

- "Resilience through technology adoption: merging the old and the new in Iraq": Gloria Mark talked about internet, cell phone, and satellite TV use in Iraq. Fascinating stuff, but the biggest takeaway for me was this: bring a printout of your slides, so if the projectors or computers go horribly, horribly awry, you can give your talk without a computer in front of you. I gather that an Iraqi would have known that!

- Another poster, by Anja Austerman and Seiji Yamada: a very clever study about how people try to train robots. They gave people the task of training a robot -- but the robot in fact already knew the task, and was programmed to behave or misbehave at certain times, in order to collect data about how the people tried to train it. The idea is that a robot for the home might go through a pre-training phase like this, to learn its owner's training style; later, the owner could train it to do real tasks in a way that felt natural to them.

- alt.chi is a set of sessions with stuff that doesn't get accepted into the regular conference. So the talks are a little weird or speculative. There was one about how computers can help "improve love"; one on "thanatosensitivity" in design (product designers should plan for the death of the user and how the family is going to deal with the deceased's online identity, cell phone contacts, etc); and a study about people who do medical research online before they go see their doctor.

- Jet Townsend from CMU had a gizmo you can wear on your arm that tells you where there are strong electromagnetic signals in the room around you. I wish I had the skills to do hardware hacking like that.

- Todd's been all a-twitter, and I've found out about cool stuff 2nd hand through him, like the MIT tour last night, and a Stockholm University shindig tonight, at which I met many cool people.

Monday, April 06, 2009

CHI2009 -- day 1

I'm in Boston for CHI, a conference about human-computer interaction. It's interesting, although some of the language in the talks is a little fluffy: lots of people doing research through design to explore a process of discovery that reveals empowering affordances.

Some of the actual stuff people are coming up with is pretty cool though, so I don't mean to knock the work itself. These designers have a genuine interest in making technology work better for people, so they're trying new interface techniques all the time and reporting on how well they work. For example, today I saw some multi-touch, pressure-sensitive mousepads, only with no mouse. Simple idea, but it works very nicely, and it's probably a lot cheaper than a multi-touch-sensitive screen.

There are also some wacky muppet-labs kind of inventions, like the "crowdsourced haptic interface" where you let anonymous strangers give you backrubs over the internet. I'm not sure what problem that solves, but I guess it was worth trying.

After a reception tonight, some nice MIT grad students took a bunch of us on a tour of the CSAIL building, where we saw roombas watering plants, Richard Stallman's office, and discarded dusty robots lurking in corners, and we walked the whole length of an infinite hallway. CSAIL is housed in a Frank Gehry building; all the plants died when they finally plugged up the leaks that had been watering them.

Wednesday, March 25, 2009

Syntonicity

I learned a new word today, "syntonicity". Syntonicity is when you understand something by identifying with it, rather than understanding it purely abstractly. Here's a presentation by Stuart Watt talking about syntonicity in the context of programming language design.

Beginners to programming (and, frankly, all of us sometimes) have problems stepping through a program in their heads to see if it will work. They remember their intent when they wrote the code, but they have trouble putting themselves in the computer's shoes and seeing the program through its eyes. Watt claims that Seymour Papert was trying to aid this kind of thinking when he invented the Logo language. In most graphics languages, there's a set of coordinates on the screen, with (0,0) in the upper left corner, from the viewer's perspective. In Logo, everything's done from a cursor's perspective (the "turtle") as it moves around the screen.
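To make the contrast concrete, here's a toy turtle sketched in Java (the class and method names are my own invention, not actual Logo): every command is phrased from the turtle's point of view, so the programmer can "play turtle" instead of computing absolute screen coordinates.

```java
// A toy turtle: commands are relative to the turtle's own position and
// heading, rather than to fixed (0,0)-in-the-corner screen coordinates.
class Turtle {
    double x = 0, y = 0;   // where the turtle currently is
    double heading = 0;    // in degrees; 0 = facing east

    void forward(double dist) {
        // "Forward" means whatever direction the turtle happens to face.
        x += dist * Math.cos(Math.toRadians(heading));
        y += dist * Math.sin(Math.toRadians(heading));
    }

    void left(double degrees) { heading += degrees; }
}
```

Drawing a square is just "forward 50, left 90" four times, each step imagined from the turtle's own perspective, rather than four pairs of absolute coordinates worked out in the programmer's head.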

Watt was involved in the creation of Hank, a specialized programming language for psychology students where information is shuffled around by a character named Fido. John Pane's language for children, Hands, does very nearly the same thing, but actually draws a dog ("Handy") on the screen with a pack of cards in its paw.

I don't know one way or the other if explicitly naming some part of your language or environment after an animal helps people think about programming the right way. But I can see where syntonicity could be a way of evaluating more subtle features about a language.

Consider for example the naming difference between F#'s workflows and Haskell's monads. They're similar concepts: a way of kind of quoting some code, so you can later say "execute it in the following peculiar way". The lines of the code are there, and you later say how information is passed from line to line. "Workflow" isn't a perfect description of that, but it gives the impression that you're going to hand it over to the processor later, and say, "do this work". "Monad" on the other hand is a mathematical term from category theory, that describes the mathematical relationship between the type signatures of the lines of code.

"Workflow" leads you to identify with the good old blue-collar working function you're going to hand this thing off to for execution, while "Monad", if you're a category theoretician, leads you to think about the types underlying the code as interesting theoretical objects: how can they be twisted around and changed and reapplied in new ways?
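The line-to-line plumbing idea can be sketched even in Java (this is my own toy "Maybe" type, not F#'s or Haskell's actual machinery): each line produces a wrapped value, and bind() is where you say how information is passed to the next line -- here, by short-circuiting the rest of the chain as soon as a value is missing.

```java
// A bare-bones "Maybe" monad: bind() decides how the value produced by
// one line of code is handed to the next -- here it short-circuits the
// rest of the chain whenever a value is missing.
class Maybe<T> {
    interface Step<A, B> { Maybe<B> apply(A value); }

    private final T value;                 // null stands for "nothing"
    private Maybe(T value) { this.value = value; }

    static <T> Maybe<T> just(T v) { return new Maybe<>(v); }
    static <T> Maybe<T> nothing() { return new Maybe<>(null); }

    <R> Maybe<R> bind(Step<T, R> next) {
        return value == null ? Maybe.<R>nothing() : next.apply(value);
    }

    T orElse(T fallback) { return value == null ? fallback : value; }
}
```

A chain like Maybe.just(6).bind(n -> Maybe.just(n * 7)) reads like ordinary sequential lines, but bind() is the plumbing you supplied for passing information between them -- which is the part that "workflow" and "monad" are both trying to name.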

This fits nicely with the goals of F# and Haskell: the former is meant to be a practical programming language for scientific and financial programming, and the latter is primarily a research testbed and playground.

So, I think the names for things do matter when designing a language. If you choose names that sound like verbs, and those verbs have subjects that might be imagined to have a consistent point of view about your programming model, maybe it will help guide users into the right mindset for reasoning about your language the way you'd like them to.

Wednesday, December 10, 2008

Java beginners: beware the black hole of "static"

I've just been grading a bunch of programming assignments from college sophomores taking their second term of Java, and I've noticed a dangerous pattern.

Suppose you have a class like this in your project:


class Foo {
    private int state = 42;
    void doStuff() { System.out.println(state); }
}

The right way to use the doStuff() method is like this:

Foo f = new Foo();
f.doStuff();

But a common mistake is for people to try to access it like this:

Foo.doStuff();

The latter doesn't work, because it's the syntax for referring to a static method: that is, a method that belongs to the class rather than to each particular object in the class.

So, if you make that mistake, Eclipse gives you the option to

change modifier of doStuff() to static?

Don't do it! Run away! That's not the right way to fix it. In most cases, it means you forgot to create a Foo object.

But suppose you take Eclipse's bad advice anyway. Then you get a new error, because now doStuff() can't see the instance variable state anymore. What does Eclipse offer to do for you? You guessed it:

change modifier of state to static?

If you do that, then your program will seem to work again. But the meaning is very different, and as soon as you try to use multiple instances of this Foo object, your state variable will appear to do crazy things -- because it will be shared between all your Foo's.
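To see the craziness concretely, here's Foo after taking the bad advice, with a hypothetical setter added just for illustration:

```java
// Foo after making state static: the variable now belongs to the class,
// so every Foo object reads and writes the same single copy of it.
class Foo {
    private static int state = 42;
    void doStuff(int newState) { state = newState; }  // hypothetical setter
    int currentState() { return state; }
}
```

Now calling doStuff(7) on one Foo changes what every other Foo reports from currentState(), including instances your code never touched -- exactly the "crazy" behavior described above.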

So beware of static. If you have a method that refers to variables (like state in this example) that are neither passed in as arguments nor declared inside the method's own braces, then it probably shouldn't be static.

As for making data items static, ask yourself this: imagine some future modification of this program where you need several different objects made from this class. Do you want them all to share the same value for this variable? If the answer is "no", then using static is bad form, even if it doesn't currently cause a problem.

So do the extra work and eliminate "static" from your Java code if you've added it "just to make it compile". It'll be tedious to get it to compile again -- it might require passing extra parameters to methods, or creating new object variables -- but take the time to figure it out. Learning this will give you the right intuitions, and will save you headaches later on in your programming career.

Friday, October 31, 2008

Learning from game designers

Here's a very readable presentation about how computer game designers
manage to build in a more evenly-sloped learning curve than application designers
do.

Princess-Rescuing Application

Wednesday, August 13, 2008

What kind of debugging should we be studying?

I really think, but just can't back this up, that a lot of the time when programmers are debugging, they're doing it on code that they're pretty familiar with. The research literature that empirically studies people debugging seems heavily biased towards comprehension and debugging of code people are seeing for the first time. This has got to be a methodological issue -- it's just so much easier to bring people into a lab and hand them code they haven't seen before.

Some people I've asked about this seem to disagree -- they think it's commonplace for programmers to be thrown into new, unfamiliar code in their workplace and have to figure it out. Could be, but I still think they spend most of their time around familiar code, hour by hour. And that they will judge their debugging tools by how well those tools work on familiar code.

This makes a big difference, because people who are debugging familiar code already have some understanding of how it works, and they're trying to compare the real code against the understanding in their head, to see where they, or it, are going wrong. So it's kind of a scientific process, coming up with hypotheses and refuting them by testing them out, until they narrow down the exact point where the code goes astray from their notions about it.

People debugging unfamiliar code, on the other hand, are going to start with a comprehension phase where they go bottom up, looking at it and just trying to make sense of it. If academic research, and product development research, puts too much focus into understanding that particular situation, I think it will be giving short shrift to the later phases of code maintenance.

Tuesday, July 15, 2008

Help with study of functional programmers

Are you currently developing or maintaining a medium to large-sized
program written in a functional language, such as Haskell, F#, OCaml,
or Lisp? I'm doing a study of functional programmers, as part of
a research internship at Microsoft, and I would like the opportunity to look over
your shoulder while you do debugging or coding on your project.

I'm looking for people with at least a year's experience doing
functional programming, and who are currently working on a real
project (i.e. for some purpose other than learning functional
programming). I'm only allowed to use people who can work in the US
(because of the gratuity, which is taxable income). I'd simply come
watch you work, and ask a few questions along the way. You'd do
whatever you would normally be doing. If you're near Seattle or
Portland, I'd come to your office for a couple of hours. If you're
not near Seattle or Portland, then we'd set you up with LiveMeeting
or some other remote screencast software so I can watch you from here.

Obviously security concerns are an issue - I will not share any
proprietary information that I learn about while visiting you.

In exchange for your help, Microsoft will offer you your pick of free
software off its gratuity list (which has about 50 items, including
Visual Studio Professional, Word for Mac, XBOX 360 games) or any book
from MS Press.

We're doing this because expert functional programmers have not been
studied much. We plan to share our findings through academic
publications, to help tool developers create debugging tools that are
genuinely helpful in real-world settings.

I'm hoping to finish my observations by August 8th, so please contact
me immediately if you're interested!

Thank you,

Chris Bogart
425-538-3562
t-chribo@microsoft.com