Thursday, April 09, 2009
CHI day 3. Everyone here has done a formative study to show how users go about some task on the computer or in their lives, generated a list of design criteria from that, mocked up a prototype, tested it in a lab study, and shown with bar graphs that people do whatever it is 22% more efficiently.
If I add up all these 22% efficiency gains in every area of my life, I somehow don't see myself as 22% better off. Lots of things seem kind of cool in isolation, and I wouldn't mind being able to use them. But there's some kind of deeper "why" question I don't really see being addressed in the formative studies. Maybe it comes down to an ROI analysis: just because something is more efficient doesn't mean it's worth your time to switch to it. Consider the costs versus benefits of switching the world from QWERTY to Dvorak: Dvorak is probably slightly more efficient, but the amount of campaigning it would take to make the switch would be better spent campaigning for something else.
So, when studying people doing stuff with an eye towards making that stuff easier, there ought to be a rule of thumb you can apply to find out when to drop the whole thing and say "this person is actually doing OK; they don't really need something new". I don't know what that rule might be; it would have to be pretty subtle. The most comically inappropriate example was the guy presenting on Monday who was studying love with the idea of how a device might "improve" it. Sometimes people miss each other: this emotion serves a purpose, and it's not at all clear to me that we would be better off if we had technological gimmicks to trick ourselves out of it.
That's obviously a more extreme example than someone trying to help you search for restaurant reviews faster, or keep "to-do" notes all in one place, but maybe there's some ineffable factor in all of these things that's being missed.
Posted by Chris Bogart at 7:29 AM
Tuesday, April 07, 2009
Some interesting stuff at CHI today:
- "Difficulties in establishing common ground in multiparty groups using machine translation": Naomi Yamashita talked about a study where they had a Chinese speaker, a Korean speaker, and a Japanese speaker all working together on a task over machine-translated instant messaging. Apparently this kind of works with two languages, but it's a disaster with three. She did a detailed analysis of their chat logs to figure out where things went wrong: because you can't see the translation flaws between the other two speakers, you don't have enough information to understand why they make the adjustments they do in trying to communicate. So things spin out of control. They had some elaborate suggestions for fixing this, by loading up everyone with more information. I didn't really care for their solutions; I think doing three-way translations probably requires a specialized translation engine that maps words consistently among the three languages, even at the cost of some inaccuracy. I bet people could adapt better to some translation weirdness if at least everything was consistent and predictable.
- "Spectator Understanding of Error in Performance": this was a poster in the poster room, and I chatted with one of the authors. They had a psychological theory about how people interpret mistakes in musical performances, especially for unfamiliar kinds of music: did the musician hit the wrong note, or did the listener misunderstand what the musician was trying to do? The guy kindly engaged me in philosophical musing about it as I tried to think how it might relate to a programmer's perception of program errors when debugging.
- "Resilience through technology adoption: merging the old and the new in Iraq": Gloria Mark talked about internet, cell phone, and satellite TV use in Iraq. Fascinating stuff, but the biggest takeaway for me was this: bring a printout of your slides, so if the projectors or computers go horribly, horribly awry, you can give your talk without a computer in front of you. I gather that an Iraqi would have known that!
- Another poster, by Anja Austerman and Seiji Yamada: It was a very clever study about how people try to train robots. They gave people the task of training a robot. But the robot in fact already knew the task, and was programmed to behave or misbehave at certain times, in order to collect data about how the people tried to train it. Her idea was that a robot for the home might go through a pre-training phase like this, to learn its owner's training style; then later the owner could train it to do real tasks in a way that felt natural to them.
- alt.chi is a set of sessions with stuff that doesn't get accepted into the regular conference, so the talks are a little weird or speculative. There was one about how computers can help "improve love"; one on "thanatosensitivity" in design (product designers should plan for the death of the user and how the family is going to deal with the deceased's online identity, cell phone contacts, etc.); and a study about people who do medical research online before they go see their doctor.
- Jet Townsend from CMU had a gizmo you can wear on your arm, that tells you where there are strong electromagnetic signals in the room around you. I wish I had the skills to do hardware hacking like that.
- Todd's been all a-twitter, and I've found out about cool stuff second-hand through him, like the MIT tour last night and a Stockholm University shindig tonight, at which I met many cool people.
Posted by Chris Bogart at 8:52 PM
Monday, April 06, 2009
I'm in Boston for CHI, a conference about human-computer interaction. It's interesting, although some of the language in the talks is a little fluffy: lots of people doing research through design to explore a process of discovery that reveals empowering affordances.
Some of the actual stuff people are coming up with is pretty cool though, so I don't mean to knock the work itself. These designers have a genuine interest in making technology work better for people, so they're trying new interface techniques all the time and reporting on how well they work. For example, today I saw some multi-touch pressure-sensitive mousepads, only with no mouse. Simple idea, but it works very nicely, and it's probably a lot cheaper than multi-touch sensitive screens.
There are also some wacky muppet-labs kind of inventions, like the "crowdsourced haptic interface" where you let anonymous strangers give you backrubs over the internet. I'm not sure what problem that solves, but I guess it was worth trying.
After a reception tonight, some nice MIT grad students took a bunch of us on a tour of the CSAIL building, where we saw Roombas watering plants, Richard Stallman's office, and discarded dusty robots lurking in corners, and walked the whole length of an infinite hallway. CSAIL is housed in a Frank Gehry building, and all the plants died when they finally plugged up all the leaks that had been watering them.
Posted by Chris Bogart at 8:35 PM
Wednesday, March 25, 2009
I learned a new word today, "syntonicity". Syntonicity is when you understand something by identifying with it, rather than understanding it purely abstractly. Here's a presentation by Stuart Watt talking about syntonicity in the context of programming language design.
Beginners to programming (and, frankly, all of us sometimes) have problems stepping through a program in their heads to see if it will work. They remember their intent when they wrote the code, but they have trouble putting themselves in the computer's shoes and seeing the program through its eyes. Watt claims that Seymour Papert was trying to aid this kind of thinking when he invented the Logo language. In most graphics languages, there's a set of coordinates on the screen, with (0,0) in the upper left corner, from the viewer's perspective. In Logo, everything's done from a cursor's perspective (the "turtle") as it moves around the screen.
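The difference between the two perspectives can be sketched with a toy example (my own illustration, in Python rather than Logo; the function and class names here are made up for the sketch). The same square is described once in absolute screen coordinates and once from the turtle's point of view:

```python
import math

# Screen-coordinate style: the programmer plots absolute points,
# reasoning from outside the picture.
def draw_square_absolute(x, y, size):
    """Return the corners of a square in screen coordinates."""
    return [(x, y), (x + size, y), (x + size, y + size), (x, y + size)]

# Turtle style: the programmer "is" the turtle, issuing relative
# commands -- walk forward, turn -- that you could act out yourself.
class Turtle:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # heading in degrees
        self.path = [(0.0, 0.0)]

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def right(self, angle):
        self.heading -= angle

def draw_square_turtle(size):
    t = Turtle()
    for _ in range(4):
        t.forward(size)
        t.right(90)
    return t.path
```

Both produce a square, but the turtle version is syntonic: to predict what it draws, you imagine walking the path yourself, rather than computing coordinates in your head.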
Watt was involved in the creation of Hank, a specialized programming language for psychology students where information is shuffled around by a character named Fido. John Pane's language for children, Hands, does very nearly the same thing, but actually draws a dog ("Handy") on the screen with a pack of cards in its paw.
I don't know one way or the other if explicitly naming some part of your language or environment after an animal helps people think about programming the right way. But I can see where syntonicity could be a way of evaluating more subtle features about a language.
Consider for example the naming difference between F#'s workflows and Haskell's monads. They're similar concepts: a way of kind of quoting some code, so you can later say "execute it in the following peculiar way". The lines of the code are there, and you later say how information is passed from line to line. "Workflow" isn't a perfect description of that, but it gives the impression that you're going to hand it over to the processor later, and say, "do this work". "Monad" on the other hand is a mathematical term from category theory, that describes the mathematical relationship between the type signatures of the lines of code.
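The mechanism itself, stripped of both names, can be sketched in plain Python (a toy illustration of my own, not actual F# or Haskell syntax; `bind`, `parse_int`, and the rest are made-up names). The "peculiar way" of passing results from line to line here is: stop the whole chain as soon as any step fails.

```python
def bind(value, step):
    """Pass `value` on to the next step, unless a previous step failed."""
    return None if value is None else step(value)

def parse_int(s):
    return int(s) if s.strip().lstrip("-").isdigit() else None

def reciprocal(n):
    return None if n == 0 else 1.0 / n

def run(source):
    # The "lines" of the quoted computation; bind decides how
    # information flows from each line to the next.
    v = parse_int(source)
    v = bind(v, reciprocal)
    v = bind(v, lambda x: x * 100)
    return v
```

So `run("4")` gives `25.0`, while `run("0")` and `run("abc")` give `None` without any explicit error checks in `run` itself: the threading logic lives entirely in `bind`. Swap in a different `bind` and the same lines of code execute in a different peculiar way.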
"Workflow" leads you to identify with the good old blue-collar working function you're going to hand this thing off to for execution, while "Monad", if you're a category theoretician, leads you to think about the types underlying the code as interesting theoretical objects: how can they be twisted around and changed and reapplied in new ways?
This fits nicely with the goals of F# and Haskell: the former is meant to be a practical programming language for scientific and financial programming, and the latter is primarily a research testbed and playground.
So, I think the names for things do matter when designing a language. If you choose names that sound like verbs, and those verbs have subjects that might be imagined to have a consistent point of view about your programming model, maybe it will help guide users into the right mindset for reasoning about your language the way you'd like them to.
Posted by Chris Bogart at 5:17 PM