What kind of debugging should we be studying?
I really think, though I can't back this up, that a lot of the time when programmers are debugging, they're doing it on code they're pretty familiar with. The research literature that empirically studies people debugging seems heavily biased towards comprehension and debugging of code people are seeing for the first time. This has got to be a methodological issue -- it's just so much easier to bring people into a lab and hand them code they haven't seen before. Some people I've asked about this disagree -- they think it's commonplace for programmers to be thrown into unfamiliar code in their workplace and have to figure it out. Could be, but I still think that, hour by hour, they spend most of their time around familiar code, and that they'll judge their debugging tools by how well those tools work on familiar code.

This makes a big difference, because people debugging familiar code already have some understanding of how it works, and they're trying to compare the real code against the understanding in their head, to see where they, or it, went wrong. So it's kind of a scientific process: coming up with hypotheses and testing them, discarding the ones that fail, until they narrow down the exact point where the code departs from their notion of it. People debugging unfamiliar code, on the other hand, are going to start with a comprehension phase, working bottom-up, just trying to make sense of what they're reading. If academic research and product-development research put too much focus on that particular situation, I think they'll be giving short shrift to the later phases of code maintenance.
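To make that hypothesis-testing loop concrete, here's a minimal sketch -- the `discount` function, its bug, and the caller mix-up are all invented for illustration. The idea is that a programmer who knows this code encodes a belief from their mental model ("a discounted price never leaves the range [0, price]") as an executable check, runs it against inputs, and lets the failing case pinpoint exactly where the code diverges from the model:

```python
def discount(price: float, pct: float) -> float:
    # Familiar code with a subtle bug: pct is meant to be a fraction
    # (0.15), but some callers pass it as a percentage (15).
    return price * (1 - pct)

def check_hypothesis(cases: list[tuple[float, float]]) -> None:
    """Test the belief: 'the discounted price stays within [0, price]'."""
    for price, pct in cases:
        result = discount(price, pct)
        if 0 <= result <= price:
            print(f"holds:   discount({price}, {pct}) = {result}")
        else:
            # Hypothesis refuted -- this input is where the real code
            # and the mental model part ways.
            print(f"refuted: discount({price}, {pct}) = {result}")

# The second case exposes the fraction-vs-percentage mismatch.
check_hypothesis([(100.0, 0.15), (100.0, 15)])
```

Notice that this only works because the programmer already has a model worth encoding; someone seeing the code for the first time wouldn't know what invariant to assert, which is exactly the difference between the two situations.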