Wednesday, October 10, 2007

5 Hints for Beginning Haskellers

Here are some things that have caused me headaches in learning Haskell.

  1. Function calls. Functions are called by writing the name and the arguments separated by spaces. If a function f has two parameters, a and b, you call it with f a b, not f(a,b). The latter would be calling a one-argument function whose single argument is the pair (a,b). You can do things that way if you like, but it'll make things a lot trickier later.
  2. Precedence. Those spaces in the function call syntax are about the tightest-binding thing in the language. So if distance is an integer, and show is an (Int -> String), then show distance ++ " miles" is good syntax, because show distance is grouped together before the ++ is applied. On the other hand, if c is a character and cs is a string, then length c:cs is a type error: Haskell parses it as (length c) : cs, applying length to the lone character and then trying to prefix the result onto the list cs. Instead, you want length (c:cs).
  3. Parens in patterns. This is really an example of the precedence issue above, but it trips me up all the time. You need more parens on the left-hand side of function definitions than you'd think. I don't know how many times I've written code like the sample below, leaving out the parens at first.
     data Coord = Coord Int Int

     manhattandist :: Coord -> Int
     manhattandist Coord x1 y1 = x1 + y1    -- WRONG
     manhattandist (Coord x1 y1) = x1 + y1  -- RIGHT
  4. Arrows in type declarations. The type of (+) is Int -> Int -> Int (specializing to Int for simplicity), and we're supposed to accept that the first two Ints are the arguments and the last Int is the return value. But it turns out there's a cool reason for this. An equivalent type declaration would be Int -> (Int -> Int), because -> groups to the right. You can think about addition this way, and Haskell cooperates nicely: + is a function that takes one integer, and returns a function. So plus x y = x + y is equivalent to plus x = \y -> x + y (search for "lambda" in the Haskell docs if you don't know that backslash syntax). So if you need a one-argument function (that returns a function) for some reason, it's OK to go ahead and define it as a two-argument function returning a number; Haskell doesn't make a distinction, and it's sometimes more convenient to express it that way. (There's a small example after this list.)
  5. Monads. These are famously hard to get. There are 10,000 tutorials out there, and most of them help a little. I'd recommend you start by learning to use the important standard ones, like Maybe, IO, and List, more or less by rote. Beyond that, if you're in a situation where you'd need to roll your own, I'd say just forget monads exist and write the code yourself. Once you really understand the ugliness involved firsthand, monads will be an obvious benefit. Anyway, that's my plan right now. I think I get them well enough to explain them, but I make a lot of stupid mistakes when I try to use them, so I probably don't understand them as well as I think I do. (More on this later -- I'm taking two classes right now that rely on Haskell, so I'll comment on this more as I have little insights.)
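To make hint 4 (and a bit of hint 2) concrete, here's a tiny example you could load into ghci; the names here are mine, not anything standard:

plus :: Int -> Int -> Int
plus x y = x + y

-- Since -> groups right, plus :: Int -> (Int -> Int), so applying
-- it to a single argument hands back a function:
addFive :: Int -> Int
addFive = plus 5

-- The lambda form from hint 4 means exactly the same thing:
plus' :: Int -> Int -> Int
plus' x = \y -> x + y

-- And per hint 2, application binds tightest, so this parses as
-- (addFive 10) : [], giving [15]:
example :: [Int]
example = addFive 10 : []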

Friday, September 07, 2007

Types to distinguish parameters and dimensions

Suppose a language had a rule that said a function could only have one parameter of a given type, and each dimension of a multidimensional array had to be indexed by a different type.

This would have several benefits:

  1. You'd get extra fine-grained checking; you couldn't possibly get parameters mixed up.
  2. When making array slices or currying, you wouldn't have to concern yourself with the ordering of the parameters.
  3. You wouldn't need to distinguish between optional named parameters and ordinal parameters. The name of a parameter would be (or would be derived from) its type.
With functions like + that seem to require two items of the same type, you could interpret the two (or more) arguments as a collection; lists, after all, already contain homogeneously typed members.
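Haskell newtypes already give a taste of the first benefit; as a small illustration (the types and names here are my own invention):

newtype Width  = Width Int
newtype Height = Height Int

area :: Width -> Height -> Int
area (Width w) (Height h) = w * h

-- area (Width 3) (Height 4) typechecks;
-- area (Height 4) (Width 3) is rejected by the compiler,
-- so the two parameters can never be silently swapped.

A language built around the rule could go further and let you supply the arguments in any order, since the types alone already name the roles.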

Monday, July 23, 2007

Python to the rescue: ICFP contest 2007

The ICFP contest was great fun this year, as usual, and I actually arranged travel and moving plans so that I would be in one place for the weekend to work on it.

I resolved to work in Scala this year, because it seemed like a practical functional language I'd like to know better; and it allows imperative programming with the Java libraries, so I thought it would be a smooth learning curve.

Well, I spent the first day butting my head up against Scala, and dealing with condo repair contractors and a fussy HOA board in the meantime: at the end of the day I had basically no running code and a splitting headache. The cure: beer and Python.

The second day I changed to Python, which I really haven't used *that* much, but it's easier to figure out how to do things. I'm not sure why -- maybe it's a question of documentation, or maybe it's just that I don't get functional programming as well as I thought I did.

In Python I managed to get an interpreter that basically worked, reading the DNA file and creating images using Tkinter. I don't think it was ever flawless. There was a self-test screen that I eventually got all "OK"s on, but I never did see these manual pages that people on the discussion list were talking about.

The Python version was still too slow to make further debugging feasible, so I rewrote it in C. Got that working sometime Sunday, and it was faster, but it wasn't faster enough. The real solution needed to be linked lists of pointers into memory, so one wouldn't need to shuffle megabytes of DNA around -- but I had no mental energy left to implement anything like that by that point.

Lessons learned:

  • Either Scala documentation needs work, or I just need to spend more time learning the paradigm.
  • When the problem statement says that shifting byte strings around needs to be done in better than linear time, that's an optimization I should think through up front, not "get to it later". It made all my implementations useless.
  • Python is the best language I know of for expressing algorithms. It isn't fast enough, though. (Although I wonder what would happen if I tried the linked-list thing in Python? I may try that as an experiment; there's a sketch of the idea below.)
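For what it's worth, the structure I have in mind is basically a rope: represent the DNA as a sequence of shared chunks, so that splicing moves pointers around instead of copying characters. A minimal sketch (in Haskell, since that's what I've been writing lately; hypothetical names, and nothing like the code from the contest weekend):

newtype Rope = Rope [String]   -- a sequence of shared chunks

-- Concatenation shuffles chunk pointers; no characters are copied.
cat :: Rope -> Rope -> Rope
cat (Rope a) (Rope b) = Rope (a ++ b)

-- Taking a prefix copies at most one partial chunk.
takeR :: Int -> Rope -> Rope
takeR _ (Rope [])     = Rope []
takeR n (Rope (c:cs))
  | n <= length c = Rope [take n c]
  | otherwise     = let Rope rest = takeR (n - length c) (Rope cs)
                    in Rope (c : rest)

toString :: Rope -> String
toString (Rope cs) = concat cs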

There were some complaints on the discussion list from people saying that the up-front work was too hard, and that most people never got to the meat of the problem. I totally disagree with that. ICFP is all about making functional programming usable and efficient. If, with current tools, we are forced into low-level bit-schlepping in C, it points to a deficiency in the most common tools of functional programming, or in the training of programmers. The contest ought to be about illustrating that kind of issue in a fun way.

Here's an interesting point: languages with higher-order kinds of functionality tend to abstract one further away from the machine; working with heavyweight integer "objects", for example, instead of the raw machine integers C uses. Is it possible to have a language where you can talk about bare metal, even registers and such, but do it in a higher-order, abstract way when desired? I guess C++ attempts to do this in some ways, but it's an extremely complex language. Is there no better compromise?

Sunday, June 03, 2007

Representin' in Oregon

I haven't posted in a while, but I have good news -- I've been accepted into the PhD program at Oregon State University! I first discovered the program by googling what turned out to be Margaret Burnett's turn of phrase, "HCI of Programming".

It's been 15 years since I got my master's at Colorado State. Turning 40 last year, I've been thinking about where my life is headed and what I wanted to do next. I've been doing very practical programming in the title industry in the daytime, and thinking and writing about programming languages at night. So now I'm back in school, and although the faces have changed, the hassles are all the same.

Up until now I have thought more about the challenges of language design for professional programmers, but at OSU I'll be diving into the whole realm of end user software engineering.

When designing a language for a professional programmer, maybe we can assume that the user knows what he or she is doing. Programmers are expected to have an analytical mindset, and plenty of practice at debugging things. They have endured years of indoctrination into the sacraments of testing, code correctness, documentation and version control. They have fingernails that shine like justice. They have RTFM. It's safe to assume that they have this expertise. It may not actually be true that they have it, but it's safe: our asses are covered, because we said on the box "professionals only".

An "end user" may in fact be a professional programmer, but we no longer have our asses covered; the environment must share in the responsibility. There is an intelligent being sitting in front of us, who thinks in a very different way than the computer does. They have some concept of what they want the computer to do, and we have to cooperate with them in representing that concept. Without being making so many wrong assumptions, or so few right assumptions, that we annoy the user into walking away in frustration.

When I was going through the application process, I went for advice to University of Colorado professor Clayton Lewis, who introduced me to the notion of Theory of Representation: that a lot of what's interesting in computing can be seen in terms of making valid mappings from one representation system to another. In those terms, HCI is about mapping between formal and human representations. But the human representation system is a pretty quirky thing that cognitive scientists are still attempting to explore; so HCI is kind of an engineering discipline working ahead of its time: trying to translate in and out of human representations before human representations can really be said to be understood by science.

This process has got to be a two-way street, because human representations of programming solutions are invariably wrong to start with, at least at a sufficiently fine level of detail. They need refinement, and the process of formal representation helps with that refinement.

In HCI of programming there is more going on, however. Engineering is not just a set of concepts, it's a discipline. "Discipline" is a deep set of mental habits. So to turn end users into software engineers, or to turn software engineers into better software engineers, it seems to me that an ideal software engineering environment will do more than represent users' ideas; it will help teach the user the mental habits necessary to refine those ideas efficiently. Maybe that goes beyond theory of representation.

Saturday, May 19, 2007

Bring to a boil: programming with cairns

I'd like to see more computer language instructions like the "bring to a boil" instruction you see in recipes. "Bring to a boil" doesn't tell you to put the pot on the stove, or how you turn the burner on, or how long it takes to boil. In fact, that may vary based on your altitude, the quality of your tap water, and the shape of your pot.

"Bring to a boil" jumps ahead past a bunch of obvious steps. It's not just like a subroutine call, encapsulating a handful of steps for brevity; it just describes the endpoint of a bunch of steps, and it's assumed you can find your way there yourself.

Finding the way there is something computers can do. Algorithms to traverse mazes are not that tricky, and in a sufficiently limited domain, figuring out how to accomplish some process can be likened to navigating through a maze. In a domain where an algorithm can be generated to get from here to there, why not give programmers that for free?
[Photo: a cairn]
Presuming that automatic programming is hard in the general case, but easy in sufficiently constrained cases, then programmers should be able to set a series of cairns for the compiler to follow, each close enough to the last for automatic programming to find its way, but far enough apart to save having to type in a lot of tedious steps.
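Here's the maze analogy as actual code: a bare-bones breadth-first search, sketched in Haskell. Every name is invented; it's only meant to suggest how mechanical the cairn-to-cairn part could be, with next standing in for the "obvious steps" the compiler is allowed to take:

import qualified Data.Set as Set

-- Breadth-first search from a start state toward a goal test,
-- given a successor function. Returns the path taken, if any.
bfs :: Ord s => (s -> [s]) -> (s -> Bool) -> s -> Maybe [s]
bfs next isGoal start = go (Set.singleton start) [[start]]
  where
    go _ [] = Nothing
    go seen (path@(s:_) : queue)
      | isGoal s  = Just (reverse path)
      | otherwise = go seen' (queue ++ [ s' : path | s' <- fresh ])
      where
        fresh = filter (`Set.notMember` seen) (next s)
        seen' = foldr Set.insert seen fresh
    go seen (_ : queue) = go seen queue

Following a whole trail of cairns would just be running something like this once per pair of adjacent cairns.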

The other nice thing about this kind of instruction is that it entails a test. Often we write code that clanks along like an insane robot, following instructions without checking every so often to see that it's still on course. "Bring to a boil" isn't the sort of instruction that can accidentally be followed when you're in the bedroom instead of the kitchen, blindly going through the motions of turning on an imaginary stove while standing in front of the laundry hamper.

Sunday, March 25, 2007

Hobbling scheme for readability?

There's a discussion going on over at Lambda the Ultimate about the use of Scheme as a shell scripting language (scsh). As is typical in such discussions, the people accustomed to parentheses are saying that all the parentheses aren't so bad, while the people unaccustomed to them feel that traditional shell scripting is clearer.

I gotta come down firmly on the no-parens side of this debate. Lispy parentheses are an egregious example of center embedding, which is an impossible strain on short-term memory. It's quite hard to write Lisp code without help from an editor program that counts parentheses for you. Lisp is optimized for machine parsing, but pessimized for human parsing.

It would be interesting to see just what limits there are on programs you can write without center embedding. Suppose you had a dialect of Lisp in which the outermost expression did not need parentheses around it, a comma would close a single paren, a period would close multiple parens, and whenever two or more levels of paren were open, the ONLY way to close them would be with a period, which would end the outermost expression. So something like this would be legal:


a (b c, d (e f, (g (h (i j.

and would look like this in traditional syntax:

(a (b c) d (e f) (g (h (i j))))

The point being that you could nest as deeply as you liked, but only at the end of the expression. I don't know, it looks kind of ugly at first glance, but it nicely parallels English structures like:

This is the cat that ate the rat that lived in the house that Jack built.

If you forced yourself to write in that style, and just defined intermediate variables to refer to whenever you couldn't find a way to structure things like this, I wonder what the program would look like.
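Out of curiosity, here's a rough sketch of a translator from the comma/period notation back into ordinary S-expressions, in Haskell -- a toy tokenizer, invented names, and no error handling to speak of:

data SExpr = Atom String | List [SExpr]

render :: SExpr -> String
render (Atom s)  = s
render (List xs) = "(" ++ unwords (map render xs) ++ ")"

-- Tokens are whitespace-separated; "(" opens a level, "," closes
-- one level, and "." closes every open level at once.
parse :: String -> SExpr
parse = finish . foldl step [[]] . tokens
  where
    tokens = words . concatMap pad
    pad c | c `elem` "(,." = [' ', c, ' ']
          | otherwise      = [c]
    step stack "(" = [] : stack        -- open a new level
    step stack "," = close stack       -- close one level
    step stack "." = closeAll stack    -- close everything
    step (top:rest) tok = (Atom tok : top) : rest
    close (top:up:rest) = (List (reverse top) : up) : rest
    close stack         = stack
    closeAll [s]   = [s]
    closeAll stack = closeAll (close stack)
    finish [top]  = List (reverse top)
    finish stack  = finish (close stack)

With that loaded, render (parse "a (b c, d (e f, (g (h (i j.") should come back as "(a (b c) d (e f) (g (h (i j))))", matching the hand translation above.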

Saturday, March 17, 2007

select * from universe where planet=earth

In Ten Ways to Sunday, Nathan Allen discusses the fact that the powerful relational model you have for referring to data in a database does not extend to entities outside the database, and he proposes that a similar relational model should be available for referring to everything a computer has access to, with some way of explicitly narrowing the scope of this system for the context of a particular task.

Referring to everything in the world with a consistent relational model is a conceptual nightmare, as Nathan hints at, and as you'll soon discover too if you peruse an upper-level ontology like SUMO or Cyc, and contemplate relating some dataset that you work with to one of these ontologies. But I think something along these lines is going to end up bearing fruit in the long run. Human communication always uses words with known meanings, linked into a shared cultural knowledge set. Because of that, we can describe data, processes, and new ideas to each other without having to define terms nearly as often as we must when writing software or laying out databases. Our data is implicitly embodied as part of society's larger body of knowledge. Today's databases, on the other hand, are a bunch of arbitrary items whose interpretation is described by machine-unreadable documentation, and implicitly described by the way software happens to be written to use it.

While humans seem to share an implicit upper-level ontology, we don't actually know what it is, as evidenced by the difficulties and controversies surrounding the building of formal upper level ontologies. If we can figure out why we can communicate so well with each other despite that, I think we'll have learned something important. I suspect what's going on is that we have multiple ontologies for different kinds of ideas and objects, that are loosely and inconsistently interrelated.

The inconsistency of our ontological landscape is evidenced by those weird questions we stay up late at night arguing about like "why is there a universe" or "does red look the same to you as it does to me" or "is there a God". It's also where "religious" debates come from, by which I mean those debates where your position seems so obvious that you think the other guy must be being deliberately obtuse not to see what you see so clearly.

Someday AIs in charge of personal data management will fritter away their spare CPU cycles arguing about whether postal codes should be intrinsic parts of addresses or derived data items by way of post office servers. During the day, though, they'll warily agree to disagree and find a practical workaround.

Sunday, February 25, 2007

Lojban

I just realized that although I chose a Lojban name for this blog, I haven't written much about it.

Lojban (and its predecessor/competitor Loglan) was conceived as a language that would fit more or less within the realm of human language universals, but that at the same time would be syntactically unambiguous, with semantics "based on" predicate calculus. If some version of the Sapir-Whorf Hypothesis is true, then maybe learning a language based on predicate calculus would cause you to be a more logical thinker.

The Lojban experiment has been a success in the sense that a complete, usable language was developed, and there is a community of speakers doing translations and original writing, and holding conversations in it. I participated in the Lojban community in the mid 90s, and I found learning it to be an interesting mental exercise for several reasons:

  • It required some distinctions that are not habitually made in English; for example there are 14 logical connectives to choose from, plus some non-logical connectives. At first you find yourself drawing truth tables to figure these things out, but eventually they come more naturally as you use them enough.
  • Other things were easier than English. As in Japanese, in Lojban you can often elide important words, with the hole being implicitly filled in with "whatever is obvious to both speakers".
  • There was an incredibly fun and productive word construction process, where you'd take these three-letter word bricks and glue them together to make compounds. German's got nothing on Lojban.
  • The language is excessively rich in hundreds of little words that approximately fill the role of prepositions. Some of these were thought up rather cavalierly for theoretical reasons or out of a sense of completeness, to fill out a pattern of similar words, so while their meaning was clear, it was not always clear when they would be useful. Coming up with plausible usages and trying to nonchalantly work them into forum conversations was great fun.
  • There was also a composable vocabulary of short words made exclusively of vowels and glottal stops, called "attitudinals", representing emotions. You could often get your point across by just expressing how you felt about it, and let the content be implied.
Some of Lojban's downsides, that eventually led me away from spending time on the project:
  • While Lojban borrowed some concepts from propositional logic to great benefit, in the end I don't think it would be fair to say Lojban was "based on" logic. To me that would mean that language structures would be defined in terms of a simpler and pre-existing formal logic already known and understood by logicians. As it was, we had endless arguments about how to correctly translate example sentences from Quine, like, say, "I want a sloop" in the case where there is not a particular sloop you have your eye on. I think problems of this sort went pretty deep, and probably the language should have been designed from the ground up around a particular theory of meaning, right or wrong, rather than making up a grammar and arguing about what it meant later.
  • The community was fascinated with navel-gazing issues of this sort, which was highly addictive but not very fruitful in my opinion. It was educational and thought provoking for me, but I did not have the formal logic/philosophy of language background to contribute as productively as I wished I could have.
All that being said, I think Lojban has made an important gesture by exploring a wholly new point along the computer/human-language spectrum. One lesson I'd like to see the programming language world learn from it is that the names we assign to things in programming languages are almost always completely arbitrary and unanalyzable. What if something the computer could actually dissect, akin to Lojban's word-building system ("lujvo"), were mandatory for most variable/function/class names in a program? In no human language do we invent most of the words used in a paragraph and begin by defining them; we instead stretch existing vocabulary to meet our needs. Perhaps that contributes to the error-proneness of software development in some way.
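As a toy illustration of what I mean -- the bricks and names below are entirely invented, and real lujvo formation is much subtler:

-- A fixed vocabulary of three-letter "bricks", each with a meaning
-- the tools can look up.
brickGloss :: [(String, String)]
brickGloss =
  [ ("cus", "customer"), ("nam", "name"), ("lis", "list"), ("sor", "sorted") ]

-- Split a compound identifier like "cusnamlis" back into its bricks.
dissect :: String -> Maybe [String]
dissect [] = Just []
dissect s
  | length s >= 3
  , (brick, rest) <- splitAt 3 s
  , Just gloss <- lookup brick brickGloss
  = fmap (gloss :) (dissect rest)
dissect _ = Nothing

-- dissect "cusnamlis" == Just ["customer","name","list"]
-- dissect "foo"       == Nothing

An IDE could then explain, refactor, or even auto-suggest names by their parts, the way a Lojbanist unpacks an unfamiliar lujvo.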

Friday, February 16, 2007

Evolution of AI

Grey Thumb brings our attention to this article: The Evolution of AI, published in Artificial Intelligence last year.

The author accuses AI researchers of being crypto-creationists, for not putting more emphasis on evolutionary techniques in their research. While I think the author is essentially right in saying that evolutionary techniques will play an important role in the research and development of artificial intelligence, it seems to me to be merely a practical tool in that field, not a relevant object of study in that context. What's interesting about AI is not the mere production of software that has intelligence-like features, but the deeper understanding this process can lead to: just what intelligence and consciousness really are, and how the human mind might work.

The problem with evolutionary techniques is they often lead to designs we can't understand, or that, on analysis, turn out to work for strange, quirky reasons.

I call that a "problem" only in that it doesn't naturally lead to a better understanding of a problem domain. If you're trying to design a tone discriminator, as in the article above, or a better antenna, it's fantastic what genetic algorithms can do for you. But in trying to understand intelligence, the project as a whole would be too large for a genetic algorithm to tackle and would lead to an unanalyzable mess. I'm not making that statement without evidence -- look at what nature has given us: our brains took billions of years to evolve, and after thousands of years of trial, error, and science we still can't reliably tweak it to cure common problems like depression and schizophrenia.

I'm sure as evolutionary techniques improve, people will use them to design components of intelligent systems, and analyze those designs to come up with theories about how they work. In fact we'll probably do a lot more of that in a lot of fields. But the only way I see evolutionary theory as being directly a part of the AI field is if it turns out that there are evolutionary processes going on within the human mind in real time -- a possibility the paper does not even discuss.

In short, he's conflating a research technique with a field of study.

Tuesday, January 23, 2007

Burrows-Wheeler Transform in Haskell

I got curious about program transformation techniques, and went looking to see what kind of work had been done on round-trip parsing: i.e. using the same grammar to turn a string into an abstract syntax tree, and then to turn it back into a string.

Well, I got distracted by this very odd thing I found called the Burrows-Wheeler transform. It's a weird, reversible way of sorting a string. (The result also happens to be more easily compressed with something like run-length encoding, hence its use in compression programs such as bzip2.) Basically you sort all the rotations of the string, then take from each rotation the character that appeared just to the left of its starting point in the original ordering (or an "end of string" marker for the rotation that starts at the front of the string).

Anyway, as an exercise I wrote an encoder and decoder in Haskell. Here it is, for your amusement. Feel free to use it if you want, but it's probably not very efficient. I need to read up on when Haskell does and does not save results of function calls -- are they always memoized?



module Bwt3 where

import Data.Ix
import Data.List

--
-- For BWT we want to compare strings with the special
-- rule that the end of string sorts as greater than any
-- other character
--
compHighEol :: Ord a => [a] -> [a] -> Ordering
compHighEol [] [] = EQ
compHighEol _  [] = LT
compHighEol [] _  = GT
compHighEol (x:xs) (y:ys)
  | x > y     = GT
  | x < y     = LT
  | otherwise = compHighEol xs ys

-- Datatype for a rotated string; the string conceptually starts
-- where the integer member says it does, and wraps around when it
-- hits the end.
-- This lets us store multiple rotations of the same string
-- without using storage for each one; they all point to the same
-- structure.
data Rotation = Rot Int [Char] deriving Eq

instance Show Rotation where
  show (Rot k l) = drop k l ++ "@" ++ take k l

instance Ord Rotation where
  compare (Rot a as) (Rot b bs) = compHighEol (drop a as) (drop b bs)

-- List of all possible rotations of a string. range (0, length str)
-- is [0 .. length str], so this includes the rotation that starts
-- at the end-of-string marker.
allrots :: [Char] -> [Rotation]
allrots str = [ Rot i str | i <- range (0, length str) ]

-- The actual output of this algorithm is a string with a flag
-- embedded in the middle of it, which can't be a character. So we
-- introduce a type for this purpose that allows for an EOF symbol
-- -- which, by the way, sorts after all other characters, to help
-- with decoding.
data FlaggedChar = FC Char | FC_EOF deriving Eq

instance Show FlaggedChar where
  show FC_EOF = "@"
  show (FC x) = show x

instance Ord FlaggedChar where
  compare FC_EOF FC_EOF = EQ
  compare _      FC_EOF = LT
  compare FC_EOF _      = GT
  compare (FC x) (FC y) = compare x y

-- BWT encoding: make all rotations, sort them, and take the last
-- character of each.
encode :: [Char] -> [FlaggedChar]
encode str = map lastchar (sort (allrots str))

-- lastchar pulls the last character out of a rotated string, or
-- FC_EOF for the rotation that starts at the front (its last
-- character is the end-of-string marker).
lastchar :: Rotation -> FlaggedChar
lastchar (Rot 0  _ ) = FC_EOF
lastchar (Rot ix xs) = FC (xs !! (ix - 1))

-- The rest of this stuff is for decoding the [FlaggedChar]
-- list we made above
--------------------------

-- Given a list (of anything), return a permutation of indices:
-- element k of the result is the original position of the k-th
-- smallest element.
--
-- We do that by tagging each element with an integer, sorting the
-- tagged items, and collecting the tags afterwards to see where
-- they ended up.
sortPermutation :: Ord a => [a] -> [Int]
sortPermutation xs = map snd (sortBy compfst (zip xs [0..]))
  where compfst x y = compare (fst x) (fst y)

-- Follow that permutation around, starting from position 0, to
-- recover one full cycle of positions.
getCycle :: Ord a => [a] -> [Int]
getCycle xs = take (length xs) (iterate (sortPermutation xs !!) 0)

-- Apply the cycle to the encoded string, then rotate the result so
-- the end-of-string flag comes at the end, and strip the flag off.
applyCycle :: [Int] -> [FlaggedChar] -> [Char]
applyCycle cycle encoded =
    fCtoString $
    tail $
    dropWhile (/= FC_EOF) answer ++ takeWhile (/= FC_EOF) answer
  where answer = map (encoded !!) cycle -- Oops: fixed typo 9/2008

fCtoString :: [FlaggedChar] -> [Char]
fCtoString []             = ""
fCtoString (FC c   : cs)  = c : fCtoString cs
fCtoString (FC_EOF : ds)  = fCtoString ds -- ignore FC_EOF

-- Finally the decode function puts it all together
decode :: [FlaggedChar] -> [Char]
decode xs = applyCycle (getCycle xs) xs
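
A quick sanity check in ghci -- the round trip should come out intact, something like:

*Bwt3> decode (encode "banana")
"banana"

(encode "banana" also puts the three a's next to each other, which is exactly the kind of clumping run-length encoding likes.)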

Sunday, January 07, 2007

Abstract Refactoring

This week I had a subroutine, let's call it Bedrock, whose functionality I needed to expand to handle a new situation. I won't bore you with the details, but the short story is that it created and handled exceptions from an object, Fred, and now I needed the same thing done with a very similar object, Wilma, which has almost, but not quite, the same behavior.

Of course, there's always this dilemma: is it clearer to just throw a little logic in the function to handle the two cases, something like:


sub bedrock(bool need_caveman) {
    Caveperson c;
    if (need_caveman) { c = new Fred(); }
    else { c = new Wilma(); }

    try {
        c.bang_on_rocks();
    } catch (Exception e) {
        if (need_caveman) { printf("Fred can't find a rock\n"); }
        else { c.call_betty(); }
    }
}
...or is it better to separate the whole thing into separate functions:

sub bedrock(bool need_caveman) {
    if (need_caveman) { bedrock_fred(); }
    else { bedrock_wilma(); }
}

sub bedrock_fred() {
    Caveperson c = new Fred();
    try {
        c.bang_on_rocks();
    } catch (Exception e) {
        printf("Fred can't find a rock\n");
    }
}

sub bedrock_wilma() {
    Caveperson c = new Wilma();
    try {
        c.bang_on_rocks();
    } catch (Exception e) {
        c.call_betty();
    }
}
(the latter case further invites us to make bedrock() a virtual function in the superclass of Fred and Wilma so the dispatching is handled more discreetly, but that's a refactoring for another day).

In my case, sitting there in my IDE, I wasn't sure which would be clearer. I knew what needed to be done, but I couldn't quite visualize which way would be easier to read until I typed it out. (I have to admit I'm not much of a flowcharty, UML-y kind of thinker; I have to type out code, or at least pseudocode, to see what it looks like, then refactor from there and draw diagrams after the fact.)

The thing is, there's no functional difference between these two options, outside of some extremely trivial performance issues (the second one uses one more stack frame than the first -- whoop-de-doo). Why do I have to decide at all?

The first code example has the virtue of showing you clearly the relationship between Fred and Wilma -- where their needs differ, you have an explicit if statement. Its flaw is that if you want to know just about Wilma, you have to read past the clutter of irrelevant stuff about Fred. The second example obviously has the converse strength and weakness.

I'd like my IDE or revision control system to know about this trade-off, and be able to display the underlying functionality in either manner. Think of it as something like currying. Currying, if you're not familiar with it, is where you take a function with two arguments, fill in one of them, and treat that half-filled-in function call as a new function of just one parameter. For example, "+" takes two arguments, as in 3+4 = 7. If you curry it with a "3", you get a new function, "3+_", taking one argument and returning that number plus three.
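In Haskell, which makes currying primitive, the example is a one-liner (just an illustration):

plusThree :: Int -> Int
plusThree = (+) 3   -- "3+_": (+) with its first argument filled in

-- plusThree 4 == 7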

Now in my bedrock example, looking at the first function where I have everything all lumped together: suppose you curried it with the value need_caveman = true. In two places you'd end up with:
if (true) then { foo foo foo }
else { bar bar bar }
which a smart IDE ought to be able to display simply as:
foo foo foo
Much clearer, no?

In general, then, I'd like to have some kind of odd IDE modality that was like a debugger, in that I could play with variable values, but in which I could mutate the code to show what it would reduce to under current conditions. If I can somehow establish that Wilma isn't relevant to the bug I'm tracking down, then the code logic can be automatically simplified for me.

What would be really cool would be if I could even edit the deWilmafied "curried" code, and it would be able to merge my changes back to the canonical codebase, just as changes to conflicting revisions in a source control system are merged. That could get messy though, so much careful treading would be in order.