Sunday, February 25, 2007


I just realized that although I chose a Lojban name for this blog, I haven't written much about it.

Lojban (and its predecessor/competitor Loglan) was conceived as a language that would fit more or less within the realm of human language universals, but that would at the same time be syntactically unambiguous, with semantics "based on" predicate calculus. If some version of the Sapir-Whorf Hypothesis is true, then maybe learning a language based on predicate calculus would make you a more logical thinker.

The Lojban experiment has been a success in the sense that a complete, usable language was developed, and there is a community of speakers doing translations, producing original writing, and holding conversations in it. I participated in the Lojban community in the mid-90s, and I found learning it an interesting mental exercise for several reasons:

  • It required some distinctions that are not habitually made in English; for example, there are 14 logical connectives to choose from, plus some non-logical connectives. At first you find yourself drawing truth tables to figure these things out, but with enough use they come naturally.
  • Other things were easier than English. As in Japanese, in Lojban you can often elide important words, with the hole being implicitly filled in with "whatever is obvious to both speakers".
  • There was an incredibly fun and productive word construction process, where you'd take these three-letter word bricks and glue them together to make compounds. German's got nothing on Lojban.
  • The language is excessively rich in hundreds of little words that approximately fill the role of prepositions. Some of these were thought up rather cavalierly for theoretical reasons or out of a sense of completeness, to fill out a pattern of similar words, so while their meaning was clear, it was not always clear when they would be useful. Coming up with plausible usages and trying to nonchalantly work them into forum conversations was great fun.
  • There was also a composable vocabulary of short words made exclusively of vowels and glottal stops, called "attitudinals", representing emotions. You could often get your point across by just expressing how you felt about it, and let the content be implied.
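The connective arithmetic in the first bullet can be sketched in code. This is a toy model, not real Lojban grammar: assume four base connectives (glossed here as or, and, if-and-only-if, and "whether or not"), each modifiable by negating either operand or by swapping them. In this model, 14 of the 16 possible binary truth functions turn out to be reachable, matching the count above:

```python
from itertools import product

# Toy model of Lojban's basic connectives (the glosses are assumptions):
#   a = inclusive or, e = and, o = if-and-only-if, u = "whether or not"
BASE = {
    "a": lambda p, q: p or q,
    "e": lambda p, q: p and q,
    "o": lambda p, q: p == q,
    "u": lambda p, q: p,  # truth depends only on the first operand
}

def truth_table(f):
    """Truth table of a binary connective over (T,T), (T,F), (F,T), (F,F)."""
    return tuple(f(p, q) for p, q in product([True, False], repeat=2))

# Negating either operand (roughly, Lojban's na/nai) or swapping them
# (roughly, se) multiplies the four bases into many combinations; count
# how many *distinct* truth functions are reachable.
tables = set()
for f in BASE.values():
    for neg_l, neg_r, swap in product([False, True], repeat=3):
        def g(p, q, f=f, nl=neg_l, nr=neg_r, swap=swap):
            if swap:
                p, q = q, p
            return f(p != nl, q != nr)
        tables.add(truth_table(g))

print(len(tables))  # → 14 (all 16 except tautology and contradiction)
```

The two unreachable functions are the ones that ignore both operands entirely, which is why drawing the occasional truth table pays off.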
Some of Lojban's downsides, which eventually led me away from spending time on the project:
  • While Lojban borrowed some concepts from propositional logic to great benefit, in the end I don't think it would be fair to say Lojban was "based on" logic. To me that would mean that language structures were defined in terms of a simpler, pre-existing formal logic already understood by logicians. As it was, we had endless arguments about how to correctly translate example sentences from Quine, like, say, "I want a sloop" in the case where there is not a particular sloop you have your eye on. I think problems of this sort went pretty deep, and probably the language should have been designed from the ground up around a particular theory of meaning, right or wrong, rather than making up a grammar and arguing about what it meant later.
  • The community was fascinated with navel-gazing issues of this sort, which was highly addictive but, in my opinion, not very fruitful. It was educational and thought-provoking for me, but I did not have the formal logic/philosophy of language background to contribute as productively as I wished I could have.
All that being said, I think Lojban has made an important gesture by exploring a wholly new point along the computer/human-language spectrum. One lesson I'd like to see the programming language world learn from it is that the names we assign to things in programming languages are almost always completely arbitrary and unanalyzable. What if something the computer could actually dissect, akin to Lojban's word-building system ("lujvo"), were mandatory for most variable/function/class names in a program? In no human language do we invent most of the words used in a paragraph and begin by defining them; we instead stretch existing vocabulary to meet our needs. Perhaps that contributes to the error-proneness of software development in some way.
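To make that concrete, here is a toy sketch of what machine-dissectable names might look like. Everything in it (the brick inventory, the underscore convention, the glosses) is invented for illustration and has nothing to do with actual lujvo morphology:

```python
# Hypothetical brick inventory for a program's domain vocabulary;
# both the bricks and the underscore convention are assumptions.
BRICKS = {
    "usr": "user",
    "lst": "list",
    "srt": "sorted",
    "cnt": "count",
}

def decompose(name):
    """Gloss a brick-built identifier, or reject it as unanalyzable."""
    parts = name.split("_")
    unknown = [p for p in parts if p not in BRICKS]
    if unknown:
        raise ValueError(f"unanalyzable bricks: {unknown}")
    return [BRICKS[p] for p in parts]

print(decompose("srt_usr_lst"))  # → ['sorted', 'user', 'list']
```

A compiler or linter with such an inventory could flag `srt_usr_lst` and `usr_lst_srt` as related, or reject a name built from bricks the project never defined, in a way no tool can with today's arbitrary identifiers.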

Friday, February 16, 2007

Evolution of AI

Grey Thumb brings our attention to this article: The Evolution of AI, published in Artificial Intelligence last year.

The author accuses AI researchers of being crypto-creationists for not putting more emphasis on evolutionary techniques in their research. While I think the author is essentially right that evolutionary techniques will play an important role in the research and development of artificial intelligence, they seem to me to be merely a practical tool in that field, not a relevant object of study. What's interesting about AI is not the mere production of software with intelligence-like features, but the deeper understanding this process can lead to: just what intelligence and consciousness really are, and how the human mind might work.

The problem with evolutionary techniques is that they often lead to designs we can't understand, or that, on analysis, turn out to work for strange, quirky reasons.

I call that a "problem" only in that it doesn't naturally lead to a better understanding of a problem domain. If you're trying to design a tone discriminator, as in the article above, or a better antenna, it's fantastic what genetic algorithms can do for you. But in trying to understand intelligence, the project as a whole would be too large for a genetic algorithm to tackle and would lead to an unanalyzable mess. I'm not making that statement without evidence -- look at what nature has given us: our brains took billions of years to evolve, and after thousands of years of trial, error, and science we still can't reliably tweak them to cure common problems like depression and schizophrenia.
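The genetic-algorithm workflow at issue can be sketched at toy scale. This evolves a bitstring toward an arbitrary target pattern (an invented stand-in for a real fitness function like tone discrimination, not anything from the article). Note that the winning genome carries no explanation of itself; any analysis is a separate, after-the-fact step, which is exactly the "problem" described above:

```python
import random

random.seed(0)  # reproducible toy run

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary goal pattern (an assumption)

def fitness(genome):
    """Number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit independently with the given probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Elitist GA: keep the top half each generation, refill with
# mutated crossovers of randomly chosen survivors.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```

Here the fitness function happens to be transparent, so the result is analyzable by construction; scale the genome up to a circuit or a network and that transparency is the first thing lost.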

I'm sure as evolutionary techniques improve, people will use them to design components of intelligent systems, and analyze those designs to come up with theories about how they work. In fact we'll probably do a lot more of that in a lot of fields. But the only way I see evolutionary theory as being directly a part of the AI field is if it turns out that there are evolutionary processes going on within the human mind in real time -- a possibility the paper does not even discuss.

In short, he's conflating a research technique with a field of study.