Sunday, December 17, 2006

Questions on Haskell Style (and Polynomials redux)

Someone posted, on the Haskell "Blow your Mind" page, a much more concise implementation of Polynomials as Numbers than I managed to come up with. There are a couple things I learned by comparing it with my version:

  • Use of zipWith (+) instead of mapping (\v -> fst v + snd v) over a zip; obviously I need to get to know the libraries better
  • zipWith' is introduced to zipWith two lists together, padding the shorter one with zeroes (see the sketch just after this list). Jeremy Gibbons suggested the same thing, calling it longzipwith, and it's parallel to what I did, minus the "with", in my zipLong function.
  • They had the same questions I did about what to do with the required signum and abs functions. Mikael suggested in a comment that I should use the NumericPrelude library instead of the standard one, for a more sensible generic definition of a number.
  • Jeremy and Blow-your-mind both used one less line in their zip function definition than I did: they manage to define it without explicitly handling the case where both lists are null.
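
Here's a sketch of that padded zip, reconstructed from the description above (my reconstruction, not the Blow-your-mind code verbatim; it assumes the pad value acts like an identity for the combining function, which holds for (+) and 0):

    zipWith' :: (a -> a -> a) -> [a] -> [a] -> [a]
    zipWith' f (x:xs) (y:ys) = f x y : zipWith' f xs ys
    zipWith' _ xs     []     = xs   -- as if ys were padded with zeroes
    zipWith' _ []     ys     = ys   -- as if xs were padded with zeroes

The last bullet's line savings falls out of this shape: the two trailing clauses overlap on ([], []), so no separate both-empty case is needed.
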
And here are some questions I have -- please comment if you have any ideas!
  • The Blow-your-mind version uses [a] instead of Poly [a] as its polynomial type. My feeling is that an explicit constructor and data type like Poly is better, because a polynomial is not the most obvious way to interpret a list of numbers. However, my functions ended up being a lot wordier, pulling that constructor on and off the list all the time. What's the better style choice here?
  • The multiplication function has me scratching my head. It seems terser, but less communicative to me. If you've got more experience with functional programming than I do: is the Blow-your-mind version much more clear or efficient, and is it worth it? Is mine only clearer to me because I'm not used to FP style yet? Or maybe tight APL-like gnomishness is part of the point of the Blow-your-mind page. (Here they are again if you don't want to follow all the links).
My version:
> scalarPolyMult :: (Floating a) => a -> Poly a -> Poly a
> scalarPolyMult c (Poly rs) = Poly $ map (*c) rs

> multByUnknown (Poly rs) = Poly (0:rs)

> multPoly :: (Floating a) => (Poly a) -> (Poly a) -> (Poly a)
> multPoly (Poly []) cs = Poly []
> multPoly (Poly (b:bs)) cs =
>     addPoly
>         (scalarPolyMult b cs)
>         (multPoly (Poly bs) (multByUnknown cs))
Blow-your-mind version:
  xs * ys = foldl1 (+) (padZeros partialProducts)
    where
      partialProducts = map (\x -> [x*y | y <- ys]) xs
      padZeros = map (\(z,zs) -> replicate z 0 ++ zs) . (zip [0..])
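
To see the mechanics concretely (a worked example of mine, not from the Blow-your-mind page): for [1,2] * [3,4], i.e. (1 + 2x)(3 + 4x), partialProducts is [[3,4],[6,8]]; padZeros shifts each row right by its index, giving [[3,4],[0,6,8]]; and foldl1 (+) sums those with the instance's own padded addition, yielding [3,10,8], i.e. 3 + 10x + 8x^2.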

Wednesday, December 13, 2006

Automatic Spaghetti Refactoring

I wonder under what conditions it would be possible to take a working body of spaghetti code and automatically refactor it to pull out all references to one specific resource, or some other definable subset of the program's activity. Afterwards, your code would be as bad as before, or worse, in terms of organization, but it would at least be semantically equivalent.

If you could do this quickly, on demand, you could present the object to a programmer in some kind of constrained editor for modification, then re-inline the changes back into the original structure of the code. This would let a programmer pretend a design was nicely modular and object-oriented for the sake of conceptual simplicity in making a change, but without requiring the code to actually be organized at all.

Why bother doing this? It could be that the code is already organized according to some orthogonal concern that you don't want to mess up; or it's truly messy spaghetti code that you don't understand; or maybe more interestingly, it's represented internally as a dataflow graph or something, where there may be no linear or hierarchical structure to it by nature.

I know refactoring gets done piecemeal by menu options in some IDEs, and I know something similar is done within compilers as optimization steps (inlining and loop unrolling, for example), but I haven't heard of large-scale, temporary changes made just for purposes of presentation.

The idea of making a transformation, applying a change, then reversing the transformation puts me in mind of the notion of "conjugation" from group theory. However, I haven't gotten to that chapter yet in my Algebra book, so I don't really know if there are any monkeys in that tree.

Thursday, December 07, 2006

Awkward db2 SQL syntax

I'm continually frustrated by the awkwardness of SQL syntax in DB2 (I do database work on an IBM iSeries). You can't do an update on a joined table; you can only update one table, so you have to do a subselect, typically expressing exactly the same subselect in two places in the same query. See Brett Kaiser's helpful example. I keep shying away from doing them this way because it seems so wrong, but I keep coming back to it as my best alternative. A language should not make you say the same complicated thing twice in one sentence. Anaphora is your friend, IBM!

Tuesday, December 05, 2006

Simonyi on Comments in Algol-60

Check out this fascinating slideshow by Charles Simonyi about how the language Algol-60 was originally used in practice -- as a language for comments. He observes that good code comments can be a good source of ideas for new programming language constructs, since comments are where programmers express ideas that they feel can't currently be made clear or concise in the code itself.

Monday, December 04, 2006

The Zen of Referential Transparency

Here's another interesting connection between Buddhist philosophy and math: this blog post by metaperl draws a connection between Haskell's lack of state variables and Eckhart Tolle's belief that "the only thing that is real is the present moment."

I'm not familiar with Tolle, but this is a part of Zen thinking as well. You could choose to look at the past and future only as memories and projections that exist now, rather than as independent realities. I'm not claiming to know something that arcane about the universe. But it makes sense, using category theory as a metaphor, that you could choose to view the past and present and future as three real objects; or you could choose to look at the currently existing traces of the past, and currently existing precursors of the future as categorical arrows, and merely define the past and future in terms of those.

A pure function will return the same value every time it is called with the same arguments; this is convenient because it makes them timeless in a sense. If you define some function


f(x) = x+4,

you can say that f(2) = 6; and it's always true. If f(2) gets called 10 times during a program's execution, you only have to really do the addition once; after that you know it as a timeless fact. Even calculating it the once is bending to necessity; after all, it was already true before we calculated it.

But if we have some other function, g(), and we pass the result of f into it

g(x) = x * 7
print g(f(2))

Well, g(6) = 42 no matter what, and g(f(2)) = 42 no matter what. They're both universal timeless facts about this little system. But in practice the computer has to calculate f(2) in order to know what to feed to g, so the g(f(2)) relationship works like time: f is "before" g, even though they're both mathematically fixed, static, and unchanging.
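
A minimal Haskell rendering of the idea (my illustration, not from the original post): because f and g are pure, f 2 is a shareable, timeless fact, and the only "time" left in the program is the data dependency between them.

    f, g :: Int -> Int
    f x = x + 4
    g x = x * 7

    six :: Int
    six = f 2            -- evaluated at most once, then shared as a fact

    main :: IO ()
    main = print (g six) -- 42; f is "before" g only by data dependency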

My feeble understanding of Zen is that one can learn, among other things, to experience time as an artificial construct of this sort, and not as something fundamental. It's pretty hard to nail down what's concretely different in this perspective, though, because it's probably isomorphic to the more conventional view of time as a mighty river; but supposedly there is a psychospiritual benefit to changing one's perspective in this way.

Some links:
An article about Zen, Time, and Bohm's interpretation of quantum physics
A prior post of mine on dependent origination and category theory

Sunday, December 03, 2006

Moving to new URL

Hi,

I'm moving to http://sambangu.blogspot.com. Please update your feed! I was doing an FTP upload from blogger.com to my ISP, but the comments never worked, and it just doesn't seem worth the trouble to do it that way.

Polynomials as numbers

So, I was trying to learn category theory, having gathered it was a good thing to know for people interested in programming language theory. I kind of got bogged down though -- it's castles built on castles built on castles in the air, and I was losing track of reality in all the abstractions. My book would talk about a category of Groups, for example, so I'd go looking up Groups in my faithful Wikipedia. I'd get the concept, it's not that tricky in itself, but obviously understanding the definition of a group does not make one a group theorist.

So, I finally declared "mission accomplished", brought the troops home to a ticker-tape parade, and started in on a group theory textbook instead (Knapp's Basic Algebra). I'm enjoying it a lot more -- it starts up where I left off with math classes, so I don't feel like I'm a freshman in a senior-level class.

Anyway, this book starts with a kind of review of polynomials, simultaneous equations, matrices, vector spaces, etc. -- stuff I've seen before, but with a more theoretical spin that either was missing from my high school algebra or I've simply forgotten. The book points out that factoring integers into products of smaller integers is very much like factoring polynomials into products of polynomials of smaller degree (just as 15 factors into 3 * 5, x^2 - 1 factors into (x - 1)(x + 1)), and a prime number is like a polynomial you can't factor any further (like, say, x^2 + 9 when you're working over the Reals). Of course you can also add, subtract, and multiply polynomials and always get a polynomial back, and you can even divide with remainder, just as with integers. If a topologist can't tell a coffee cup from a doughnut, then a group theorist can't tell a polynomial from an integer.

Well, maybe she can; obviously integers and polynomials have some different properties. But I'm tickled enough with the idea that a polynomial is a kind of number, that I created a polynomial instance of Num in Literate Haskell as an exercise, reproduced below for your edification.

Since I'm also still learning Haskell, I welcome any critiques you might have of my code. I like the fact that the data structure I used (just a list of coefficients) turned out to make for short code; but it doesn't come out very readable. The fact that prepending a 0 to a list multiplies the polynomial by X seems as cryptic as it is convenient.


> module Main where
> main = print testP

I represent a polynomial as a list of its coefficients,
with the constant first, then the X, the X^2, etc. So
3x^2 - 4 would be Poly [-4,0,3].

> data Floating a => Poly a = Poly [a] deriving (Eq)

Now we evaluate by filling in the unknown. Note that
ax^2 + bx + c evaluated at x is the same as
(ax + b)x + c, so you can evaluate it by taking the constant
term, popping it off the list, then multiplying x by the
polynomial interpretation of the rest of the list.

> evalPoly :: (Floating a) => (Poly a) -> a -> a
> evalPoly (Poly []) x = 0
> evalPoly (Poly (c:cs)) x = c + x*(evalPoly (Poly cs) x)

To add two polynomials, add corresponding coefficients. If
one is shorter than the other, you want to use zeroes. I don't
know how to do this beautifully with standard functions, because
zip cuts off when the shortest list runs out. So I defined
a helper function zipLong, that keeps going till the longest
list is done, filling in a default value.

10 Points to whoever emails me with a shorter, cleaner,
idiomatic way to do this.

> addPoly :: (Floating a) => (Poly a) -> (Poly a) -> (Poly a)
> addPoly (Poly r) (Poly s) = Poly $ map (\v -> (fst v + snd v)) (zipLong r s 0)

> zipLong [] [] d = []
> zipLong (x:xs) [] d = (x,d):(zipLong xs [] d)
> zipLong [] (y:ys) d = (d,y):(zipLong [] ys d)
> zipLong (x:xs) (y:ys) d = (x,y):(zipLong xs ys d)

Multiply a polynomial by a scalar. I have a feeling this
could be defined somehow under an instance of Num so that
the * operator is automatically overloaded. Not sure how.

> scalarPolyMult :: (Floating a) => a -> Poly a -> Poly a
> scalarPolyMult c (Poly rs) = Poly $ map (*c) rs

Since (ax^2 + bx + c)x = ax^3 + bx^2 + cx,
then Poly [c,b,a] * x = Poly [0,c,b,a]

> multByUnknown (Poly rs) = Poly (0:rs)

To multiply two polynomials, P1 * P2, where P2 = (dx^2 + ex + f),
rewrite as f*P1 + P1*x*(dx + e); the first term is a scalar multiplication
and the second has one less degree, so we can recurse on it. The
(Poly bs) term below is (dx + e) in this example.

> multPoly :: (Floating a) => (Poly a) -> (Poly a) -> (Poly a)
> multPoly (Poly []) cs = Poly []
> multPoly (Poly (b:bs)) cs =
>     addPoly
>         (scalarPolyMult b cs)
>         (multPoly (Poly bs) (multByUnknown cs))

Define a polynomial as a number. I'm cheating a little by
picking a dumb definition of signum; really a polynomial
with an unknown isn't positive or negative before a number
is plugged into it, so I'm just defining it as the sign of
the constant term.

> instance (Floating a) => Num (Poly a) where
>   s + t = addPoly s t
>   negate (Poly cs) = Poly $ map (\q -> -q) cs
>   s * t = multPoly s t
>   abs s
>     | signum s == -1 = negate s
>     | signum s /= -1 = s
>   signum (Poly []) = Poly []
>   signum (Poly (c:cs)) = Poly [signum c]
>   fromInteger i = Poly [fromInteger i]

And define a cheesy way to print out a polynomial

> instance Floating a => Show (Poly a) where
>   show (Poly []) = "null"
>   show (Poly cs) = concatMap
>       (\c -> " + " ++ (show (fst c)) ++ "X^" ++ (show (snd c)))
>       (reverse (zip cs [0..length(cs)-1] ))

Define some examples and some tests.

> p = Poly [2,0,1,2] -- 2x^3 + x^2 + 2
> q = Poly [0,0,0,3,4] -- 4x^4 + 3x^3

> testP = (p * q == Poly [0,0,0,6,8,3,10,8]) &&
>         (p + q == Poly [2,0,1,5,4]) &&
>         (evalPoly p 3 == 65.0 ) &&
>         (3 * p == Poly [6,0,3,6]) &&
>         (show p == " + 2.0X^3 + 1.0X^2 + 0.0X^1 + 2.0X^0")

Friday, November 17, 2006

SOAP: follow-up

Here's a fantastic dialog that exactly captures my experiences with SOAP. (via Anarchaia)

Wednesday, October 25, 2006

Follow up on kanji coding method: Wuji

Turns out there is a character input method like the one I described in my previous post.
Apparently it's kind of awkward to use in practice, though. Ah well...

Sunday, October 15, 2006

Ban the dangling else!

I was just reading through Niklaus Wirth's paper, "Good Ideas, Through the Looking Glass" (found through Lambda the Ultimate, of course!) where he talks about the Dangling Else problem.

Some programming languages have statements of the form:

    if X then Y

and

    if X then Y else Z

with no end keyword or bracket, leading to the problem of how to interpret:

    if it's Saturday then if it's sunny then have a picnic else see a movie

Do we see a movie if it's not Saturday, or if it is Saturday, but it's cloudy?

Natural languages have exactly the same sort of problem (example from the AP Press Guide to News Writing, quoted in Language Log):
We spent most of our time sitting on the back porch watching the cows playing Scrabble and reading.
It reads funny in English, but we resolve it easily because we know the context. Obviously context is not as helpful for a compiler.

How does Language Log suggest fixing that problem in English? They give two suggestions:
We spent most of our time sitting on the back porch watching the cows, playing Scrabble and reading.
or
Playing Scrabble and reading, we spent most of our time sitting on the back porch watching the cows.
The first suggestion corresponds approximately to Wirth's preference for an end keyword; the comma signals a break of some kind, and the obvious interpretation is that the cows and the scrabble shouldn't be too closely associated. It's not nearly so rigorous as end, of course.

The other suggestion is a bit more interesting; they rework the entire ordering of the sentence. Although the two examples aren't really parallel, it does suggest another way to rework our if-then-if-then-else, if we intend the final else to correspond to the first if-then:

if it's not Saturday then see a movie else if it's sunny then have a picnic.

In other words, we're punting -- not solving the original ambiguity, but wording around it to avoid the issue.

I think that's actually a better strategy from a human reader's standpoint. We do not have much in the way of "closing braces" in natural language: they tend to lead to center-embedding issues which our brains just aren't equipped to deal with. So maybe a good policy would be for a compiler to simply disallow the "if-then-if-then-else" construction under either interpretation, and force the user to rework their logic a little. Such a restriction looks like an ugly hack to a computer scientist, but I think it's necessary if we take seriously the idea that programs should be human languages as well as computer languages.
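
For comparison, here is the ambiguity spelled out in Haskell (my illustration; Haskell itself dodges the problem by making else mandatory). Notice that writing out either reading forces you to supply a branch the ambiguous sentence never specified, which is exactly the kind of reworking proposed above:

    -- Reading 1: the else binds to the inner if (the usual C/Pascal rule).
    plan1 :: Bool -> Bool -> String
    plan1 saturday sunny =
      if saturday
        then (if sunny then "have a picnic" else "see a movie")
        else "no plans"   -- branch the original sentence never specified

    -- Reading 2: the else binds to the outer if.
    plan2 :: Bool -> Bool -> String
    plan2 saturday sunny =
      if saturday
        then (if sunny then "have a picnic" else "no plans")
        else "see a movie"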


Sunday, October 08, 2006

Metalanguage for Software Development

Every software developer uses a variety of languages to develop a program. It may seem like you're developing something purely in Perl or Java or VB or whatever, but in fact you're using a lot of mini-languages to manage the development process:

  • Shell: if you're using a command line, you have to know shell commands for managing files and directories, invoking the compiler, etc.
  • IDE: On the other hand, if you're using an IDE, you could kind of consider it language-like; you invoke a series of drop-down menus and click options on and off
  • source control: whether it's IDE-based or command-line based, you're describing and querying a specialized model encompassing time, files, directories, versions, and maybe different user identities
  • deployment: you use FTP commands, or something equivalent, to put files on a server, configure the server to run your programs at the appropriate time, etc.
  • building and linking: Like makefiles or visual studio "projects"
  • profiling: turning a profiler on or off, configuring it, and interpreting its output, is a language-like interaction
  • debugging: another model of the program, where you communicate with the debugger about variable values and code structures
  • SQL: typically any interaction with a database within your program, is done from within a walled-off sublanguage; maybe SQL built into strings, or maybe a specialized, but usually somewhat awkward, object or function call model.
  • Database configuration: setting up tables and so forth is often done with a combination of SQL or database management configuration IDE manipulation
When programmers think about the development process, we have an integrated mental model of all these aspects of the process, and from that figure out how to use them all together to do what needs to be done. It would be interesting to have a single, consistent meta-language for software development that encompassed all these tasks.

The closest thing we have to that, I think, is the command-line shell. The simpler "languages", such as the compiler-settings language, are encapsulated as command-line arguments, so technically from the same prompt you are doing diverse tasks like compiling your program or renaming files. But useful as it is, it's kind of a gimmick. You can't easily pull together information from, say, the profiler, the debugger, and some unit test results, and ask questions that cut across these different domains.

For example, suppose you made a change to a procedure last week, and now you think it may be running too slow on a particular dataset. A test of that is easy to express in English: run the current version of the procedure X against dataset Y and note how long it takes; also run X against Y using the version of X that was current last Thursday. Implementing it would take a little work; we'd have to check out two versions, compile them in separate directories, run them both under a profiler, and know how to interpret the profiler results. Speaking for myself, I'd probably make a mistake the first time through -- I'd check out the wrong version of the code, or run the compiler with different optimization flags or something.

There oughtta be a language that has standard terminology for all these sorts of tools, and some easy way to build little modules onto the front end of a tool that translate this language into the tool's configuration settings, and translate its results or error messages back into the language.

The trick is you'd have to have a pretty smart front end that could pull apart commands or queries that involve multiple tools, figure out what commands to pass along to the individual tools, and then integrate the results it gets back. This would not be a trivial problem, but it would be a good start just to make this kind of task *expressible*, even if it required a lot of user guidance at first.


Thursday, September 07, 2006

Buddhism and Category Theory

In my previous post today I mused about using standard ontologies in programs as a way of grounding the web of connections between your objects in a framework of common terminology.

There is a concept in Buddhist metaphysics called dependent origination, which states that nothing exists on its own, but only manifests itself through its connections with everything else in its environment. One illustration of the concept is Indra's Net, an infinite spider web with little silver balls at all the junctions, each reflecting all the others in a way that would gum up any ray tracer.

A program that doesn't refer to much of anything external to itself is like that -- you can define data structures and pointers and files that all point to each other, but it's all meaningless without the interpretation that a programmer or user gives to it in the data they feed to it and their interpretation of its output.

I'd say that is a mathematical restatement of Dependent Origination.


Organizing programs by their invariants

It seems to me that natural language program specifications tend to consist of a lot of statements which are all independently true, and only dependent on context for deictic references (i.e. pronouns and pronoun-like references to concepts in surrounding sentences). In other words, I'd claim, you could make sense of a good proportion of a scrambled specification if:

  • Sentences remained intact
  • Deictic references were replaced by full references to global names of entities or processes
Now obviously, not everything in such a document will behave that way; for example, an ordered list of steps will have its ordering and context lost. But I think my claim is more true of an English-language spec than, say, a large FORTRAN program. If you have a spec you're working with closely, you might have sticky notes on different pages, and individual sentences highlighted, which you can turn to and refer to instantly without necessarily reading for context. An old-skool, not-very-modular FORTRAN program, on the other hand, is meant to be executed in one fell swoop. A programmer might turn to a particular page of a printout and look for some information, but will require a lot more scanning around and reasoning to come up with an answer to their question about what's going on in the program.

A Java program might fall somewhere in between English and Fortran in that regard -- Java tends to be written with lots of smallish functions, each of which might be comprehended on its own.

Another reason a natural-language spec is easier to read in a random order is that the words in an English sentence are more likely to be standard vocabulary not defined in the spec; and the ones that are defined in the spec are likely to be motivated* narrowings or metaphors of standard terminology. In all the programming languages I'm familiar with, almost all the entities defined in a particular system can be freely named: there may be conventions telling a programmer what sort of thing a WidgetWindowHandlerFactory is, but the compiler doesn't care; it could be a synonym for Integer.

So this habit of natural language helps make randomly-chosen sentences more comprehensible on their own. That in turn makes it easier for us short-attention-span programmers to digest the piles of paper our managers and user groups churn out.

This suggests a couple interesting features for programming languages:
  1. Structurally organize programs around a series of declared invariants, with the compiler (when possible) or the programmer (the rest of the time) filling in associated code to ensure the invariant remains true. This could be done in a lot of different ways depending on the need -- type systems, aspects, agents. (A small sketch of this idea follows below.)
  2. Create a large and varied, but standard and fixed, ontology of types that the user should almost always derive from. Give them all short names and require user-defined types and variables to end with those type names. This would have to be done carefully to avoid javaStyleWordiness(), and it would also be a larger learning curve/burden on the programmer. I'm picturing something vaguer and more flexible than a standard library; rather than providing a bunch of standard implementations, the ontology would provide standard invariants.
* "motivated" -- I got this term from George Lakoff's book "Women, Fire, and Dangerous Things" where he talks about how new uses for old words are motivated by metaphorical relations with older meanings, but not predictable from those meanings.


Saturday, August 19, 2006

Saponomancy

This week I was asked to write some code to mimic a SOAP service. How hard could that be? It was a very simple service, just a couple remote procedure calls, one of which could return a large binary file.

Of course, the original one was implemented in Delphi, to run under IIS, and the new one needed to run on a UNIX-like system, but SOAP is intended to make all this easy, right? How much did I need to know?

Well, a lot, it turns out. I played with a couple different toolkits for creating SOAP services, and they kept coming up with slight variations, none of which seemed to interoperate out of the box. Little things would be different in the XML message, like whether namespaces were mentioned in the tags, or how the attachment was referenced and sent. It was easy to see and understand the problem by looking at the dumps of the XML and MIME stuff going over the wire, but much harder to plow through the documentation of the various API's to see what flags needed to be set in their object model, or what objects needed to be created, or who was responsible for allocating and freeing memory, etc.

In the end, I realized that since this was an internal service, and I controlled both endpoints, I could easily make a case to management for just using the raw HTTP protocol, and in fact as I played with that solution, it turned out to be cleaner, lighter, more maintainable, and easier to document. I'm changing my party affiliation to REST, even though it's too late to vote in the primaries.

Well, that's fine; I'm sure there must be cases where SOAP is a better solution, but it got me to thinking about why it is that the raw XML is so much easier to debug than the supposedly labor-saving frameworks built on top of it. I guess it's just that what goes across a line is easier to capture and pin down -- I can run two services, capture their input and output, and just compare the logs of them. But I can't compare the operations of their object models, because they're different.

What would be very cool, is if there were a formal way to specify a relationship between all the possible operations of an object model, and the grammar of its input and output streams. Then I could point to a rule in the grammar that I wanted to come out a certain way, and ask "what sequences of user-accessible object operations can cause this". The closest thing I've seen to this is the Whyline from project Marmelade at CMU. The Whyline is a thingy that looks like it's geared towards helping a programmer find bugs in their code, by tracing out why a particular condition was arrived at in code execution. Seems like it could be just as useful for prying apart the mysteries of someone else's prepackaged API, especially if it was closed-source, if somehow the Whyline could still operate without showing you precious vendor secrets.

Unfortunately the Whyline is still a research project built into a research language called Alice, not a handy button on my Visual Studio menubar.

Monday, August 14, 2006

Abstracting Bubblesort

In my August 4th entry I conjectured that monads might be a good way to abstract away the question of whether a bubblesort was done over time or laid out across memory as a sequence of permutations.

I thought that was a good exercise for me to brush up on Haskell monads, so here's the program I wrote. It's "literate Haskell", so you can take this whole posting, save it as whatever.lhs, and it should compile.

I especially relied on a monad tutorial at A Neighborhood of Infinity, that Sigfpe happened to post just as I was in need of it. Thanks, Sigfpe!


I'll be using the State, List, and IO monads. Eek, trial by fire.
To start with, State has to be imported:

>import Control.Monad.State

So here's my generic bubblesort to start out with. It sets everything
up without actually having the list available yet; all it needs is the
length for now.

The cf and swap parameters are functions that compare and swap elements,
respectively. Actually, they just take the indexes of the element(s)
and return a function which will be responsible for testing and permuting
anything the caller wants, however they want. cf x y should compare elements
x and y (whatever that may mean), and swap x should swap the xth and (x+1)th
elements.

So this function cycles through pairs of adjacent indices in bubblesorty
fashion, and builds a list of functions, each of which is responsible
for testing, and maybe swapping, a pair of elements, with those
user-supplied comparison and swapping functions.

>bubblesort cf swap theLen = do
>    i <- reverse [0..theLen]
>    j <- [0..(i-2)]
>    [do
>        i <- cf j (j+1)
>        if i
>            then swap j
>            else return ()]

Now here's a simple comparison function: it assumes that the
list will be just an ordinary list, with elements instances of
Ord (so that > works). What it returns is (State [a] Bool),
which is actually another function, taking one list [a]
and returning the same unchanged list [a] and a Boolean.

>cf1 :: Ord a => Int -> Int -> (State [a] Bool)
>cf1 i j = do
>    ls <- get
>    return $ (ls!!i) > (ls!!j)

And here's the corresponding swap function. It returns a
function which takes a list, and returns a list with the
jth and (j+1)th elements swapped.

>swap1 :: Int -> (State [a] ())
>swap1 j = do
>    theList <- get
>    put $ (take j theList) ++
>          [theList!!(j+1)] ++ [theList!!j] ++ drop (j+2) theList

Let's look at another pair of swapping/comparison functions,
before putting everything together:

This set has a more complicated state -- it's an ordered pair,
of the current permutation [a], and a list of strings [[Char]]
representing the Plinko toy that
would produce the sort so far.

>cf2 :: Ord a => Int -> Int -> (State ([a],[[Char]]) Bool)
>cf2 i j = do
>    (theList, history) <- get
>    return $ (theList!!i) > (theList!!j)

And here's the corresponding swap function:

>swap2 j = do
>    (theList, history) <- get
>    let newList = (take j theList)
>                  ++ [theList!!(j+1)]
>                  ++ [theList!!j]
>                  ++ drop (j+2) theList in
>      let newHistory = (( (replicate j '|')
>                          ++ "><"
>                          ++ (replicate ((length theList)-j-1) '|'))
>                        : history) in
>        put (newList, newHistory)

In this second example, the state is more than just the
list, so we need a function to create the original ([a], [[Char]])
structure:

>buildstate2 :: [a] -> ([a], [[a]])
>buildstate2 a = (a, [[]])

And one to get our plinko toy out after we're done sorting:

>getresult2 :: (Show a) => ([a], [[a]]) -> [Char]
>getresult2 (_, b) = foldl (++) "" $ map (\b -> "\n" ++ show(b) ) b

I skipped the equivalent functions before for the straightforward
sorting example because they are straightforward:

>buildstate1 :: [a] -> [a]
>buildstate1 a = a

>getresult1 :: (Show a) => [a] -> [Char]
>getresult1 a = show a

So now we're ready to call this contraption. dosort takes
the four functions that relate to our state: buildstate, getresult,
swap, and cf; along with the string to be sorted (I call it string,
but it could have been a list of anything in Ord, really).

The "sequence" function links up that list of functions that
bubblesort created, and execState is what runs them, when supplied
with the output of buildstate.

>dosort buildstate cf swap getresult string =
>    let sorter = bubblesort cf swap $ length string in
>        getresult $ execState (sequence sorter) (buildstate string)

Here are some test functions which call both suites on the same string:

>test1 = dosort buildstate1 cf1 swap1 getresult1 "sambangu!"
>test2 = dosort buildstate2 cf2 swap2 getresult2 "sambangu!"
>
>main = do
>    putStrLn test1
>    putStrLn test2



I'm certain I'm not doing this in a very elegant way. It appears that those four characteristic functions belong together in a class declaration, and somehow different execution types would be instances of that class, but I was unable to get it to compile that way. Could be my syntax was wrong, or more likely, fuzzy thinking, which Haskell seems to be pretty (justifiably) intolerant of.
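
For the record, here's roughly the shape I think that class declaration would take (a sketch only: it needs GHC's MultiParamTypeClasses and FunctionalDependencies extensions, the names are invented, and I haven't actually wired it into the code above):

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
    import Control.Monad.State

    -- s is the sort's state type; a is the element type it carries.
    class SortBackend s a | s -> a where
        buildState :: [a] -> s
        getResult  :: s -> String
        cmpAt      :: Int -> Int -> State s Bool  -- plays the role of cf1/cf2
        swapAt     :: Int -> State s ()           -- plays the role of swap1/swap2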

Keystroke coding system for Kanji

I was thinking about ways of entering Chinese characters, and wishing there were a more intuitive way to do it. So here's a scheme I came up with. Obviously a lot of people have been thinking about how to type kanji for a long time, so it's probably been thought of and (implemented | dismissed) before, but anyway, it seems to me like it could work:

In this system each character is represented by a sequence of the letters b, n, m, and k. Each letter represents a stroke or part of a stroke:

b = diagonal downwards to left
n = downwards
m = diagonal downwards to right
k = horizontal

So for every stroke you type a letter, and if the stroke changes directions, you type the new letter, making no distinction between strokes and parts of strokes. I wonder how many ambiguities there would be? Are there any kanji with disputed stroke orders, or is it totally standardized?

Here are the numbers 1 - 10:

一 k
二 kk
三 kkk
四 nknknbnk
五 knknk
六 nkbm
七 knk
八 bkm
九 bknk
十 kn

There would have to be some standard rules about how to represent the little hooks (like the last strokes of 四 and 九) and the ones that kind of curve from one heading to another (like the first stroke of 九).

It seems like a person adept at writing kanji might be able to kind of mentally translate their mechanical skill of writing into these keystrokes.

Friday, August 04, 2006

How we talk about algorithms

It's often the case that a programmer can quickly state the algorithm they want to use to solve a problem, but it takes significantly more time to cobble together the implementation. There are lots of good reasons why it has to be that way -- explaining something in English presumes an intelligent listener, but the computer you're programming is not intelligent. But still, I wonder if part of the mental transformation that takes place is a conceptual restructuring that wouldn't be necessary if a computer language were structured differently.

A good way to look at this might be to examine some programs along with descriptions of their algorithms, and see if we can discern any structural differences, and propose new computer language structures that are closer to the human abstraction mechanisms.

So I want to take as a particular example the "BLACK" problem from this year's ICFP 2006 programming contest. After the end of the contest, participants discussed their solutions to some of the problems, and in some cases posted their code, in a public discussion list. As this was a problem I spent some time with (and didn't solve) I decided to look at the algorithms that other teams used.

Spoiler alert

The contest puzzle was an amazing construction that I'd highly recommend working on, even though the contest is over. I'll be giving away the problem description here (which is a good piece of work just to get to), and some solution algorithms.

The Problem

The problem is described here; basically you're doing a permutation of an ordered list, in steps, where each step can exchange pairs of adjacent items. When a step shifts any item to the right, there is a "plink" sound. There's no sound when shifting to the left. The task is to recreate the steps needed, given an ending permutation and the number of plinks each item would experience in tracing through the steps. So the first problem they provide looks like this:

* x -> (y,z)
* Means that if you drop a marble into pipe x, it comes out pipe y,
* and you hear z plinks
0 -> (3,4)
1 -> (2,3)
2 -> (0,1)
3 -> (1,1)

And one solution is this:

><||
|><|
><><
|><|
><><
><><

Where the ><'s represent swaps, and the |'s represent items that don't change in that step. The problem set started with this puzzle of width 4, and also included problems of width 10, 20, 30, 40, 50, 100, 200, 300, 400, and 500. Your score was based on submitting solutions; the code was not considered -- solving by hand was perfectly legal, if impractical for the larger problems.
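
As a sanity check (my trace, not part of the problem statement): drop a marble into pipe 0. Step 1 (><||) swaps it rightward into pipe 1 (plink 1); step 2 (|><|) moves it to pipe 2 (plink 2); step 3 (><><) moves it to pipe 3 (plink 3); step 4 (|><|) leaves it alone; step 5 (><><) moves it left to pipe 2 (silent); step 6 (><><) moves it right again to pipe 3 (plink 4). It exits at pipe 3 having plinked 4 times, matching 0 -> (3,4).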

Notation

I'll use this notation in the discussion below:
  • In{st}[x]: the xth hole going into step {st} of the machine (numbering starting with 0)
  • Out{st}[x]: the xth hole coming out of step {st}.
  • Target{st}[x]: the desired destination of a marble placed at In{st}[x]
  • Plinks{st}[x]: the plink count of a marble found at In{st}[x]
  • Plinktarget{st}[x]: the desired plink count of a marble found at In{st}[x]
  • Step{st}: The string of |'s, <'s, and >'s representing step #st of the machine

Solution Strategies

I studied the descriptions of solutions on the mailing list, and on the web pages of participants if they were linked from the list archive.

Divide and Conquer

The first insight that most solvers shared was a division into two sequential phases. As stated by Alain Frisch:

All the puzzles in the contest had a lot of plinks, [...] and it was indeed possible to first bubble sort the wires and then start from the identical permutation. That's what our team's solver does.

Jed Davis wrote:
Mine likewise -- it first does the usual selection sort, then figures
out what combination of maneuvers like that pictured above (moving a
ball N places to one side and then back) need to be placed after the
sort to reduce the remaining needed plinks to a form that can be handled
by appending only double-swaps.


Marcin Mucha wrote:
I guess you know that, but some puzzles do have solutions, but you can't decompose any solution as a sequence of a bubblesort and a solution starting from an identical permutation.
It's interesting that they all emphasize the sequentiality; the two phases can't be done in parallel. Frisch says "to first xxxxxx and then yyyy"; Davis wrote "first xxxxxxx, then yyyyyy"; Mucha uses the word "sequence". In your typical imperative language, simply listing two steps in order is sufficient to say "do one, then do the other"; this explicit statement of ordering suggests that sequentiality is not necessarily the default in English.

The Bubblesort

In two of the three quotes above, the posters mention bubblesort with the implied presumption that its application is obvious. Bubble sort is an apt description of an algorithm realizable by this type of machine, since every swap is done between adjacent elements. Here is the code Wikipedia gives for bubblesort:
function bubblesort (A : list[1..n]) {
    var int i, j;
    for i from n downto 1 {
        for j from 1 to i-1 {
            if (A[j] > A[j+1])
                swap(A[j], A[j+1])
        }
    }
}
A reasonably adept programmer should be able to read the description of the problem, and the vague advice to "apply a bubblesort", and implement it with some thought, but without any further clarification. They are making a conceptual mapping like this:


  • List parameter A: the mapping from exit hole # at the current step to exit hole # at the final step
  • n: the number of pipes in the machine
  • A[j] > A[j+1]: the pipe at position j in this step is destined for a spot further right than the pipe at j+1
  • swap(A[j], A[j+1]): a pipe crossing is added


...so substituting the notation above, we have something like this:


function bubblesort (Target{0}) {
    var int i, j;
    st = 0
    for i from n downto 1 {
        for j from 1 to i-1 {
            if (Target{st}[j] > Target{st}[j+1]) {
                Target{st+1}[j] := Target{st}[j+1]
                Target{st+1}[j+1] := Target{st}[j]
                st++
            }
        }
    }
}


This is an interesting mapping because the paradigmatic bubblesort involves mutable storage, but this version builds a list of successively refined permutations. Wikipedia of course doesn't define bubblesort this way. Programmers can make this leap with some thought. Can the concept of "bubblesort" be defined such that something so fundamental is abstracted away? I'm still wrapping my brain around monads, but I think this is what they are good for.

Sunday, January 22, 2006

A Different Implementation of Relation-Chaining

OK, I wasn't really happy with the implementation of the relation-chaining thing I did last week, so I rewrote it. This won't make a lot of sense if you haven't read last week's installment, so go read it if you haven't yet...

This time I have the forward-linking operator sort of equivalent to the python "." operator. If romulus.father == mars, then romulus >> "father" == rset([mars]); in other words, the set containing mars. rset is a class derived from set. It's convenient to work with sets, because in the case of the reverse-linking operator, there could be more than one result, and I wanted some symmetry between the two cases.

To do the reverse linking, I use gc.get_objects(), which is the python garbage collector's master list of all the objects in the system. I guess that's not very efficient for a large program. In a large program I'd recommend creating a "universe" dictionary, putting the objects you really care about in it, then searching that instead of get_objects().

A nice result of using sets is shown in the brother() function; the set operator "-" means to remove items from a set, so (x >> "father" << "father") - x means: find the father of x, then find the people he's a father to, then subtract x from that set, leaving only the siblings.

The only reason I create my own set class, or my own relational_object class, is to get the syntax of << and >> to work. Really, this is a demonstration of a feature I think could be central to a language, and in that hypothetical language you'd build this feature in so it could be used on any objects.

Well, here's the code. I promise I'll move on to something different next week! I just had to get this out of my system.

#!/usr/local/bin/python2.4
from gc import get_objects

class relational_object(object):
    def __repr__(self):
        return ""

    def get_conv_gen(self, att):
        for other in get_objects():
            try:
                if other.__dict__[att] == self:
                    yield other
            except Exception, e: pass

    def get_conv_any(self, att): return self.get_conv_gen(att).next()
    def get_conv_all(self, att): return rset([x for x in self.get_conv_gen(att)])

    def __rshift__(self, att): return rset([self.__dict__[att]])
    def __lshift__(self, att): return rset(self.get_conv_all(att))

class rset(set):
    def __rshift__(self, att):
        result = rset()
        for x in self:
            result = result | rset([x.__dict__[att]])
        return result

    def __lshift__(self, att):
        result = rset()
        for x in self:
            result = result | x.get_conv_all(att)
        return result

class person(relational_object):
    def __init__(self, theName):
        self.name = theName
    def __repr__(self): return "<" + self.name + ">"

mars = person("mars")
romulus = person("romulus")
remus = person("remus")
assert mars == mars
assert mars != remus

romulus.father = mars
remus.father = mars

assert mars << "father" == rset([romulus, remus])

Saturday, January 14, 2006

Relation-chaining gimmick in Python

This is just a piece of a whole idea, but I thought you might find it amusing.

There is an RDF notation called N3 that has an interesting syntax for quickly describing a chain of relations linking one node to another.

Here are some examples from the N3 paper:

:joe!fam:mother!loc:office!loc:zip   Joe's mother's office's zipcode
:joe!fam:mother^fam:mother Anyone whose mother is Joe's mother.
:joe ("Colon Joe" -- sounds like a brand of medicinal coffee, doesn't it?) is an RDF node. The other strings with colons in them, fam:mother, loc:office, and loc:zip, represent binary relations. I think the part before the colon tells you the namespace it's from, but ignore that for now.

The ! is like the "." operator in most object-oriented languages. :joe!fam:mother means "Joe's Mother", and you might say this as joe.mother in some language like Python or C.

The ^ is the converse of !; so if joe!mother == sue, then sue^mother == joe. You can think of !mother as meaning "mother" and ^mother as meaning "child". ^employer means employee, ^owner means possession. "hello, world"!length == 12, so 12^length == "hello, world" (among many other strings).

My mind was brought back to this when I was working on a project with a bunch of related objects in Python, and I was looking for a clean way of expressing complicated queries among them. In my day job I do a lot of ad-hoc SQL querying, and while I don't love SQL, it is nice to be able to fluently, readably, ask some complex questions of a bunch of relations. I'd read that list comprehensions were more or less isomorphic to SQL, but I found when playing with them that my queries were inefficient and not very readable; they were no better than just writing explicit code to traverse the object tree and collect the information I wanted.

I'm not sure if these N3-style paths will be useful in practice, yet, but it was worth playing around with them. The biggest hurdle with them, as I see it, is that there's no easy way when presented with 12^length to know what string is wanted, so you have to have a well-defined universe of objects to work with, and be willing to accept multiple results from every query.

So anyway, here's some code to give you an idea what I've come up with so far:

import pprint

class relation:
    def __init__(self):
        self.forward = dict()
        self.backward = dict()

    def add(self, arg1, arg2):
        if not self.forward.has_key(arg1):
            self.forward[arg1] = dict()
        if not (arg2 in self.forward[arg1]):
            self.forward[arg1][arg2] = True
        if not self.backward.has_key(arg2):
            self.backward[arg2] = dict()
        if not (arg1 in self.backward[arg2]):
            self.backward[arg2][arg1] = True

    def __call__(self, arg1, arg2):
        try:
            return self.forward[arg1][arg2]
        except Exception, e:
            return False

    def get_all_keys(self, hash, key):
        try:
            return hash[key].keys()
        except Exception, e:
            return []

    def __rlshift__(self, other):
        result = []
        for item in other:
            result = result + self.get_all_keys(self.forward, item)
        return result

    def __rrshift__(self, other):
        result = []
        for item in other:
            result = result + self.get_all_keys(self.backward, item)
        return result

father = relation()
phone = relation()

class frank: pass
class jeff: pass
class chris: pass
class andy: pass

father.add(frank, jeff)
father.add(jeff, chris)
father.add(jeff, andy)
phone.add(andy, "101-555-1249")
assert [chris] >> father == [jeff]
assert [jeff] << father == [chris, andy]
assert [chris] >> father == [jeff]
assert [jeff] << father == [chris, andy]
assert [andy] << phone == ["101-555-1249"]
assert [chris] << phone == []
assert [andy, chris] << phone == ["101-555-1249"]
assert (([chris] >> father) << father) << phone == ["101-555-1249"]
assert [chris] >> father << father << phone == ["101-555-1249"]

Some things to note about this implementation:
  • I used classes like "jeff" and "chris" for nodes. You could use any Python objects; the "class X: pass" syntax is just a cheesy quick way to create a dummy object to play around with. In Python a class is a kind of object.
  • I used >> and << in place of N3's ! and ^ as the forward- and reverse-linking operators.
  • Instead of the RDF "node", I use a list with one or more python objects in it, in this case strings. I'm using a list as kind of a half-assed monad. Since a >> or << can produce more than one result, every query takes a list in and hands a list back.
  • I define a relation object which holds all its pairs of related things in hashes. It's probably not very efficient. I originally thought I'd just use the members of objects as the relations, so that the natural joe.mother would be the basis for the relation joe >> mother. But that didn't fit in very well with the list thing. If Heather has two mommies, you can have:

    mother("Ann", "Heather")
    mother("Patricia", "Heather")
    assert ["Heather"] >> mother == ["Ann", "Patricia"]
    as opposed to

    heather.mother = ann
    heather.mother = patricia
    where Patricia just overwrites Ann. This makes Ann sad.

    Maybe there's a better way around this.

  • I'm not sure I've got the precedence the same as N3 has it.
  • It should be sets, not lists. ["Ann", "Patricia"] should be considered the same result as ["Patricia", "Ann"].
Next time I'll try to apply this to a more interesting problem to see if it holds up.