Saturday, August 19, 2006


This week I was asked to write some code to mimic a SOAP service. How hard could that be? It was a very simple service, just a couple remote procedure calls, one of which could return a large binary file.

Of course, the original one was implemented in Delphi, to run under IIS, and the new one needed to run on a UNIX-like system, but SOAP is intended to make all this easy, right? How much did I need to know?

Well, a lot, it turns out. I played with a couple different toolkits for creating SOAP services, and they kept coming up with slight variations, none of which seemed to interoperate out of the box. Little things would be different in the XML message, like whether namespaces were mentioned in the tags, or how the attachment was referenced and sent. It was easy to see and understand the problem by looking at the dumps of the XML and MIME stuff going over the wire, but much harder to plow through the documentation of the various APIs to see what flags needed to be set in their object model, or what objects needed to be created, or who was responsible for allocating and freeing memory, etc.

In the end, I realized that since this was an internal service, and I controlled both endpoints, I could easily make a case to management for just using the raw HTTP protocol, and in fact as I played with that solution, it turned out to be cleaner, lighter, more maintainable, and easier to document. I'm changing my party affiliation to REST, even though it's too late to vote in the primaries.

Well, that's fine; I'm sure there must be cases where SOAP is a better solution, but it got me to thinking about why it is that the raw XML is so much easier to debug than the supposedly labor-saving frameworks built on top of it. I guess it's just that what goes across a line is easier to capture and pin down -- I can run two services, capture their input and output, and just compare the logs of them. But I can't compare the operations of their object models, because they're different.

What would be very cool, is if there were a formal way to specify a relationship between all the possible operations of an object model, and the grammar of its input and output streams. Then I could point to a rule in the grammar that I wanted to come out a certain way, and ask "what sequences of user-accessible object operations can cause this". The closest thing I've seen to this is the Whyline from project Marmelade at CMU. The Whyline is a thingy that looks like it's geared towards helping a programmer find bugs in their code, by tracing out why a particular condition was arrived at in code execution. Seems like it could be just as useful for prying apart the mysteries of someone else's prepackaged API, especially if it was closed-source, if somehow the Whyline could still operate without showing you precious vendor secrets.

Unfortunately the Whyline is still a research project built into a research language called Alice, not a handy button on my Visual Studio menubar.

Monday, August 14, 2006

Abstracting Bubblesort

In my August 4th entry I conjectured that monads might be a good way to abstract away the question of whether a bubblesort was done over time or laid out across memory as a sequence of permutations.

I thought that was a good exercise for me to brush up on Haskell monads, so here's the program I wrote. It's "literate Haskell", so you can take this whole posting, save it as whatever.lhs, and it should compile.

I especially relied on a monad tutorial at A Neighborhood of Infinity, that Sigfpe happened to post just as I was in need of it. Thanks, Sigfpe!

I'll be using the State, List, and IO monads. Eek, trial by fire.
To start with, State has to be imported:

>import Control.Monad.State

So here's my generic bubblesort to start out with. It sets everything
up without actually having the list available yet; all it needs is the
length for now.

The cf and swap parameters are functions that compare and swap elements,
respectively. Actually, they just take the indexes of the element(s)
and return a function which will be responsible for testing and permuting
anything the caller wants, however they want. cf x y should compare elements
x and y (whatever that may mean), and swap x should swap the xth and (x+1)th
elements.

So this function cycles through pairs of adjacent indices in bubblesorty
fashion, and builds a list of functions, each of which is responsible
for testing, and maybe swapping, a pair of elements, with those
user-supplied comparison and swapping functions.

>bubblesort cf swap theLen = do
> i <- reverse [0..theLen]
> j <- [0..(i-2)]
> [do
> i <- cf j (j+1)
> if i
> then swap j
> else return ()]

Now here's a simple comparison function: it assumes that the
state will be just an ordinary list, with elements that are
instances of Ord (so that > works). What it returns is (State [a] Bool),
which wraps another function, taking one list [a]
and returning the same unchanged list [a] along with a Boolean.

>cf1 :: Ord a => Int -> Int -> (State [a] Bool)
>cf1 i j = do
> ls <- get
> return $ (ls!!i) > (ls!!j)
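As a quick aside (this fenced block is a standalone sketch, not part of the literate program), running cf1 through runState shows that the comparison reads the list carried in the state without changing it:

```haskell
import Control.Monad.State

-- cf1, repeated here so this sketch stands alone
cf1 :: Ord a => Int -> Int -> State [a] Bool
cf1 i j = do
  ls <- get
  return (ls !! i > ls !! j)

main :: IO ()
main = print (runState (cf1 0 1) "ba")  -- 'b' > 'a' is True, and the state comes back untouched
</antml>```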

And here's the corresponding swap function. It returns a
function which takes a list, and returns a list with the
jth and (j+1)th elements swapped.

>swap1 :: Int -> (State [a] ())
>swap1 j = do
> theList <- get
> put $ (take j theList) ++
> [theList!!(j+1)] ++ [theList!!j] ++ drop (j+2) theList

Let's look at another pair of swapping/comparison functions,
before putting everything together:

This set has a more complicated state -- it's an ordered pair,
of the current permutation [a], and a list of strings [[Char]]
representing the Plinko toy that
would produce the sort so far.

>cf2 :: Ord a => Int -> Int -> (State ([a],[[Char]]) Bool)
>cf2 i j = do
> (theList, history) <- get
> return $ (theList!!i) > (theList!!j)

And here's the corresponding swap function:

>swap2 j = do
> (theList, history) <- get
> let newList = (take j theList)
> ++ [theList!!(j+1)]
> ++ [theList!!j]
> ++ drop (j+2) theList in
> let newHistory = (( (replicate j '|')
> ++ "><"
> ++ (replicate ((length theList)-j-2) '|'))
> : history) in
> put (newList, newHistory)

In this second example, the state is more than just the
list, so we need a function to create the original ([a], [[Char]]) state:

>buildstate2 :: [a] -> ([a], [[Char]])
>buildstate2 a = (a, [[]])

And one to get our plinko toy out after we're done sorting:

>getresult2 :: ([a], [[Char]]) -> [Char]
>getresult2 (_, b) = foldl (++) "" $ map (\h -> "\n" ++ show h) b

I skipped the equivalent functions before for the straightforward
sorting example because they are straightforward:

>buildstate1 :: [a] -> [a]
>buildstate1 a = a

>getresult1 :: (Show a) => [a] -> [Char]
>getresult1 a = show a

So now we're ready to call this contraption. dosort takes
the four functions that relate to our state: buildstate, getresult,
swap, and cf; along with the string to be sorted (I call it string,
but it could have been a list of anything in Ord, really).

The "sequence" function links up that list of functions that
bubblesort created, and execState is what runs them, when supplied
with the output of buildstate.

>dosort buildstate cf swap getresult string =
> let sorter = bubblesort cf swap $ length string in
> getresult $ execState (sequence sorter) (buildstate string)

Here are some test functions which call both suites on the same string:

>test1 = dosort buildstate1 cf1 swap1 getresult1 "sambangu!"
>test2 = dosort buildstate2 cf2 swap2 getresult2 "sambangu!"
>main = do
> putStrLn test1
> putStrLn test2

I'm certain I'm not doing this in a very elegant way. It appears that those four characteristic functions belong together in a class declaration, and somehow different execution types would be instances of that class, but I was unable to get it to compile that way. Could be my syntax was wrong, or more likely, fuzzy thinking, which Haskell seems to be pretty (justifiably) intolerant of.
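For what it's worth, one grouping that does compile is to bundle the four characteristic functions in a plain record instead of a type class. This is a standalone sketch (the names Sorter and plainSorter are my own, not from the program above), with the plain-list suite packaged as one record value:

```haskell
import Control.Monad.State

-- Bundle the four characteristic functions in a record instead of a
-- type class. (A sketch; Sorter and plainSorter are my names.)
data Sorter s a = Sorter
  { buildState :: [a] -> s
  , compareAt  :: Int -> Int -> State s Bool
  , swapAt     :: Int -> State s ()
  , getResult  :: s -> String
  }

-- The generic bubblesort from the post: builds a list of State actions.
bubblesort :: (Int -> Int -> State s Bool) -> (Int -> State s ()) -> Int -> [State s ()]
bubblesort cf swp theLen = do
  i <- reverse [0 .. theLen]
  j <- [0 .. i - 2]
  [do ok <- cf j (j + 1)
      if ok then swp j else return ()]

-- Plain ascending sort over an ordinary list, packaged as a Sorter.
plainSorter :: (Ord a, Show a) => Sorter [a] a
plainSorter = Sorter
  { buildState = id
  , compareAt  = \i j -> do
      ls <- get
      return (ls !! i > ls !! j)
  , swapAt     = \j -> do
      ls <- get
      put (take j ls ++ [ls !! (j + 1), ls !! j] ++ drop (j + 2) ls)
  , getResult  = show
  }

-- dosort now takes one Sorter value instead of four loose functions.
dosort :: Sorter s a -> [a] -> String
dosort s xs =
  getResult s $
    execState (sequence_ (bubblesort (compareAt s) (swapAt s) (length xs)))
              (buildState s xs)

main :: IO ()
main = putStrLn (dosort plainSorter "sambangu!")
</antml>```

The record sidesteps the type-class problem entirely: picking a suite is just passing a different value, with no instance resolution for the compiler to get confused about.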

Keystroke coding system for Kanji

I was thinking about ways of entering Chinese characters, and wishing there were a more intuitive way to do it. So here's a scheme I came up with. Obviously a lot of people have been thinking about how to type kanji for a long time, so it's probably been thought of and (implemented | dismissed) before, but anyway, it seems to me like it could work:

In this system each character is represented by a sequence of the letters b, n, m, and k. Each letter represents a stroke or part of a stroke:

b = diagonal downwards to left
n = downwards
m = diagonal downwards to right
k = horizontal

So for every stroke you type a letter, and if the stroke changes direction, you type the new letter, making no distinction between strokes and parts of strokes. I wonder how many ambiguities there would be? Are there any kanji with disputed stroke orders, or is it totally standardized?

Here are the numbers 1 - 10:

一 k
二 kk
三 kkk
四 nknknbnk
五 knknk
六 nkbm
七 knk
八 bkm
九 bknk
十 kn
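The ambiguity question is easy to automate for just these ten. Here's a quick Haskell sketch (the codes list simply transcribes the table above) that checks for exact collisions, and also for codes that are prefixes of other codes, which a real input method would have to disambiguate with a delimiter or a commit key:

```haskell
import Data.List (isPrefixOf, nub)

-- The ten codes from the table above.
codes :: [(String, Char)]
codes =
  [ ("k", '一'), ("kk", '二'), ("kkk", '三'), ("nknknbnk", '四')
  , ("knknk", '五'), ("nkbm", '六'), ("knk", '七'), ("bkm", '八')
  , ("bknk", '九'), ("kn", '十') ]

-- Exact collisions: any code shared by two characters.
collisions :: [String]
collisions = [c | c <- map fst codes, length (filter (== c) (map fst codes)) > 1]

-- Prefix clashes: one code is a proper prefix of another.
prefixClashes :: [(String, String)]
prefixClashes =
  [ (a, b) | (a, _) <- codes, (b, _) <- codes, a /= b, a `isPrefixOf` b ]

main :: IO ()
main = do
  print (nub collisions)     -- no two of these ten share a code
  print (length prefixClashes)
</antml>```

No two of the ten numerals share a code, but the prefix clashes (一 is a prefix of 二, 三, 七, 五, and 十, for instance) show the scheme would at least need an "end of character" keystroke.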

There would have to be some standard rules about how to represent the little hooks (like the last strokes of 四 and 九) and the ones that kind of curve from one heading to another (like the first stroke of 九).

It seems like a person adept at writing kanji might be able to kind of mentally translate their mechanical skill of writing into these keystrokes.

Friday, August 04, 2006

How we talk about algorithms

It's often the case that a programmer can quickly state the algorithm they want to use to solve a problem, but it takes significantly more time to cobble together the implementation. There are lots of good reasons why it has to be that way -- explaining something in English presumes an intelligent listener, but the computer you're programming for is not intelligent. But still, I wonder whether part of the mental transformation that takes place is a conceptual restructuring, one that wouldn't be necessary if a computer language were structured differently.

A good way to look at this might be to examine some programs along with descriptions of their algorithms, and see if we can discern any structural differences, and propose new computer language structures that are closer to the human abstraction mechanisms.

So I want to take as a particular example the "BLACK" problem from this year's ICFP 2006 programming contest. After the end of the contest, participants discussed their solutions to some of the problems, and in some cases posted their code, in a public discussion list. As this was a problem I spent some time with (and didn't solve), I decided to look at the algorithms that other teams used.

Spoiler alert

The contest puzzle was an amazing construction that I'd highly recommend working on, even though the contest is over. I'll be giving away the problem description here (which is a good piece of work just to get to), and some solution algorithms.

The Problem

The problem is described here; basically you're doing a permutation of an ordered list, in steps, where each step can exchange pairs of adjacent items. When a step shifts any item to the right, there is a "plink" sound. There's no sound when shifting to the left. The task is to recreate the steps needed, given an ending permutation and the number of plinks each item would experience in tracing through the steps. So the first problem they provide looks like this:

* x -> (y,z)
* Means that if you drop a marble into pipe x, it comes out pipe y,
* and you hear z plinks
0 -> (3,4)
1 -> (2,3)
2 -> (0,1)
3 -> (1,1)

And one solution is this:

><><
><><
><||
|><|
||><
><||
|><|

Where the ><'s represent swaps, and the |'s represent items that don't change in that step. The problem set started with this puzzle of width 4, and also included problems of width 10, 20, 30, 40, 50, 100, 200, 300, 400, and 500. Your score was based on submitting solutions; the code was not considered -- solving by hand was perfectly legal, if impractical for the larger problems.
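A machine in this notation is mechanical to check. Here's a small Haskell sketch (my own helper, not contest code) that traces each marble through the steps and counts plinks; the 7-step machine it tests is one I constructed for the width-4 puzzle above, and it satisfies the spec, though it may not match the contest's own sample solution:

```haskell
-- Trace one marble dropped into hole x0 through a list of steps,
-- returning (exit hole, plink count). Each step is a string over
-- '|', '>', '<', one character per pipe; "><" marks an adjacent swap.
trace1 :: [String] -> Int -> (Int, Int)
trace1 steps x0 = foldl step (x0, 0) steps
  where
    step (x, p) s
      | x + 1 < length s && s !! x == '>' && s !! (x + 1) == '<' = (x + 1, p + 1) -- shifted right: plink
      | x > 0 && s !! (x - 1) == '>' && s !! x == '<'            = (x - 1, p)     -- shifted left: silent
      | otherwise                                                = (x, p)

-- One valid 7-step machine for the width-4 puzzle (reconstructed by hand).
machine :: [String]
machine = ["><><", "><><", "><||", "|><|", "||><", "><||", "|><|"]

main :: IO ()
main = mapM_ (print . trace1 machine) [0 .. 3]
</antml>```

Running it reproduces the required table: marble 0 exits hole 3 with 4 plinks, marble 1 exits 2 with 3, marble 2 exits 0 with 1, and marble 3 exits 1 with 1.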


I'll use this notation in the discussion below:
  • In{st}[x]: the xth hole going into step {st} of the machine (numbering starting with 0)
  • Out{st}[x]: the xth hole coming out of step {st}.
  • Target{st}[x]: the desired destination of a marble placed at In{st}[x]
  • Plinks{st}[x]: the plink count of a marble found at In{st}[x]
  • Plinktarget{st}[x]: the desired plink count of a marble found at In{st}[x]
  • Step{st}: The string of |'s, <'s, and >'s representing step #st of the machine

Solution Strategies

I studied the descriptions of solutions on the mailing list, and on the web pages of participants if they were linked from the list archive.

Divide and Conquer

The first insight that most solvers shared was a division into two sequential phases. As stated by Alain Frisch:

All the puzzles in the contest had a lot of plinks, [...] and it was indeed possible to first bubble sort the wires and then start from the identical permutation. That's what our team's solver does.

Jed Davis wrote:
Mine likewise -- it first does the usual selection sort, then figures
out what combination of maneuvers like that pictured above (moving a
ball N places to one side and then back) need to be placed after the
sort to reduce the remaining needed plinks to a form that can be handled
by appending only double-swaps.

Marcin Mucha wrote:
I guess you know that, but some puzzles do have solutions that you can't decompose as a sequence of a bubblesort and a solution starting from an identical permutation.

It's interesting that they all emphasize the sequentiality; the two phases can't be done in parallel. Frisch says "to first xxxxxx and then yyyy"; Davis wrote "first xxxxxxx, then yyyyyy". Mucha uses the word "sequence". In your typical imperative language, simply listing two steps in order is sufficient to say "do one, then do the other"; this explicit statement of ordering suggests that sequentiality is not necessarily the default in English.

The Bubblesort

In two of the three quotes above, the posters mention bubblesort with the implied presumption that its application is obvious. Bubble sort is an apt description of an algorithm realizable by this type of machine, since every swap is done between adjacent elements. Here is the code Wikipedia gives for bubblesort:
function bubblesort (A : list[1..n]) {
    var int i, j;
    for i from n downto 1 {
        for j from 1 to i-1 {
            if (A[j] > A[j+1])
                swap(A[j], A[j+1])
        }
    }
}

A reasonably adept programmer should be able to read the description of the problem, and the vague advice to "apply a bubblesort", and implement it with some thought, but without any further clarification. They are making a conceptual mapping like this:

List parameter A : the mapping from exit hole# at the current step to exit hole# at the final step
n : the number of pipes in the machine
A[j] > A[j+1] : the pipe at j at this step is destined for a place further to the right than the pipe at j+1
swap(A[j], A[j+1]) : a pipe crossing is added to the current step

Substituting the notation above, we have something like this:

function bubblesort (Target{0}) {
    var int i, j, st;
    st = 0
    for i from n downto 1 {
        for j from 1 to i-1 {
            if (Target{st}[j] > Target{st}[j+1]) {
                Target{st+1} := Target{st}   // carry the other holes over unchanged
                Target{st+1}[j] := Target{st}[j+1]
                Target{st+1}[j+1] := Target{st}[j]
                st++
            }
        }
    }
}

This is an interesting mapping because the paradigmatic bubblesort involves mutable storage, but this version builds a list of successively refined permutations. Wikipedia of course doesn't define bubblesort this way. Programmers can make this leap with some thought. Can the concept of "bubblesort" be defined such that something so fundamental is abstracted away? I'm still wrapping my brain around monads, but I think this is what they are good for.
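To make the permutation-list reading concrete, here's a standalone Haskell sketch (function names are mine): instead of mutating one array, each swap prepends a fresh copy of the list with two adjacent entries exchanged, so the whole history of intermediate permutations comes out at the end.

```haskell
-- Bubblesort as a list of successively refined permutations.
bubbleSteps :: Ord a => [a] -> [[a]]
bubbleSteps xs0 = reverse (foldl pass [xs0] indices)
  where
    n = length xs0
    -- the same index order as the imperative loops above, zero-based
    indices = [j | i <- reverse [1 .. n], j <- [0 .. i - 2]]
    pass acc j =
      let cur = head acc
      in if cur !! j > cur !! (j + 1)
           then swapAdj j cur : acc   -- record a new permutation per swap
           else acc
    swapAdj j l = take j l ++ [l !! (j + 1), l !! j] ++ drop (j + 2) l

main :: IO ()
main = mapM_ print (bubbleSteps [3, 1, 2 :: Int])
</antml>```

The last element of the result is the sorted list, and the intermediate lists are exactly the Target{st} sequence from the pseudocode above.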