Moving to new URL
Hi,
I'm moving to http://sambangu.blogspot.com. Please update your feed! I was doing an FTP upload from blogger.com to my ISP, but the comments never worked, and it just doesn't seem worth the trouble to do it that way.
Posted by Chris Bogart at 9:35 PM 1 comments
So, I was trying to learn category theory, having gathered it was a good thing to know for people interested in programming language theory. I kind of got bogged down, though -- it's castles built on castles built on castles in the air, and I was losing track of reality in all the abstractions. My book would talk about the category of Groups, for example, so I'd go looking up groups in my faithful Wikipedia. I'd get the concept -- it's not that tricky in itself -- but obviously understanding the definition of a group does not make one a group theorist.
So, I finally declared "mission accomplished," brought the troops home to a ticker-tape parade, and started in on a group theory textbook instead (Knapp's Basic Algebra). I'm enjoying it a lot more -- it starts up where I left off with math classes, so I don't feel like I'm a freshman in a senior-level class.
Anyway, this book starts with a kind of review of polynomials, simultaneous equations, matrices, vector spaces, etc. -- stuff I've seen before, but with a more theoretical spin that either was missing from my high school algebra or I've simply forgotten. The book points out that factoring integers into products of smaller integers is very much like factoring polynomials into products of polynomials of smaller degree, and a prime number is like a polynomial you can't factor any further (like, say, x^2 + 9, when you're working over the reals). Of course you can also add, subtract, and multiply polynomials and always get another polynomial, and you can even do division with remainder, just as with integers. If a topologist can't tell a coffee cup from a doughnut, then a group theorist can't tell a polynomial from an integer.
Well, maybe she can; obviously integers and polynomials have some different properties. But I'm tickled enough with the idea that a polynomial is a kind of number that I created a polynomial instance of Num in Literate Haskell as an exercise, reproduced below for your edification.
Since I'm also still learning Haskell, I welcome any critiques you might have of my code. I like the fact that the data structure I used (just a list of coefficients) turned out to make for short code; but it doesn't come out very readable. The fact that prepending a 0 to a list multiplies the polynomial by X seems as cryptic as it is convenient.
> module Main where
> main = print testP
I represent a polynomial as a list of its coefficients,
with the constant first, then the X, the X^2, etc. So
3x^2 - 4 would be Poly [-4,0,3].
> data Poly a = Poly [a] deriving (Eq)
Now we evaluate by filling in the unknown. Note that
ax^2 + bx + c evaluated at x is the same as
(ax + b)x + c, so you can evaluate it by taking the constant
term, popping it off the list, then multiplying x by the
polynomial interpretation of the rest of the list.
> evalPoly :: (Floating a) => (Poly a) -> a -> a
> evalPoly (Poly []) x = 0
> evalPoly (Poly (c:cs)) x = c + x*(evalPoly (Poly cs) x)
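The recursion in evalPoly is Horner's rule. As a standalone sketch (my code, not part of the post's module), the same idea collapses to a single right fold over the coefficient list:

```haskell
-- Horner's rule over a bare coefficient list (constant term first):
-- evaluate c0 + x*(c1 + x*(c2 + ...)) with one foldr.
horner :: Num a => [a] -> a -> a
horner cs x = foldr (\c acc -> c + x * acc) 0 cs

-- horner [-4, 0, 3] 2 evaluates 3x^2 - 4 at x = 2, giving 8
```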
To add two polynomials, add corresponding coefficients. If
one is shorter than the other, you want to use zeroes. I don't
know how to do this beautifully with standard functions, because
zip cuts off when the shortest list runs out. So I defined
a helper function zipLong, that keeps going till the longest
list is done, filling in a default value.
Ten points to whoever emails me with a shorter, cleaner,
idiomatic way to do this.
> addPoly :: (Floating a) => (Poly a) -> (Poly a) -> (Poly a)
> addPoly (Poly r) (Poly s) = Poly $ map (\v -> (fst v + snd v)) (zipLong r s 0)
> zipLong [] [] d = []
> zipLong (x:xs) [] d = (x,d):(zipLong xs [] d)
> zipLong [] (y:ys) d = (d,y):(zipLong [] ys d)
> zipLong (x:xs) (y:ys) d = (x,y):(zipLong xs ys d)
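For what it's worth, here is one shorter approach to the challenge above (a sketch of my own, not the post's code): instead of padding with a default value, recurse on both lists and just pass the leftover tail through when one side runs out, since the missing coefficients are zeros and adding zero changes nothing:

```haskell
-- Add two coefficient lists of possibly different lengths.
-- When one list is exhausted, the other's tail passes through
-- unchanged (its partner's remaining coefficients are implicitly 0).
addCoeffs :: Num a => [a] -> [a] -> [a]
addCoeffs xs       []       = xs
addCoeffs []       ys       = ys
addCoeffs (x : xs) (y : ys) = (x + y) : addCoeffs xs ys
```

This sidesteps zipLong entirely because addition has an identity element; zipLong remains the right tool when the combining function has no such identity.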
Multiply a polynomial by a scalar. I have a feeling this
could be defined somehow under an instance of Num so that
the * operator is automatically overloaded. Not sure how.
> scalarPolyMult :: (Floating a) => a -> Poly a -> Poly a
> scalarPolyMult c (Poly rs) = Poly $ map (*c) rs
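On the overloading question just above: the Num instance defined later in this post already provides it. Because fromInteger wraps an integer literal as a constant polynomial, an expression like 3 * p is just a product of two polynomials, one of degree zero. A minimal standalone sketch of the mechanism, using a hypothetical wrapper type V of my own invention to keep it tiny:

```haskell
-- fromInteger is how Haskell interprets bare integer literals at a
-- Num type, so defining it makes 3 * v typecheck with no separate
-- scalar-multiplication operator.
data V = V Double Double deriving (Eq, Show)

instance Num V where
  V a b + V c d  = V (a + c) (b + d)
  V a b * V c d  = V (a * c) (b * d)  -- componentwise, just for the sketch
  negate (V a b) = V (negate a) (negate b)
  abs (V a b)    = V (abs a) (abs b)
  signum (V a b) = V (signum a) (signum b)
  fromInteger n  = V (fromInteger n) (fromInteger n)

-- 3 * V 1 2 elaborates to fromInteger 3 * V 1 2
```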
Since (ax^2 + bx + c)x = ax^3 + bx^2 + cx,
then Poly [c,b,a] * x = Poly [0,c,b,a]
> multByUnknown (Poly rs) = Poly (0:rs)
To multiply two polynomials, P1 * P2, where P2 = (dx^2 + ex + f),
rewrite as f*P1 + P1*x*(dx + e); the first term is a scalar multiplication
and the second has one less degree, so we can recurse on it. The
(Poly bs) term below is (dx + e) in this example.
> multPoly :: (Floating a) => (Poly a) -> (Poly a) -> (Poly a)
> multPoly (Poly []) cs = Poly []
> multPoly (Poly (b:bs)) cs =
> addPoly
> (scalarPolyMult b cs)
> (multPoly (Poly bs) (multByUnknown cs))
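Written directly over coefficient lists, the same recursion is a list convolution. A standalone sketch of my own, equivalent in spirit to multPoly but not a drop-in replacement for it:

```haskell
-- Multiply coefficient lists (constant term first): peel off the low
-- coefficient b, scale the other polynomial by it, and add the rest
-- of the product shifted up one degree by consing on a 0.
mulCoeffs :: Num a => [a] -> [a] -> [a]
mulCoeffs []       _  = []
mulCoeffs (b : bs) cs = add (map (b *) cs) (0 : mulCoeffs bs cs)
  where
    add xs       []       = xs
    add []       ys       = ys
    add (x : xs) (y : ys) = (x + y) : add xs ys
```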
Define a polynomial as a number. I'm cheating a little by
picking a dumb definition of signum; really a polynomial
with an unknown isn't positive or negative before a number
is plugged into it, so I'm just defining it as the sign of
the constant term.
> instance (Floating a) => Num (Poly a) where
> s + t = addPoly s t
> negate (Poly cs) = Poly $ map (\q -> -q) cs
> s * t = multPoly s t
> abs s
> | signum s == -1 = negate s
> | signum s /= -1 = s
> signum (Poly []) = Poly []
> signum (Poly (c:cs)) = Poly [signum c]
> fromInteger i = Poly [fromInteger i]
And define a cheesy way to print out a polynomial
> instance Floating a => Show (Poly a) where
> show (Poly []) = "null"
> show (Poly cs) = concatMap
> (\c -> " + " ++ (show (fst c)) ++ "X^" ++ (show (snd c)))
> (reverse (zip cs [0..]))
Define some examples and some tests.
> p = Poly [2,0,1,2] -- 2x^3 + x^2 + 2
> q = Poly [0,0,0,3,4] -- 4x^4 + 3x^3
> testP = (p * q == Poly [0,0,0,6,8,3,10,8]) &&
> (p + q == Poly [2,0,1,5,4]) &&
> (evalPoly p 3 == 65.0 ) &&
> (3 * p == Poly [6,0,3,6]) &&
> (show p == " + 2.0X^3 + 1.0X^2 + 0.0X^1 + 2.0X^0")
Posted by Chris Bogart at 7:50 PM 2 comments
Here's a fantastic dialog that exactly captures my experiences with SOAP. (via Anarchaia)
Posted by Chris Bogart at 2:19 PM 0 comments
Turns out there is a character input method like the one I described in my previous post.
Sounds like it turns out to be kind of awkward to use. Ah well...
Posted by Chris Bogart at 9:52 PM 0 comments
I was just reading through Niklaus Wirth's paper, "Good Ideas, Through the Looking Glass" (found through Lambda the Ultimate, of course!) where he talks about the Dangling Else problem.
Some programming languages have statements of the form:

if X then Y

and

if X then Y else Z

with no end keyword or bracket, leading to the problem of how to interpret:

if it's Saturday then if it's sunny then have a picnic else see a movie

Do we see a movie if it's not Saturday, or if it is Saturday, but it's cloudy?
An analogous English sentence:

We spent most of our time sitting on the back porch watching the cows playing Scrabble and reading.

It reads funny in English, but we resolve it easily because we know the context. Obviously context is not as helpful for a compiler. Two ways to disambiguate in English would be:

We spent most of our time sitting on the back porch watching the cows, playing Scrabble and reading.

or

Playing Scrabble and reading, we spent most of our time sitting on the back porch watching the cows.

The first suggestion corresponds approximately to Wirth's preference for an end keyword; the comma signals a break of some kind, and the obvious interpretation is that the cows and the Scrabble shouldn't be too closely associated. It's not nearly so rigorous as end, of course. The second suggestion corresponds to reordering the conditional itself:

if it's not Saturday then see a movie else if it's sunny then have a picnic.

Posted by Chris Bogart at 9:45 AM 0 comments
Every software developer uses a variety of languages to develop a program. It may seem like you're developing something purely in Perl or Java or VB or whatever, but in fact you're using a lot of mini-languages to manage the development process:
Posted by Chris Bogart at 8:37 PM 0 comments
In my previous post today I mused about using standard ontologies in programs as a way of grounding the web of connections between your objects in a framework of common terminology.
There is a concept in Buddhist metaphysics called Dependent Origination, which states that nothing exists on its own, but only manifests itself through its connections with everything else in its environment. One illustration of the concept is Indra's Net, an infinite spider web with little silver balls at all the junctions, each reflecting all the others in a way that would gum up any ray tracer.
A program that doesn't refer to much of anything external to itself is like that -- you can define data structures and pointers and files that all point to each other, but it's all meaningless without the interpretation that a programmer or user gives to it in the data they feed to it and their interpretation of its output.
I'd say that Category Theory is a mathematical restatement of Dependent Origination.
Tags: CategoryTheory, DependentOrigination
Posted by Chris Bogart at 4:43 PM 2 comments
It seems to me that natural language program specifications tend to consist of a lot of statements which are all independently true, and only dependent on context for deictic references (i.e., pronouns and pronoun-like references to concepts in surrounding sentences). In other words, I'd claim, you could make sense of a good proportion of a scrambled specification if:
Posted by Chris Bogart at 3:13 PM 0 comments