Tuesday, December 28, 2010


I've been thinking about languages a lot lately. Which is kind of a joke, given the title of my blog, but I actually mean "I've been thinking about them more than usual". Specifically, my thoughts keep returning to the Blub hierarchy as proposed by Paul Graham.

I'm not sure what's on top.

PG claims "Lisp", I've seen many that shout "Ruby", I've met many that claim "Haskell". In fact, if you participate in programming discussion for any length of time, there's a pretty good chance that you'll meet someone for every language (other than C) claiming it's Omega. It's ridiculous, of course. All most of them are really claiming is "This is the most powerful language I know how to work with", which is not the same thing as "the most powerful language". It's easy to see that trying to compare in any supposedly objective sense would cause giant hatestorms and various infighting. So people are perhaps justified in making statements like

"You don't compare a hammer with a screwdriver, but you use the one that fits your task & way of thinking/education/needed level of abstraction the most. Also, since the one doing the comparison is biased by the fact that he knows at least one of the two, or at least prefers one of the two, it is hard to find a truly objective criteria for comparing them (exceptions exist)." -Rook, programmers.SE

when discussing language comparison. That was an answer from a question about whether language comparison was useful. As of this writing, it has been closed and re-opened twice, and the original asker has accepted (then unaccepted, then accepted again) a joke answer. This is perhaps more telling of the culture of programmers.SE than of the question, but it doesn't seem like an uncommon response. People duck comparisons precisely because languages are half tools and half religions, and no one wants a crusade declared. But, well, you need to compare.

"A language is a tool. That said, I've seen really, really crappy tools before. No one wants to work with a hammer whose head is liable to fly off and hit another carpenter in the stomach. Likewise, if you noticed your fellow worker's hammer was in that shape, you'd probably steer clear of them when they were using it. It's also important to really understand which tool it is. You can't use a screwdriver and hammer interchangeably (though some try desperately). Hell you can't even use all hammers interchangeably; you need a sledge for some things, a mallet for others and a tack for yet others. If you use the inappropriate tool, then at best, you'll do a poorer job, at worst you'll injure yourself or a co-worker." -me, programmers.SE

Graham goes further, stating that not only can you compare languages in terms of power, but that there is therefore such a thing as an empirically best language. As a note, I agree with him, but "which religion is best?" is a question you just don't discuss in polite society, so I haven't pushed the idea on any forum I frequent. It makes sense though. No one would disagree that Assembly < Cobol < Python on the power scale (I'm defining "power" as a non-specific mix of expressiveness, terseness, maintainability, readability and flexibility). And even admitting that simple truth exposes you to the idea that there's a ladder, or tree, or at least concentric circles of languages with one (or a relatively small group) taking the prime position.


Graham puts Lisp there, but he's making the same claim that any Ruby ardent or avid Haskeller is expressing; "Of all the languages I know, this one is the most powerful". The thing is, I haven't heard many convincing arguments to the contrary. The best argument aimed at Lisp these days is that it's slow, and even then, slow in what sense? It can certainly do the job of server software, or even local desktop/console software on today's powerful systems. Remember, Lisp was called slow back when 1GHz was the sort of processing power you paid many thousands of dollars for. I have more than that right now in my $300 netbook. We're fast approaching an age where a phone you get for free with a subscription is more powerful. "Slow" just isn't a good enough strike against a language to discount it anymore. Other than that, people complain about the parentheses, which is an empty complaint at best, and typically a trolling attempt. The only good argument against Lisp as Omega comes from an unlikely source.

"I don't think it's necessarily Omega. The Haskellers and MLers say 'Well, from where we sit, Common Lisp looks like Blub. You just don't understand the brilliance of strong inferred typing'. And they may be right. Of course, Common Lispers look at Haskell and say 'Well, Haskell's really Blub, because you guys don't have macros'. It may be the case that there is no Omega, or that Common Lisp and Haskell are on different branches of the lattice, and someone's gonna find a way to unify them and a few other good ideas and make Omega." -Peter Seibel, Practical Common Lisp Talk at Google

It's an interesting fact that practitioners of either language can point to lack of features in the other. That has some pretty obvious corollaries as well.

  1. There may be such a thing as the most powerful language right now, but it may involve trade-offs (I don't know what it is, but one exists. I'll call it "Alpha" so as not to offend anyone).
  2. There is such a thing as the language that will be the best for the next 10 to 100 years (This one may or may not exist in some form today; it might be unified from several current languages as Seibel alludes. I'll use his name and call it "Omega").
  3. There is such a thing as the most powerful language that could exist on current machine architectures (This one almost certainly doesn't exist yet, and may never be embodied in an implementation. It's just the limit, in the calculus sense, of what we can hope to achieve with a language along the axes of expressiveness, terseness, maintainability, readability and flexibility. This one I'll call 0).

I'm not sure what Alpha is. I'm not sure anyone knows, because as I've said, people tend to bind that variable to whichever is the most powerful language they currently know. 0 is far away, and I won't even try talking about it today, because I don't have anywhere near enough information to make a decent guess at what it'll look like. So what does Omega look like? Well, Graham effectively says it's Arc (or what Arc will evolve into). Others variously substitute their own languages. There's a sizable community which thinks it's Haskell. Some ardents think it's Lisp. A few would like you to believe it's Java, despite the recent turbulence between Oracle and Google. And there are a couple of personalities in the industry who are vigorously pushing either Ruby or C#. Yegge echoes Seibel pretty closely

"[T]he Wizard will typically write in one of the super-succinct, "folding languages" they've developed on campus, usually a Lisp or Haskell derivative." -Steve Yegge, Wizard School

It's a line from one of his humorous, fictional pieces wherein he describes a Hogwarts-like school that churns out wonder-kid programmers, but it still seems like a vote for the Haskell/Common Lisp unification theory. It might happen. If it does, it'll be a race between the Haskellers and Lispers to out-evolve one another. In order to converge, Haskell needs to strap on prefix notation and macros, make IO easy (rather than possible), and blur the line between run-time, read-time and compile-time. Lisp needs declarative matching definitions, laziness, currying (possibly eliminating the separate function namespace), strong types and a few small syntactic constructs (function composition and list destructuring leap to mind first). Lisp has a longer list to run through, but keep in mind that because it has macros, almost all of them can theoretically be added by you as you need them, rather than by CL compiler writers as they decide it's worth it.
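That "add it yourself" point is easy to demonstrate. Here's a sketch (my own illustration, not a library feature) of one item from Lisp's list above, laziness, added in a few lines of user-level macro code:

```lisp
;; A memoized thunk: DELAY defers evaluation, FORCE runs it at most once.
(defmacro delay (expr)
  (let ((val (gensym)) (done (gensym)))
    `(let ((,val nil) (,done nil))
       (lambda ()
         (unless ,done
           (setf ,val ,expr
                 ,done t))
         ,val))))

(defun force (thunk)
  (funcall thunk))

;; (force (delay (+ 1 2))) => 3, and the (+ 1 2) runs at most once.
```

It's nowhere near Haskell's pervasive laziness, but it shows why the macro argument matters: this went in without waiting on a compiler writer.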

It's also worth noting that the last point in Haskell's list is a pretty tricky proposition. How do you blur read/compile/run time when one of your goals is to have a complete type system? Well, REPLs for Haskell exist, so I assume it's possible, but making it part of the language core doesn't seem to be a priority at the moment (and probably won't be for a while due to the performance hits it imposes, and the perception performance hits still have among programmers at large). That's not the only hard bit either language would have though. How do you implement full currying and optional/default/keyword/rest arguments? Haskell purports to solve the problem by defaulting to currying, and giving you the option of passing a hash-table (basically) as an argument to implement flexibility. Lisp gives you &rest, &body and &key and very simple default argument declaration, but "solves" the currying issue by making currying explicit. Neither language's solution satisfies, because sometimes you want flexible arguments (and counter-arguing by saying "well, if you need them, you've factored your application wrong" is missing the point; expressiveness is a measure of power, remember, and having to think about the world in a particular way is a strike against you in that sense), and sometimes you want implicit currying (this is perhaps most obvious when writing in Haskell's point-free style, and if you've never done so, I doubt I could convince you).
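To make the explicit-currying complaint concrete, here's roughly what the CL side has to spell out by hand (Alexandria ships a curry much like this; the sketch is just to show the call-site noise):

```lisp
;; Partial application, spelled out. Haskell gives you this implicitly;
;; in CL both the definition and every use are explicit.
(defun curry (fn &rest args)
  (lambda (&rest more-args)
    (apply fn (append args more-args))))

;; Haskell writes (+ 1) and applies it to 2. In CL:
;; (funcall (curry #'+ 1) 2) => 3
```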

As a Common Lisper, there are a bunch of things I'd like to steal from Haskell, if I could. The pattern-matching definitions are certainly useful in some places, list destructuring would help, and function composition seems useful (though this is, like defmacro, the sort of construct you have to understand first, in order to find places that it would greatly simplify). I'll check later, but I have a sneaking suspicion that someone has already lifted all of the above into a library somewhere on github or savannah. Even if not, list destructuring and function composition seem like they'd be easy enough to implement: the former as a call to destructuring-bind, the latter as a simple fold.
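For what it's worth, the composition half really is tiny. A portable sketch (Alexandria already provides a compose much like this):

```lisp
;; Right-to-left function composition as a fold, like a chain of
;; Haskell's (.) operator.
(defun compose (&rest fns)
  (if (null fns)
      #'identity
      (reduce (lambda (f g)
                (lambda (&rest args)
                  (funcall f (apply g args))))
              fns)))

;; (funcall (compose #'1+ #'length) '(a b c)) => 4
```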

From the other side, there are already two projects underway; Liskell is a competing compiler to GHC that has prefix notation and outputs the same machine code, and Lisk is a pre-processor for GHC that takes specific prefix-notation forms and converts them programmatically back to Haskell source code before invoking the compiler. Lisk's creator talked briefly about macros, but the project is early enough along that nothing much more specific is out there right now (I'm watching his github repo with interest though).

I haven't a clue how to place my own bet. I tried starting this paragraph both with "My bet's on Lisp..." and "My bet's on Haskell...", but each beginning got to a logical dead end within two sentences. It doesn't seem like one can completely absorb the other. But, if Haskell + Lisp makes Omega, we'll see what it looks like shortly (by which I mean ~10 years) because cross-pollination is already happening, and it's not a far jump from there to full-on unification. Or maybe things get bloodier as the preview to Barski's Land of Lisp implies, who knows.

Either way, we'll see soon enough.

EDIT: rocketnia posted a particularly thoughtful response to the above post at the Arc Forum. He points out that there may not be an Alpha, Omega and 0, but rather "[L]ocal optima that can't be unified into Omega". I could have sworn I addressed this point (and acknowledged it, but stated that I was more interested in the unification idea today), but my only mention of it is "...with one (or a relatively small group) taking the prime position." Apologies. He also explains a lot about how macros might coexist with a strong type system.

Tuesday, December 21, 2010


Just a quick update this week; I intend to record my thoughts on Bluetile (and I guess possibly xmonad by extension, but I get the feeling you could hammer the latter into a workable solution).

To start with

Why a Tiling WM?

I actually get asked this at work, so I won't assume that you're automatically on-board with the idea of using a tiling window manager. The most common questions are "Why?" and "Isn't it hard learning all those keystrokes?" The second is the easier question, so I'll answer it first; yes. But a good window manager should let you adjust keybindings[1], and the point here is to make your environment fast, not so easy to learn that the office secretary could use your computer in a pinch.

The answer to the first question is basically that.

It makes you faster.

Think about your editor. Actually, if you don't use Emacs, think about your editor. If you use Emacs, you already know what I'm talking about here; just skip to the next heading, where I give you the lowdown on Bluetile. Think about how you use that editor. Do you grab your mouse, and head over to the file menu every time you need to copy and paste something, or do you just Ctrl-c Ctrl-v? I'm hoping this is a ridiculous question; of course you use the keyboard shortcut when you can. It's faster. It would be utterly ridiculous to have to do everything with the mouse. Well, that's basically why. When you realize that the keyboard is so much faster, following that thread to its conclusion tells you that, except in special circumstances[2], you should use the keyboard as your primary input peripheral. If you analyze your mousing actions on a day-to-day basis, it'll occur to you that you spend a lot of time in a few different ways.

  1. Browsing the net (where you use the mouse to click on links and right-click on various things).
  2. Running programs (either from the dock on OS X or from the Start menu/desktop icons on Linux/Windows)
  3. Moving, sizing and adjusting windows (especially if you've got multiple, large screens. I typically have my editor, browser, debugger, a terminal window and maybe a movie to watch in the background. As I type this, I'm watching a talk on "Models and Theories" by Peter Norvig, which I can heartily recommend.)

The first point is something that you'd want a keyboard-driven browser for (I use Conkeror for preference, though most people seem to have decided to live with the mouse in the context of their browser), but 2 and 3 are both things that a good tiling window manager will solve for you. Depending on the manager, you either get a "run" command (a keystroke that brings up a little input where you can type the name of the program you want to run), or a keystroke for the most common programs, or both, which means that you don't need to rely on the mouse to run programs. You just need to hit Win-p and type emacs or (in my case) hit C-t C-e. Either of these is faster than grabbing the mouse, getting to your desktop, moving the cursor over and double-clicking on the Emacs icon.

Moving, sizing and adjusting is typically done in order to get maximum use of your screen real-estate. For my part, I rarely want overlapping windows, but I always want as much of my screen used as possible. The way tiling WMs work is by automatically laying out any windows you open so that they take up as much space as you need (either by letting you specify splits and groups as in StumpWM, or by letting you manage layouts in xmonad). By remembering a few extra keystrokes, you free yourself entirely from the mouse.

So that's why.

Bluetile (really)

That brings me to Bluetile. I've been using StumpWM for my purposes, but I wanted to try out the competition. Bluetile is a derivative of xmonad, the Haskell-based tiling WM, with an aim of being easy for beginners to get into. They do this, kind of ironically, by putting in mouse-oriented controls and by running on top of Gnome instead of standalone. That's pretty sweet, actually, and it does lower the barrier to entry. The trouble is that it doesn't do a very good job solving the problems I mentioned above (so while it's easy to get into, I doubt it would do a good job convincing beginners that tiling WMs are worth the trouble). First, it provides on-screen icons for navigation (each of which has a keyboard counterpart; I'm just bemoaning the waste of screen space), and it keeps toolbars and gaps between windows so that you can still see your start bar and background. I can see no reason for the gaps; the toolbars are kept so that you can still click on windows and drag them around, which sort of defeats the purpose.

That's all nitpicks though, and you could argue that beginners would find it easier than the full-keyboard control of something like the standard xmonad or Stump. The big downside for me is actually the awkward screen model. I can imagine things going well on a single ginormous screen, and if I was running on one of the 27" iMacs, there'd be no problem. The trouble comes when you have multiple monitors, because the way xmonad seems to track them is by assigning a different "workspace" to each monitor. I'm sure this fit the program model perfectly, but it means that Alt-Tab only cycles between open windows on whichever monitor you have focus, and you have to pick your "focused" monitor. It's possible that I'm spoiled and this is actually how most TWMs work, but Stump doesn't seem to treat windows on different physical screens as separate areas, and I don't need to pick a working monitor. The other issue it brings up is with workspace switching. Because Bluetile gives you 9 workspaces (and assigns 1 to your first monitor, and 2 to your second), you need to be careful about which you switch to lest you screw yourself. For example, if you open Emacs on one monitor and a browser on another, then switch to workspace 2, they switch places. That is, your Emacs window gets shunted to monitor 2 while your browser gets pulled to the one you were looking at. That's not really what I want if I have multiple screens staring at me. If you then switch to workspace 4 (let's say you have Movie Player open there), your Emacs window stays where it is and workspace 4 replaces your browser in monitor 1. Now, moving back to workspace 1 causes your Emacs window to fly back onto monitor 1 and Movie Player to go to monitor 2. In other words, you're technically back where you started, except that workspace 2 now contains Movie Player instead of your browser. How do you get back to your initial setup? You have to switch to workspace 2 then to workspace 4 then back to workspace 1.
This leaves something to be desired, and demonstrates that conflating "monitors" and "workspaces" creates greater user-side complexity with no visible upside.

Treating monitors this way also introduces an extra level of complexity in the UI; you also need keys to select your primary monitor (they're Win-w, Win-e and Win-r in Bluetile; I don't know what happens if you have more than three monitors). That's too much to keep in my head, and this is coming from someone who uses Emacs. I won't be switching to Bluetile any time soon, and their docs give the impression that this is pretty much how xmonad handles things too, which is sad, and means I'm sticking with Stump for the foreseeable future.

1 [back] - So you don't so much have to memorize them as come up with some simple mnemonics and then assign keys to match those. For example, my .stumpwmrc is set so that C-[keystroke] starts up programs, C-M-[keystroke] runs a work-related shortcut (such as remote desktop, or opening my timesheet file) and M-[keystroke] does wm-related tasks. [keystroke] is typically just the first letter of whatever I'm trying to do (so C-e runs Emacs and C-M-r runs Remote Desktop). This is a mnemonic that makes sense for my workflow. I could easily have just kept track of my most common tasks and bound each to an F key.
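In .stumpwmrc terms, a layout like that might look something like this (a sketch only; I'm assuming bindings on *top-map* so no prefix key is needed, and "rdesktop work-host" is a made-up placeholder command):

```lisp
;; C-[key] runs programs, C-M-[key] runs work shortcuts, M-[key] does
;; wm tasks. Adjust the commands to taste.
(defcommand emacs () () (run-or-raise "emacs" '(:class "Emacs")))
(defcommand remote-desktop () () (run-shell-command "rdesktop work-host"))

(define-key *top-map* (kbd "C-e")   "emacs")          ; program: Emacs
(define-key *top-map* (kbd "C-M-r") "remote-desktop") ; work: Remote Desktop
(define-key *top-map* (kbd "M-n")   "next")           ; wm: next window
```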

2 [back] - For example, if you need to do some drawing. Either of decorative pieces/icon modifications for a web app or for the UI layouts in an environment like Flash/VB. In this situation, it goes without saying that you actually want a tablet, or a trackball, or a multi-touch trackpad, as opposed to a vanilla mouse. The only thing I'd use the traditional option for these days is gaming, and even then, tablets give you an edge if you know what you're doing because of the 1:1 mapping of screen to tablet.

Wednesday, December 15, 2010

Language Smackdown Notes and Smalltalk

I went to the Dynamic Languages Smackdown yesterday, and I'm recording my thoughts before losing too many of them. It was an event hosted by GTALUG (the Greater Toronto Area Linux User Group), and basically involved advocates of 7 languages standing up and talking about theirs.

Before I go any further, as an aside, the irony of a guy who writes a blog called "Language Agnostic" going to something called the "Dynamic Languages Smackdown" is not lost on me. It turns out I wasn't the only language nerd there though, and if nothing else I got a new book recommendation out of it.

The seven languages were Smalltalk, Ruby, Python, Erlang, Lisp, JavaScript and Perl, and the format was

  1. Introduction
  2. Questions from the audience
  3. Questions
  4. Code examples

To summarize, I kicked off (subtly, so no one seems to blame me yet) a line of questioning dealing with canonical implementations that we kept coming back to throughout the talk. Andrey Paramonov had it easy, because Erlang actually has just one canonical implementation (with one attempted port to the JVM that apparently no one takes seriously yet). Other than Erlang, what struck me here is how diverse the pool actually is. I mostly hack Common Lisp these days, and only vigorously play with Ruby, Erlang and Haskell (and PHP at work, but as you can see by the logo bar, I'm not terribly proud of that), so I was under the impression that Lisp was the only freak language that had so many implementations to choose from[1]. That turned out to be a misconception; Ruby has JRuby and IronRuby (both of which purportedly conform to the same spec, and are interchangeable and "official" as far as the community is concerned), Myles Braithwaite put up a slide listing twenty or so different Python implementations (which variously support Python 2.5, 2.9 and 3.x specs), Smalltalk has at least two open-source forks (and gnu-smalltalk, but that wasn't really discussed), the Perl community is apparently split between 5 and 6, and JavaScript has at least three different server-side implementations (the client-side situation is worse).

It's weird, because as I've said, I was under the impression that "a language" meant one canonical implementation with one or two experimental projects, but (at least in the dynamic world) that seems to be false. Odder still, people cite "difficulty choosing an implementation" as one of the principal reasons not to go with Common Lisp. I guess it's more of an excuse after all.

The other big surprise was the age of the advocates. Of the seven, only Alan Rocker (the Perlmonger of the group) had the sort of beard you'd expect, and everyone other than Alan and Yanni (the Smalltalk presenter) seemed to be a student. I'm particularly happy about this since Lisp gets cast as the old-man's language, but in reality, programmers my age seem to be more common. Not that "age of the community" is important in any tangible way, just interesting.

"Smackdown" thankfully turned out to be too strong a word; other than a fierce rivalry between the Python and Ruby presenters (and a few low-blows from both of them aimed at JavaScript), everyone was respectful of the other languages there. It was fairly informative, and I'm going to pick up and play with Clojure, a Smalltalk (either gnu or Pharo) and more Python as a direct result.

A note for future presentations in this vein though:

  • Please don't do code examples last. This should have been done up-front with the introductions, and probably allotted 15 minutes or so per language. Alan didn't even get enough time to present his.
  • Either admit that these discussions will take more than two hours, or invite fewer languages at once. The conversations easily could have continued for a further hour or two (and probably did at the pub after the event, but I had work the next day, so I couldn't go).
  • Be prepared with the slides beforehand (anyone else would be able to blame PowerPoint, but this was the Linux User Group, so you don't get that luxury).

Preliminary Impressions of Smalltalk

I did briefly try to get into Pharo this morning, but I found it annoying to say the least. This doesn't mean I won't keep trying; I had a negative initial reaction to pretty much every language I currently know and love. There are some definite initial concerns though, the biggest of which is that Pharo insists that you use its "Environment" (which is only really a big deal because of the way that environment is constructed). It's heavily mouse-dependent (in fact the intro text suggests you get yourself a three-button mouse with a scroll-wheel to get the most out of it), and it insists on handling its own windowing (which means if you got used to a tiling window manager, you are so screwed). The gnu implementation is titled "The Smalltalk for those who can type", so at least I know I'm not alone. Minor concerns about image-based development include things like "How does source control work?" and "How do I use Pharo on a team?", but I'm sure those are resolved and I simply haven't dug deeply enough to have an idea of how yet.

1 [back] - First off, the "language" is split into Scheme, Common Lisp, and Other. In the Scheme corner, you have Racket (formerly PLT), Guile, Termite (which runs on top of Gambit), Bigloo, Kawa and SISC (and a bunch of smaller ones). In Common Lisp, there's SBCL, CMUCL, Clisp, Armed Bear and LispWorks (and about 10 smaller ones). Finally in "Other", you find crazy things like Emacs Lisp, AutoLisp, Arc, Clojure and newLisp (which are all technically Lisps, but conform to neither the Common Lisp nor Scheme standards). This is why I think having a representative for "Lisp" is a joke at a talk like this; which Lisp are you talking about?

Tuesday, November 16, 2010

Not Blubbing

I've been trying to get my mind around Haskell for the past little while. That's not a link to Haskell.org, incidentally, but the way people talk about it, Learn You A Haskell For Great Good may as well be the official site. It's referenced so commonly that people usually call it LYAH. The learning process is not easy going, I have to tell you. Probably because it's both the first statically typed, and the first purely functional language I've tried seriously to learn (as you can see by the logo array above). I'm going to keep at it; this isn't anywhere near the first brick wall I've attempted to headbutt through, but I'm observing some disturbing patterns in my thoughts, and I need to get them out.

It's surprising how tempting it is to say "bah, these Monads aren't worth my time; what do I need them for?"

The same thoughts commonly arise about strong typing and purity; it's really really tempting to drop the learning and just run back to Common Lisp/Erlang/Ruby for my programming purposes. The trouble is, this is precisely where the Blub Paradox strikes. Ok, yes fair, I can say "How can these Haskellers get anything done without macros?", but the thoughts I'm having are surprisingly similar to "This is just like Common Lisp, except for all this weird, hairy stuff I don't understand".

I'm not afraid of looking up the hierarchy, it's just unsettling that I can't tell and may be lulling myself into a false sense of superiority. The worst possible outcome here is that my internal biases rob me of power I might otherwise wield. A bad, but certainly tolerable outcome is "wasting" the time it takes to learn new concepts and techniques that are merely as effective (or slightly less effective) than ones I already know. The best case is clambering through the motivational dip to find new techniques I can apply to unilaterally improve my projects, both professional and personal.

Looking at it that way, it's pretty obvious that the correct (but admittedly, seemingly insane) answer is to keep hitting this wall with my head, and hope it collapses before I do.

Sunday, November 14, 2010

Debian Follow-up, and StumpWM

Ok, so remember how I said I added some of the latest Ubuntu repos to my Debian sources.list just to hit one or two installs?

Don't do that.

For the love of God, don't. Or, at least, remove them from your sources file afterwards. I was happy enough with the stability of Debian this weekend, so I decided to run my install script (which includes things from audio-editing programs to inkscape, to emacs to several languages). Bad idea, to put it mildly. I had no idea what the hell I was doing at the time, so I just hit y ret a whole bunch of times, and in the process fucked up my system. I knew I was boned about the time I saw

removing gnome-network-manager
removing gnome-desktop
removing gnome-tray
removing x-server
[more bad stuff]

whizz by on the terminal window. When I next restarted, I got a friendly little prompt, and that's it. My data was still intact, but I didn't have a network connection for perhaps obvious reasons, so I mounted a usb-stick and got the shit I've been working on for the past day or so off, and prepared to reinstall (I don't know nearly enough about Linux innards to attempt surgery at this point).

I had a thought though; my current installation was Debian Lenny, and there was a "testing" version out called Squeeze ("testing" is in quotes, because by all accounts I've read, it's rock solid by this point). It took a bit of counter-intuitive navigation on the Debian site to get to the squeeze installer, but I guess that's reasonable; they want most people to install the "stable" release, not the "testing" or "unstable" ones. So there, I'm typing to you live from Debian Squeeze, and I have to tell you, it's good. The biggest gripes I had from the last post have been addressed; the new version of Gnome plays nice with two monitors out of the box, and the squeeze repos have more recent installs of the programs I use than Ubuntu. Specifically, I get out-of-the-box apt-get access to emacs23, sbcl 1.0.40, haskell-platform, pacpl and synfig.

So there. Debian beats Ubuntu from my perspective at this point.

At this point, since I was already ass-deep in installs anyway, putting in StumpWM seemed like a logical conclusion. So I did. And I'm in love. It's Emacs for window management. Just as a note, I've found that any software I could describe as "Emacs for [n]" is something I'd probably like. Sadly, between Emacs, Emacs for the web and Emacs for window management, I get the feeling we're about tapped out now. I like GIMP, but it's not exactly "Emacs for images". I've set up a minimal .stumpwmrc file like so

;; Psst, Emacs. This is a -*- lisp -*- file.
(in-package :stumpwm)
(message "Loading rc ...")

;;; Program definitions
(defcommand emacs () () (run-or-raise "emacs" '(:class "Emacs")))
(defcommand browser () () (run-or-raise "conkeror" '(:class "Conkeror")))

(defcommand reinit () () (run-commands "reload" "loadrc"))

(define-key *root-map* (kbd "b") "browser")
(define-key *root-map* (kbd "C-q") "quit")
(define-key *root-map* (kbd "C-r") "reinit")

;;; Things that happen on StumpWM startup
(run-shell-command "/usr/bin/trayer --SetDockType false --transparent true --expand false")
(run-shell-command "nm-applet --sm-disable")

just to get everything up and running (trayer is needed to get the nm-applet working so I can has internets).

It's occurred to me that, now that my window manager is a Lisp machine, I could hard-wire "web-jumps" into my environment with Firefox. Not sure if it'd be worth giving up the keyboard shortcuts, but I would get HTML5 support, and all of my scripting would be done in Lisp at that point (rather than a JS/Lisp split). I'm really not up for that this weekend, but I'll keep playing with Stump. So far, it's good stuff.
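A web-jump in that setting could be as small as this (a sketch only; the command name and URL template are my own illustrations, and StumpWM's contrib modules have fancier versions of the same idea):

```lisp
;; Prompt for a query in the wm, then hand a search URL to the browser.
(defcommand wikipedia (query) ((:string "Wikipedia: "))
  (run-shell-command
   (format nil "firefox 'http://en.wikipedia.org/w/index.php?search=~a'"
           (substitute #\+ #\Space query))))

(define-key *root-map* (kbd "W") "wikipedia")
```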

Thursday, November 11, 2010


I'm temporarily writing in Gedit (lacking the internet connection I'd need to apt-get install emacs23 and bend it to my will). Already I find myself annoyed by the pretentious little toolbar, tab array and pretty icons at the top of the screen, and the useless little status bar at the bottom. On my widescreen monitor, I probably wouldn't have noticed, but this is a pretty severe infringement on a netbook's screen real estate.

That's not what I wanted to talk about though.

I ended up installing Debian Linux on three of my machines, just to find out that it's about the same deal as Ubuntu these days, right down to the window manager. It installs fewer things by default, and you can't download a single-disc installer for it, and you can't REALLY install it without a network connection, but I still tried it. It even accepts apt-get commands, so I don't have to port my startup scripts. Though I did have to add a couple of repositories in order to get apt-get access to pacpl, emacs23 and a version of sbcl that doesn't crash like a champ when trying to install ironclad through quicklisp. This means that I just have the task of presenting you with diff Ubuntu Debian on three different computers, which is significantly easier than reviewing Debian.

The first thing that struck me is that Debian has a working AMD64 implementation. Theoretically, Ubuntu does too, but here's the thing. I have two Intel machines (an old Pentium II and an Atom) and one AMD64 machine (a Phenom II x4). The Intel startup disks I burned (for each of Ubuntu 9.04, 9.10 and 10.04) never have a problem. Not once. The install has gone flawlessly each time. The AMD machine is the precise opposite story. Not only did I end up making two copies of each startup disk (one at the standard speed, one at the slowest possible) just to make sure the CDs weren't at fault, but I also tried the same install from a bootable USB key. No dice. Ubuntu does not like AMD, apparently, because it took me no less than 17 attempts to get a single copy of Ubuntu 10.04 working on that computer. Once it worked, it wasn't quite smooth sailing either; it would crash out randomly, and I became convinced through hardware surgery that it was a software problem. My Intel machines (including that 8 year old laptop I wrote about a little while back) have never had problems. So, one giant check for Debian.

The missing repos are a bit annoying. Debian by default has access to older versions of some programs I use frequently. This isn't a problem for the most part, until it is. The way I found out was trying to install ironclad through quicklisp, which kept complaining that it didn't have access to sb-byte-rotate (which is an SBCL component that shouldn't need to be installed separately). This was infuriating until I took a look at the output of sbcl --version, only to find out it was 1.0.14. I don't need to be bleeding edge, exactly, but that was a little ridiculous. Now this isn't a huge deal, because all it takes is adding a couple of lines to your sources.list file and running apt-get update, and you're good to go. About 10 minutes after finding out what the issue was, I was sitting on sbcl 1.0.29 (which includes sb-byte-rotate) and merrily quicklisping away. After I did that, I checked back through all the programs I installed and confirmed that Debian has significantly older versions of mplayer, Emacs and conkeror, and it lacks any access to pacpl (even an older version). These out-of-date programs were old enough that fairly significant shortcomings were apparent (some libraries didn't work on SBCL, Emacs lacked some modules I'd gotten used to, and Conkeror lacked some convenience features that you really miss once you're used to them). Again, just adding the Ubuntu repos, then installing fixes the problem, but it's still not zero-effort, so one small check for Ubuntu.
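Sketched as shell, the fix amounts to something like this (the exact repo line is an assumption on my part; match the release name to what you're actually running before pasting):

```shell
# Add an extra repo line and refresh; release name is hypothetical.
echo "deb http://archive.ubuntu.com/ubuntu karmic main universe" | \
    sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install sbcl
```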

In the same vein of missing repos, almost to the point that it should be the same objection, Debian really doesn't want you using proprietary drivers. This is OK; I like the initiative of aggressively pushing for open software. Except that my 8 year old guinea pig has a Broadcom wireless card that I need the bcm43 drivers to use. And my netbook has some Intel-branded card that I still haven't gotten working. Wireless is important, guys. We don't all have (or want to have) a web of CAT5 running across the floors of our computer and living rooms. Ubuntu solves this by leaving the proprietary drivers out, but gives you a simple interface for turning them on if it detects you have some proprietary hardware in your machine. On Debian, you need to research the problem, enable the correct repos and install these drivers yourself. A check for Ubuntu.

Ubuntu has out-of-the-box support for multiple monitors. Debian does too, except that by default, it just mirrors your screen, which isn't what I want. I got it working after about twenty minutes of reading up on the issue. It required some xrandr trickery, and a one-line change to xorg.conf (which thankfully didn't error out, otherwise I would have stuck with one screen). I suppose I could have just upgraded Gnome, but I'm looking to switch over to stumpwm in the near future anyway, so I want a way of using multiple monitors with X11 that doesn't depend on a specific window manager. This is still a small check for Ubuntu (the functionality's there in both, but it's effortless in Ubuntu and flexible in Debian).
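For the record, the xrandr half of that trickery looks roughly like this (the output names VGA1 and LVDS1 are machine-specific assumptions; run plain xrandr first to see yours):

```shell
xrandr                                        # list connected outputs
xrandr --output VGA1 --auto --right-of LVDS1  # extend instead of mirror
```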

So there. The way it looks to break down is that Debian stays on my AMD machine at home, and I keep Ubuntu on the various Intel machines I own. It shouldn't be too big a pain in the ass; given their similarities, they're almost silently interoperable, and I finally have a working home behemoth.

The next step is putting stumpwm and xmonad through their paces. I need a tiling window manager.

In other news, I've started studying up on Haskell (as evidenced by additions to the right sidebar). It started with a question on programmers.SE about Erlang code. It yielded no answers, but a German guy came by and suggested Haskell instead. I dismissed it as another instance of language bigotry, but after a conversation with him (and a healthy amount of research on the language), I've decided it's certainly worth learning. I'm not giving up on Erlang, mind you, Yaws is just too good an app to pass up, but Haskell has some of my mindshare now, and its package manager is full of useful stuff from PDF generators to web markup utilities to language interop facilities (including one for interfacing with Erlang processes).

It's going to be a long December.

Thursday, November 4, 2010

Thoughts on Cooks [sic] Source

No programming discussion this time. Just some thoughts I had today.

So I caught wind of this during my daily Reddit trawl at lunch, and I didn't think much of it. Then I noticed Neil Gaiman and Wil Wheaton were chiming in via twitter. Then I saw a link to the magazine's Facebook page, which was in the process of being defaced. I chuckled about it, then I went back to work. When I got home, it was still happening. The Facebook page was up to 2200 "friends", and the discussion group was filled with people investigating Cooks Source articles from the archives, under suspicion that Gaudio wasn't the first victim of this editor (she wasn't, by the way).

The flippant, asinine email (which I won't copy here; you can find it easily enough by now) probably fueled internet anger more than any kind of legal offense, but this didn't stop people from writing things like "thief", "plagiarist" and "copyright infringement" on the Cooks Source wall. Many of them also wrote "Twatwaffle", and various Chuck Norris/Monty Python references, but hey, this is the internet. Well, while online content certainly isn't 'public domain' as Ms Griggs claimed, the accusations don't seem fair.

First, she's not a thief, in the same way that someone who downloads a movie illegally isn't a thief. That is an offense, but stealing ain't the right one, because the victim isn't actually being deprived of anything when someone makes a copy (many argue that they're deprived of potential profits, but even if that held water legally, it's a moot point in this case because Gaudio's blog is free).

Second, she's not a plagiarist. She might be an idiot with a tin ear for period-piece writing, a poor editor, a worse speller, and a self-righteous, uninformed ass, but she did credit Gaudio for the article (whereas plagiarism would have entailed putting someone else's name on the byline).

The third accusation, "copyright infringement", is prickly too. It's not that she didn't do it; according to the letter of current law, it's pretty clear she did. And to be fair, this is pretty much the one place where I feel copyright is approaching a good thing (someone reproducing others' works without permission or compensation, and for the express purpose of making money by it). I'm still torn about the implications here, because the outrage tells me that even while ACTA is on many minds, and most folks I talk to complain about how restrictive current copyright is, a very vocal part of the internet still considers it a good thing. The really worrying part for me is that a CC-attribution-share-alike license (which all of my online work has been released under) would expressly allow what happened here.

I hate to be the killjoy, or to call down the internet thunder, but I wouldn't have a problem with that part of what she did. Endorsing an open culture doesn't just work when you argue against DRM on music and games, or when someone tries to shut down Pirate Bay, or when people hell-bent on censorship cry "DMCA!" without cause. It means that everyone gets access to your stuff too.

By all means, continue trolling. Assholes deserve to be slapped around just for being assholes, doubly so when they're also grossly incompetent and quick to resort to the "N years experience" argument. But I'm going to go on record as saying that I think reproducing work you find online isn't a bad thing. If it is, it shouldn't be a bad thing. The entire advantage that the internet has over traditional media is that digital copying lets information fly around the globe at almost the speed of light. If you invoke copyright, you slow that information down by artificial means. It's at once inefficient and depressing that people would suffer the lack of light-speed just to keep their ideas under control.

Plagiarism sucks, self-righteous assholes suck, but copying shouldn't.

Tuesday, November 2, 2010

OS Experiments and Project Update

As much as I like sitting at post number 42, it's time for an update.

I've been testing another distro of linux (well, not another distro, really. An Ubuntu derivative called Crunchbang), and I really thought it was going to unseat Ubuntu as my OS of choice. The main killer, it turns out, was xorg.conf.

First, the good stuff. It's simple, it has better keyboard support for launching programs than Ubuntu, and its performance is through the roof. It actually flies on the toy laptop I have lying around for this kind of experimentation (a Compaq Presario R3000 I picked up for $20 about two years ago. It has a 1.4GHz processor, a whopping 256MB of RAM and a 5400rpm, 30GB hard drive). Even with those specs, Crunchbang is usable. So I was all psyched up to install it on my netbook to get a bit more performance out of it, just because I do actually do some development on it when I'm out and about. Two things are making it unacceptable though.

One, and this is the main one, xorg.conf. To the Linux veterans this is probably a joke, but it's pretty difficult for the newbs. There was sparse information on configuring monitors this way back when it was mandatory, and you could never be sure that you were pasting the appropriate things because, apparently, different distros had different conventions about it. When I searched (and I mean everywhere; old linux sites, the various StackExchange sites, the Ubuntu docs and the Ubuntu forums), all I found were horribly outdated pointers, and one or two comments about how "xorg.conf isn't really used anymore, just use xrandr". Which is great, except that xrandr doesn't seem to auto-detect additional monitors in Crunchbang, so I still need to know how to configure them the old way. The other common instruction I found was to copy-paste existing parts of your current conf file. Which I'm sure was great advice at one point, but current Ubuntu/Crunchbang xorgs look something like

Section "Device"
        Identifier      "Configured Video Device"
EndSection

Section "Monitor"
        Identifier      "Configured Monitor"
EndSection

Section "Screen"
        Identifier      "Default Screen"
        Monitor         "Configured Monitor"
        Device          "Configured Video Device"
EndSection
which doesn't seem to contain useful information. All it tells me is that all these options are now configured elsewhere, and I have no idea where that is. The other big problem with xorg.conf editing is that you have to restart the X server each time you want to test it out, and (at least on Crunchbang and Ubuntu) the error messages you get aren't exactly useful. They basically say "there was an error in your xorg.conf".

Thank you.

Which section and line would be useful.

Even if you drill down further into the error log, it typically (again, for me at least) just said that the error was a missing EndSection on the last line, even though there were no missing EndSections, and the last line consisted of nothing else. It took me something like two hours to track down enough info about the basics to get the screen mirrored on an external monitor (before that, the extra screen was just displaying various seizure-encouraging light patterns). Xrandr still doesn't see the extra monitor by the way, so I still have to muck about further with xorg.conf.

Second, and this is a tiny problem almost not worth mentioning, but it's not an issue in plain Ubuntu: Crunchbang doesn't have apt-get access to emacs23. If I want to apt-get install emacs, it gives me version 22.1 or so. I could spend a bit of time figuring out how to script a wget to the GNU ftp site, then run the appropriate commands to untar and install it properly (it's probably just make and make install, though I've never had to do it so I don't know), but my fucking around time-budget is officially used up for this week.
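The build-from-source route I'm avoiding would probably look something like this (the version number and mirror path are assumptions, and you'd likely need the build dependencies first, something like apt-get build-dep emacs22):

```shell
# Hypothetical from-source Emacs build; untested on my end.
wget http://ftp.gnu.org/gnu/emacs/emacs-23.1.tar.gz
tar xzf emacs-23.1.tar.gz
cd emacs-23.1
./configure
make
sudo make install
```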

So there. That's why I'm keeping Ubuntu 10.x. For what it's worth, until I hit the iceberg that is xorg.conf, this was going to be a blog post about why I switched. Seriously, I had notes ready and everything. So if you don't need more than one monitor for whatever reason, or you're a xorg wizard and still on Ubuntu, give it a try.

Other than my OS experimentation, I've also been toying with the formlet system I shamelessly lifted out of PLT Racket (still at the GitHub page, but now there's documentation), and I also released that CSS module I wrote about a little while back. I honestly wasn't going to, but those 12 or so lines of code found their way into every single project I've worked on since. Code reuse is good, I hear. The project page is here, and it is likewise accompanied by a nice little Markdown-enabled documentation sheet.

The big additions since you last saw these projects are

Formlets now support recaptcha.

You like how I'm adding high level support like this before the thing can even generate checkboxes, huh? Well, it might be a bit dumb, but I feel no shame in admitting that the order of operations is self-centered. Which is to say, I add features as I need them, not as they make sense in theory. It just so happens that I had call for text inputs, textareas, passwords and recaptcha before one of my projects called for checkboxes, radio buttons or select boxes. I'm sure it'll change soon.

CSS compilation

This one's obviously from the css generator. Basically, a task called for a several-hundred-line stylesheet. While it wasn't performing too badly, I realized that eventually, there would be a call for CSS that you could cache (as opposed to the inline styles then generated by cl-css). So, I added compile-css, which takes a file path and some cl-css-able s-expressions and outputs them as flat CSS. Tadaah! If you're running Hunchentoot (or any other server really, cl-css is simple enough that I can confidently call it portable) from behind Apache or nginx, you can now have the front-facing server serve out a flat-file stylesheet and still get the benefit of declaring your CSS more succinctly in Lisp.
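In practice, the call looks something like this (the output path here is invented; the s-expressions follow the same cl-css convention shown in earlier posts):

```lisp
;; Hypothetical usage: write a flat, cacheable stylesheet to disk
;; with cl-css's compile-css, then let Apache/nginx serve it.
(compile-css "/var/www/static/style.css"
             '((body :margin 5px :padding 0px :font-family sans-serif)
               (".box-one, .box-two" :width 200px :float left)))
```

After that, the front-facing server hands out the flat file and Hunchentoot never sees a stylesheet request.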


Like I said earlier, there's some simple docs included with both projects, which you can thank GitHub for. Seriously, the main reason I wrote them is that there's this little notice that tries to guilt you into putting in a README if you don't have one already. It supports markdown, textile, rdoc, and a bunch of other pseudo-markup languages meant for displaying plaintext while still supporting automatic conversion to HTML. Mine are written in markdown, mainly because I'm already familiar with it, and I managed to find a decent Emacs editing mode for it.


I'm probably late to this party, given that quicklisp is now in public beta stages, but I set up ASDF packages for both formlets and cl-css, and hooked them up to the CLiki. So unless it (the CLiki, I mean) goes down again, you can install them both by doing

(require 'asdf)
(require 'asdf-install)
(asdf-install:install 'cl-css)
(asdf-install:install 'formlets)
EDIT: Ok, so ASDF-ability doesn't seem to be playing nice at the moment. I swear I tested the things out by installing them that way myself. I'll see what's up later today.

EDIT2: Still not sure what's going on here. It looks like ASDF doesn't like https?

EDIT3: Ok, so ASDF seems to assume that all well-formed web addresses start with "http://", which means that Google Docs, GitHub and other hosts that use https:// are a no go. The current solution I'm resorting to is just chucking cl-css and formlets tarballs up on my Linode. CLiki has been updated, so the ASDF packages finally work. Now I need to talk to Zach to get this stuff added to quicklisp...

Thursday, October 14, 2010


That's just about what I spent the last four hours feeling like. I've been working semi-feverishly on a version of the formlet macros for Common Lisp. Automatic validation is surprisingly difficult when you break it down. If you want to do it well, I mean. The thing could easily have been half-assed in half the time, and three-quarters the code. But I meant to do it well, so it took a while. It's still not anywhere near perfect. If you tried to get it to generate a form called "form-values", I imagine it would snarl at you like some Lovecraftian horror. I'll be plugging the holes over the next little while with judicious use of with-gensyms, but that'll only add one layer to a construct already six dreams deep.

It's fucking bizarre, I tell you. On days like this, I can sort of see why people stay away from Lisps in general. Not all minds can be made to twist in on themselves indefinitely; mine was barely capable of six levels, like I said. By which I mean: the formlet system I wrote is made up of a function and a macro (show-[name]-form and def-form). show-[name]-form is a function that calls the show-form macro to generate partial HTML and invoke the form-element macro, which expands into the actual low-level HTML boilerplate. def-form is a macro that expands into the appropriate show-[name]-form function, and defines a validator function, itself composed of no less than four nested macros.

It sounds like a complete goddamn birdsnest, and it kind of is, but every layer of complexity is warranted, as far as I know (if it isn't, please, PLEASE tell me that, and point me to simpler code that does what this does).

Here's the thing. If you're just looking at how to generate the HTML on forms (à la the PLT Formlet system), it's ridiculously easy. I could have gotten away with a macro and a half instead of the layer cake that I ended up with, but the issue there is that that way wouldn't have saved me much time or code. The hard part on a form is not displaying it. That is the easy part. If I may say so, the trivial part. If your system does nothing else, then it is balanced on a precarious edge between the twin pits of "break-even" and "not worth using". The difficult part is validation. At the high level, it sounds simple (which is probably why I was foolhardy enough to attempt it);

  1. Take in the form results and a list of predicates.
  2. Run the predicates over the results.
  3. If they all passed, do something, otherwise, send them back with a little note saying what they need to fix.
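Stripped of the whole macro layer cake, those three steps boil down to something like this (every name here is invented for illustration; the real thing is considerably hairier):

```lisp
;; Minimal sketch of per-field validation: run each predicate over
;; its field, collect the error messages for the ones that fail.
(defun validate (results predicates messages)
  "Return NIL if every field passes, otherwise the failed fields' messages."
  (loop for result in results
        for pred in predicates
        for msg in messages
        unless (funcall pred result) collect msg))
```

An empty return means "do something"; a non-empty one means "send them back with a little note".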


But those three steps (if you wanna do them properly) have so many moving parts that it necessitates many, many macros. For starters, there are fundamentally two types of forms; the type where you need to validate each field (like a registration form, or other long list of inputs), and the type where you need to check the whole form in aggregate (like a login form, where you really only care whether you were just handed a valid name/password pair, and in fact you don't want to tell the user which of the two they got wrong).

That second group runs out of situations fairly quickly; you just need to display one error message for the form and send the user back, or let them through if they got it right. Done.

The first group is what caused most of the work. First off, each input needs its own predicate, and its own failure message. If you want to provide good signage, it's also not sufficient to and the list of predicates over the inputs; you want to iterate through the full list no matter how many mistakes you find, and then mark them all off in the error listing. When you get back, you need to display each error next to the appropriate input, and (for non-passwords), you want to keep any inputs the user sent you that validated ok. As if that weren't enough, those pesky users like labels too, so that they can see what they're filling out, and you need to support (at minimum) input type=text, input type=password, textarea, select and checkbox if you want to be useful (cl-formlets doesn't, by the by; this 0.0001 release only supports inputs, passwords and textareas because I don't need the others for the app I'm currently writing. Stay tuned though). radio can be nice, but you can get by with just select. The end result is that you need to track

  1. field names
  2. field types
  3. user input for each field
  4. a predicate for each field
  5. an error message for each field

and good god, do the interactions make me want to headdesk. #2 specifically sounds easy, but gets mean fast when you think about it. I won't go into the details, but keep in mind that it's not enough to just keep track of a type property and switch it out; a select has a fundamentally different tree structure underneath than a textarea, which is again fundamentally different from an input. When I say different, I mean that they track user input in different ways, need different things changed out when they error, have different consequences when setting their value (and different methods of setting it, too) and behave differently on screen.

I'm not posting the code here, it'll be at github. If you can do better, please, please do so, and let me know.

In the meantime, I honestly feel like I should be spinning a top and watching it warily.

Monday, October 11, 2010

(defun my-thoughts (work time reflection) '(

I gotta tell you; when I started out with PLT Scheme, I wouldn't have thought I'd end up here. Here being an Emacs-using CL-slinger.

It just sort of snuck up on me. It's really embarrassing now, but the thing that tipped the scales most strongly in favor of PLT when I was starting out was the goddamn shiny IDE. It didn't enter into it how terse or how flexible the system was, all that mattered was that it be simple to use. This may be a primary driving force behind the general IDE craze that seems to be raging through development communities lately. It might just be my perspective, but it's feeling awful lonely over here in Emacs land. There are some popular adherents out there, but by and large, people I talk to these days hack primarily in MSVS or Eclipse. Hell, I went to the Toronto Lisp Users group last week, where you'd think there would be lots of support for Emacs, only to find that it was me by my lonesome. One guy hacked on Clojure in Eclipse, two of them used LispWorks and one didn't comment. The IDEs are winning in terms of number of users at least. Not sure whether it's a win on the productivity end, but hey.

So I guess it's not that embarrassing.

Once I got hooked by the IDE, and the docs, and the package, it all just seemed so nice. Certainly better than my days hacking on Python/PHP (and even slightly better than my half-year of toying with Erlang). I never did figure out the profiler, but the macro stepper was really cool, and having errors highlighted with little arrows in DrScheme is so sweet that I actually started welcoming them for a while there. Slowly though, stuff changed. Without even realizing it, I was spending more and more time in Emacs. Whether in an editing mode, or the built-in GIT mode, or eshell, Emacs was starting to become my window manager. Before I noticed the change, I was using run-scheme rather than the IDE. I still had to pull it out for macro stepping, but it was tolerable.

Then I realized that I was really using three or four different libraries out of one category, and all of them were available for Common Lisp. Then I realized that CL also has a PDF-generating library. Aged and imperative, yes, but at least I wouldn't have had to roll my own that way. When the realization finally hit me that Emacs won in my mind, I sat down and thought about what really makes sense for me development-wise. Turns out that if you already know Emacs, and you already know a couple of lisps in addition to CL, SLIME is the best IDE you could hope for. Along with the built-in GIT support, swank, Lisp-Mode keybindings and Auto-Complete mode, SLIME's REPL/macroexpander/documentation-lookup/profiler push the environment over the edge.

So I hack on Common Lisp now, I guess. Man, I'd better get around to replacing that logo bar, it's getting pretty dated.)

Wednesday, September 29, 2010

Lisp and CSS

So the Reddit/Y Combinator spike seems to have died down, which means I can return to blissful obscurity. Not that arguing with Jay freaking McCarthy of PLT Racket and getting to thank Xah Lee for his Emacs tutorials wasn't the high-point of my day yesterday, but I sort of write these posts in order to get stuff out of my head rather than to have them read.

I got to the point of needing some CSS in a lisp app a little while ago, and while I was typing it up, I thought "Hang on, self, I'm sure there's a way to get this done in lisp instead of repeating yourself this much in CSS". Checking online, sure enough there's a library for it (css-lite, which is available through asdf).

The asdf version seems to have some bugs in it, sadly.

* (asdf-install:install 'css-lite)

[snip installation trace...]

* (require 'css-lite)


* (css-lite:css (("body") (:height "50px" :width "100px")))

body {
height, '50px', width, '100px':nil;

That's not exactly what I meant.

I'm sure the GitHub version has this stuff resolved, but by this point I was already on the "How hard could this possibly be?" train of thought.

Inputs and outputs are strings by the css-lite convention, so it seems like it should be pretty simple to output. Well, it is.

(defun css (directives)
  (apply #'concatenate
         (cons 'string
               (mapcar #'(lambda (dir) (format nil "~(~a { ~{~a: ~a; ~}}~)~%" (car dir) (cdr dir)))
                       directives))))

* (defvar test `((body :margin 5px :padding 0px :font-family sans-serif :font-size medium :text-align center)
             (\#page-box :width 1100px)
             (".box-one, .box-two" :width 200px :float left :overflow hidden :margin "0px 5px 5px 0px" :padding 0px)))


* (css test)

"body { margin: 5px; padding: 0px; font-family: sans-serif; font-size: medium; text-align: center; }
#page-box { width: 1100px; }
.box-one, .box-two { width: 200px; float: left; overflow: hidden; margin: 0px 5px 5px 0px; padding: 0px; }
"


It could be more efficient if I used reduce instead of having mapcar and concatenate as separate steps.

(defun css (directives)
  (flet ((format-directive (d) (format nil "~(~a { ~{~a: ~a; ~}}~)~%" (car d) (cdr d))))
    (reduce (lambda (a b)
              (let ((final-a (if (listp a) (format-directive a) a)))
                (concatenate 'string final-a (format-directive b))))
            directives)))

* STYLE-WARNING: redefining CSS in DEFUN


* (css test)

"body { margin: 5px; padding: 0px; font-family: sans-serif; font-size: medium; text-align: center; }
#page-box { width: 1100px; }
.box-one, .box-two { width: 200px; float: left; overflow: hidden; margin: 0px 5px 5px 0px; padding: 0px; }
"

* (defvar box '(:margin "32px 10px 10px 5px" :padding 10px))


* (css `((body ,@box :font-family sans-serif :font-size medium :text-align center)))

"body { margin: 32px 10px 10px 5px; padding: 10px; font-family: sans-serif; font-size: medium; text-align: center; }
"

That should do it. So yeah, there's a quick and dirty non-validating CSS generator. It took about 10 minutes to write (and most of that was trying to figure out why it wasn't working, then realizing that I'm no longer using Scheme and that foldl therefore doesn't exist), which is probably less time than it would take to go online, download a fix for css-lite, install it and try it again. I would submit it to GitHub or something, but 6 lines of code seems somehow unworthy of its own module.

I feel this also validates my statements about the format function in the last post. In Scheme, this CSS transformer would have to resort to another couple of function calls. It's that short in part because I was able to take advantage of CL's embedded formatting DSL.

Sunday, September 26, 2010

Yegge Strikes Back from the Grave

So I've been fooling around with some new stuff.

Actually, before I tell you about that, quick update. dieCast is now in the early beta stages. It's actually capable of supporting games, but it's got a long way to go before it's something I'll be proud of. We're about three months away from a public beta from where I'm sitting. For the testing stage, I'm ending up using some creative-commons enabled sprites. I'll probably keep them as a subset of the final sprite lineup, if the license permits, but the intention is to get original artwork up.

Ok, now then.

I've been fooling around with some new stuff.

Or rather, some very very old stuff. Over the last couple of days, I've decided to pick up Common Lisp and Portable Allegro Serve again. I gave up on trying to install PAS on SBCL after about twenty minutes though, and promptly switched out to Hunchentoot, and all I really have to say is



I already have some projects underway with PLT Racket (including Diecast), but goddamit, I think I made the wrong decision. The quote from Yegge goes something like "Most newcomers independently come to the same conclusion; Scheme is the better language, but Common Lisp is the right choice for production work." Bottom line, I remember disagreeing a long time ago, but I've uh... independently come to the same conclusion.

PLT Racket seems to be as good as Scheme gets. It has built-in support for everything from hashes and regexps to x-path and http. It has file-system bindings, guaranteed tail-recursion and pretty much the best package system I've seen (from the downloaders' perspective, at least, Scribble is a bit of a bitch to get familiar with if you plan to actually document your own code).

So why am I having serious second thoughts?

Lots and lots of little things. Now that I've actually had some time to play with both contenders, mastered both IDEs, played with both macro debuggers, ran web servers on both and lived in each language for a decent length of time, I think I can finally compare them, and gain some sliver of insight from the comparison. And it's a damn close race. The biggest differences turn out not to be what everyone was pointing at. I have a link in the sidebar over there pointing to "Scheme vs. Common Lisp", which purports to tell you the differences between the two, and maybe three of those actually trip you up to any significant degree.

So here's the big stuff. PLT Racket vs Common Lisp from a young hacker's perspective.

1. Documentation

The PLT docs are badass, and centralized. Second to none. They have Ajax search running over every function in the implementation (and you need it, given the amount of stuff in there), code examples all over the place, and a comprehensive set of tutorials perfect for beginners. Common Lisp probably has more overall information out there, but it's scattered across CLiki, the Common Lisp Directory, the HyperSpec, various indie package pages and Bill Clementson's archives. M-x slime-documentation-lookup helps, but it only searches the HyperSpec. That's plenty of info for the veteran, but (putting myself back in my point of view of about two years ago) it wouldn't be sufficient for someone who's, say, looking for a complete listing of format-string options (as a public service: the way you find that is to look up format, then scroll about half-way down the page, where you'll be pointed to section 22.3 for more information on formatted output).

2. Package Repositories/Installation tools


I can't believe I managed to go so long without adding this note. As of the end of 2010, Quicklisp also exists, and is awesome. That means the gripes I had about ASDF-ing things are moot, since for the most part you don't need to. Thanks to ql:system-apropos, it's also fairly easy to find CL packages, so I guess PLT Scheme (now Racket) no longer wins this one. I have no idea what they've been up to for the last year, though, so they've probably made a thousand and one improvements all over the place too.

Thu, 11 Oct, 2012
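
For the record, the Quicklisp workflow that update refers to is about this short. This assumes Quicklisp is already installed and loaded into your image; the exact search results depend on the current dist.

```lisp
;; Search the current dist for systems with "server" in the name.
(ql:system-apropos "server")

;; Download, compile and load a system (plus its dependencies) in one step.
(ql:quickload "hunchentoot")
```

No hunting for tarballs, no hand-editing central registries.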

Common Lisp has ASDF, which is awesome compared to the tools found in most other languages I've used, but PLT beats it pretty handily. It's basically the same story as documentation: there's technically more stuff out there for CL, but it's scattered, and since development is distributed, you get some duplication of effort. There are four or five different HTTP servers, for example, and at least three HTML-templating libraries. Granted, there's a clear "best" in each category, but you really need to do your reading to find that out. PLT has a smaller offering (the biggest gaping holes are in document generation; there is no such thing as a good PLT Racket PDF/PostScript generator), but it's neatly organized, indexed, and accessed by typing (require (planet [package-name])) in the declaration section of whichever file needs the new package. No hunting, no missing GPG keys. These first two points are probably the ones I'll miss most from the PLT offering.

3. The Web Server

This is actually a place where more choice would do PLT Racket some good. It has a pretty cool web server, but it's far from fast in practice. It also seems to crash more often than I'd like for a production app; nothing like once per week, but it's happened a few times so far. The real trouble is how it behaves. It's basically Tomcat, minus the copious installation headaches: you get all your code in order, make sure it'll run, then execute. And that's it. If you need to make changes (like, say, while developing web apps) you tweak the code, restart the server, then re-navigate to the page you were just on, because it auto-generates new URLs each time. This is an exercise in frustration, and is one reason I've kept up my PHP and Python skills this entire time. The languages may be slightly worse, but they're interpreted, so a change doesn't need to bring down the whole server. That's how Hunchentoot works: you load your files, then start the server. If you need to make a change, you evaluate the new code against the actual, still-running server. I wouldn't use this in the wild, but during development it is hot, buttered, bacon-wrapped power. That alone seems to be enough to pull CL into the lead as far as my use of it is concerned.
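
To make the contrast concrete, here's a minimal sketch of that Hunchentoot workflow. The port number and the handler are made up for illustration; the calls are standard Hunchentoot API, though details vary between versions.

```lisp
(ql:quickload "hunchentoot")

;; Start the server once; it keeps running inside the Lisp image.
(defvar *server*
  (hunchentoot:start
   (make-instance 'hunchentoot:easy-acceptor :port 4242)))

;; Define a page. To change it later, just edit and re-evaluate this
;; form from the REPL; the next request sees the new version.
;; No restart, no re-navigating.
(hunchentoot:define-easy-handler (hello :uri "/hello") ()
  (setf (hunchentoot:content-type*) "text/plain")
  "Hello from a still-running server")
```

Re-evaluating the define-easy-handler form is the whole "deploy" step during development.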

Now, in PLT's defense, they're aware of this. There was discussion about keeping Lisp's inherently reflective nature, and they decided against it, because it trips up so many people that they figured it wasn't worth the headaches. So the server forces you out while it runs, and the REPL bugs you to do a clean run every once in a while if your source has changed. I appreciate the sentiment, because it really was made to be a teaching tool, but I'm mightily tempted by the dark side regardless. There's also a concerted effort from PLT to keep things byte-oriented. For example, there is no supported way to get a list of POST/GET parameters out of a request (other than "manually") in PLT Scheme; "manually" entails getting a list of binding objects out of the request and mapping over them to get a list of byte-strings. There are also a few other little gotchas (like how awkward it is to create a link whose target is another Scheme function, and how URL-based dispatch is, for whatever reason, NOT the default).

4. The Format function

This may sound like a nitpick, but I'm not into the nitpicks yet; this is an actual difference. In PLT Racket, you're stuck with (format "~a" blah). It only accepts a handful of substitution directives, rather than CL's richer set of formatting, flow control and kitchen sink. It also always returns its result, with no option of printing to standard out (you have to use printf for that). I didn't think this would make as big a difference as it did, because I'd gotten used to the simpler Scheme format, but hot damn is it awesome to be able to do something like (format nil "~a ~{ ~a: ~@[ ~a ~]~}" (car blah) (cdr blah)) instead of resorting to several function calls for the same effect.
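
A few small examples of what CL's format does in a single call. The first argument nil means "return a string"; t means "print to standard output".

```lisp
;; Iteration with a separator that's skipped after the last element:
(format nil "~{~a~^, ~}" '(1 2 3))   ; => "1, 2, 3"

;; Conditional output; ~@[ emits its clause only for a non-nil argument:
(format nil "~@[x = ~a~]" 42)        ; => "x = 42"
(format nil "~@[x = ~a~]" nil)       ; => ""

;; And the kitchen sink:
(format nil "~r" 42)                 ; => "forty-two"
```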

5. Plists

Basically the same story as above. I'd forgotten how useful these actually are for day-to-day purposes. I mean, I still bust out hashes for bigger stuff, but little tasks all over the place are made just a tiny bit easier by using plists instead of alists.
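
For anyone who hasn't leaned on them: the whole plist interface is basically getf (the keys here are made up for illustration).

```lisp
(defvar *opts* '(:host "localhost" :port 8080))

(getf *opts* :port)              ; => 8080
(getf *opts* :timeout 30)        ; => 30, the supplied default for a missing key
(setf (getf *opts* :port) 9090)  ; update in place
```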

6. Function names.

Ok, now we're into picking nits. It's not a huge deal, but the Scheme conventions are cleaner and more consistent: if you're dealing with a predicate, it ends with "?"; if you're dealing with a side-effecting function, it ends with "!". Common Lisp has a grab-bag. Some predicates end with "p" (as in listp), but others are just the unmodified word (as in member). Also under this category is the Lisp-1 vs Lisp-2 thing; because CL has separate namespaces for functions and variables, there are two let forms (let for variables and flet for functions) and two definition forms (defun and defvar). Because functions get treated differently from other values, some things are a bit trickier in CL; for example, while you can still do (apply (lambda () 42) '()) or (mapcar (lambda (num) (* 2 num)) '(1 2 3 4 5)), you can't do (setf foo (lambda () 42)) followed by (foo) (you would either need to call foo with (funcall foo), or define it as (setf (symbol-function 'foo) (lambda () 42))). The Scheme equivalent is (define foo (lambda () 42)), after which (foo) does exactly what you think it will.
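
The function/variable split in a nutshell (foo and bar are throwaway names):

```lisp
;; A function stored in the variable namespace must be funcalled:
(defvar foo (lambda () 42))
(funcall foo)   ; => 42
;; (foo) here signals an undefined-function error.

;; Installing it in the function namespace makes (bar) work:
(setf (symbol-function 'bar) (lambda () 42))
(bar)           ; => 42
```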

7. Macros

PLT Racket has define-syntax-rule and define-syntax, as well as library support for define-macro, which is a copy of CL's non-hygienic defmacro. In practice, I found myself using define-macro most of the time, so the switch shouldn't be too big a problem here. Admittedly, define-syntax made it extremely easy to define recursive macros, but Lisp has a number of iteration options that make that close to a non-issue.
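
For reference, a plain defmacro looks like this. with-timing is a made-up example, and the gensym is the by-hand substitute for hygiene:

```lisp
(defmacro with-timing (&body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&took ~a ticks~%"
                 (- (get-internal-real-time) ,start))))))

;; (with-timing (some-expensive-call)) runs the body, prints the
;; elapsed internal time units, and returns the body's value.
```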

8. Iteration

This one's probably the tiniest deal there is. Common Lisp has a bunch of iteration constructs, from the loop macro to dolist to mapcar and friends. Scheme really only has map and tail recursion, and I sort of preferred that. The reason I list this as a "tiny deal" is that my particular CL implementation (SBCL, if you must know) does tail-call optimization anyway, so I can just keep up my wicked, functional ways if I want to.
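
The same job in three of those idioms, just to show the range:

```lisp
(mapcar (lambda (n) (* 2 n)) '(1 2 3))     ; => (2 4 6)
(loop for n in '(1 2 3) collect (* 2 n))   ; => (2 4 6)
(let ((acc nil))
  (dolist (n '(1 2 3) (nreverse acc))      ; dolist itself is for side effects
    (push (* 2 n) acc)))                   ; => (2 4 6)
```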

9. The IDE

For beginners, PLT wins it. I remember having this conversation with myself earlier: a binary IDE portable across OS X, Linux and Windows, with nice buttons for things like "Run" and "Macro Step". It's perfect when you're starting out, because it's nothing like the near-vertical learning curve of Emacs, but it ultimately limits you. Since I started with PLT Racket, Emacs has become the main program I use. Seriously, something like 75% of all my computer time is spent there, and the rest is split between Klavaro and Conkeror. I'm contemplating getting a shirt that says something along the lines of "Emacs is my master now". Long story short, once you know Lisp (or, to be more precise, three Lisps), Emacs is by far the better IDE.

Now that I've laid down all my gripes, the pattern emerges, and it's definitely what Yegge was talking about. Scheme is built to teach and learn (and possibly to prove things formally). Even PLT Racket, whose developers are self-declared hackers who go above and beyond the R6RS spec to provide a pretty decent production candidate, errs on the side of making things easier for beginners rather than easy for veterans, and it stresses academic application over production use. Common Lisp is the precise reverse. It exacts a heavy toll in experience and patience, and the reward is a measure of power beyond the other options. It's also crafted (or perhaps "evolved" would be a better word) for production rather than theoretical purity. I can appreciate that.

So there. If you want the executive summary:

PLT Racket: Theoretical purity and consistency before practical considerations. Centralized development, indexed for your convenience. Make it easy to learn; consider the newbies.

Common Lisp: Get shit done first; consistency and purity are acceptable collateral damage for terseness. Distributed development; find what you can. Make it powerful; the newbies had better watch and learn first.

The choice is pretty simple. Common Lisp wins as soon as you know what you're doing, but while you're getting your bearings, go for PLT Racket. For what it's worth, I won't abandon it. I still plan to put out a decent PostScript generation library for PLaneT before I start working on CL hardcore, I'll always keep it around as a second scripting language (along with Ruby), and I still have several Scheme projects to maintain, but the days of consistently typing M-x run-scheme instead of M-x slime are over for me.

Friday, August 20, 2010

The Upside of Apathy

I think it's finally time to record my thoughts on the new workplace (and, in the process, muse over some things that have been on my mind lately). And yeah, that bleak-sounding title is the reason I'm about twice as satisfied here as I was back at I Love Rewards.

First things first, full disclosure: my first project here was in PHP. And they do their share of .NET programming (and I suspect a lot of the driver work gets done in C or maybe even assembly, but that's unverified). They interact with the PostScript standard a lot, but no one does it by hand. Because we serve the medical industry, there are also large parts of CCR and HL7 slung around on a regular basis. That's particularly bad if you understand what those acronyms mean. CCR is an XML-based standard circa 1998, and it exemplifies everything that eventually made sane minds give up that markup for today's lightweight formats. It's the only data standard I've seen that (and I promise I'm not making this up) you have to pay to get a copy of. It seems like that would defeat the purpose. HL7 is what you would expect out of a Unix shop from the early nineties: a pipe-delimited text stream whose definitions document runs to a hundred pages or so.

OS-wise, I'm in the minority with one other developer who thinks we ought to be using Linux-based servers, and I'm actually the only one electing to go with Linux + Emacs for my desktop needs. The place is thick with MSVS and the like. Finally, I'm the only one who uses source control (and it's Git, as if you had to ask by this point).

So why am I happy in the face of these conditions? It's because, for the most part, I've been hacking Scheme and Elisp through the past few months, and using whatever additional tools I wanted to.

Scheme? At a place that also potentially uses assembly? Well, yeah. It turns out that no one outside of IT is religious in the least about which languages, tools or systems you choose to use. They really only care that the result is business-applicable in some way, cheap and fast. So as long as you hit those, they don't particularly care. I couldn't get away with this at many other places; being a Linux-toting, Emacs-using Scheme hacker in the middle of a vast Windows ecology, I mean. There are remarkably few job openings in the field, especially in the bustling, skilled-programmer metropolis of Toronto. If you want a capital-J Job here, you need to know (and be willing to work with) Java or C#, the COBOLs of the modern world. Every once in a while someone wants a Python or PHP hacker for some contract work.

But apathy has an upside, like I said, and I'll take it.

It's really interesting to me that, once again, passionate, caring (but woefully uninformed) MBAs seem to fuck everything up. If Ken were a hands-on leader, I'd probably have had to learn Java or C# too, or at the very least content myself with constant PHP. This is the Paul Graham effect in full swing; big companies use languages and tools that get sold to management, not to the devs (and to be fair, if the devs got to pick, we'd probably all be working in C++ and it would be worse). Little companies use whatever juice gives them the highest ratio of miles to milliliters.

This is the crux of the problem, though. The decisions here are such that most people have no idea what the right answer is. Most business people pick what's advertised; they want to go with the flow. Most devs pick what's fastest in the machine sense; once they're done writing reams of code for their hello world, they want the machine to execute as fast as possible (contractors seem to want whatever gives them as many billable hours as possible while not raising too many eyebrows, but that's far from unique to the IT business, so I'm disregarding it). It's pretty obvious to the reasoned observer that neither is the correct answer. They might accidentally yield correct answers, but there's nothing about either thought process that makes the right answer more likely than (or even as likely as) the wrong one. The specific answer depends on the specific situation, granted, but I put forth that in all cases the correct answer is the most expressive language that will fit on the target hardware. You do want to optimize for speed, but not from the machine's perspective. Code needs to be fast and easy to make and maintain, rather than to run, which means you want as little a gap as possible between what you can express and what you want to express. The complaint that gets levelled at non-C languages is that they're slow, which is true until you consider that ~2.4GHz dual-core, 6GB RAM setups are now common on the home market. If you're working on the latest 8-bit IC from Atmel, ok, yes, use C or Forth and constrain yourself to either manual bit-twiddling or the stack. If you're on the desktop, or god help you, a fucking server cluster, and still managing memory yourself, then someone (and it may have been you) has made a poor choice on your behalf. This is the sort of obsessive behaviour whose logical conclusion is hand-counting your cereal flakes every morning to make sure you're getting precisely 1024 of them. I think we can agree that the sane thing to do is fill the bowl, hope the offsets average out in the long term, and get on with your day.

It seems that democracy doesn't really work here either, because, as I hinted above, I doubt that most programmers would pick the most expressive language. They'd either pick what they know just because they happen to know it already, or they'd pick a language that lets them code as close to the metal as possible. So how do I know they're wrong, as opposed to me being wrong? Well, that's where the issue goes to shit. The reason I know they're wrong is that I've worked with the low-level languages (C and Forth), I've worked with what I'll call the mid-level languages (Python, Ruby, Java, C#, PHP), and I've worked with four or five Lisps (which I stereotypically place at the top of the progression). As per PG's Blub Paradox, I know that the non-Lisps are missing some crucial features that I find myself using on a fairly regular basis (not even macros, interestingly; the prefix notation itself seems to pack a lot of the punch on its own). Also, I've clocked myself as "fast" with some of these languages and "slow" with others. The reason the issue goes to shit here is that this argument will only convince people who have already given several Lisps a fair try and have worked in some other languages from across the continuum. Those people don't need convincing; if they've come that far, it's a good bet that they've stuck with Lisp in some form, or that they're one of the few vocal pessimists around, loudly proclaiming "all languages suck balls". The people who need convincing are the ones currently convinced that C# or Java alone represents the limits of "programming", or the ones who look upon learning new languages as procrastinating.

It's ironic to consider, but it seems like it might just be easier to sell Scheme to the managers and let the traditional pyramid bear the change out. Managers understand the argument "this way is faster", and don't particularly care about the rest as long as you can prove it. There's trouble this way too, though. You see, it's fairly easy to make the argument "this way makes me faster", as long as it's true and you can demonstrate it. But the argument "this will make you faster" is another matter entirely; for starters, in a shop of C programmers, it's patently false. It'll take months of genuine practice for a team that doesn't already know Lisp to reach a higher level of productivity. There's exactly one other way to go, and that's competing at the company level. In other words, start a bunch of Scheme/CL shops and watch them out-compete the ever-loving fuck out of the Algol descendants. Watch them driven before you, and blah blah blah. It seems like it would work, as long as we stop the AI-winter thing from happening again, which looked like it was the result of promising too much and delivering too little, while focusing too much on the math and linguistics and not enough on the business end.

In other words, LISP for business logic instead of research. I think I can do it. I'm certainly in a position to. Even if not, I'll try my hardest and let you know how it goes.

Monday, August 2, 2010

(define (update-again)

Oh fer chrissake, ok.

I figure it's about time to update, because I haven't lately. It turns out that you don't stop being busy just because you've stopped working for an employer that demands 20 overtime hours a week; you just get busy doing other stuff. I'm beginning to think that "busy" is either contagious or hereditary.

Mostly the stuff going on here is work on that clandestine project I mentioned a little while ago. Basically, I got to thinking that I really wanted to pick up D&D again. The really sad thing is that my group of friends is pretty geographically scattered, and we don't really feel like driving six hours to meet up for eight hours of dice-rolling, only to have to weather the drive back. That's not the type of ordeal you can undergo every two weeks.

My first thought was "ok, there has to be something that will let me and my friends play D&D online". I took a look around and found Virtual Tabletop (which has a pricing system/website so byzantine that I rate their chances of putting together an intuitive in-game interface at very near zero), Open RPG (which has the typical open-source afflictions of a horrible UI and make-based installation, and on top of that neglects OS X, which most of my friends use) and Web RPG (whose link doesn't go anywhere, because the only evidence I could find of the project is a page talking about why, specifically, it sucks).

And that's the point at which I started hacking together my own little PLT Scheme app to let me play D&D with my friends. I've registered diecastgames.com (which currently goes to a GoDaddy parking page) and put together a development blog in the meantime. It isn't at the point where I want anyone using it yet, but it should be up and running in beta form in the next little while. It'll be ugly as sin until I put the art together, at which point we're ready to go. Then I'll just have to convince one of my friends to DM.

Which shouldn't be hard, all things considered. If they get uppity about it, I can just point out that I built a friggin online D&D system from the ground up.)