Monday, February 14, 2011

Quicklisp, Linode, Hacking in the pejorative and other notes...

This week has been kind of a mixed bag for me; I've been thinking about a bunch of stuff, but not enough about each thing to justify a whole blog post. So here's the result of my mental mastication. It's not pretty, but perhaps it will be nourishing.

On serving data

I've been administering my own server for the last little while. First for DieCast (which is on hold for the moment 'cause the server was needed for something else), then to host some ASDF files (which are actually back up now; you should be able to asdf-install:install 'cl-css or 'formlets without serious problems), and now for a couple of websites I'm doing work on. The experience has taught me three things.

1. Common Lisp webapp deployment sucked balls before Quicklisp

My first nginx+Hunchentoot setup took hours. Only some of that was unfamiliarity with the tools, because my second deployment also took hours (fewer of them, though). That's an excellent improvement, but still not good in any absolute sense. The main problem was actually setting up Hunchentoot; it has many dependencies (many of which have several recursive dependencies of their own), each of which needs to be downloaded and evaluated separately, each of which emits at least one compiler warning, and one of which usually fails to download. The worst deployment after the first involved a key ASDF-package hosting site going down. That meant I had to go out and download + install + load all of Hunchentoot's dependencies recursively by hand in order to get them running. Sadly, lacking encyclopedic knowledge of Hunchentoot, this meant I had to try (asdf-install:install 'hunchentoot), wait for it to error out, note the piece it errored on, install that, and try again. Once the server was up it was awesome, but getting it to that state was a pain in the ass the likes of which I'm having trouble analogizing properly. Quicklisp does it in 10 minutes, while simultaneously massaging my aching shoulders. I really hope Zach doesn't start charging, because I get the feeling many Common Lispers would end up owing him their house (he welcomes donations, of course).
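
For comparison, the whole dance under Quicklisp looks something like the following sketch (hedged: the load path is Quicklisp's default install location, and the acceptor class is spelled the way newer Hunchentoots spell it; older versions differ):

;; load Quicklisp itself (assuming a stock install location)
(load "~/quicklisp/setup.lisp")

;; one call pulls in Hunchentoot and all of its recursive dependencies
(ql:quickload "hunchentoot")

;; start a server on port 8080
(hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 8080))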

2. System setup sucked balls before Linode

I used to use Cirrus Hosting, and actually still do at work; we had them before I came in, and they're pretty good, so I don't have a burning need to switch over, but we'll see what's possible once our subscription is up. Basically, I was used to a VPS being more or less just a regular server, except virtual: you spend a chunk of time installing the distro, rebooting, and installing packages. It turns out that if you put thought into the process, a lot of that startup time can be done away with behind the scenes. Linode has put a lot of thought into the process. Going from one Linux distro to another takes something like 5 minutes. I found this out bouncing between different linuxes (linuxen? linuces?); kicking off an install is typically my cue to go make a sandwich, but there wasn't enough time. Needless to say, it was a pleasant surprise the first time a deployment from bare metal to a running Common Lisp server took less than half an hour.

3. Break-in attempts are surprisingly common

If I'm to believe my auth.log, a concerted effort at hacking is made by some jackass roughly every two days. Needless to say, my iptables have been modified. It's different IPs, but always the same MO: they try some random common usernames, fail, and go away. Apparently it's escaped their notice that I switched to RSA keys and disabled password/PAM authentication. To be fair, checking the logs, it seems that before the change to key-based auth, the situation regularly looked like

Feb 10 07:40:04 Invalid user abc from 61.240.36.1
Feb 10 07:40:07 Invalid user abc123 from 61.240.36.1
Feb 10 07:40:10 Invalid user benjamin from 61.240.36.1
Feb 10 07:40:12 Invalid user lstiburekz from 61.240.36.1
Feb 10 07:40:15 Invalid user kent from 61.240.36.1
Feb 10 07:40:18 Invalid user jabber from 61.240.36.1
Feb 10 07:40:20 Invalid user andres from 61.240.36.1
Feb 10 07:40:23 Invalid user dovecot from 61.240.36.1
Feb 10 07:40:26 Invalid user magda from 61.240.36.1
Feb 10 07:40:28 Invalid user alex from 61.240.36.1
Feb 10 07:40:31 Invalid user stefan from 61.240.36.1
Feb 10 07:40:34 Invalid user stefano from 61.240.36.1
Feb 10 07:40:36 Invalid user cristi from 61.240.36.1
Feb 10 07:40:39 Invalid user claudi from 61.240.36.1
Feb 10 07:40:42 Invalid user sarah from 61.240.36.1
Feb 10 07:40:44 Invalid user smokeping from 61.240.36.1
Feb 10 07:40:47 Invalid user fetchmail from 61.240.36.1
Feb 10 07:40:50 Invalid user backinfo from 61.240.36.1
Feb 10 07:40:53 Invalid user umberto from 61.240.36.1
Feb 10 07:40:55 Invalid user mauro from 61.240.36.1
Feb 10 07:40:58 Invalid user jana from 61.240.36.1
Feb 10 07:41:01 Invalid user adriano from 61.240.36.1
Feb 10 07:41:03 Invalid user xenie from 61.240.36.1
Feb 10 07:41:06 Invalid user lea from 61.240.36.1
Feb 10 07:41:09 Invalid user joule from 61.240.36.1
Feb 10 07:41:11 Invalid user Debian-exim from 61.240.36.1
Feb 10 07:41:14 Invalid user unbunutu from 61.240.36.1
Feb 10 07:41:17 Invalid user cacti from 61.240.36.1
Feb 10 07:41:19 Invalid user polycom from 61.240.36.1
Feb 10 07:41:23 Invalid user payala from 61.240.36.1
Feb 10 07:41:26 Invalid user nicola from 61.240.36.1
Feb 10 07:41:28 Invalid user melo from 61.240.36.1
Feb 10 07:41:31 Invalid user axfrdns from 61.240.36.1
Feb 10 07:41:34 Invalid user tinydns from 61.240.36.1
Feb 10 07:41:36 Invalid user dnslog from 61.240.36.1
Feb 10 07:41:39 Invalid user dnscache from 61.240.36.1
Feb 10 07:41:42 Invalid user qmails from 61.240.36.1
Feb 10 07:41:45 Invalid user qmailr from 61.240.36.1
Feb 10 07:41:47 Invalid user qmailq from 61.240.36.1
Feb 10 07:41:50 Invalid user qmailp from 61.240.36.1
Feb 10 07:41:53 Invalid user qmaill from 61.240.36.1
Feb 10 07:41:55 Invalid user qmaild from 61.240.36.1
Feb 10 07:41:58 Invalid user alias from 61.240.36.1
Feb 10 07:42:01 Invalid user vpopmail from 61.240.36.1
Feb 10 07:42:03 Invalid user ldap from 61.240.36.1
Feb 10 07:42:06 Invalid user gica from 61.240.36.1
Feb 10 07:42:09 Invalid user sympa from 61.240.36.1
Feb 10 07:42:11 Invalid user snort from 61.240.36.1
Feb 10 07:42:14 Invalid user hsqldb from 61.240.36.1
Feb 10 07:42:17 Invalid user member from 61.240.36.1
Feb 10 07:42:20 Invalid user chizai from 61.240.36.1
Feb 10 07:42:22 Invalid user yakuji from 61.240.36.1
Feb 10 07:42:25 Invalid user gijyutsu from 61.240.36.1
Feb 10 07:42:28 Invalid user kaihatsu from 61.240.36.1
Feb 10 07:42:30 Invalid user iwafune from 61.240.36.1
Feb 10 07:42:33 Invalid user oomiya from 61.240.36.1
Feb 10 07:42:36 Invalid user seizou from 61.240.36.1
Feb 10 07:42:38 Invalid user gyoumu from 61.240.36.1
Feb 10 07:42:41 Invalid user boueki from 61.240.36.1
Feb 10 07:42:44 Invalid user eigyou from 61.240.36.1
Feb 10 07:42:46 Invalid user soumu from 61.240.36.1
Feb 10 07:42:49 Invalid user hanaco_admin from 61.240.36.1
Feb 10 07:42:52 Invalid user hanaco from 61.240.36.1
Feb 10 07:42:54 Invalid user system from 61.240.36.1
Feb 10 07:42:57 Invalid user tenshin from 61.240.36.1
Feb 10 07:43:00 Invalid user avahi from 61.240.36.1
Feb 10 07:43:02 Invalid user beaglidx from 61.240.36.1
Feb 10 07:43:05 Invalid user wwwuser from 61.240.36.1
Feb 10 07:43:08 Invalid user savona from 61.240.36.1
Feb 10 07:43:10 Invalid user trthaber from 61.240.36.1
Feb 10 07:43:13 Invalid user proftpd from 61.240.36.1
Feb 10 07:43:16 Invalid user bind from 61.240.36.1
Feb 10 07:43:19 Invalid user wwwrun from 61.240.36.1
Feb 10 07:43:21 Invalid user ales from 61.240.36.1

whereas I now merely get the occasional

Feb 12 11:53:12 Invalid user oracle from 212.78.238.237
Feb 12 11:53:13 Invalid user test from 212.78.238.237
Feb 12 12:03:59 Invalid user apache from 79.174.78.179
Feb 12 20:16:59 Invalid user postgres from 79.174.78.179

So it helps, but the regularity of these attacks still surprises me. It seems a bit odd that a script would keep trying after getting a "Permission denied (publickey)" response, so I'm forced to conclude that there are one or two spammers out there manually looking for servers they can break into. That's ... odd. And I can't shake this picture of a 12 year old in some spamming sweatshop somewhere failing to break into my server and missing his quota as a result.
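
For reference, the sshd side of that switch amounts to a few lines in /etc/ssh/sshd_config (my best recollection of the relevant directives, not a complete file; remember to restart sshd after changing them):

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no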

On starting up

So remember back in the prehistoric nineties, when the likes of this strange creature walked the earth? When the PlayStation first introduced the idea of CD-based games to the console market, a friend of mine flatly said he preferred his SNES. When questioned about it, his reasoning boiled down to one word.

"Loading..."

For the youth who never experienced this: a Super Nintendo had no loading screens anywhere. You put the cartridge in, hit the power button, and it went straight to the logo screen. While in-game, moving between areas was instantaneous. It seems like most people working in the consumer electronics industry today have either forgotten that instant startup is really good, or they never thought so to begin with. The latest generation of consoles has loading screens friggin everywhere. A different friend of mine recently purchased a TV that has a 30-second boot cycle, and comes with a network connection for the purpose of getting firmware updates. A fucking teevee. It's hilarious that between the TV boot time and the console boot time (and I won't even mention the install time on the console, because that's really unfair), it actually takes longer to start a game of whatever in his living room than it does on my computer. Weird, because I thought the whole point of consoles was that they were special-purpose devices specifically designed to run games. Entertainment isn't the end of this trend, though; my phone now also takes about a minute to start up (which is fair, I guess, since it basically is a computer now, complete with a flavor of Linux and a web browser). Finally, my parents recently renovated their kitchen and procured for it a, I shit you not, dishwasher that needs to boot before it starts pulling in water.

At what point did this start happening? When the hell was the decision made in the bowels of Sony corporate HQ that it was ok for my display to have a configuration cycle? If this is where the future of TVs is going, I may very well have already bought my last non-monitor display. But beyond entertainment, my greater concern is that the trend of ephemeralization (as elaborated by Graham to mean "...the increasing tendency of physical machinery to be replaced by what we would now call software."), combined with the new human habit of sticking computers into things, means that we are likely to soon have shoes, lip-balm and kitchen cutlery that come with their own fabulously designed and meticulously polished loading screens.

Somehow, I'm not enraptured by this prospect.

On data moving

It's come to my attention that the Canadian government has recently had a nontrivial (and ongoing) tussle with the CRTC and the major Canadian ISPs about whether or not they should be allowed to charge arms and legs for data overages. That last link was actually to the Open Media site, which is organizing a petition against the CRTC's move. If you're in Canada, you should probably sign it. My position is basically that I don't care, because the way I use the internet, 40GB is essentially unlimited. I'm not a Netflix user (though I'm constantly told I should be), I don't torrent games like the kids these days, and downloading Linux packages is a joke if you're running the minimal system I've got over here. The single largest component I install is haskell-platform, which takes something like 600MB. With an M. Even with my fiancee being perhaps the world's biggest YouTube makeup video fiend, we've never actually approached the limit of our plan. My interest in this fight is purely about a theoretically unfettered future: one where data is as free as it could possibly be, and that world includes no limits on how much of it is allowed to move per month (incidentally, that's also why I frown when I see things like this happening; freedom of information includes the right for said information to exist). So I'm against the CRTC here, but seemingly not for the same reason as anyone within a 100 km radius of me.

Friday, February 4, 2011

Heart Ruby

I really do have to update that header more frequently. It's been a good year and a half since I did anything other than some light make scripting in Python, JavaScript may as well be jQuery as far as my recent use of it is concerned, and I haven't done much more than some very lightweight playing in Erlang. Most of my time at work has been spent ass-deep in PHP, which wasn't very pleasant even back when it was one of two languages I knew. The rest of it is tilted heavily towards the lisps (Common Lisp, Elisp, Scheme, in that order), and I'm still trying to get my head around Haskell through some tutorial work and semi-involved practice. The practice will actually increase soon; a friend of mine wants some help with his website, and he's convinced that having it written in a lesser-known language will make it less likely to get hacked (he's had some security troubles lately). I tried to explain that this isn't actually how security works, but he's having none of it. His instructions were "I don't care what you use, as long as it's not PHP". Score.

The last piece up there is Ruby, which I've had an odd relationship with. I tried out Rails a while back, but didn't like the amount of magic involved (and the various "convention over configuration"/security-exploit stories I keep hearing through friends aren't exactly tempting me back). I also tried some Windows automation back when "Windows" was a thing I used for work rather than just for playing 10-year-old video games. We also run Redmine at the office, so I've had to spend a very little bit of time making cosmetic UI changes. The point is, I've yet to write more than maybe 200 lines of Ruby in one sitting, but I still like it. It's clean somehow. Simple. In a way that Python never felt, even though Python's syntactic whitespace forces more visual consistency onto it. Despite my low line-count, ruby-full is still firmly wedged in the ## languages section of my installation shell-script, and the only reason that script isn't itself written in Ruby is that the language isn't bundled with Debian.

I'm musing on this now because I recently received a reminder of how beautiful Ruby can be for simple scripting purposes. I had a problem with my XFCE4 setup. Actually, not a problem, just something that wasn't going quite as smoothly as it might have. I use multiple monitors on each of my multiple machines, you see. My desktop has two, my laptops share an external, and my work machine travels with me, so it actually has two different monitors to interface with depending on where it is. The key is, no matter where I am, the situation is the same: I just want my monitors arranged left to right, each at the highest possible resolution. XFCE doesn't seem to have an option for that, so my initial approach was just to manually check xrandr output and type out the appropriate combination of --output, --mode and --right-of to get it working. It dawned on me the other day that this is pretty inefficient given how consistent the pattern is, and since I occasionally profess to know how to program, I should be able to do something about it. The catch is that step one of the process is parsing the output of a shell command, which surprisingly few languages make convenient. Luckily, Ruby is one of them. My initial pass worked, but it was ugly (and I won't inflict it upon you here). After consulting codereview.SE, it was whittled down to

#!/usr/bin/ruby

def xrandr_pairs (xrandr_output)
## Returns [[<display name>, <max-resolution>] ...]
  display = /^(\S+)/
  option = /^\s+(\S+)/
  xrandr_output.scan(/#{display}.*\n#{option}/)
end

def xrandr_string (x_pairs)
## Takes [[<display name>, <max-resolution>] ...] and returns an xrandr command string
  cmd = "xrandr --output #{x_pairs[0][0]} --mode #{x_pairs[0][1]}"
  args = x_pairs.each_cons(2).map do |(previous_output, previous_mode), (output, mode)|
      "--output #{output} --mode #{mode} --right-of #{previous_output}"
  end
  [cmd, *args].join(" ")
end

exec xrandr_string(xrandr_pairs(`xrandr`))

which is pretty beautiful, as far as I'm concerned.

It's elegant for shell-scripting for two reasons:

First, Ruby has a wide range of options for calling the shell. exec seems tailor-made for the purpose above (it replaces the current Ruby process with a call to the command you pass it), spawn is useful if you want to do things in parallel, and backticks delimit a special string type that gets executed synchronously as a shell command and returns that command's output.
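
A quick sketch of all three (the commands themselves are just placeholders):

#!/usr/bin/ruby

# spawn starts a command in a child process and returns its pid immediately,
# so it's the one to reach for when you want parallelism.
pid = spawn("sleep 1")
Process.wait(pid) # rendezvous with the child whenever you care to

# Backticks run a command synchronously and hand back its output as a string.
puts `uname -a`

# exec replaces the current Ruby process with the command you pass it;
# nothing after this line runs, which is why it ends the xrandr script above.
exec "echo done"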

Second, any string can be a template. This includes special strings like regexes and backticks, which is why you can compose larger regular expressions from simpler pieces, as in xrandr_pairs above. You can stitch #{ } into any string you like, and the contents can be any expression, not necessarily a string. A minor but important point is that I didn't have to call a function to make a string a template (there's no call to printf or format), and the contents are inlined (I'm writing "Foo #{bar} #{baz}" as opposed to "Foo #{1} #{2}" bar baz), which makes the result that much more obvious. Neither would matter much proportionally in a big project, but when I'm working on a 16-line script, I'll knock out every bit of cognitive overhead I can.
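
For instance (a trivial sketch):

host = "example.com"

# any expression can go inside #{ } in a plain string...
puts "pinging #{host} at #{Time.now.hour} o'clock"

# ...in a regex, which is how xrandr_pairs composes its pattern...
name = /^(\S+)/
mode = /^\s+(\S+)/
pattern = /#{name}.*\n#{mode}/

# ...and in a backtick string.
puts `ping -c 1 #{host}`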

That's why I still use it. I never liked Ruby for big stuff, or even medium stuff, but I instantly reach for it to do anything under 100 lines that needs to talk to the shell.

Wednesday, February 2, 2011

Old Machines

I'm not attached to things.

My grandfather was; whenever we'd do some carpentry or light construction, he'd insist that we save the old screws and nails we found. It always seemed weird to me, because even in the old country, nails and screws are things you can get at the hardware store for $20 per 5 lbs, so there never seemed to be much of a point in saving old ones. Whenever I'd point this out, he'd reply "You never know when a nail will come in handy", and proceed to stash stray nails in variously sized glass jars. Ok, yes, this was across the Atlantic, so what he actually said was "Nikad neznaš kad ćeš trebati čavlića", but you get the point. He had a musty attic full of clothes he wore decades ago, books he read once, games he played when he was a kid, travelling cases that had only been used once, and a thousand other treasures that I never got to see, but that he likely also could have discarded with no disadvantages. I'm sure there was some socio-political reason for this, but I'm digressing.

I'm not attached to things, and I can say this as someone who has observed humans attached to things. But I still feel a bit perturbed when someone throws a computer away. I'm sure my descendants will have the same reaction I had to the old nails. "Grandpa, you can buy computers for 2¢ per core, they come with ten free petabytes of memory and a lifetime supply of storage on the Googazon servers. Why are you keeping those old things?" I can already feel the urge to tell those smarmy cunts to get the hell off my lan...

For the moment, though, I'm slightly less than insane for putting old machines to use. Last week I stumbled upon an HP Pavilion circa 1998 (well, I assume it was thereabouts, since it still had Windows 98 installed), with a roaring 566MHz processor, a truly awe-inspiring 64MB of SDRAM, and a massive 15GB hard drive. I've been meaning to set up a backup server for my setup here anyway. I still had to spend some money on a couple of hard drives (about $50 each for 160GB IDE drives; I had one lying around, but the rest of my spares are all SATA. I could have saved some money by getting a couple of adapters instead, but didn't think of it soon enough. I'll get some of these if the drives ever fail) and an ethernet card ($4.99).

The first thing I had to do was remove a few unwanted items.

As committed as I am to reusing old machines, I've still got to admit that there's very little use today for a phone modem or floppy drive. They were fairly easy to remove; just a couple of mounting screws internally. What was slightly tougher was the plastic faceplate that covered the area next to the front-facing USB port; it was held in by a small, springy metallic assembly that I had to lever out with a swiss army knife (I wanted another hard drive to go there).

Next up, I ripped out the 15GB drive it came with, popped in one of my 160GB ones, and threw in that ethernet card for good measure. Then I installed Ubuntu Server 10.10. It could have been Debian, but I wanted to try out the latest Ubuntu release, and there are some things I'd like to do with pacpl that don't seem to work on my Debian machine. The tradeoff is that Emacs seems to misbehave out of the box on Ubuntu, but this isn't exactly going to be a development machine, so that's ok. The only stuff that went on was an SSH server, Git and Ruby (my language of choice for quick and dirty scripting).

Once the system was installed, the CD drive could come out (not about to install MS Word or any such nonsense; any other software that goes on this machine will come in through the network). That turned out to be easier said than done though; it was secured by screws on both sides, so I had to completely disassemble the box to get at it.

The hard drive destined for that position was going to rattle in a slot that size, and while I don't plan to race this machine around the block or anything, it's probably better to be safe. A couple of drive brackets made sure it would stay in place. Shop around if you plan on buying some, incidentally; I just put that link up because it was the first one I found. There were actually a couple of brackets lying around from my last case, so I didn't need to order any. It also seems like you could improvise a set if you didn't feel like buying them.

With everything hooked up, it was time to boot back into the machine.

That incongruous-looking mesh plate covering the top drive is a spare from the same case that had the extra brackets. And yes, I named the machine "orphan". It seemed appropriate. Here's ls /dev showing the new drives (I still haven't formatted them; that'll be a job for next weekend).

And that's it. I dropped it into a little wheel assembly that's been going unused since I got that mammoth tower for my main machine. It gives it a somewhat R2-D2 feel (this may be the start of an art project).

I'll put together some scripts to copy out key directories from my other machines, and that'll be that. I guess I could also use it as a full-out NAS (ok, technically I already am, but you know what I mean) or streaming server, but I'm not sure how far those 566MHz and 64MB of RAM will stretch. In any case, even with the slightly higher price/GB I had to pay for IDE drives, converting this old machine was much cheaper than shelling out for a pre-built.
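
In the spirit of keeping things under 100 lines, those scripts will probably look something like this sketch (hostnames and directories are placeholders, not my actual layout):

#!/usr/bin/ruby

# Pull key directories from each machine over SSH using rsync.
MACHINES = { "desktop" => ["projects", ".emacs.d"],
             "laptop"  => ["projects"] }

MACHINES.each do |host, dirs|
  dirs.each do |dir|
    ok = system("rsync", "-az", "--delete",
                "#{host}:#{dir}/", "/srv/backup/#{host}/#{dir}/")
    warn "rsync failed for #{host}:#{dir}" unless ok
  end
end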

The Microsoft Arc keyboard came in quite handy with this project. It's fairly ergonomic, the arrow oddity isn't as annoying as it seems it should be, and the transmitter is easy enough to move around. It's definitely a step up from wrangling USB cables from my main machine about three feet to the side. My only complaint is that it friggin devours batteries, compelled like some primal beast, always growling for more. That's easy enough to solve, I guess; just remove the batteries when it's not in use. But that's a small annoyance on an otherwise perfect spare keyboard.

Tuesday, February 1, 2011

Confusion of Ideas

On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. - Charles Babbage

Babbage's response was suitably condescending, in the finest grumpy-IT-guy tradition (or perhaps this was just a reflection in his journal, and at the time he merely rolled his eyes, boiling with internal rage). It holds the implicit assumption that, at least while operating a computer, a technician must forsake their humanity and act with perfect precision. The quote usually gets trucked out to ridicule someone who doesn't understand how computers work. You're supposed to be thinking of the CS manager who downloads and runs every .exe attachment, then wonders aloud (perhaps in your direction) why the damn machine doesn't work. "How the hell", we are expected to reflect, "would a computer getting the wrong inputs still produce the correct output? It's ridiculous!" Well, the notion wasn't ridiculous. It was just ahead of its time by a few hundred years.

Saturday, January 29, 2011

Best Buy and Monitor Suspension

Last weekend, I placed an order for something that's been a long time coming.

The same order will also contain a new wireless keyboard I'll be trying out. It does something a bit odd with the arrow keys, but other than that, it seemed to work pretty well when I tried it in store.

It probably won't replace my keyboard of choice; rather, it'll function as a nice spare for working on other machines. It does have a much smaller receiver than the wireless keyboard I currently use in the living room, so it might find a permanent home there.

Now, I'm used to ordering shit from newegg.ca, so maybe I'm a bit spoiled, but this order was far from error-free. So, for the benefit of people who want to get decent pricing on monitor arms, here's a guide to ordering things from BestBuy.ca with BestBuy gift cards.

Step 1 - Shop

Find the stuff you want and get it into your shopping cart (this works the same as on every online store ever).

Step 2 - Checkout

Click "Checkout". If you've never ordered something from BestBuy before, you'll need to register by telling them your email and setting your BestBuy password.

Step 3 - Address

Fill in your shipping and billing addresses (if you don't have them memorized, make sure to write at least the phone numbers down; you'll need to enter them a few times and the last phone field doesn't autocomplete for whatever reason).

Step 4 - Payment Information

Get your gift card out and enter the 16-digit number from the back (be sure to omit the spaces; the BestBuy devs don't believe in .replace()), then scratch off the security strip and enter the confirmation code. If you like, you can check the balance on your card before continuing, just to make sure. Once you've entered your gift card info, click "Apply". This should cause your order to error out, saying "ERROR: This order cannot be processed at this time. Please try again later."

Step 5 - Address

You can't refresh, or hit back (well, you can, but you'll be prompted to refresh, at which point it will leave you with a blank form anyway) so just click on your cart and start the order again. Re-enter your addresses (if you just clicked "Add to Address Book" in step 3, note that your billing address is now entered in the shipping address fields, and your billing address fields are empty).

Step 6 - Payment Information

Re-enter your gift card numbers and click "Apply". This time your order should go through. After a short loading screen, you will be told that your "credit card" was declined (if you took more than 5 minutes to correct your address for whatever reason, you will instead be told that it has been 30 minutes since any activity and your connection has been terminated; either way proceed to the next step).

Step 7 - Address

Re-enter your addresses (make sure to correct the shipping address again).

Step 8 - Payment Information

Enter your gift card information. This time the order should go through (for realsies). You should get an on-screen receipt, which will contain your order number. Print this page (or at least save it as a PDF); you should get a copy emailed to you, but I printed mine to be safe.

Step 9 - The Aftermath

You will be sent a confirmation email saying that your order has been accepted and will be shipped in two days. If your order contains items from multiple suppliers, you will get one confirmation per supplier. You will also get an email saying that your payment method was declined by the second supplier. This will happen even if you have enough money on the gift card. The second supplier will try to charge your gift card again the following day (you won't get a confirmation email if this succeeds, but it should succeed with no action on your part, as long as you have enough balance remaining). Wait two days to see if the order ships before calling CS; they're very nice people who will tell you no more than I just did, and they can't seem to help much in this situation.

And that's it. You've just ordered something from BestBuy. It'll be there in about four days via Canada Post expedited delivery. By "there", I mean "at the billing address".

Monitor Arm (Atdec VF-AT-D)

First impressions are pretty good, except that it didn't come with a key screw that would let me use the clamp option, so I'll need to drill a hole in my desk this weekend before I mount stuff. It's fully articulated and there's a pretty wide range of configurations possible. The main win for me is that I'll reclaim a lot of my desk space because the arm is tall enough to keep my monitors above the working area (and I can move them out of the way in any case). It'll also even out my monitor position options; currently I have a Dell 23" widescreen (which came with a fully rotatable/tilting stand), and an NEC 22" widescreen (which can basically just tilt about an inch).

[one installation later...]

It definitely evened out my setup; both of my monitors are now comfortably hovering 12" above my desk (right at eye level, which as I understand it is the ergonomically correct placement). The reason I know it's exactly 12" is that I had to use a 12" steel ruler to brace the monitor arm to keep it from sliding down the shaft. I'm going to Home Depot later this week to get a locking clamp, but in the meantime the ruler is performing admirably. The arm says it's built for a pair of 24"/17.5lbs monitors. It's not. What they probably mean is that the joints won't warp or sever under that weight. It's doubtful anyone at the company actually tried to put this thing together with two 24" monitors, because they would have quickly realized that the main clamp doesn't grip strongly enough to keep the setup in place (or they tried it with two featherweight monitors and didn't bother to note the weight limitation in their marketing materials).

TL;DR:

Pros:

  • Ergonomic positioning (and actually really good cable management) for monitors.
  • High degree of articulation (not as much as the stand on a Dell Professional series monitor, but still respectable).
  • Small footprint.

Cons:

  • Be prepared to drill a hole in your desk for it (even if the correct clamping screws had come with the unit, there are some positional limitations if you go that route).
  • Be prepared to brace the main shaft if you have monitors larger than 20" (I'm currently using a steel ruler in the wire cleft at the back, but really should be using a locking ring clamp to support the one that came with it).
  • If you have especially small/lightweight monitors and can pick this up cheap, go nuts; otherwise I honestly can't recommend it. I'm keeping mine because it's better than stacking books under my NEC to get it to eye level, and because of the small footprint, but I probably would have gone with another model had I known how flimsy the locking assembly is on this one.

Monday, January 24, 2011

XMonad Up Close

I'm taking another, closer look at XMonad this week. StumpWM is awesome for straight-up coding, and its extensive use of the Emacs model means there was very little learning curve for me to struggle against. A few things were starting to burn my ass, though, and while I tolerate small blemishes, there are two other forces at work here. First, I've tried hard to make sure that my tools enable rather than limit me (and one of the things that started to burn my ass was a specific GIMP-related bug that did limit me). And second, I've been looking for an excuse to pick up Haskell for more than casual play for a very long time now.

As a public service, here are the few issues I ran into with StumpWM (I don't intend to dwell on these, just to make you aware of them; I can still recommend it heartily if you're an Emacs user who won't run up against these specific points):

  1. It crashes every time I try to open a file with GIMP
  2. It has some odd issues with Mplayer (naive fix here).
  3. It doesn't seem to like nautilus (which is mainly an issue when I'm trying to browse through a folder full of images; this is one of the few places in my day-to-day computing activities where a command line is less efficient than a GUI)

That's it. Now, to be fair, #3 is only relevant if you don't use StumpWM as a window manager on top of a desktop environment, #2 has a workaround that I've been using successfully, and #1 only really bites you if you're multiclassing Illustrator/Programmer, which is not unheard of, but keep in mind that YMMV here.

It's actually sort of amazing that I got by for such a long time without noticing #1. After noticing it, I got into the habit of spending most of my time in StumpWM and switching into Gnome/Bluetile for GIMP work. That worked out just fine while one or the other type of work made up the vast majority of my time. Sadly (fortunately?), a couple weeks ago my schedule started rebalancing to about 50/50 graphics and coding (I'm doing some concept work, which involves a web app but no code yet, so my tablet and degree are finally being put through their paces). It was surprising how horribly annoying the start-up wait time for Gnome/Bluetile can be. I've written about it already, and my conclusion on Bluetile was, basically, that it was overly complex but a workable beginners' TWM. Certainly not something I'd use as my first choice, in any case. Add to that the fact that I had been spoiled by StumpWM's nearly instantaneous start-up, and those WM switches were starting to look ugly. It actually changed the way I thought; I'd get all my coding done first, then do my image work all at once, just to minimize the impact of that switch.

This was clearly not an optimal situation.

By chance, I stumbled onto a reddit post bemoaning that Gnome lag. Long story short, the poster used XFCE4, XDM and Ubuntu server edition to put together a minimal but snappy desktop environment. It looked interesting and passed the Compaq Test[1], so I took the weekend to replace Gnome with XFCE4 on each of my machines (I kept them all on Debian Squeeze rather than downloading Ubuntu server 10.10, and I used xmonad instead of XDM, because I primarily wanted tiling rather than mousing). There's bound to be more updates about this as I nail down specific stuff, but it's working very well so far. I have tiling where I need it, and (because of XFCE4) I can use pointing devices when they're appropriate. My ~/.xmonad/xmonad.hs is pretty minimal at this point:

import XMonad
import XMonad.Config.Xfce

import XMonad.Actions.Submap
import XMonad.Util.EZConfig

import qualified Data.Map as M

main = xmonad $ xfceConfig { modMask = mod4Mask }
       `additionalKeys`
       [ ((mod4Mask, xK_p), spawn "exe=`dmenu_path | dmenu` && eval \"exec $exe\"")
       , ((mod4Mask, xK_Return), spawn "xterm")
       -- , ((control, xK_space), spawn "xdotool text 'testing testing'")
       -- , ((controlMask, xK_t), submap . M.fromList $
       --                         [ ((0, xK_p), spawn "exe=`dmenu_path | dmenu` && eval \"exec $exe\"")
       --                         , ((0, xK_Return), spawn "xterm")
       --                         , ((0, xK_t), spawn "xdotool key ctrl+t")
       --                         ])
       ]

The commented stuff has to do with tweaks I'm trying to make. Xmonad+XFCE4 hits all of the pain points I was having with StumpWM, but it introduces a couple of its own (less severe, from my perspective).

First, the mod keys aren't on the home row; I have to contort my left pinky/ring finger/thumb (still haven't decided which feels least uncomfortable) in an odd way to hit the Win or Alt keys, in a way that never happened when mod was effectively C-t. Granted, that may have made it very slightly more awkward to add tabs in some browsers, but there are workarounds. Luckily, XMonad.Actions.Submap exists, which means I can write up a key list that's less RSI-inducing (the only reason this is commented out above is that I can't get xdotool working as advertised).

Second, there's that layer of user-facing complexity that comes from distinguishing between screens and workspaces. I've had time to reflect since my Bluetile writeup, and it seems like a lot of my time in StumpWM got spent screen-juggling (making sure that Emacs and Chrome stayed on my main monitor, and mplayer/terminals/secondary apps on the other one). This is because Stump doesn't make that key distinction; when you cycle to the next window, you're cycling through the single list of every open window, regardless of which workspace each is on. That's easier to learn, because you only have to think about one list of windows, but trickier to use, because it's that much more likely you'll blow your layout by pulling in a window you didn't mean to. XMonad goes the other way; there are three explicit keystrokes to switch the "focused" monitor (and as far as I can tell, if you have more than 3, you're screwed), but the upside is that windows stay where you put them, workspace-wise.
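
Having dug a bit, the "more than 3" case looks less dire than I feared: in the default config, those keystrokes are mod-{w,e,r}, generated by a list comprehension you could extend with a fourth key. Here's a sketch modeled on the default config's pattern (not code from my own xmonad.hs):

import XMonad
import qualified XMonad.StackSet as W

-- mod-{w,e,r} focuses physical screens 0..2; mod-shift-{w,e,r} sends
-- a window there. A fourth screen would just be another key in the zip.
screenKeys conf =
  [ ((m .|. modMask conf, key), screenWorkspace sc >>= flip whenJust (windows . f))
  | (key, sc) <- zip [xK_w, xK_e, xK_r] [0 ..]
  , (f, m)    <- [(W.view, 0), (W.shift, shiftMask)] ]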

It also looks like I'll have to do some light Haskell learning to get the full benefit, but if you've ever talked to me about computers IRL, you know I don't consider that a downside.


1 [back] - "The Compaq test" is something I put any desktop environment or WM through before switching; it involves building it on a $20 Compaq Presario R3000 (with a 1.4GHz processor, a whopping 256MB of RAM and a 5400rpm HDD) and using it for a weekend to see how it works. The reasoning is that if it's tolerable on the Compaq, it'll fly on my desktop machine. It's something I reserve for background software, not things like Emacs or GIMP themselves. My rule of thumb is that anything I'm thinking of using that has to be on all the time should do better than "awful" on the Compaq test.

Thursday, January 13, 2011

Writing C in Lisp

I'm sick today, so I figured I'd write something so that I can at least retain the impression of productivity.

Lately, I've been working on a little pet project of mine involving the codebase of Elite for Emacs (a port of Ian Bell and David Braben's original Elite game to my favorite editor). I liked the original, and while I didn't play it in 1984 (too busy being born that year), I did enjoy it quite a bit. The Emacs version is text-driven, of course, but that's not all bad. I'll come back to why in a later post, but first I want to heap some scorn on this code. It's not that I hate it, particularly. It works, for the most part, and it doesn't have to be fast (performance requirements are unsurprisingly low for a single-player game in Emacs). It's just that whenever I try to read any piece of elite-for-emacs, I come away with the impression that someone sat down, carefully thought of the worst possible way to do something, then implemented it. The result is an extremely verbose transliteration of a C program into Emacs Lisp. I'm going to dissect it here so that I can learn something, and so that, hopefully, if there are hardcore C programmers out there learning Lisp, they won't pull things like this again.

This didn't start out as a re-write, just so you know.

I just wanted to fix a couple of bugs with weird control characters showing up in planet descriptions, change the behavior of a couple of commands for ease of use, and remove one or two components. It won't be a rewrite in the "throw it out and start over" sense, but after peeking under the hood, it looks like I'll replace very close to all of the 4205 lines of code that make up this "port" of the original (either C or Assembly) codebase before I'm satisfied. There are some mistakes. Not in the sense that they don't produce working code, but in the sense that there are much simpler, easier and more idiomatic ways of doing the same things in Elisp. Here are some before shots of common idioms I've found, in no particular order:

(if foo
    (progn bar
           baz))

(defun foo (a)
  (let ()
    (progn
      (baz)
      (mumble))))

(if foo
    (progn (setq b bar))
    (progn (setq b baz)))

(defun foo ()
  (let ((a)
        (b)
        (c))
    (setq a bar)
    (setq b baz)
    (setq c mumble)
    (progn
      ...)))

(let ((a)
      (i 0)
      (b ()))
  (while (< i (length foo))
    (progn
      (setq a (car foo))
      (setq b (append b (list (mumble a))))
      (setq foo (cdr foo))
      (setq i (1+ i))))
  b)

If you want to see the complete, non-elided code, check out the project page. It's pretty much more of the same, with some odd byte/numeric indexing operations thrown in where they probably don't belong.

As far as I can tell, the snippets above should have been, respectively:

(when foo bar baz)

(defun foo (a)
  (baz)
  (mumble))

(setq b (if foo bar baz));; I'm leaving the setq in for demonstration purposes, but the actual task could have been done functionally

(defun foo ()
  (let ((a bar)
        (b baz)
        (c mumble))
      ...)) 

(mapcar #'mumble foo)

I'm not pointing this out to be mean. This code was written way back in 2003 (I'm using an earlier version that doesn't include combat/missions/GUI/windowed interface because my purposes demand simplicity, but the later code still makes use of the above idioms from what I saw). It's possible that some of this stuff was necessary at the time because of bugs in Emacs, or the peculiarities of Elisp. Not terribly likely, but possible. Anyway, here's why I don't like the above.

  • The when macro exists. (when foo bar baz) evaluates bar and baz if foo is true (unless is a similar construct that does exactly what you'd expect, given when). Keep in mind that in Elisp, (), '() and nil are "false" and anything else is "true" for boolean purposes.
  • The last value in a Lisp block is returned implicitly. You can use this property to chain calls instead of explicitly setf-ing an intermediate variable and then copying it around. This applies to if too, which is why you can write (setq b (if foo bar baz)) instead of having to put a setq in both branches of the conditional.
  • You don't need to declare variables in Lisp. If you want to establish local bindings, (let ((a 1) (b 2) ...) [body]) is the way to do it. You can also use let* if you want the temporary variables to refer to each other; for example, (let* ((a 1) (b (+ 3 a))) b) would return 4. You do need to keep the two straight in your head, because (let ((a 1) (b (+ 3 a))) b) would signal an error (specifically, it would complain that the variable a is unbound). This is because let doesn't guarantee that its bindings happen in the order they're presented; let* does, but it's considered good style to use plain let where you can. If you need to define temporary functions, use flet.
  • progn isn't necessary everywhere. Use it if you need to do multiple things in one branch of an if statement. Keep in mind that when, unless and cond have an implicit progn for their blocks, so you don't need to type it out. New Lisp coders might think this is analogous to the missing-curlies problem in C-like languages. I've been chewed out for doing things like
    if(foo) bar();
    else baz();
    in JavaScript code. The argument is always that if someone later adds mumble to the else block but forgets to add curly braces, they'll get a fairly hard-to-find bug:
    if(foo) bar();
    else mumble();
         baz();
    
    In case you didn't catch it, that makes baz() unconditional, which is presumably not what you want. The correct way of doing it, I'm told, is
    if(foo){
        bar();
    } else {
        baz();
    }
    or
    if(foo)
    {
        bar();
    } 
    else 
    {
        baz();
    }
    depending on who's talking. In an imperative language with optional curlies/parentheses/brackets, this matters, so I'm not arguing that you should all stop using curly braces except where explicitly required. However, the fact that Lisp is fully parenthesized almost makes this a non-issue, a functional style mitigates it further, and in any case, adding progn all over the place isn't the right way to address it.
  • It's very common in Lisp to want to iterate over a sequence, do something to each member of that sequence, and return the resulting list. The name given to this oddly specific idea is "mapping". The specific function you use is called different things (map, mapcar or similar) depending on which language you're in, and there are one or two subtleties (for instance, Elisp's mapcar only takes a unary function and a single sequence, Scheme's map can only take lists and not any sequence, etc.), but odds are that if you search the docs of a given functional language for "map", you'll find a function that does the above. When you're dealing with a sequence of things, it's a mistake to use while with an explicit, manual counter. I'd use mapcar or similar, and fall back to recursion (with tail calls where applicable) for more general purposes.

There are more things I could gripe about, my sinister purpose in toying with this code remains unrevealed, and I feel like I've only started off a much larger conversation about what it actually means to learn to program in a given language, but I think I need to go lie down now. This got enough stuff off my chest that I can continue to wade through the code for a while without bitching internally.

Tuesday, December 28, 2010

Omega

I've been thinking about languages a lot lately. Which is kind of a joke, given the title of my blog, but I actually mean "I've been thinking about them more than usual". My thinking has been dominated specifically by the Blub hierarchy, as proposed by Paul Graham.

I'm not sure what's on top.

PG claims "Lisp", I've seen many who shout "Ruby", and I've met many who claim "Haskell". In fact, if you participate in programming discussions for any length of time, there's a pretty good chance you'll meet someone for every language (other than C) claiming it's Omega. It's ridiculous, of course. All most of them are really claiming is "This is the most powerful language I know how to work with", which is not the same thing as "the most powerful language". It's easy to see that trying to compare in any supposedly objective sense would cause giant hatestorms and various infighting. So people are perhaps justified in making statements like

"You don't compare a hammer with a screwdriver, but you use the one that fits your task & way of thinking/education/needed level of abstraction the most. Also, since the one doing the comparison is biased by the fact that he knows at least one of the two, or at least prefers one of the two, it is hard to find a truly objective criteria for comparing them (exceptions exist)." -Rook, pogrammers.SE

when discussing language comparison. That was an answer from a question about whether language comparison was useful. As of this writing, it has been closed and re-opened twice, and the original asker has accepted (then unaccepted, then accepted again) a joke answer. This is perhaps more telling of the culture of programmers.SE than of the question, but it doesn't seem like an uncommon response. People duck comparisons precisely because languages are half tools and half religions, and no one wants a crusade declared. But, well, you need to compare.

"A language is a tool. That said, I've seen really, really crappy tools before. No one wants to work with a hammer whose head is liable to fly off and hit another carpenter in the stomach. Likewise, if you noticed your fellow worker's hammer was in that shape, you'd probably steer clear of them when they were using it. It's also important to really understand which tool it is. You can't use a screwdriver and hammer interchangeably (though some try desperately). Hell you can't even use all hammers interchangeably; you need a sledge for some things, a mallet for others and a tack for yet others. If you use the inappropriate tool, then at best, you'll do a poorer job, at worst you'll injure yourself or a co-worker." -me, programmers.SE

Graham goes further, stating not only that you can compare languages in terms of power, but also the obvious corollary: that there is therefore such a thing as an empirically best language. As a note, I agree with him, but "which religion is best?" is a question you just don't discuss in polite society, so I haven't pushed the idea on any forum I frequent. It makes sense, though. No one would disagree that Assembly < Cobol < Python on the power scale (I'm defining "power" as a non-specific mix of expressiveness, terseness, maintainability, readability and flexibility). And even admitting that simple truth exposes you to the idea that there's a ladder, or tree, or at least concentric circles of languages, with one (or a relatively small group) taking the prime position.

Omega.

Graham puts Lisp there, but he's making the same claim that any Ruby ardent or avid Haskeller is expressing: "Of all the languages I know, this one is the most powerful". The thing is, I haven't heard many convincing arguments to the contrary. The best argument aimed at Lisp these days is that it's slow, and even then, slow in what sense? It can certainly do the job of server software, or even local desktop/console software, on today's powerful systems. Remember, Lisp was called slow back when 1GHz was the sort of processing power you paid many thousands of dollars for. I have more than that right now in my $300 netbook. We're fast approaching an age where a phone you get for free with a subscription is more powerful. "Slow" just isn't a good enough strike against a language to discount it anymore. Other than that, people complain about the parentheses, which is an empty complaint at best, and typically a trolling attempt. The only good argument against Lisp as Omega comes from an unlikely source.

"I don't think it's necessarily Omega. The Haskellers and MLers say 'Well, from where we sit, Common Lisp looks like Blub. You just don't understand the brilliance of strong inferred typing'. And they may be right. Of course, Common Lispers look at Haskell and say 'Well, Haskell's really Blub, because you guys don't have macros'. It may be the case that there is no Omega, or that Common Lisp and Haskell are on different branches of the lattice, and someone's gonna find a way to unify them and a few other good ideas and make Omega." -Peter Seibel, Practical Common Lisp Talk at Google

It's an interesting fact that practitioners of either language can point to lack of features in the other. That has some pretty obvious corollaries as well.

  1. There may be such a thing as the most powerful language right now, but it may involve trade-offs (I don't know what it is, but one exists. I'll call it "Alpha" so as not to offend anyone).
  2. There is such a thing as the language that will be the best for the next 10 to 100 years (This one may or may not exist in some form today; it might be unified from several current languages as Seibel alludes. I'll use his name and call it "Omega").
  3. There is such a thing as the most powerful language that could exist on current machine architectures (This one almost certainly doesn't exist yet, and may never be embodied in an implementation. It's just the limit, in the calculus sense, of what we can hope to achieve with a language along the axes of expressiveness, terseness, maintainability, readability and flexibility. This one I'll call 0).

I'm not sure what Alpha is. I'm not sure anyone knows, because, as I've said, people tend to bind that variable to whichever is the most powerful language they currently know. 0 is far away, and I won't even try talking about it today, because I don't have anywhere near enough information to make a decent guess at what it'll look like. So what does Omega look like? Well, Graham effectively says it's Arc (or what Arc will evolve into). Others variously substitute their own languages. There's a sizable community that thinks it's Haskell. Some ardents think it's Lisp. A few would like you to believe it's Java, despite the recent turbulence between Oracle and Google. And there are a couple of personalities in the industry vigorously pushing either Ruby or C#. Yegge echoes Seibel pretty closely:

"[T]he Wizard will typically write in one of the super-succinct, "folding languages" they've developed on campus, usually a Lisp or Haskell derivative." -Steve Yegge, Wizard School

It's a line from one of his humorous, fictional pieces wherein he describes a Hogwarts-like school that churns out wonder-kid programmers, but it still seems like a vote for the Haskell/Common Lisp unification theory. It might happen. If it does, it'll be a race between the Haskellers and Lispers to out-evolve one another. In order to converge, Haskell needs to strap on prefix notation and macros, make IO easy (rather than merely possible), and blur the line between run-time, read-time and compile-time. Lisp needs declarative matching definitions, laziness, currying (possibly eliminating the separate function namespace), strong types and a few small syntactic constructs (function composition and list destructuring leap to mind first). Lisp has a longer list to run through, but keep in mind that, because it has macros, almost all of these can theoretically be added by you as you need them, rather than by CL compiler writers as they decide it's worth it.

It's also worth noting that the last point on Haskell's list is a pretty tricky proposition. How do you blur read/compile/run time when one of your goals is a complete type system? Well, REPLs for Haskell exist, so I assume it's possible, but making it part of the language core doesn't seem to be a priority at the moment (and probably won't be for a while, due to the performance hits it imposes and the perception such hits still have among programmers at large). That's not the only hard bit for either language, though. How do you implement full currying alongside optional/default/keyword/rest arguments? Haskell purports to solve the problem by defaulting to currying, and giving you the option of passing a hash table (basically) as an argument to implement flexibility. Lisp gives you &rest, &body and &key and very simple default-argument declarations, but "solves" the currying issue by making currying explicit. Neither language's solution satisfies, because sometimes you want flexible arguments (and counter-arguing by saying "well, if you need them, you've factored your application wrong" is missing the point; expressiveness is a measure of power, remember, and having to think about the world in a particular way is a strike against you in that sense), and sometimes you want implicit currying (this is perhaps most obvious when writing in Haskell's point-free style; if you've never done so, I doubt I could convince you).

As a Common Lisper, there are a bunch of things I'd like to steal from Haskell if I could. The pattern-matching definitions are certainly useful in some places, list destructuring would help, and function composition seems useful (though this is, like defmacro, the sort of construct you have to understand first in order to find the places it would greatly simplify). I'll check later, but I have a sneaking suspicion that someone has already lifted all of the above into a library somewhere on github or savannah. Even if not, list destructuring and function composition seem like they'd be easy enough to implement: the former as a call to destructuring-bind, the latter as a simple fold.
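
To make the "simple fold" point concrete, here's a sketch of compose in Common Lisp (a minimal version of a well-worn idiom, not code from any particular library):

(defun compose (&rest fns)
  "Fold binary composition over FNS, yielding right-to-left composition."
  (if fns
      (reduce (lambda (f g)
                (lambda (&rest args) (funcall f (apply g args))))
              fns)
      #'identity))

;; (funcall (compose #'1+ #'length) '(a b c)) => 4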

From the other side, there are already two projects underway: Liskell is a competing compiler to GHC that has a prefix notation and outputs the same machine code, and Lisk is a pre-processor for GHC that takes specific prefix-notation forms and converts them programmatically back to Haskell source code before invoking the compiler. Lisk's creator has talked briefly about macros, but the project is early enough along that nothing much more specific is out there right now (I'm watching his github repo with interest, though).

I haven't a clue how to place my own bet. I tried starting this paragraph with both "My bet's on Lisp..." and "My bet's on Haskell...", but each beginning hit a logical dead end within two sentences. It doesn't seem like one can completely absorb the other. But if Haskell + Lisp makes Omega, we'll see what it looks like shortly (by which I mean ~10 years), because cross-pollination is already happening, and it's not a far jump from there to full-on unification. Or maybe things get bloodier, as the preview to Barski's Land of Lisp implies; who knows.

Either way, we'll see soon enough.

EDIT: rocketnia posted a particularly thoughtful response to the above post at the Arc Forum. He points out that there may not be an Alpha, Omega and 0, but rather "[L]ocal optima that can't be unified into Omega". I could have sworn I addressed this point (and acknowledged it, but stated that I was more interested in the unification idea today), but my only mention of it is "...with one (or a relatively small group) taking the prime position." Apologies. He also explains a lot about how macros might coexist with a strong type system.

Tuesday, December 21, 2010

Bluetile

Just a quick update this week; I intend to record my thoughts on Bluetile (and I guess possibly xmonad by extension, but I get the feeling you could hammer the latter into a workable solution).

To start with

Why a Tiling WM?

I actually get asked this at work, so I won't assume that you're automatically on-board with the idea of using a tiling window manager. The most common questions are "Why?" and "Isn't it hard learning all those keystrokes?" The second is the easier question, so I'll answer it first; yes. But a good window manager should let you adjust keybindings[1], and the point here is to make your environment fast, not so easy to learn that the office secretary could use your computer in a pinch.

The answer to the first question is basically that.

It makes you faster.

Think about your editor. Actually, if you don't use Emacs, think about your editor. If you use Emacs, you already know what I'm talking about here; just skip to the next heading, where I give you the lowdown on Bluetile. Think about how you use that editor. Do you grab your mouse and head over to the file menu every time you need to copy and paste something, or do you just Ctrl-c Ctrl-v? I'm hoping this is a ridiculous question; of course you use the keyboard shortcut when you can. It's faster. It would be utterly ridiculous to have to do everything with the mouse. Well, that's basically why. Once you realize that the keyboard is so much faster, following that thread to its conclusion tells you that, except in special circumstances[2], you should use the keyboard as your primary input peripheral. And if you analyze your day-to-day mousing, it'll occur to you that it falls into a few categories.

  1. Browsing the net (where you use the mouse to click on links and right-click on various things).
  2. Running programs (either from the dock on OS X or from the Start menu/desktop icons on Linux/Windows)
  3. Moving, sizing and adjusting windows (especially if you've got multiple, large screens. I typically have my editor, browser, debugger, a terminal window and maybe a movie to watch in the background. As I type this, I'm watching a talk on "Models and Theories" by Peter Norvig, which I can heartily recommend.)

The first point is something that you'd want a keyboard-driven browser for (I use Conkeror for preference, though most people seem to have decided to live with the mouse in the context of their browser), but 2 and 3 are both things that a good tiling window manager will solve for you. Depending on the manager, you either get a "run" command (a keystroke that brings up a little input where you can type the name of the program you want to run), or a keystroke for the most common programs, or both, which means that you don't need to rely on the mouse to run programs. You just need to hit Win-p and type emacs or (in my case) hit C-t C-e. Either of these is faster than grabbing the mouse, getting to your desktop, moving the cursor over and double-clicking on the Emacs icon.

Moving, sizing and adjusting is typically done to get maximum use of your screen real-estate. For my part, I rarely want overlapping windows, but I always want as much of my screen used as possible. Tiling WMs work by automatically laying out any windows you open so that the available space gets used (either by letting you specify splits and groups, as in StumpWM, or by letting you manage layouts, as in xmonad). By remembering a few extra keystrokes, you free yourself entirely from the mouse.
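In StumpWM, those splits are just commands you can bind; the following is a sketch (vsplit, hsplit and only are real StumpWM commands, and something close to these bindings ships as the default):

    ;; In a .stumpwmrc: carve up and collapse the screen from the keyboard.
    (in-package :stumpwm)

    (define-key *root-map* (kbd "s") "vsplit") ; split the current frame top/bottom
    (define-key *root-map* (kbd "S") "hsplit") ; split it left/right
    (define-key *root-map* (kbd "Q") "only")   ; collapse back to a single frame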

So that's why.

Bluetile (really)

That brings me to Bluetile. I've been using StumpWM for my purposes, but I wanted to try out the competition. Bluetile is a derivative of xmonad, the Haskell-based tiling WM, with the aim of being easy for beginners to get into. It does this, kind of ironically, by putting in mouse-oriented controls and by running on top of Gnome instead of standalone. That's pretty sweet, actually, and it does seem fairly easy for beginners to get into. The trouble is that it doesn't do a very good job of solving the problems I mentioned above (so while it's easy to get into, I doubt it would do a good job of convincing beginners that tiling WMs are worth the trouble). First, it provides on-screen icons for navigation (each of which has a keyboard counterpart; I'm just bemoaning the waste of screen space), and it keeps toolbars and gaps between windows so that you can still see your start bar and background. I can see no reason for the gaps; the toolbars are kept so that you can still click on windows and drag them around, which sort of defeats the purpose.

Those are all nitpicks though, and you could argue that beginners would find it easier than the full-keyboard control of something like standard xmonad or Stump. The big downside for me is actually the awkward screen model. I can imagine things going well on a single ginormous screen; if I were running on one of the 27" iMacs, there'd be no problem. The trouble comes when you have multiple monitors, because the way xmonad seems to track them is by assigning a different "workspace" to each monitor. I'm sure this fit the program model perfectly, but it means that Alt-Tab only cycles between open windows on whichever monitor has focus, and you have to pick your "focused" monitor. It's possible that I'm spoiled and this is actually how most TWMs work, but Stump doesn't treat windows on different physical screens as separate areas, and I don't need to pick a working monitor. The other issue it brings up is with workspace switching. Because Bluetile gives you 9 workspaces (and assigns 1 to your first monitor and 2 to your second), you need to be careful about which you switch to, lest you screw yourself. For example, if you open Emacs on one monitor and a browser on another, then switch to workspace 2, they switch places. That is, your Emacs window gets shunted to monitor 2 while your browser gets pulled to the one you were looking at. That's not really what I want if I have multiple screens staring at me. If you then switch to workspace 4 (let's say you have Movie Player open there), your Emacs window stays where it is and workspace 4 replaces your browser on monitor 1. Now, moving back to workspace 1 causes your Emacs window to fly back onto monitor 1 and Movie Player to go to monitor 2. In other words, you're technically back where you started, except that workspace 2 now contains Movie Player instead of your browser. How do you get back to your initial setup? You have to switch to workspace 2, then to workspace 4, then back to workspace 1. This leaves something to be desired, and demonstrates that conflating "monitors" and "workspaces" buys greater user-side complexity with no visible upside.

Treating monitors this way also introduces an extra level of complexity in the UI; you also need keys to select your primary monitor (they're Win-w, Win-e and Win-r in Bluetile; I don't know what happens if you have more than three monitors). That's too much to keep in my head, and this is coming from someone who uses Emacs. I won't be switching to Bluetile any time soon, and their docs give the impression that this is pretty much how xmonad handles things too, which is sad, and means I'm sticking with Stump for the foreseeable future.


1 [back] - So you don't so much have to memorize them as come up with some simple mnemonics and then assign keys to match those. For example, my .stumpwmrc is set so that C-[keystroke] starts up programs, C-M-[keystroke] runs a work-related shortcut (such as remote desktop, or opening my timesheet file) and M-[keystroke] does wm-related tasks. [keystroke] is typically just the first letter of whatever I'm trying to do (so C-e runs Emacs and C-M-r runs Remote Desktop). This is a mnemonic that makes sense for my workflow. I could easily have just kept track of my most common tasks and bound each to an F key.
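In .stumpwmrc terms, that scheme looks something like this (the key choices are mine, and "work-server" and the timesheet path are made-up stand-ins, while define-key, *root-map* and kbd are the actual StumpWM API):

    (in-package :stumpwm)

    ;; C-[key] starts programs (reached through the C-t prefix, so C-t C-e runs Emacs)
    (define-key *root-map* (kbd "C-e") "exec emacs")
    (define-key *root-map* (kbd "C-c") "exec x-terminal-emulator")

    ;; C-M-[key] runs work-related shortcuts ("work-server" is a hypothetical host)
    (define-key *root-map* (kbd "C-M-r") "exec rdesktop work-server")
    (define-key *root-map* (kbd "C-M-t") "exec emacsclient ~/timesheet.org")

    ;; M-[key] does wm-related tasks ("remove" deletes the current split)
    (define-key *root-map* (kbd "M-r") "remove")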

2 [back] - For example, if you need to do some drawing, whether decorative pieces/icon modifications for a web app or UI layouts in an environment like Flash/VB. In this situation, it goes without saying that you actually want a tablet, or a trackball, or a multi-touch trackpad, as opposed to a vanilla mouse. The only thing I'd use the traditional option for these days is gaming, and even then, tablets give you an edge if you know what you're doing because of the 1:1 mapping of screen to tablet.

Wednesday, December 15, 2010

Language Smackdown Notes and Smalltalk

I went to the Dynamic Languages Smackdown yesterday, and I'm recording my thoughts before losing too many of them. It was an event hosted by GTALUG (the Greater Toronto Area Linux User Group), and basically involved advocates of seven languages standing up and talking about theirs.

Before I go any further, as an aside, the irony of a guy who writes a blog called "Language Agnostic" going to something called the "Dynamic Languages Smackdown" is not lost on me. It turns out I wasn't the only language nerd there though, and if nothing else I got a new book recommendation out of it.

The seven languages were Smalltalk, Ruby, Python, Erlang, Lisp, JavaScript and Perl, and the format was

  1. Introduction
  2. Questions from the audience
  3. Questions
  4. Code examples

To summarize, I kicked off (subtly, so no one seems to blame me yet) a line of questioning dealing with canonical implementations that we kept coming back to throughout the talk. Andrey Paramonov had it easy, because Erlang actually has just one canonical implementation (with one attempted port to the JVM that apparently no one takes seriously yet). Other than Erlang, what struck me here is how diverse the pool actually is. I mostly hack Common Lisp these days, and only vigorously play with Ruby, Erlang and Haskell (and PHP at work, but as you can see by the logo bar, I'm not terribly proud of that), so I was under the impression that Lisp was the only freak language with so many implementations to choose from[1]. That turned out to be a misconception: Ruby has JRuby and IronRuby (which purportedly conform to the same spec and are interchangeable and "official" as far as the community is concerned), Myles Braithwaite put up a slide listing twenty or so different Python implementations (which variously support the Python 2.x and 3.x specs), Smalltalk has at least two open-source forks (plus gnu-smalltalk, but that wasn't really discussed), the Perl community is apparently split between 5 and 6, and JavaScript has at least three different server-side implementations (the client-side situation is worse).

It's weird because, as I've said, I was under the impression that "a language" meant one canonical implementation with one or two experimental projects, but (at least in the dynamic world) that seems to be false. It's odd, because people cite "difficulty choosing an implementation" as one of the principal reasons not to go with Common Lisp. I guess it's more of an excuse after all.

The other big surprise was the age of the advocates. Of the seven, only Alan Rocker (the Perlmonger of the group) had the sort of beard you'd expect, and everyone other than Alan and Yanni (the Smalltalk presenter) seemed to be a student. I'm particularly happy about this since Lisp gets cast as the old-man's language, but in reality, programmers my age seem to be more common. Not that "age of the community" is important in any tangible way, just interesting.

"Smackdown" thankfully turned out to be too strong a word; other than a fierce rivalry between the Python and Ruby presenters (and a few low-blows from both of them aimed at JavaScript), everyone was respectful of the other languages there. It was fairly informative, and I'm going to pick up and play with Clojure, a Smalltalk (either gnu or Pharo) and more Python as a direct result.

A note for future presentations in this vein though:

  • Please don't do code examples last. This should have been done up-front with the introductions, and probably allotted 15 minutes or so per language. Alan didn't even get enough time to present his.
  • Either admit that these discussions will take more than two hours, or invite fewer languages at once. The conversations easily could have continued for a further hour or two (and probably did at the pub after the event, but I had work the next day, so I couldn't go).
  • Be prepared with the slides beforehand (anyone else would be able to blame PowerPoint, but this was the Linux User Group, so you don't get that luxury).

Preliminary Impressions of Smalltalk

I did briefly try to get into Pharo this morning, but I found it annoying to say the least. This doesn't mean I won't keep trying; I had a negative initial reaction to pretty much every language I currently know and love. There are some definite initial concerns though, the biggest of which is that Pharo insists you use its "Environment" (which is only really a big deal because of the way that environment is constructed). It's heavily mouse-dependent (in fact, the intro text suggests you get yourself a three-button mouse with a scroll-wheel to get the most out of it), and it insists on handling its own windowing (which means that if you've gotten used to a tiling window manager, you are so screwed). The gnu implementation is titled "The Smalltalk for those who can type", so at least I know I'm not alone. Minor concerns about image-based development include things like "How does source control work?" and "How do I use Pharo on a team?", but I'm sure those have been solved; I simply haven't dug deeply enough to see how yet.


1 [back] - First off, the "language" is split into Scheme, Common Lisp, and Other. In the Scheme corner, you have Racket (formerly PLT), Guile, Termite (which runs on top of Gambit), Bigloo, Kawa and SISC (and a bunch of smaller ones). In Common Lisp, there's SBCL, CMUCL, Clisp, Armed Bear and LispWorks (and about 10 smaller ones). Finally in "Other", you find crazy things like Emacs Lisp, AutoLisp, Arc, Clojure and newLisp (which are all technically Lisps, but conform to neither the Common Lisp nor Scheme standards). This is why I think having a representative for "Lisp" is a joke at a talk like this; which Lisp are you talking about?