Sunday, September 30, 2012


I haven't actually had very much time to explore it yet, but the Raspberry Pi is officially out.

"Officially", because it's been sort of a running joke with my friends that it hasn't been for the past while. The two sites that sold them had "Register your Interest" buttons, rather than the expected "Add To Cart" or "Buy Now"[1], and while many told tales of the legendary owners of these boards, I had failed to meet one for quite a while.

The Pi sitting on my left knee at the moment was picked up at Creatron Inc on College. It cost about $50[2], and came with nothing but the board, but I still see myself picking up a few more down the line. This first one is going to get plugged into some of the mobile computing experiments I plan to do shortly. The next one will either replace the media center PC or my backup server, since it's lower power than either.

At first glance, it seems like I'm a bit late to the party, since the installation went perfectly. Got my Pi, got an SD card[3], downloaded the installation images[4], unzipped them, then ran

dcfldd bs=4M if=2012-09-18-wheezy-raspbian.img of=/dev/mmcblk0

/dev/mmcblk0 is the name of my SD card device. Also, you'll probably need to install dcfldd before running that; if you're ok with not having any kind of progress feedback, just run the same command with dd instead.
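Before trusting a multi-gigabyte write to a device node you can't easily inspect, it's worth rehearsing the invocation on scratch files first. A minimal sketch, where demo.img and fake-card are stand-ins for the Raspbian image and /dev/mmcblk0:

```shell
# Make a small scratch "image" standing in for the Raspbian .img
dd if=/dev/urandom of=demo.img bs=1M count=4 2>/dev/null

# Same invocation shape as the real write, aimed at a file instead of the card
dd if=demo.img of=fake-card bs=4M 2>/dev/null
sync

# cmp exits 0 only if the copy is byte-identical to the source
cmp demo.img fake-card && echo "write verified"
```

The same cmp trick works against the real device afterwards, if you limit the comparison to the image's length with cmp's -n flag.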

And that was that. After getting the resulting card into the Pi, it booted flawlessly.

Like I said, haven't had much time to explore, but what I can tell you is that the Debian version

  • uses the LXDE desktop[5]
  • comes with Python 2.7, Python 3.2, and a bunch of python-based games
  • takes about three seconds to start up a Python 3.2 shell in X, and seems to be able to run at most one
  • comes with Scratch and Squeak[6]

By contrast, the Arch distribution they ship is very minimal. Which I guess makes sense, all things considered. As a note here, the Downloads page mentions that Arch boots to a usable prompt in about 10 seconds. Firstly, that doesn't sound very impressive, given that the Debian version does exactly the same thing if you tell it to run from the command line. Secondly, in practice it seems to be closer to 5 hippopotami, which actually is impressive coming from a general-purpose computer the size of my business card.

The Arch Pi doesn't come with anything other than bash, perl and a root account[7], and that includes the standard raspi-config script that lets you resize your initial SD card partition. Ah well, I suppose there are worse things than having to play around with parted and friends.

Anyway, like I said, not much time this weekend, between the various work work and play work I've been up to around here. I've literally managed to install the OS, apt-get a copy of mplayer and get the thing onto my network.

In case you were wondering, I consider that a success.

Next order of business, getting a Haskell and a Common Lisp running on it, getting a "case"[8], and figuring out some sort of portable input/output strategy.


1 - [back] - That's changed by now, obviously, and it's theoretically possible to buy n of them rather than just one per customer, though I've yet to test this theory.

2 - [back] - Canadian dollars.

3 - [back] - A 16 GB class 10. Class 10 is the important part since that indicates read/write speed. I could have gotten away with as little as 2 GB in terms of size, but the store I walked into happened to have a special on the 16 GBs, so a pair of those ended up costing me about $10 less than a pair of two gig class 10s would have. No, I have no idea how this made sense from their perspective. It's not as though flash memory goes stale in blister packs.

4 - [back] - One copy each of Raspbian and Arch ARM.

5 - [back] - Look-and-feel-wise, it's a slightly shinier XFCE.

6 - [back] - Though to be fair, I've yet to get Squeak running successfully. Scratch looks like a very interesting teaching tool. The sort of thing you could give a curious six-year-old if you wanted them to learn about programming.

7 - [back] - Ok, ok, that's a half-lie. It also comes with the usual *nix suspects, but you know what I mean dammit. It's been a long while since I've actually had to install Python anywhere.

8 - [back] - Or possibly a case, depending on how adventurous I feel.

Sunday, September 23, 2012

JS Frameworks

So I've spent the past week or so working through some examples with the various JavaScript MVC front-end frameworks. Let me save you the trouble: they're all shit. If you absolutely, positively can not live without a framework of some sort, use Backbone or Spine[1], because they seem to be as close to "minimal" as you can get, they won't get in your way too much, they help a little, and it's perfectly possible to run them alongside jQuery or similar if you feel like rolling certain pieces on your own.

While they do get in your way, the Javascript MVC movement is getting a couple of things profoundly right. Things I didn't really notice, or didn't think through all the way in the past, so I'm kind of shamefaced about having missed them, but they definitely seem like the right approach[2]. The reason I say

[The frameworks are] all shit -me

above is that none of them seem to be necessary to do the Right Thing©™[3], and none of them seem to help much with the big detriments of the approach.

Anecdote Time

Since I haven't put a large project together with any of these techniques yet, this is the only example I'm willing to show, but it is illustrative. I had a particular place where I needed to reorder pieces of information on the client side, then send them out to the server for persistence. Luckily, I was not the only one who had this problem and thought of using Backbone to do it. Here's the solution recommended in that question:

Application = {};
Application.Collection = {};
Application.Model = {};
Application.View = {};

Application.Model.Item = Backbone.Model.extend();

Application.View.Item = Backbone.View.extend({
    tagName: 'li',
    className: 'item-view',
    events: {
        'drop': 'drop'
    },
    drop: function(event, index) {
        this.$el.trigger('update-sort', [this.model, index]);
    },
    render: function() {
        $(this.el).html(this.model.get('name') + ' (' + this.model.get('id') + ')');
        return this;
    }
});

Application.Collection.Items = Backbone.Collection.extend({
    model: Application.Model.Item,
    comparator: function(model) {
        return model.get('ordinal');
    }
});

Application.View.Items = Backbone.View.extend({
    events: {
        'update-sort': 'updateSort'
    },
    render: function() {
        this.$el.empty();
        this.collection.each(this.appendModelView, this);
        return this;
    },
    appendModelView: function(model) {
        var el = new Application.View.Item({model: model}).render().el;
        this.$el.append(el);
    },
    updateSort: function(event, model, position) {
        this.collection.remove(model);
        this.collection.each(function (model, index) {
            var ordinal = index;
            if (index >= position)
                ordinal += 1;
            model.set('ordinal', ordinal);
        });

        model.set('ordinal', position);
        this.collection.add(model, {at: position});

        // to update ordinals on server:
        var ids = this.collection.pluck('id');
        $('#post-data').html('post ids to server: ' + ids.join(', '));

        this.render();
    }
});

var Instance = {};
Instance.collection = new Application.Collection.Items();
Instance.collection.add(new Application.Model.Item({id: 1, name: 'a', ordinal: 0}));
Instance.collection.add(new Application.Model.Item({id: 2, name: 'b', ordinal: 1}));
Instance.collection.add(new Application.Model.Item({id: 3, name: 'c', ordinal: 2}));

Instance.collectionView = new Application.View.Items({
    el: '#collection-view',
    collection: Instance.collection
});

$(document).ready(function() {
    Instance.collectionView.render();
    $('#collection-view').sortable({
        stop: function(event, ui) {
            ui.item.trigger('drop', ui.item.index());
        }
    });
});

Plus the supporting CSS:

#collection-view {
   margin-bottom: 30px;
}

.item-view {
   border: 1px solid black;
   margin: 2px;
   padding: 10px;
   width: 30px;
}

and the markup:

<ul id='collection-view'></ul>
<div id='post-data'></div>

And once you have all that in place, you can drag the given elements around and get back a set of IDs in the order they appear on the user's screen! Isn't that amazing!? I gave it the benefit of the doubt, and tried to fit the code into my head for about half an hour before I realized something.

var util = {
    log: function (message) {
        $("#console").append(JSON.stringify(message)).append("<br />");
    }
};

var templates = {
    rule: Handlebars.compile($("#tmp-list").html())
};

var rules = {
    render: function (rules) {
        $.each(rules, function (i, aRule) {
            $("#rules-list").append(templates.rule(aRule));
        });
    }
};

$(document).ready(function() {
    rules.render([{"id": 1, "name": "a"},
                  {"id": 2, "name": "b"},
                  {"id": 3, "name": "c"}]);
    $("#rules-list").sortable({
        stop: function(event, ui) {
            var ids = $("#rules-list li").map(function (i, elem) {
                return $(elem).find(".id").attr("title");
            });
            util.log(ids.toArray());
        }
    });
});

Plus the CSS:

#rules-list {
    margin-bottom: 30px;
}

#rules-list li {
    border: 1px solid black;
    margin: 5px;
    padding: 7px;
    width: 50px;
}

and the markup, including the Handlebars template:

<ul id="rules-list"></ul>
<script id="tmp-list" type="text/x-handlebars-template">
   <li><span class="id" title="{{id}}"></span>{{name}} -- {{id}}</li>
</script>
<div id="console"></div>

There. That's a solution weighing in at under half the SLOC, which gives precisely zero fucks about MVC frameworks and accomplishes the same task. Incidentally, I include underscore-min.js and backbone-min.js in that fiddle link because this was refactored from the above Java-style OOP soup, but I'm fairly certain that they're both unnecessary for this approach.

Note that I use, and wholly endorse, Handlebars.js, or any of the similar standalone JS templating engines. I'll discuss why this is a good idea in a bit, when I clearly define the Right Thing©™, and what it implies.

The Right Thing is separating out your front end into an entirely different application from your back end. Set them apart entirely, and have them communicate through JSON feeds and AJAX requests. It seems like an awkward thing to do principally because of how much harder it is to generate/template HTML inside of Javascript than outside it, in server-side languages. Handlebars and similar libraries provide enough of a stopgap that the separation starts looking worthwhile.

Those of you who are already network programmers, or have dabbled with actors will intuitively understand why this is good. For the rest, let me try to explain what you gain and what you lose.

Bi-Directionally Agnostic Components

That's just a fancy way of saying that neither the front-end nor the back end really care what's on the other side of the channel, as long as it responds to the appropriate requests with well-formatted JSON[4]. That means that you could conceivably port your entire back-end without changing any client-side code if you really wanted to, or have certain requests get handled by specialized servers[5], or write multiple front-ends, or document your interfaces and let others write additional front-ends.
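To make that concrete, here's a toy version of the contract in shell; emit_items is a made-up stand-in for whatever back end you like, and the consumer downstream only knows the JSON shape, not what produced it:

```shell
# A fake "back end": anything that emits well-formed JSON qualifies.
emit_items() {
    printf '[{"id":1,"name":"a"},{"id":2,"name":"b"},{"id":3,"name":"c"}]\n'
}

# A "front end" consuming the feed. Swap emit_items for a curl against a
# real server and nothing downstream needs to change.
emit_items | python3 -c 'import json,sys; print(", ".join(i["name"] for i in json.load(sys.stdin)))'
# prints: a, b, c
```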

Hell, as long as it sent the right requests, and interpreted them correctly, there's no particular reason you couldn't ship a native desktop or native mobile front end that connected out to a production server this way. Decoupling project components to this extent also makes it much easier to make less radical and more controlled changes than the ones proposed above.

Simpler Components

Because each component can be made responsible for a particular concern, you get less code overlap. In retrospect, this has been a problem with most projects I've been on; if your server-side needs to be involved in templating, it's very tempting to "optimize" by having it emit ready-to-$.append() pieces. The problem is that these optimized pieces are harder to change later, and they sometimes require changes even when no other piece of server-side alters.

Doing the JSON communication thing completely tears this problem down. One end emits a series of expressions, the other consumes it. That means that your server has no involvement whatsoever in how the data is displayed to the user, and the client doesn't give a flying fuck about how it's stored on the back-end.

Security Concerns

One not-entirely-good part of the situation is that as soon as you decide to architect your application as a discrete set of network-communicating components, you have to solve one or two big problems that you could otherwise avoid. Specifically, you need to start dealing with throttling, authentication and network security right away, rather than leaving them for the point when you start scaling up. "Dealing with" doesn't necessarily mean "building", by the way, a legitimate choice is to make all of your handlers publicly accessible, but in that case you still need to make sure that no private information leaks out.

You also need to invest some thought into storage layout, since you won't necessarily be able to assume that the entire application is on the same machine.

Rich Client Side

This isn't necessarily a good thing. Yes there's a somewhat better interactive experience for the user, but they absolutely have to have JS enabled if they're frequenting your site. You can still bolt together a much simpler, pure HTML interface, but that will likely have to be an entirely separate piece if you want to keep any of the benefits of the decoupled approach. You still should take the approach, I think, but I wanted to note that there's a little bit more involved with supporting security-conscious users and older browsers.

HTML and Javascript

Barring a native front-end, you're stuck developing your client side in Javascript. The bad news is that it's Javascript. The good news is that a lot of people out there know it, and it means that your UI guys don't necessarily need to be up on their type theory or compiler concepts in order to be productive. And there's no reason for the back-end programmers not to use whatever language they find most productive[6].

You could CoffeeScript or Parenscript or Clojurescript your way out of the worst of it, but any of those approaches necessarily couples you to some language or technology less common and less commonly known than Javascript. That particular tradeoff[7] is a conversation I plan to have with myself another day though; let's get back on topic.

What The Frameworks Do(n't)?

So back to They're All Shit. You'll notice that of the implications above, a framework can[8] conceivably mitigate one; the security concerns. The rest of them are either inherent advantages, or inherent disadvantages so fundamentally baked into the approach that no amount of JS-based syntactic sugar could make a difference.

The one extremely annoying thing each framework seems to do is try to layer additional object hierarchies on top of the DOM; an already existing object hierarchy that perfectly expresses the view end of an application. You're expected to maintain a view and a model tree in addition to that. I guess some people are already used to typing reams upon reams of code to perform basic tasks? And most of them work at Google? Ok, ok, maybe that's not entirely fair. It's certainly possible that the approach allows larger applications to be put together more easily, but I'm honestly not seeing it. Having taken in tutorials for Ember, Spine, Backbone, Angular, and Batman, the things they all have in common are:

  • getting a simple task done is a lot more complicated with framework code than without it
  • components built with frameworks are less composable than components built without them
  • everyone really, really, really wants you to declare a tree of classes before you do anything

It seems like all of those would add up to significantly increased complexity in a larger project, and I sort of had this goofy idea that complexity is a thing we're trying to reduce when we reach for library code.

In any case, I'm going to keep pushing straight up jQuery with Handlebars for the time being. With small, deliberate pinches of Underscore here and there. And I'll keep an eye out for pitfalls that might be pre-resolvable using some of the approaches I've seen this week. If something pops up and kicks my ass hard enough to change my mind, I'll write a follow-up, but until then I can't honestly recommend that you go with what seems to be the flow of web development on this one.


1 - [back] - Depending on how comfortable you are with CoffeeScript, and how much you hate JS. Links to both in the sidebar.

2 - [back] - Again, I've only thrown the past week or so at this, so take that with a grain of salt.

3 - [back] - And, in fact, seem to make it much harder, more ponderous and more complicated to do the Right Thing©™.

4 - [back] - Or XML, or YAML, or whatever markup you end up actually using for communication. JSON seems like the right approach since it's extremely simple, and extremely easy to work with from within Javascript.

5 - [back] - For instance, you could have most of your app written in a powerful, expressive language without regard for performance, but have any real-time pieces handled by a server optimized for high concurrent throughput.

6 - [back] - As long as it supports easy serialization to and from whatever data format you guys have settled on.

7 - [back] - Number of practitioners vs. quality of hiring pool.

8 - [back] - This doesn't mean they do, incidentally; from what I've seen, none of the current JS MVCs bother making it easier to do authentication, or secure requests.

Monday, September 17, 2012

Setting Up Haskell


su -c "apt-get install haskell-platform haskell-mode hlint"

cabal update
sed -i 's/^-- documentation:.*$/documentation: True/' ~/.cabal/config
sed -i 's/^-- library-profiling:.*$/library-profiling: True/' ~/.cabal/config

cabal install hoogle
~/.cabal/bin/hoogle data
cabal install hasktags
git clone git://
cd scion
cabal install

Then go configure your .emacs properly. There; I just saved you some long, tedious, boobless hours.

The Basics

Installing Haskell itself is extremely easy, assuming you're on a relatively recent version of Debian.

apt-get install haskell-platform

should handle that nicely. That will install ghc (the Haskell compiler), ghci (the Haskell interpreter) and cabal (the Haskell package manager). Ok, now before you install anything else, hop into your favorite editor, open up ~/.cabal/config and change the options -- library-profiling: and -- documentation: to True. These both default to false, but having them on is preferable for the development process. Also, note that they begin commented out; you actually need to remove the initial -- before they'll take effect.

The documentation flag isn't critical, just nice. It gives you some extra local docs with the libraries it downloads. I'm honestly surprised that profiling isn't on by default though. You just plain can't profile without it. If you try, you get an error telling you to install the profiling versions of all relevant libraries. Here's the kicker though; if you try to install profiling libraries yourself from cabal by using the -p flag, it does not resolve dependencies. That means you get to go back through all the libraries you installed, and re-install them recursively by hand. If you do it through the config option I mention above, it's automatically done for you whenever you install a new library. Which seems, I dunno, a bit better[1].
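The sed one-liners in the block at the top of the post do exactly this uncommenting for you. Here's the transformation rehearsed on a scratch two-line file standing in for ~/.cabal/config (the real file is much longer, but these are the relevant lines):

```shell
# Scratch copy of the two commented-out options as cabal writes them
cat > demo-cabal-config <<'EOF'
-- documentation: False
-- library-profiling: False
EOF

# Strip the leading "--" and flip each option to True
sed -i 's/^-- documentation:.*$/documentation: True/' demo-cabal-config
sed -i 's/^-- library-profiling:.*$/library-profiling: True/' demo-cabal-config

cat demo-cabal-config
# prints:
# documentation: True
# library-profiling: True
```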

The Docs

I previously mentioned that hoogle is really useful, but what you'd really want is a local copy you could search without hitting a server. Well, there is one. It's a cabal package you can install with

cabal install hoogle
hoogle data ## you may need to run the hoogle binary directly with "~/.cabal/bin/hoogle" instead of "hoogle"

That second command will make a local copy of the hoogle database for you. You can then use it to do a text search, like hoogle map, or a type signature search like hoogle "(Ord a) => [a] -> [a]". That will give you a long list of results matching your query. You can also use hoogle --info [search term] to display the documentation of the first result, rather than a list of results.

Boy, it sure would be nice to have that available from your editor, huh?

The Editor

If you're not an Emacs user, and are used to indenting things by hand (shudder), pick up Leksah and be done with it. It's pretty cool, supports incremental compilation out of the box, has some small measure of project management, and performs pretty well. If you're like me, and have gotten used to Emacs handling the tedium of indentation[2], you'll want a better solution.

The default Haskell mode is available standalone or from the Debian repos. There are apparently some non-obvious config tweaks to make with the standalone version which were done for you in the Debian package, so use the apt-get option if you can.

You'll also want to install Scion, which will give you type hints in the minibuffer. Ostensibly, it also gives you goto-definition, and a couple of other small convenience facilities, but I've yet to get that working properly[3]. Actually do a git clone of that github and install it manually, by the way. The version in cabal has some dependency oddities that kept it from installing properly on my machine. YMMV, as always.

The last thing I ended up doing, though you may want to stick with the defaults, is wire up some extra keybindings for hlint[4] and hoogle[5].

And that's how you set up a Haskell environment. Or, at least, that's how I did. Hopefully, I can now start building some cool things with it.


1 - [back] - It's particularly odd when dealing with a fully lazy language, because it seems like that property would make it very difficult to reason about performance a priori. I may talk about that at some point in the future, but you're probably better off reading what Robert Harper says about it.

2 - [back] - Especially in languages with significant whitespace like Haskell.

3 - [back] - I use my own anyway.

4 - [back] - Inspired by this gist by Sam Ritchie.

5 - [back] - Not really inspired by anything, but extending the default haskell-mode functionality which only lets you do the initial documentation search, rather than viewing.