11 April 2007

Darwinian Evolution of Software?

In high school math class, we learned about gradient descent. The idea is to find the low point on a graph, like finding the lowest elevation in a geographical area. Although it sounds straightforward, there's a problem: we can tell whether a point is a local minimum, but not whether it's the globally lowest one. The algorithm for gradient descent can be summed up as "go downhill", which will eventually lead you to a low point. It's possible that right next door to your low point, however, is a much lower point, but you would have to go uphill to get there, so you never will.
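To make "go downhill" concrete, here's a minimal one-dimensional sketch. The function, step size, and iteration count are illustrative assumptions, nothing canonical:

    // Minimal 1-D gradient descent: repeatedly step downhill.
    // df is the derivative of whatever function we're minimizing.
    function gradientDescent(
      df: (x: number) => number,
      x0: number,        // starting point
      stepSize = 0.1,
      steps = 1000
    ): number {
      let x = x0;
      for (let i = 0; i < steps; i++) {
        x -= stepSize * df(x); // move against the slope, i.e. downhill
      }
      return x; // a local minimum near x0 - not necessarily the global one
    }

    // Example: f(x) = (x - 3)^2 has df(x) = 2(x - 3); starting at x = 0,
    // this converges toward x = 3.
    console.log(gradientDescent(x => 2 * (x - 3), 0)); // ~3

Start it in a different valley and it settles into that valley's minimum instead, which is exactly the problem.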

Lots of things behave like this, which I've always found interesting. Rivers lie where they do because they seek out the lowest point. And evolution acts this way too, because the best it can do is improve on what's currently available. Each generation can have mutations relative to its parents, and if a mutation is adaptive, the new trait lives on. But if the improvement depends on something else happening first, and that prerequisite is detrimental, then the improvement isn't possible. That's like going uphill to look for a new minimum.


My favorite example of this is that the backs of our eyes have a hole where the optic nerves bundle together and pass through the eyeball to get to our brain. This is needed because the nerves come out of the front of the receptor cells instead of the back. If the nerves came out the back, we wouldn't need the hole, but we happened to evolve the receptor cells and optic nerves in that order, so by the time the eye was finished, the pipes were laid wrong. If only the intelligent designer could refactor, right?

From http://www.2think.org/eye.shtml

Ok, now I'll make my point. In software, we should be intelligent designers, and we should be able to refactor. If we made an eye with a hole in it, we could probably fix that in release 2.0. And yet, on a larger and slower scale, software really does "evolve," and critical mistakes aren't corrected if correcting them would mean the technology is temporarily crippled. Instead, we build on them and move on to the next layer of abstraction with those funky things still frozen in the layer below.

Around 2002, as the web was turning into a place for applications to live instead of just reference material, a need arose for appealing and highly functional user interfaces in a web browser. We wouldn't have chosen to turn the web browser into a rich-client platform if we'd had a big meeting about it, but browsers were already deployed on everyone's desktop. Putting an application on that platform was clearly descending the gradient, since we weren't sure our users would put up with downloading and installing something unless their employer made it mandatory. We also needed it to be cross-platform, since we didn't want to make any assumptions about what hardware our users had. Java tried to get onto the desktop, which would have made more sense, but it didn't catch on.

So then Web 2.0 came along and improved the browser's ability to present a decent UI. We deploy rich-client applications in a browser using the XmlHttpRequest JavaScript feature. JavaScript has to be the most abused programming language ever, and at first I was aghast that AJAX would push it to do so much. It felt buggy and unnatural, and again I think this was an evolutionary process - we're writing web applications, we have developers who know CSS and DHTML, let's put them to work.
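For anyone who hasn't written one of these by hand, the basic pattern looks roughly like this - the URL and element id below are made-up placeholders, not a real application's API:

    // Fetch data asynchronously and patch it into the page,
    // instead of doing a full page reload.
    function loadOrders(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/orders.xml", true); // true = asynchronous
      xhr.onreadystatechange = () => {
        // readyState 4 means the response has fully arrived
        if (xhr.readyState === 4 && xhr.status === 200) {
          const target = document.getElementById("orders");
          if (target) {
            target.innerHTML = xhr.responseText;
          }
        }
      };
      xhr.send(null);
    }

Multiply that by every widget on the page, sprinkle in browser quirks (older IE wanted an ActiveXObject instead of XMLHttpRequest), and you can see why it felt like pushing the language uphill.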

Bruce Eckel wrote about this in his article on moving to Flex. He gives a nice history of the presentation layer, finally ending up at Adobe Flex. I'm sure many of us are a little annoyed by the frequency of the Bruce Eckel "thinking in Flex" ads that have been all over the place lately - it reminds me of how I wanted to get away from T-Mobile when they had Jamie Lee Curtis everywhere. But Flex does make evolutionary sense - now that we've evolved these capabilities for writing web applications, let's improve our toolset, move to ActionScript, and get some better graphics and animation capabilities. We don't have to deploy much to clients because, again, most of them already have Flash plugins in their browsers.

So now here comes Apollo.

Jesse and Gabe were experimenting with it, and from what I could tell, it's a neat way to take the current evolution of Web 2.0 presentation technology and bring it back to the desktop. This is where I think of the hole in the eye - it's great to have a true desktop app that can interact with peripherals, be responsive, and work offline, but some of the technology stack comes straight from the web. CSS wasn't intended to lay out a rich-client desktop app, was it?
