Dearest Overlord

Fading are the days when being born the son of a doctor means I’ll be a doctor, or being born a woman means I’ll be a homemaker. The increased freedom is wonderful, but even in this modern world these decisions have consequences and have to be considered in terms of the raw meat that backs behavior: the brain.

From an evolutionary psychology perspective, brains are complex structures with one job: keeping their genetic code in existence. Human brains do this in large part by connecting observations together and recognizing patterns that can be exploited. Maybe I notice that little trees start from around the area where big trees drop nuts. I connect nuts to trees, abstract things out to other types of seeds and, voilà, agriculture. This gives me a definite survival advantage over the squirrels that just see trees and nuts.

The problem, though, is that there’s simply a limit to how much our gray matter can handle. Studies on working memory have fairly consistently limited us to about 7 ± 2 chunks (on the order of 2.5 bits) of information in our heads at a time.

There’s a pretty straightforward mathematical relationship at work: the more things that are allowed to change in a system, the harder the computation. Particularly when dealing with probabilistic systems, the more degrees of freedom along which things can vary, the less precise your estimates become for a fixed amount of data.
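
To make that last point concrete, here’s a minimal simulation of my own (not part of the original argument): fit ordinary least squares to the same 100 observations while letting more and more coefficients vary freely, and watch the typical error of each estimate grow.

```python
# Sketch: same amount of data, more degrees of freedom, less precise estimates.
import numpy as np

rng = np.random.default_rng(0)
n_obs = 100      # fixed amount of data
n_trials = 500   # repeat to measure the spread of the estimator

for n_params in (2, 50, 90):
    errors = []
    for _ in range(n_trials):
        X = rng.normal(size=(n_obs, n_params))
        true_beta = np.ones(n_params)
        y = X @ true_beta + rng.normal(size=n_obs)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errors.append(np.sqrt(np.mean((beta_hat - true_beta) ** 2)))
    print(f"{n_params:2d} free parameters -> typical estimation error {np.mean(errors):.3f}")
```

With two free parameters the estimates are tight; with ninety they wobble, even though the amount of underlying data never changed.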

To some extent, modern society has extended its reach beyond what the brain can handle by itself. Most people have more freedom in their professions and lifestyles than their parents did. This freedom tends to decrease going down the socio-economic ladder, but the trend holds. Increasing the amount of choice increases the number of things one has to consider when making decisions, and this increase is not without a cost.

Ironically, attempting to lead a productive life in several areas taxes many people’s systems to the point that they end up not enjoying any of them (the stereotypical executive cut off from his family, for example). For this reason, you’re seeing movements like In Praise of Slowness and Getting Things Done (GTD) that emphasize reducing working cognitive load.

GTD, in particular, acknowledges that keeping track of the status of your various projects takes brain power. It offloads that information to external systems, leaving the mind free to focus more completely on the task at hand, be it the essay you’re writing or having fun playing with your kid.

Human beings have been developing increasingly complex systems for augmenting their brains. From the advent of speech to the modern computer, we have methods that allow us to externalize some of our memory and processing. I predict that the ubiquity of computing systems will continue to increase, driven in no small part by our need to manage an increasingly complex world. In much the same way that travel and communications are now almost wholly dependent on technological means, knowledge organization and discovery will continue to deepen its reliance on computing systems.

Sifting through these massive quantities of information requires not only a complex understanding of the relationships between pieces of data, but also a complex understanding of what the user wants and needs. Recommender systems, in particular, are where cognition and computing really meet. The driving question is one whose only criterion for evaluation is entirely subjective: how much does the user like this recommendation?

Ian Hickson has a post on how corporations make their money. He’s mostly poking at Microsoft, and in the part on vendor lock-in he mentions Google. Imagine, though, the not-improbable scenario that the paradigm for interface development continues to shift toward increasingly personalized recommendations. A recommender system has three things going for it when it makes good recommendations (a rough sketch of the first two follows the list):

  1. The model that you’re using to guess
  2. The data that you have to base your guess on
  3. Luck
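
As a toy illustration of how the first two ingredients interact, here’s a minimal user-based collaborative filter. The ratings matrix, the predict function, and everything else in it are invented for the example; real systems are enormously more sophisticated, but the division of labor between model and data is the same.

```python
# Toy recommender: ingredient 1 is the (crude) model -- cosine similarity
# between users; ingredient 2 is whatever ratings data you happen to hold.
import numpy as np

# rows = users, columns = items; 0 means "not yet rated" (made-up data)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return u @ v / denom if denom else 0.0

def predict(user, item):
    """Guess a rating as a similarity-weighted average over the other
    users who actually rated the item."""
    weights, scores = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            weights.append(cosine(ratings[user], ratings[other]))
            scores.append(ratings[other, item])
    return np.average(scores, weights=weights) if weights else 0.0

# User 0 most resembles user 1, who rated item 2 poorly, so the
# prediction gets pulled well below the other users' high ratings.
print(predict(user=0, item=2))
```

Even with a model this crude, the quality of the guess is bounded by how much of that matrix is actually filled in, which is where Google comes in.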

Google, for example, is collecting huge amounts of data. Thinking of Google purely in terms of its individual software products isn’t realistic. A would-be competitor doesn’t just need a better product. Their product has to be so much better that it outperforms a system backed by a significantly larger sample size (and thus more power, in the statistical-inference sense). Not only that, but they need to convince users to leave behind the data that Google has already collected.
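
To put a rough number on that sample-size advantage, here’s a small simulation with made-up figures (nothing to do with Google’s actual data): estimating the same preference rate from samples of different sizes.

```python
# The error of an estimated preference rate shrinks roughly as 1/sqrt(n),
# so a rival with a smarter model still has a raw data deficit to overcome.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.3  # hypothetical fraction of users who like some item
for n in (1_000, 100_000, 10_000_000):
    estimates = rng.binomial(n, true_rate, size=200) / n
    print(f"n = {n:>10,}   typical estimation error: {np.std(estimates):.5f}")
```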

Hickson considers the market for software primarily in monetary terms. People invest more than money in software, however. Consider my MySpace account. I think MySpace is a horrid implementation of a neat idea, and all indications are that Facebook is a better system. I’ve not created a Facebook account, though, because I don’t want to invest the time.

In an ideal market, users’ data would be under their own control rather than under the companies’ control. I could keep a repository of my data, and companies could create applications to augment it and do interesting things with it. Unfortunately, the market pressure to create such a repository is limited unless people care enough about their flexibility to insist on it.
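
If such a repository existed, it might look something like the sketch below. Every name in it is hypothetical, invented purely to show the shape of the idea: the user holds the data, and applications see only what they have been granted.

```python
# Hypothetical user-owned data store: applications read only the slices
# the owner has explicitly granted, and the data stays in the owner's
# repository. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PersonalRepository:
    owner: str
    _records: dict = field(default_factory=dict)  # e.g. {"ratings": [...]}
    _grants: dict = field(default_factory=dict)   # app name -> allowed keys

    def write(self, key: str, value) -> None:
        """The owner (or the owner's tools) adds data to the repository."""
        self._records.setdefault(key, []).append(value)

    def grant(self, app: str, keys: list) -> None:
        """The owner decides which slices of data an application may read."""
        self._grants[app] = set(keys)

    def read(self, app: str, key: str):
        """An application reads only what it has been granted."""
        if key not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {key!r}")
        return list(self._records.get(key, []))

repo = PersonalRepository(owner="me")
repo.write("ratings", ("some-movie", 5))
repo.grant("recommender-app", ["ratings"])
print(repo.read("recommender-app", "ratings"))
```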

I’m still considering exactly what sort of work I should take on this summer. Perhaps I could work this in somehow…
