Friday, November 7, 2008

Web 3.0 - Context-Aware Apps and Devices.

I know I wanted to talk about the economic 'crisis', Babajob and Orkut, among a myriad of other things, but you can't have everything.

Since the popularization of the internet (the mid-1990s, and a little while before), there has been the promise of an internet of machines and of machine learning: a promise of 'smart' devices. We are still waiting for that internet of devices, a stage where I walk into a meeting and my phone "knows" this, goes into silent mode, and "tells" the computer to load up my presentation and add my name to the attendee list. Coincidentally, we are also waiting for the flying car, but that's a whole different blog post.

We are finally getting to the stage where devices, and in turn applications, can be context-aware. We now have mobile phone applications which, for example, "know" where you are, where you are going, whom you are meeting, and at what time; such an application then "detects" if you're going to be late and informs others via an SMS (a rough sketch of that logic follows the list below). This is one step closer to the dream, but two things are still missing:

1.) Multiple Context Convergence. [MCC]
2.) Device Information Sharing. [DIS]
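
To make the lateness example concrete, here is a minimal Python sketch of the decision such an application has to make; the function name, the grace period, and the attendee names are illustrative assumptions, not any particular product's API.

```python
from datetime import datetime, timedelta

def running_late_message(eta, meeting_start, attendees, grace_minutes=5):
    """Return an SMS-style warning if the ETA misses the meeting start, else None."""
    if eta <= meeting_start + timedelta(minutes=grace_minutes):
        return None
    minutes_late = int((eta - meeting_start).total_seconds() // 60)
    return f"To {', '.join(attendees)}: running about {minutes_late} minutes late."

# Toy example: meeting at 14:00, estimated arrival 14:20 -> a warning is produced.
print(running_late_message(datetime(2008, 11, 7, 14, 20),
                           datetime(2008, 11, 7, 14, 0),
                           ["Asha", "Ravi"]))
```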

MCC is critical. Let's go back to the example of me walking into a meeting room and my phone "knowing" I am in a meeting. How exactly does it know? Just because my calendar says 'meeting'? Because the noise level in the room is low? Or is it really smart enough to use all the different contexts available to it (calendar information, noise level, location, proximity to others invited to the meeting) and make a smart decision in the face of incomplete and/or conflicting information?
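
As a rough sketch of what MCC could look like in code, here is one naive way to fuse several noisy signals into a single "am I in a meeting?" decision; the signal names, weights, and threshold are purely illustrative assumptions.

```python
def in_meeting_score(signals, weights):
    """Combine weighted context signals into a single confidence score (0 to 1).

    Each signal is True (supports 'in a meeting'), False (contradicts it),
    or None (unavailable); missing signals simply do not contribute.
    """
    total = score = 0.0
    for name, value in signals.items():
        if value is None:
            continue
        weight = weights.get(name, 0.0)
        total += weight
        if value:
            score += weight
    return score / total if total else 0.0

# Illustrative weights, and a reading where calendar and location agree,
# the noise sensor disagrees, and proximity data is unavailable.
weights = {"calendar": 0.4, "noise": 0.2, "location": 0.25, "proximity": 0.15}
signals = {"calendar": True, "noise": False, "location": True, "proximity": None}
if in_meeting_score(signals, weights) > 0.6:
    print("Switching phone to silent mode")
```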

DIS is really what I'd like to see: an internet for the devices. Now that my phone knows I am in a meeting, it can not just go silent itself but also tell my laptop to switch to my "work profile - Meeting mode", which would in turn prompt the computer to turn off the lights in my office and lock the office door (if I forgot to do so). It would also activate a call divert from my office phone to my cell phone if I so desire. The opportunities for this are endless.
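
One way to picture DIS is a shared publish/subscribe channel that devices push context events onto. The sketch below is a toy, single-process stand-in for that idea; the class, the event name, and the device actions are assumptions, not a real protocol.

```python
class ContextBus:
    """Toy publish/subscribe channel that devices could share context events on."""

    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, **details):
        for handler in self.handlers.get(event, []):
            handler(**details)

bus = ContextBus()

# Each device registers the action it takes when the 'meeting_started' event arrives.
bus.subscribe("meeting_started", lambda **d: print("Phone: switching to silent"))
bus.subscribe("meeting_started", lambda **d: print("Laptop: loading meeting-mode profile"))
bus.subscribe("meeting_started", lambda **d: print(f"Office: lights off, door locked ({d['room']})"))

# The phone, having decided we are in a meeting (the MCC step), tells everyone else.
bus.publish("meeting_started", room="Conference Room B")
```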

Which brings us back to: well, why haven't we done this yet? The simple answer is twofold: the devices were never built to interact with each other in the first place, and since MCC hasn't yet been perfected, the results just aren't there yet.

Another phenomenon which is interesting and somewhat related is the convergence of devices: a stage where my mobile phone is my MP3 player and camera ... and my laptop is my TV. Although this may at first seem like a dream killer, it in fact makes it more possible for these device functions to interact and communicate.

We will see context-aware applications before the dawn of context-aware devices. This is because code is easy (free) to create, and a whole lot more people have access to a computer than to the specialized equipment required to make sophisticated devices like cars and phones.

Eventually, the machines will talk to each other ... until then, we'll just have to be content with talking to each other.

Random things I did this week: I geo-coded 676 ATMs for a marketing project. The addresses were a bit messed up, but once I figured out a way to get three possibly-working locations from each one provided, it took only a few hours to go from looking at a list of ATMs spanning 67 pages to having a mock website up with a glorious Google Map flaunting the ATMs as placemarkers ... now I am adding info windows with YouTube videos and mock 'user-generated' content to it. =)
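
For the curious, the geocoding step can be sketched roughly like this in Python, using Google's present-day HTTP geocoding endpoint and the requests library; the CSV layout, file name, and API key are assumptions (the API available back in 2008 was different), so treat it as an outline rather than exactly what I ran.

```python
import csv
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
API_KEY = "YOUR_API_KEY"  # placeholder, assumed

def geocode(address):
    """Return (lat, lng) for an address via Google's geocoding API, or None."""
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": API_KEY})
    results = resp.json().get("results", [])
    if not results:
        return None
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

# Assumed input: a CSV with one messy ATM address per row.
with open("atms.csv") as f:
    for row in csv.reader(f):
        address = row[0]
        print(address, "->", geocode(address))
```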

Final Word: The next post will have a video from the author of "The Wisdom of Crowds", and we will talk about blogging ... also i can haz cheezeburgar!