This is one of a series of live blog posts directly from the site of the 2013 IFS World Conference in Barcelona. Business journalist Adam Tinworth is a veteran of Reed Business Information and a lecturer on digital journalism at City University in London. His first-hand impressions are accompanied by illustrations by Matthew Buck, cartoonist for Drawnalism.
Pranav’s journey started with architecture – designing buildings rather than technology. He’s worked in rural and urban India, and with Microsoft. Then he joined MIT’s Media Lab. All of this helped him understand design, technology and a broad spectrum of other disciplines. One phrase informs his work:
Every industry at its maturity looks for the human touch.
Today, when you go to a market to buy a watch, you don’t ask how accurate it is. You used to. Then people started advertising other things about it – it was thinner, golden, digital. It was the watch Einstein wore. Nobody cares about the accuracy any more.
Ten years ago, when you bought a computer, you asked, “How much RAM does it have? How fast is it?” Now you ask: “Does it have touch? Does it come in pink?”
Magic Interfaces
“Any sufficiently advanced technology is indistinguishable from magic”
– Arthur C. Clarke
The technology in Google Glass would have been magic 200 years ago. The medium of information keeps changing. It’s been physical, visual, auditory and any combination of these. It moved into electronics, and then into laptops and phones. Next? It’ll be in wearables – maybe.
We are on the verge of information changing its medium. Again. No-one knows what it will look like – and that’s what’s exciting. The cloud of information – the data – is important. One thing is constant: our urge to look into that data. How do we look into the cloud? In 1999, he took two broken computer mice and built a glove that mapped your hand’s actions into a virtual space. Can paper be an interface to the virtual world? Can you tag and analyze words and images on paper? If you can, people who don’t want to communicate via computers can use paper, and still be part of the cloud.
Most families fight over the remote. How can you share the TV? Well, what if you could use glasses to pick out different signals and thus different channels? That idea led to the technology that runs 3D TV now. Glasses that detect blinks can be used to control robots and direct them. Right now, we generally use our hands to interact with devices. Could we use other things – and thus give access to people who can’t use their hands?
Touching the cloud
In all of this work, he was trying to bring the physical world towards the digital world. We’ve been through virtual reality and mixed reality. We’ve had touch interfaces, and sensors inside physical objects. Smartphones bring the digital experience everywhere. But do we need to restrict the digital world to these screens? He put a projector on his bike helmet, and wherever he went, he saw colorful pixels. And that unleashed a new urge: to touch them. He set up a system using pen tops on the fingers; a camera detects them and lets you “draw” on walls. Any wall becomes a potential interface. It can be used to take photos, or to project the Amazon rating of a book onto books in a bookshop – it brings the dynamic information of the internet into the physical world.
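The principle behind that pen-top trick is easy to sketch: a webcam hunts for the brightly colored cap on a fingertip and treats its position as the tip of a pen. Here is a minimal toy version in Python with OpenCV, assuming a red cap and a guessed HSV color range (both placeholder choices, not the original implementation):

```python
import cv2
import numpy as np

# Assumed HSV range for a red pen cap; a real system would calibrate this.
LOWER_RED = np.array([0, 120, 120])
UPPER_RED = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)   # default webcam
trail = []                  # the "ink" drawn so far

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)     # largest red blob = the cap
        (x, y), radius = cv2.minEnclosingCircle(biggest)
        if radius > 5:                                   # ignore specks of noise
            trail.append((int(x), int(y)))
    for p, q in zip(trail, trail[1:]):                   # replay the stroke
        cv2.line(frame, p, q, (0, 255, 0), 2)
    cv2.imshow("wall drawing", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Colored markers make the vision problem cheap: instead of full hand tracking, the system only needs to find one saturated blob per fingertip.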
Pushing it further, you can attach a microphone to a piece of paper and make it a tablet, controlled by the sound of your finger on it.
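A crude stand-in for that paper-as-tablet idea: monitor a microphone and report a “tap” whenever the amplitude spikes. The sounddevice library and the threshold below are illustrative choices, not the original implementation:

```python
import numpy as np
import sounddevice as sd

THRESHOLD = 0.2   # assumed amplitude for a fingertip tap; needs tuning per mic

def on_audio(indata, frames, time, status):
    # indata is a float32 array of samples in [-1, 1]
    if np.abs(indata).max() > THRESHOLD:
        print("tap detected")

# Listen on the default input device for ten seconds.
with sd.InputStream(channels=1, callback=on_audio):
    sd.sleep(10_000)
```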
There’s a lot of commercial interest in these ideas. In the meantime, the device itself gets smaller and smaller – it’s nearly the size of a coin now. When it comes to screens, the human limitation on interfaces is greater than the technical one: too small, and you can’t use it. A projector brings new utility, because the device can be tiny and still project images of different sizes.
He’s open-sourced the technology behind this. One man in Brazil has used it to build a device that converts his sign language into spoken words.
Going Mouseless
One project based on this is Sparsh. He takes a YouTube video on his phone, touches it with his finger, and “copies” it to his finger. He touches a tablet, and the video starts playing there. How does it do that? The cameras in the phone and the tablet recognize him, and the system knows what he last “stored”.
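As described, Sparsh behaves like a per-user clipboard that lives in the cloud: the first device records what you “copied”, and the next device that recognizes you retrieves it. A toy in-memory version of that idea (purely illustrative; the real recognition pipeline is not public):

```python
from typing import Optional

class CloudClipboard:
    """A per-user 'copy buffer' living in the cloud (toy in-memory version)."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}   # user id -> last copied item

    def copy(self, user_id: str, item: str) -> None:
        # Called by the device where the user touched the content.
        self._store[user_id] = item

    def paste(self, user_id: str) -> Optional[str]:
        # Called by the next device that recognizes the same user.
        return self._store.get(user_id)

cloud = CloudClipboard()
cloud.copy("user-123", "youtube:some-video-id")   # phone: touch the video
print(cloud.paste("user-123"))                    # tablet: retrieve and play it
```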
The mouse was invented in 1964 – and 50 years later, it’s still there. Can we get rid of it? That was the genesis of his Mouseless idea: a non-existent mouse that still controls your computer through your hand actions. It uses an infrared laser and a camera to detect your movements and translate them into cursor movement.
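The core loop is easy to picture: find the bright blob where the hand breaks the infrared plane, then scale its camera coordinates onto the screen. A minimal sketch, assuming an IR-sensitive camera on index 0 and using pyautogui as a stand-in for the OS cursor API:

```python
import cv2
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
cap = cv2.VideoCapture(0)   # assume index 0 is the IR-sensitive camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Where the hand breaks the IR plane, it shows up as a bright blob.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)
        (x, y), _ = cv2.minEnclosingCircle(blob)
        h, w = gray.shape
        # Scale camera coordinates onto the screen and move the cursor.
        pyautogui.moveTo(int(x / w * SCREEN_W), int(y / h * SCREEN_H))
    cv2.imshow("ir view", bright)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```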
This camera-based analysis of your world opens up all sorts of options. If your phone recognizes an internet-connected light, it can offer you the chance to turn it on and off remotely. People can annotate presentations from their own phones. Put the camera in glasses, and you can augment the world as you walk through it.
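The light example boils down to: recognize the object, then offer an action backed by a network call. A hedged sketch, with a hypothetical REST endpoint standing in for whatever the real light exposes:

```python
import requests

# Hypothetical endpoint for an internet-connected light; real smart-light
# APIs differ in detail but follow the same request/response shape.
LIGHT_URL = "http://192.168.1.50/api/light/1/state"

def on_object_recognized(label: str) -> None:
    # In the real demo a vision model produces the label; here we fake it.
    if label == "light":
        answer = input("Recognized a connected light. Turn it on? [y/n] ")
        if answer.lower() == "y":
            requests.put(LIGHT_URL, json={"on": True})

on_object_recognized("light")
```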
Recently he joined Samsung, and now he’s their head of research. The Galaxy Gear is his first Samsung product. It’s 2.5 seconds from thinking “I want to take a photo” to taking a photo – much quicker than a phone.
The world we have created is a product of our thinking. It cannot be changed without changing our thinking.
– Albert Einstein