A cross-disciplinary research lab at MIT inventing self-assembly and programmable material technologies aimed at reimagining construction, manufacturing, product assembly and performance.
We live in an age of touch-screen interfaces, but what will the UIs of the future look like? Will they continue to be made up of ghostly pixels, or will they be made of atoms that you can reach out and touch?
At the MIT Media Lab, the Tangible Media Group believes the future of computing is tactile. Unveiled today, the inFORM is MIT's new scrying pool for imagining the interfaces of tomorrow. Almost like a table of living clay, the inFORM is a surface that three-dimensionally changes shape, allowing users to not only interact with digital content in meatspace, but even hold hands with a person hundreds of miles away. And that's only the beginning.
Created by Daniel Leithinger and Sean Follmer and overseen by Professor Hiroshi Ishii, the technology behind the inFORM isn't that hard to understand. It's basically a fancy Pinscreen, one of those executive desk toys that allows you to create a rough 3-D model of an object by pressing it into a bed of flattened pins. With inFORM, each of those "pins" is connected to a motor controlled by a nearby laptop, which can not only move the pins to render digital content physically, but can also register real-life objects interacting with its surface thanks to the sensors of a hacked Microsoft Kinect.
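The pipeline described above — a depth camera watching the surface while motors drive a grid of pins — can be sketched in a few lines. This is a hypothetical illustration, not the actual inFORM software: the grid size, travel range, and function names are assumptions, and the real system drives physical actuators rather than returning an array.

```python
import numpy as np

# Hypothetical sketch of a shape-display control step like inFORM's:
# a depth image is downsampled to the pin grid and mapped to pin heights.
# PIN_GRID, MAX_TRAVEL_MM, and the near/far range are illustrative assumptions.

PIN_GRID = (30, 30)      # assumed 30x30 array of motorized pins
MAX_TRAVEL_MM = 100.0    # assumed maximum pin extension in millimeters

def depth_to_pin_heights(depth_mm, near_mm=500.0, far_mm=1500.0):
    """Map a depth image (in mm) onto pin heights (in mm).

    Nearer objects raise pins higher, so a hand hovering over the
    surface is rendered as a raised relief on the pin grid.
    """
    h, w = PIN_GRID
    # Downsample the depth image by block-averaging onto the pin grid.
    dh, dw = depth_mm.shape[0] // h, depth_mm.shape[1] // w
    blocks = depth_mm[:h * dh, :w * dw].reshape(h, dh, w, dw).mean(axis=(1, 3))
    # Normalize: near -> 1.0 (fully raised), far -> 0.0 (flat), then scale.
    t = np.clip((far_mm - blocks) / (far_mm - near_mm), 0.0, 1.0)
    return t * MAX_TRAVEL_MM

# Example: a 300x300 depth frame with a "hand" region 600 mm from the camera
frame = np.full((300, 300), 1500.0)
frame[100:200, 100:200] = 600.0
heights = depth_to_pin_heights(frame)
```

In a real system each value in `heights` would become a motor setpoint; the same depth frame can simultaneously be used to detect hands pressing pins down, which is how the surface both renders and senses.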
To put it in the simplest terms, the inFORM is a self-aware computer monitor that doesn't just display light, but shape as well. Remotely, two people Skyping could physically interact by playing catch, for example, or manipulating an object together, or even slapping high five from across the planet. Another use is to physically manipulate purely digital objects. A 3-D model, for example, can be brought to life with the inFORM, and then manipulated with your hands to adjust, tweak, or even radically transform the digital blueprint.
But what really interests the Tangible Media Group is the transformable UIs of the future. As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts. The tactile is gone and the Tangible Media Group sees that as a huge problem.
"Right now, the things designers can create with graphics are more powerful and flexible than in hardware," Leithinger tells Co.Design. "The result is our gadgets have been consumed by the screen and become indistinguishable black rectangles with barely any physical controls. That's why BlackBerry is dying."
In other words, our devices have been designed to simulate affordances—the qualities that allow an object to perform a function, such as a handle, a dial, or a wheel—but not actually have them. Follmer says that's not the way it's supposed to be. "As humans, we have evolved to interact physically with our environments, but in the 21st century, we're missing out on all of this tactile sensation that is meant to guide us, limit us, and make us feel more connected," he says. "In the transition to purely digital interfaces, something profound has been lost."
The solution is programmable matter, and the inFORM is one possible interpretation of an interface that can transform itself to physically be whatever it needs to be. It's an interesting (and literal) analogue to skeuomorphism: while in the touch-screen age we have started rejecting interfaces that ape the look of real-world affordances as "tacky" in favor of more purely digital UIs, the guys at the Tangible Media Group believe the interfaces of the future won't be skeuomorphic. They'll be supermorphic, growing the affordances they need on the fly.
Although the inFORM is primarily a sandbox for MIT to experiment with the tactile interfaces to come, it would be wrong to dismiss this project as mere spitballing. "We like to think of ourselves as imagining the futures, plural," Follmer says. "The inFORM is a look at one of them." And while the actual consumer implementation may very well differ, both Follmer and Leithinger agree that tangible interfaces are coming.
"Ten years ago, we had people at Media Lab working on gestural interactions, and now they're everywhere, from the Microsoft Kinect to the Nintendo Wiimote," says Follmer. "Whatever it ends up looking like, the UI of the future won't be made of just pixels, but time and form as well. And that future is only five or ten years away. It's time for designers to start thinking about what that means now."
Facebook, Google, And Sony Are Getting Ready To Fight A Cyberpunk War
Science fiction has always presaged the advent of actual technology, and taught us how to think about it before it comes. A century before the Apollo Space Program, Jules Verne had flown a rocketship to the moon; 40 years before the iPad, Stanley Kubrick's 2001: A Space Odyssey imagined touch-screen tablets in every bag and briefcase.
Now, the next big war in tech is coming, and it has once again been predicted by science fiction: the curious subgenre of the 1980s known as cyberpunk, which deals with the technological blurring of the lines between individuals, machines, and mega-corporations. With Google Glass, Sony's recent announcement of a virtual reality headset, and Facebook's $2 billion purchase yesterday of the company that makes the VR headset Oculus Rift, it's clear that the cyberpunk era is now here, three decades after it was first predicted by novels like Neuromancer and Snow Crash. A cyberpunk tech war is coming. Not for your pocket, desktop or living room, but for how you experience reality...
We use characters in our daily lives, and these visual characters carry various, context-dependent meanings.
In the Japanese system of hiragana, “あ” [ah] is the most flexible character.
Despite its simple sound, “あ” can express a range of human emotions through different inflections: joy, anger, sorrow, and pleasure.
As such, the character of “あ” [ah] must reflect the emotional nuances of its use in the highly context-dependent culture of Japan. People often use this character without realizing its various expressions.
To help people recognize this remarkable feature, we proposed an object whose visual shape is the character “あ” [ah].
When people play with the object by striking, bending, or rubbing it, the various sounds of “あ” [ah] are expressed.
Through the communicative possibilities of “あ” [ah], we can learn to appreciate the nuanced relationship between a character and its meaning.
Finalist at 20th International collegiate Virtual Reality Contest (IVRC2012).
Jury Prize at 18th Student CG Contest.
Awards for Excellence at 2012 Asia Digital Art Award.
This is a blog about research on "Interactivity" & "Installation".
This began as an independent study project in Winter 2009 with Associate Professor Steve FORE.
It became a thesis project in Fall 2009 and Winter 2010, working with Assistant Professor Samson YOUNG.
I have since completed the MFA program but would like to continue exploring this area. Updates are irregular, and comments are welcome.