Devices that adapt and build smart environments | Sean Follmer | TEDxCERN


We’ve evolved with tools,
and tools have evolved with us. Our ancestors created these
hand axes 1.5 million years ago, shaping them to fit not only the task at hand, but also their hand. However, over the years, tools have become
more and more specialized. These sculpting tools
have evolved through their use, and each one has a different form
which matches its function, and they leverage
the dexterity of our hands in order to manipulate things
with much more precision. But as tools have become
more and more complex, we need more sophisticated controls to operate them. And so designers have become
very adept at creating interfaces that allow you to manipulate parameters
while you’re attending to other things, such as taking a photograph
and changing the focus or the aperture. But the computer has fundamentally
changed the way we think about tools, because computation is dynamic. So it can do a million different things and run a million different applications. However, computers have
the same static physical form for all of these different applications, and the same static
interface elements as well. And I believe that this
is fundamentally a problem, because it doesn’t really allow us
to interact with our hands and capture the rich dexterity
that we have in our bodies. And so I believe we need new types of interfaces that can capture these rich abilities, and that can physically adapt to us and allow us to interact in new ways. And so that’s what I’ve been doing
at the MIT Media Lab and now at Stanford. So with my colleagues,
Daniel Leithinger and Hiroshi Ishii, we created inFORM, where the interface can actually
come off the screen and you can physically manipulate it. Or you can visualize
3D information physically and touch it and feel it
to understand it in new ways. Or you can interact through gestures
and direct deformations to sculpt digital clay. Or interface elements can arise
out of the surface and change on demand. And the idea is that for each
individual application, the physical form can be matched
to the application. And I believe this represents a new way that we can interact with information, by making it physical. So the question is, how can we use this? Traditionally, urban planners
and architects build physical models of cities and buildings
to better understand them. So with Tony Tang at the Media Lab,
we created an interface built on inFORM to allow urban planners
to design and view entire cities. And now you can walk around it,
but it’s dynamic, it’s physical, and you can also interact directly. Or you can look at different views, such as population or traffic information, but it’s made physical. We also believe that these dynamic
shape displays can really change the ways that we remotely
collaborate with people. So when we’re working together in person, I’m not only looking at your face, but I’m also gesturing
and manipulating objects, and that’s really hard to do
when you’re using tools like Skype. And so using inFORM, you can literally reach out from the screen and manipulate things at a distance. So we used the pins of the display
to represent people’s hands, allowing them to actually touch
and manipulate objects at a distance. And you can also manipulate
and collaborate on 3D data sets as well, so you can gesture around them
as well as manipulate them. And that allows people to collaborate
on these new types of 3D information in a richer way than might
be possible with traditional tools. And so you can also
bring in existing objects, and those will be captured on one side
and transmitted to the other. Or you can have an object that’s linked
between two places, so as I move a ball on one side, the ball moves on the other as well. And so we do this by capturing
the remote user using a depth-sensing camera
like a Microsoft Kinect. Now, you might be wondering how this all works. Essentially, it’s 900 linear actuators connected to these mechanical linkages, which allow motion down here to be propagated to these pins above.
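To make that concrete, here is a minimal Python sketch of how a depth frame might be turned into pin heights. This is an illustration, not the actual inFORM code: the 30x30 grid simply follows from the 900 pins, while the frame size, working distances, and the read_depth_frame / set_pin_heights calls are assumptions.

```python
import numpy as np

# Hypothetical sketch; not the actual inFORM pipeline.
# Assumed: a 640x480 depth frame in millimeters from a Kinect-style
# sensor, and the 900 pins arranged as a 30x30 grid, each taking a
# normalized height command in [0.0, 1.0].

GRID = 30                     # 30 x 30 = 900 pins
NEAR_MM, FAR_MM = 500, 1200   # assumed working volume above the sensor

def depth_to_pin_heights(depth_mm: np.ndarray) -> np.ndarray:
    """Downsample a depth frame to one normalized height per pin."""
    h, w = depth_mm.shape
    side = (min(h, w) // GRID) * GRID   # largest square divisible by GRID
    roi = depth_mm[:side, :side].astype(float)
    cell = side // GRID
    # Average the depth values inside each grid cell.
    cells = roi.reshape(GRID, cell, GRID, cell).mean(axis=(1, 3))
    # Nearer surfaces (e.g., a hand reaching in) become taller pins.
    heights = (FAR_MM - cells) / (FAR_MM - NEAR_MM)
    return np.clip(heights, 0.0, 1.0)

# frame = read_depth_frame()                    # hypothetical sensor call
# set_pin_heights(depth_to_pin_heights(frame))  # hypothetical actuator call
```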
So it’s not that complex compared to what’s going on at CERN, but it did take us a long time to build. We started with a single motor, a single linear actuator, and then we had to design a custom circuit board to control them. And then we had to make a lot of them. And so the problem with having
900 of something is that you have to do
every step 900 times. And so that meant that we had
a lot of work to do. So we sort of set up
a mini-sweatshop in the Media Lab and brought undergrads in and convinced
them to do “research” — (Laughter) and had late nights
watching movies, eating pizza, and screwing in thousands of screws. You know — research. (Laughter) But anyway, I think that we were
really excited by the things that inFORM allowed us to do. Increasingly, we’re using mobile devices
and we interact on the go, but mobile devices, just like computers, are used for so many
different applications. So you use them to talk on the phone, to surf the web, to play games,
to take pictures, or even a million different things. But again, they have the same
static physical form for each of these applications. And so we wanted to know how we could take some of the same interactions that we developed for inFORM and bring them to mobile devices. So at Stanford, we created
this haptic edge display, which is a mobile device
with an array of linear actuators that can change shape, so you can feel in your hand
where you are as you’re reading a book. Or you can feel in your pocket
new types of tactile sensations that are richer than the vibration. Or buttons can emerge from the side
that allow you to interact where you want them to be. Or you can play games
and have actual buttons. And so we were able to do this by embedding 40 tiny linear actuators inside the device, which you can not only touch, but also back-drive.
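As one example of what those edge pins could do, here is a small hypothetical sketch of the "feel where you are in a book" idea: reading progress is mapped to a raised bump at the matching position along the edge. Only the 40-actuator count comes from the talk; the function names and bump shape are assumptions.

```python
NUM_PINS = 40  # the display's 40 linear actuators along one edge

def progress_to_pin_levels(progress: float, bump_width: int = 2) -> list[float]:
    """Map reading progress (0.0 = start, 1.0 = end) to a raised bump
    of pins at the corresponding position along the edge."""
    progress = min(max(progress, 0.0), 1.0)
    center = round(progress * (NUM_PINS - 1))
    levels = [0.0] * NUM_PINS
    for i in range(NUM_PINS):
        distance = abs(i - center)
        if distance <= bump_width:
            # Triangular profile: tallest at the center, tapering off.
            levels[i] = 1.0 - distance / (bump_width + 1)
    return levels

# Halfway through a book: a bump centered around pin 20.
# set_edge_pins(progress_to_pin_levels(0.5))   # hypothetical actuator call
```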
But we’ve also looked at other ways to create more complex shape change. So we’ve used pneumatic actuation
to create a morphing device where you can go from something
that looks a lot like a phone … to a wristband on the go. And so together with Ken Nakagaki
at the Media Lab, we created this new high-resolution version that uses an array of servo motors
to change from interactive wristband to a touch-input device to a phone. (Laughter) And we’re also interested
in looking at ways that users can actually
deform the interfaces to shape them into the devices
that they want to use. So you can make something
like a game controller, and then the system will understand
what shape it’s in, and change to that mode. So, where does this point? How do we move forward from here? I think, really, where we are today is in this new age
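The talk doesn’t say how that recognition actually works, but as a purely hypothetical sketch, a system could compare the sensed pin heights against stored shape templates and switch to the mode of the best match; the grid size, templates, and mode names below are all assumptions.

```python
import numpy as np

# Purely hypothetical: nearest-template matching over sensed pin heights.

GRID = 16  # assumed sensing resolution

def flat(height: float = 0.0) -> np.ndarray:
    return np.full((GRID, GRID), height)

def controller_template() -> np.ndarray:
    """Two raised grips at the left and right edges."""
    t = flat()
    t[4:12, 0:3] = 1.0    # left grip
    t[4:12, 13:16] = 1.0  # right grip
    return t

TEMPLATES = {
    "game_controller": controller_template(),
    "flat_surface": flat(),
}

def classify_shape(pin_heights: np.ndarray) -> str:
    """Return the mode whose template has the lowest mean squared error."""
    errors = {
        mode: float(np.mean((pin_heights - tmpl) ** 2))
        for mode, tmpl in TEMPLATES.items()
    }
    return min(errors, key=errors.get)

# shape = read_pin_heights()      # hypothetical sensor call
# switch_to_mode(classify_shape(shape))
```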
So, where does this point? How do we move forward from here? I think, really, where we are today is in this new age of the Internet of Things, where we have computers everywhere: they’re in our pockets,
they’re in our walls, they’re in almost every device
that you’ll buy in the next five years. But what if we stopped thinking about devices and thought instead about environments? And so how can we have smart furniture, or smart rooms, or smart environments, or cities that can adapt to us physically, and allow us to collaborate with people in new ways and take on new types of tasks? So for the Milan Design Week,
we created TRANSFORM, which is an interactive table-scale
version of these shape displays, which can move physical objects
on the surface; for example, reminding you to take your keys. But it can also transform
to fit different ways of interacting. So if you want to work, then it can change to sort of
set up your work system. And so as you bring a device over, it creates all the affordances you need and brings other objects
to help you accomplish those goals. So in conclusion, I really think that we need to think
about a new, fundamentally different way of interacting with computers. We need computers
that can physically adapt to us and adapt to the ways
that we want to use them, and really harness the rich dexterity of our hands, and our ability to think spatially
about information by making it physical. But looking forward, I think we need
to go beyond this, beyond devices, to really think about new ways
that we can bring people together and bring our information into the world, and think about smart environments
that can adapt to us physically. So with that, I will leave you. Thank you very much. (Applause)
