2013-05-02

Who Needs Siri When You Have A Brain?

Brain-controlled interfaces are making progress just as Siri copycats proliferate, suggesting that developers will have a surprising spectrum of UI-less interface paradigms to watch over the next few years.



No fingers, no thumbs: Your next input device is your brain. According to the New York Times, there's something of an explosion going on in the brain-reading device game right now. Such is the pace of development, in fact, that:

In a couple of years, we could be turning on the lights at home just by thinking about it, or sending an e-mail from our smartphone without even pulling the device from our pocket. Farther into the future, your robot assistant will appear by your side with a glass of lemonade simply because it knows you are thirsty.

Big names are investigating this technology, which may be considered the ultimate "non-UI" interface, and the New York Times quotes MIT Technology Review's article about work being done in Samsung's Emerging Technology Lab. Samsung has a controller resembling a "ski hat studded with monitoring electrodes" that can interact with a tablet UI.

But you may already be familiar with some of this tech thanks to smaller companies like NeuroSky and Emotiv—both of which sell cheap headset-like systems that can detect brainwave activity and translate it into controls for games and simple apps on computers or smartphones. There's also more serious work such as Brown University's BrainGate project. This sophisticated, surgically implanted brain interface offers such a degree of finesse, once the user has learned to use it, that it

enabled two people with full paralysis to use a robotic arm with a computer responding to their brain activity. One woman, who had not used her arms in 15 years, could grasp a bottle of coffee, serve herself a drink and then return the bottle to a table. All done by imagining the robotic arm’s movements.
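For a sense of how simple the consumer end of this can be, here's a minimal sketch of reading a headset's "attention" score and flipping a light when the wearer concentrates. It assumes a NeuroSky-style ThinkGear Connector service streaming JSON over a local socket; the port number, handshake payload, and field names below are assumptions drawn from that setup and may differ for your hardware.

```python
# Hedged sketch: poll a NeuroSky-style "attention" value and toggle a light
# when it crosses a threshold. Port, handshake, and field names are assumed
# ThinkGear Connector conventions; adjust for your headset.
import json
import socket

THRESHOLD = 70  # arbitrary "concentrating hard" cutoff

def toggle_light(on):
    # Placeholder for whatever home-automation call you actually use.
    print("Light ON" if on else "Light OFF")

def main():
    sock = socket.create_connection(("127.0.0.1", 13854))
    sock.sendall(b'{"enableRawOutput": false, "format": "Json"}\n')
    buffer = b""
    light_on = False
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buffer += chunk
        # Packets are assumed to arrive as line-separated JSON objects.
        *lines, buffer = buffer.replace(b"\r", b"\n").split(b"\n")
        for line in lines:
            if not line.strip():
                continue
            try:
                packet = json.loads(line)
            except ValueError:
                continue
            attention = packet.get("eSense", {}).get("attention")
            if attention is None:
                continue
            should_be_on = attention >= THRESHOLD
            if should_be_on != light_on:
                light_on = should_be_on
                toggle_light(light_on)

if __name__ == "__main__":
    main()
```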

The New York Times then reminds us of the classic sci-fi novel and film Firefox, in which an American pilot is sent to steal a revolutionary Soviet fighter jet that has a brain interface; the wrinkle is that he has to "think in Russian." It's a reminder that many implementations of thought-control interfaces have already been imagined by writers. Arthur C. Clarke himself imagined a "braincap" that connected a user's thoughts seamlessly with a digital device—although Clarke's system goes even further and can send data back to the user.

So is this really close to being true? Maybe not in the short term, because the world is going to take years to get used to simpler innovations like Google Glass. But remarkable progress is already underway.


The Personal Digital Assistant app revolution cometh. We know Apple has big plans for Siri, but this week Google brought its Now service to iOS—a move that could be considered an attack on Siri's job. Now is a very different type of system, but because it automatically combs through your online Google habits and presents you with what it thinks is relevant info based on time, location, and other factors, it may be considered a basic non-UI system.
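How might a Now-style system decide what to show? Here's a toy sketch, assuming nothing about Google's actual ranking: score candidate "cards" by how soon they matter and how close the user is to them, then surface the winners.

```python
# Toy illustration of "anticipatory" card ranking in the spirit of Google Now.
# This is not Google's algorithm -- just a minimal context scorer that weighs
# how soon a card matters and how close the user is to its location.
import math
from dataclasses import dataclass

@dataclass
class Card:
    title: str
    minutes_until_relevant: float  # e.g., minutes until the flight/meeting
    distance_km: float             # distance from the user's current location

def score(card: Card) -> float:
    urgency = math.exp(-card.minutes_until_relevant / 60.0)   # sooner = higher
    proximity = 1.0 / (1.0 + card.distance_km)                # closer = higher
    return 0.7 * urgency + 0.3 * proximity                    # arbitrary weights

cards = [
    Card("Leave now for your 3pm meeting", 25, 4.2),
    Card("Flight BA117 departs tonight", 300, 18.0),
    Card("Nearby coffee shop your friends like", 0, 0.3),
]

for card in sorted(cards, key=score, reverse=True):
    print(f"{score(card):.2f}  {card.title}")
```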

But TechCrunch has noticed that several Siri-like apps have popped up over recent months—apps like Donna, Osito, and Sherpa. The conclusion is that the big issue is "finding an app that fits everyone's lifestyle": people have different workflows, and users may be loath to personally train an app through its initial learning-curve period.

Nevertheless it's a trend worth watching, especially since Apple may be poised to reveal some of Siri's enhanced powers at WWDC 2013 in June.

Should Developers And Designers Really Seek to Make UI Invisible?

Google the phrase "invisible design." Read a few of the posts, and you'll find dozens of designers agreeing that the best design is nearly always invisible. (The flip side is also popular: we really notice bad design.) The argument runs thus: A great product should be designed so that it just works, and the considerable time and effort taken to make it work perfectly shouldn't be obvious to the user.

It's an attractive idea which, taken to extremes, can lead technologists to pursue what's commonly called a No UI interface, where content is central and everything else—controls, notifications, processes—is somehow unnoticeable.

But is the best UI the one that's least perceptible? And if so, is that always the case?

What This Story Is Tracking:

Zero-interface designs are creeping into phones, desktops, tablets, cars, and televisions. New ideas about UI could lead to use cases for technology we've never conceived of—or they could lead to a generation of impossibly confusing, arcane devices. We hash out the opinions here and cite products and designs we think will change the conversation.


Previous Updates

One of the ideals ascribed to future interfaces: They should feel like magic. The things a user has to do to trigger an action should be so fluid and natural that it's almost as if the device knows in advance what the user wants to do. David Holz, CEO of Leap Motion (maker of perhaps one of the most exciting "spatial interface" devices coming to market), phrased this more eloquently when he spoke recently at SXSW. From The Guardian:

This world [of natural gestural interfaces] perceives you in new ways. It is very much a new reality; when you reach out for an object using leap motion, it comes to you—it's like being a Jedi.

Leap Motion's hardware and software can read users' hand gestures so well that it's being experimented with in projects ranging from controlling a remote-controlled boat to playing an air-harp musical instrument; check out these stories from New Scientist and Engadget.
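To make the air-harp idea concrete, here's a hedged sketch that maps hand height above a sensor to a musical note; the read_palm_height_mm() stub is a hypothetical stand-in for whatever your spatial-sensing SDK actually provides.

```python
# Sketch of an "air harp": map hand height above a sensor to a musical note.
# read_palm_height_mm() is a hypothetical stand-in for a real SDK call; the
# pentatonic mapping is just for flavor.
import time

PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]  # MIDI notes, C major pentatonic
MIN_MM, MAX_MM = 100, 500                      # usable hover range above the sensor

def read_palm_height_mm():
    """Hypothetical sensor read; replace with your SDK's frame/hand query."""
    raise NotImplementedError

def height_to_note(height_mm):
    clamped = max(MIN_MM, min(MAX_MM, height_mm))
    index = int((clamped - MIN_MM) / (MAX_MM - MIN_MM) * (len(PENTATONIC) - 1))
    return PENTATONIC[index]

def play(note):
    print(f"pluck MIDI note {note}")  # swap in a real MIDI/synth call

def run(poll_hz=30):
    last_note = None
    while True:
        note = height_to_note(read_palm_height_mm())
        if note != last_note:          # only retrigger when the hand crosses a "string"
            play(note)
            last_note = note
        time.sleep(1.0 / poll_hz)
```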

But one of the most compelling demonstrations has come from a surprising source: NASA. On stage at the recent Game Developers Conference, scientists remotely controlled the one-ton ATHLETE rover prototype using Leap Motion. This was far more than a simple demo of a more intuitive way to command a rover on a distant planet or moon: NASA sees this sort of spatial UI as key to near-future robotic exploration of our solar system, because the interface is so transparent it helps operators feel like they're actually "tele-present." Whether a spatial UI is as beneficial here on Earth remains to be seen. From The Verge.


A different kind of invisible UI is the subject of a recent patent award to Apple. In this case, actually invisible. U.S. Patent Number 8,407,623 concerns "playback control using a touch interface," but it's not related to Apple's numerous other touch screen patents. Instead, Apple has considered how you may be able to meaningfully interact with a touch screen device like an iPhone when you can't see its display because it's in a pocket or bag. The specific idea is to:

Control media playback using a touch-sensing device without requiring the selection of displayed options...

This is actually more sophisticated than it may appear—even the most advanced touch screen UIs rely on visual feedback to let you know when you've touched or gestured on the right part of the display. To manage without it, Apple's patent mentions controls like single and double taps, or even circular dialing motions on the touch screen to control volume.

Of course, gestures like these, made furtively under the dinner table or on an iPod strapped to a runner's arm, could be confusing without visual feedback. Hence, sections of the patent mention the device giving audible or tactile notice that a gesture has been correctly recognized:

Wherein the tactile feedback comprises at least one vibration that matches a touch pattern of the detected touch gesture.

You might argue this still constitutes a "user interface," but what Apple's clearly trying to do here is imagine how users could seamlessly interact with a touch-screen device when they can't see the screen. Part of the "No UI" debate is what actually qualifies as UI-less and what's merely a less-immersive, traditional interface.
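As an illustration (not Apple's implementation), eyes-free control of this sort might look something like the sketch below: a double tap toggles playback, a circular drag nudges the volume, and a vibrate() stub stands in for the tactile confirmation the patent describes.

```python
# Illustrative sketch (not Apple's code) of eyes-free playback control.
# Timestamps are in seconds, positions in arbitrary screen units.
import math

DOUBLE_TAP_WINDOW = 0.35   # max seconds between taps to count as a double tap
last_tap_time = None
playing = False
volume = 0.5

def vibrate():
    print("bzzt")          # stand-in for the patent's tactile confirmation

def on_tap(timestamp):
    """Call with a monotonic timestamp (seconds) each time a tap is detected."""
    global last_tap_time, playing
    if last_tap_time is not None and timestamp - last_tap_time < DOUBLE_TAP_WINDOW:
        playing = not playing
        vibrate()
        print("play" if playing else "pause")
        last_tap_time = None
    else:
        last_tap_time = timestamp

def on_circular_drag(points):
    """points: list of (x, y) samples from a roughly circular drag gesture."""
    global volume
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    angles = [math.atan2(y - cy, x - cx) for x, y in points]
    swept = 0.0
    for a, b in zip(angles, angles[1:]):
        # Accumulate small wrapped deltas so multi-turn dials work too.
        swept += (b - a + math.pi) % (2 * math.pi) - math.pi
    # Counter-clockwise sweep raises the volume, clockwise lowers it.
    volume = max(0.0, min(1.0, volume + swept / (2 * math.pi)))
    vibrate()
    print(f"volume -> {volume:.2f}")

# Simulated input: two quick taps toggle playback, then a counter-clockwise drag.
on_tap(0.00)
on_tap(0.20)
on_circular_drag([(1, 0), (0.7, 0.7), (0, 1)])
```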


Not touching the screen of a touch-screen phone may seem odd. But that's the core feature of the upcoming Sensus iPhone case. It's a protective plastic case with multitouch sensors built into the sides and back that work a lot like the built-in ones on the phone's screen, and it plugs into the iPhone's data port to transmit its control signals. The whole thing may remind you of the rear touchpad on Sony's PS Vita. The idea is to actively add to the touch experience by making use of where the user's hands naturally fall when holding the device; developers can use the case's controls to interact with their apps through some straightforward API hooks. Wired, which tested one at CES, described the experience like this:

When reading, for example, or browsing the Internet, you can navigate without swiping your finger across the screen. Simply run your fingers along the edge, or over the back. It's almost magical.
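Since the Sensus SDK's actual API isn't detailed here, the following is a purely hypothetical sketch of the kind of hook an app might register for rear-panel swipes; every name in it is invented.

```python
# Hypothetical sketch of a rear-touch hook; names like on_back_swipe are
# invented and do not reflect the real Sensus SDK.
class Reader:
    def __init__(self):
        self.page = 0

    def on_back_swipe(self, dy_points):
        """dy_points: vertical movement reported by the case's back panel."""
        if dy_points < -40:      # firm upward swipe on the back -> next page
            self.page += 1
        elif dy_points > 40:     # downward swipe -> previous page
            self.page = max(0, self.page - 1)
        print(f"showing page {self.page}")

reader = Reader()
reader.on_back_swipe(-55)   # simulate a swipe event arriving from the case
```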


Apple's known for the tactile beauty of its devices, but what about jewelry? Apple is no stranger to nontouch interactions—i.e., Siri. But a recent twist on the long-standing Apple TV rumors includes mention of a new interface device: the iRing. Analyst Brian White of Topeka Capital Markets, who has spoken to sources inside Apple's supply chain, suggested that as part of Apple's plan to "revolutionize the TV experience forever," the set would come with a spatial UI controller that fits around the user's finger. From AppleInsider:

The iRing accessory described by White is a new concept that has not been previously detailed in other reports. His visits with Apple suppliers suggested the ring will act as a navigation pointer for the television and will allow the TV set to enhance motion detection and replace some of the functionality found in a remote.

We can guess that if it exists, the iRing would likely use a short-range, low-power wireless connection like Bluetooth 4, because of its small scale. We can also guess that it would include some form of motion sensor and perhaps a signaling light, akin to Sony's PlayStation Move controller for the PlayStation 3. That could give the device extremely accurate motion sensing for subtle, finger-scale gestures, which could lead to some powerful applications. Would the iRing also be a remote microphone for a Siri-enabled voice control interface? This could be the ultimate hands-off, zero-interface way to interact with your TV. But would it work for a desktop or mobile device?
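To be clear, everything about the iRing is rumor, so the following is pure speculation: a sketch of how streamed tilt readings from a ring-style controller might be mapped to a TV pointer. Nothing here reflects a real Apple product or API.

```python
# Purely speculative sketch: if a ring-style controller streamed accelerometer
# readings over Bluetooth LE, tilt could be mapped to pointer movement on a TV.
def tilt_to_pointer(ax, ay, pointer_x, pointer_y, sensitivity=12.0,
                    screen_w=1920, screen_h=1080):
    """ax, ay: accelerometer tilt in g along the ring's x/y axes."""
    pointer_x = min(max(pointer_x + ax * sensitivity, 0), screen_w - 1)
    pointer_y = min(max(pointer_y + ay * sensitivity, 0), screen_h - 1)
    return pointer_x, pointer_y

x, y = 960, 540                                          # start at screen center
for ax, ay in [(0.1, 0.0), (0.2, -0.05), (0.0, 0.3)]:    # fake wireless samples
    x, y = tilt_to_pointer(ax, ay, x, y)
    print(round(x), round(y))
```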


Siri, perhaps the ultimate "No-UI" interface you can use today, may soon have a rival: Amazon is said to have paid some $26 million to buy Evi, a very Siri-like voice-driven personal digital assistant, according to TechCrunch.

Evi, from British startup True Knowledge, uses the same Nuance voice-recognition technology that Apple is said to use for Siri, and it works on both iOS and Android. It was initially under threat from Apple for being too similar to Siri but was ultimately allowed to remain available. If anything, Evi sounds a little more advanced than Siri currently is, perhaps representing what Siri may become soon. TechCrunch notes it:

has been described as being capable of ‘learning.’ It has an ontology of tens of thousands of classes and almost a billion ‘facts’ (machine understandable bits of knowledge) and, says True Knowledge, can infer trillions more when needed.
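To illustrate what "inferring" unstored facts means in practice, here's a toy sketch, assuming nothing about Evi's real ontology: stored subject-predicate-object triples plus one transitivity rule yield answers the system never explicitly recorded.

```python
# Toy illustration of fact inference in an Evi-style knowledge base: stored
# triples plus a simple transitivity rule let the system derive facts it never
# stored explicitly. Evi's real ontology is far larger and richer than this.
FACTS = {
    ("espresso", "is_a", "coffee"),
    ("coffee", "is_a", "beverage"),
    ("beverage", "is_a", "consumable"),
}

def is_a(thing, category, facts=FACTS):
    if (thing, "is_a", category) in facts:
        return True
    # Follow is_a edges upward to infer unstated facts.
    parents = [o for (s, p, o) in facts if s == thing and p == "is_a"]
    return any(is_a(parent, category, facts) for parent in parents)

print(is_a("espresso", "consumable"))   # True, even though it was never stored
```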

Amazon's goal for Evi is unknown, but the service could easily find a home on Kindle Fire devices, both to boost their overall functionality and to add utility for users with disabilities. The move has sparked speculation, of course, that Amazon is going to release a smartphone, as has long been rumored. Evi on a smartphone would help Amazon rival Apple, and if it leveraged Amazon's extensive cloud infrastructure and tapped into the company's content cloud (as would seem inevitable), it could be a very powerful content discovery and recommendation engine. Evi could also move Amazon still further from Google's infrastructure in its forked edition of Android.

Apple, for its part, seems to be beginning a concerted push to advance Siri's functionality and importance within iOS. Recent job postings, numbering about 10 individual positions, suggest the company is trying to grow the team rapidly. With iOS 7 well under development, the timing would be perfect.


There's a Kickstarter rival for Leap that's open, not closed. DUO is a home-brew 3-D gesture sensor project that's seeking $100,000 in cash through crowdfunding site Kickstarter. Unlike Leap Motion, the basic premise of DUO is that the hardware and software systems that make it work will be open source. The hope seems to be that this will make DUO ripe for some really clever, innovative hacking that could quickly deliver some truly next-generation UI interactions.

DUO uses a twin-camera system, much like Leap, and though its VGA resolution seems crude, it can scan the 3-D area in front of the cameras (and thus sense users' hand positions and gestures) up to 374 times a second. The level of sensing accuracy this could deliver is significant—enabling the sort of gestural finesse needed to play a virtual musical instrument or perform fast, detailed moves in gameplay.
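The twin-camera approach rests on a simple relationship: a feature's distance is roughly the focal length times the camera baseline divided by its pixel disparity between the two views. A quick sketch with made-up example numbers (not DUO's actual specs):

```python
# Back-of-the-envelope stereo depth: with two cameras a known distance apart,
# the pixel disparity of a feature between the left and right images gives its
# distance. Focal length and baseline below are example values, not DUO's specs.
def depth_from_disparity(disparity_px, focal_length_px=600.0, baseline_m=0.06):
    """Returns approximate distance in meters; larger disparity = closer."""
    if disparity_px <= 0:
        return float("inf")        # feature at (effectively) infinite distance
    return focal_length_px * baseline_m / disparity_px

for d in (120, 60, 30, 10):
    print(f"disparity {d:3d}px -> {depth_from_disparity(d):.2f} m")
```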

Hackers have already leveraged Microsoft's Kinect platform (which DUO is intended to outperform) for all sorts of incredibly novel uses. Hence DUO's Kickstarter page suggests the kit would be useful for:

Art/Visualizations
Human Computer Interaction
Gaming/Entertainment
Robotics/Machine Vision
3D Scanning/Environment Mapping

It's a bold vision, and it seems targeted at the developer and maker community rather than Leap's evidently more commercial market.


Blood, sweat, and fear—how you may control future computer games unconsciously. You're human, so when you're excited or nervous your brain is very much the center of the experience. But thanks to older bits of biology left over from earlier evolutionary needs, your body has its own take on what's going on and reacts in hundreds of subtle ways without your conscious mind taking part. Now, in what may be the ultimate "no-UI" trick, pioneering game company Valve is experimenting with using these physical cues to control the progress of a computer game. Via The Verge.

Valve's resident experimental psychologist, speaking at the recent Neurogaming Conference and Expo, revealed that the company has tried using sweat sensing as part of the game Left 4 Dead. The idea is to monitor the player's sweat level and feed it into the game, which reacts accordingly: A nervous player's heart pounds and they sweat more, so the game detects this and speeds up, giving them less time to complete a task. A calmer player sweats less, and the game proceeds normally. Though these cues are unconscious, they would certainly add to the immersive feeling of the gameplay itself.
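Here's a hedged sketch of what that feedback loop might look like, assuming a galvanic-skin-response reading as the sweat signal; the baseline, scaling, and time limits are invented for illustration, not Valve's values.

```python
# Illustrative sketch (not Valve's code): scale a game's pacing from a skin-
# conductance reading. A sweatier, more stressed player gets less time on the
# clock; a calm player gets the normal allotment.
def task_time_seconds(gsr_microsiemens, baseline=2.0, normal_time=30.0,
                      min_time=12.0):
    """gsr_microsiemens: galvanic skin response sample from a sweat sensor."""
    stress = max(0.0, (gsr_microsiemens - baseline) / baseline)   # 0 = calm
    scale = 1.0 / (1.0 + stress)            # more stress -> shorter window
    return max(min_time, normal_time * scale)

for reading in (2.0, 3.0, 5.0, 8.0):
    print(f"GSR {reading:.1f} uS -> {task_time_seconds(reading):.1f} s to finish the task")
```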

In other experiments the company let gamers control Portal 2 with only their eyes. Though experimental, it "worked pretty well," and we can imagine how much more emotional it would be to play a computer game without the encumbrance of a physical controller in your hands.

These experiments share a goal with many other no-UI developments: They remove some of the hardware that normally sits between the computer user and what they're doing on the machine, and thus make the experience more immersive.


Samsung and Intel want to snoop on your chat to make their hardware seem psychic. One of the ways a computer can approach a zero-user-interface experience is by not requiring a user's input before it reacts and supplies information...and that's the tech Samsung and Intel are now said to be examining.

Recently Samsung, Intel, and Telefónica said they were making a strategic investment in Expect Labs, a company developing what it calls "anticipatory computing." The trick is that computer hardware, whether a tablet, a PC, or a unit built into a home entertainment system, would listen to the conversations going on in the house around the clock. By analyzing what's being said, the computers can react and present information that may be pertinent to the current discussion, or possibly even to where the conversation is heading.

Expect Labs CEO Tim Tuttle is quoted by The Verge explaining the idea like this:

We're focused on building software that listens to what's happening in a room and delivers information to people before they know they need it... Samsung imagines a world not too long from now where there is a flat-screen in every room. You might have a phone or tablet they built on you, but Samsung will also have a screen in your wall or on your refrigerator. They are interested in technology that can use voice commands as an input, that can listen to a conversation and provide answers without needing to be asked.
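At its simplest, the anticipatory loop is: transcribe the room, watch for actionable topics, and push related info before anyone asks. The sketch below is a bare-bones, keyword-matching illustration of that loop; the topics and canned responses are invented, and Expect Labs' real system does far more than keyword matching.

```python
# Minimal sketch of the "anticipatory" idea: watch a rolling transcript of the
# room's conversation for topics you can act on, and surface related info
# before anyone asks. Topics and lookup values here are invented examples.
from collections import deque

TOPIC_INFO = {
    "dinner": "Nearby restaurants with tables tonight",
    "flight": "Your next flight's departure time and gate",
    "movie": "Showtimes at the closest theater",
}

recent_words = deque(maxlen=50)   # rolling window of transcribed words

def on_transcribed_word(word):
    recent_words.append(word.lower())
    for topic, info in TOPIC_INFO.items():
        if topic in recent_words:
            print(f"[card] {info}")
            recent_words.clear()   # don't re-trigger on the same mention
            break

for w in "maybe we should grab dinner somewhere after the movie".split():
    on_transcribed_word(w)
```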

Meanwhile Intel's in the game to make sure its chips and software are the ones that hardware makers and software developers use to craft their solutions for this sort of "perceptive" computing environment.

And Telefónica's involvement is the real giveaway about the future of this tech: If the company can insert itself into the anticipatory computing experience, perhaps using its mobile hardware to listen to users' words and its data network to ship the data off for processing, it can help shape the information that's returned to the user. Privacy concerns aside, think of it as an even smarter Google Now, with Telefónica working out how to monetize the experience.


Image: Flickr user seeminglee



