2013-04-22

Co.Labs

Google Glass Will Run Web Apps, Not Native Android

The limitations of the web frameworks Google Glass developers are allowed to use say something about the constraints of building a wearable device that runs apps.



Google has revealed that Glass will be built entirely on web apps, not native Android software. Though Glass runs a full Android OS, the first Glass developers in the "Explorer" program have now discovered that Google's API only allows interaction with the wearable goggles through the cloud: the only apps that can be built for Glass are web apps.
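In practice, "web app" here means a service running on your own server that pushes "timeline cards" to the headset through Google's cloud REST API (the Mirror API); no developer code executes on Glass itself. Here's a minimal sketch in Python of what that looks like, assuming you've already obtained an OAuth 2.0 access token for the user (the token value and card text below are placeholders):

```python
import requests  # third-party HTTP library: pip install requests

ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder OAuth 2.0 bearer token

# A timeline card: the basic unit of a Glass "app" under the cloud model.
card = {
    "text": "Hello from the cloud",
    "notification": {"level": "DEFAULT"},  # ask Glass to chime when it arrives
}

resp = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=card,  # serialized as JSON; sets the Content-Type header automatically
)
resp.raise_for_status()  # Glass displays the card the next time it syncs
```

Note what's absent: there's no on-device SDK call anywhere. Your server talks to Google's cloud, and Glass picks the card up whenever it syncs.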

What Google seems to be doing here is strictly limiting how much processing happens aboard Glass' own electronics in order to deliver a day's useful battery life. The company says Glass can work for a whole day as long as you don't record too much video. That seems to be the key--if video eats up so much power, Glass' battery must be relatively small (as befits a device meant to be worn comfortably). Google is sacrificing utility for a convincing user experience: Glass is clearly meant to be donned and used for extended periods, and users would quickly lose interest if they had to recharge it halfway through a typical day.

The limitations of the HTML and CSS services Google wants Glass developers to use also limit the kinds of apps that seem possible: TechCrunch notes that "real" augmented reality apps probably aren't possible, nor is it easy to "stream audio or video from the device to your own services (though you can obviously use Hangouts on Glass)."
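To make the constraint concrete, here's a hedged sketch of the sort of card payload the API accepts (the content is invented for illustration). Display is limited to a static subset of HTML, and interactivity amounts to choosing from canned menu actions; there's no JavaScript hook through which an app could process live camera frames or draw true AR overlays:

```python
# Continuing the sketch above: a richer card uses an "html" payload,
# but only static markup renders; scripts and form elements won't run.
card = {
    "html": (
        "<article>"
        "<section><p>Nearest cafe: 200 m ahead</p></section>"  # invented content
        "</article>"
    ),
    # "Interactivity" is limited to built-in menu actions like these:
    "menuItems": [
        {"action": "READ_ALOUD"},
        {"action": "DELETE"},
    ],
}
# POST this to the same /mirror/v1/timeline endpoint as before.
```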

Part of the limitation of the first web apps Glass developers create will be a failure of imagination. When the iPad first arrived, many of the earliest apps were hardly demonstrative of the revolution in mobile computing the iPad represented. First-gen Glass apps will probably suffer the same fate, and users will be even more at sea because Glass is a wholly new type of device:

Will Glass be judged as good based on its ability to entertain? Its power to keep our smartphones in our pockets? Its ability to deliver real-time information when we need it most? It could easily be all of the above. One thing’s for sure: trying to evaluate what is and isn’t a “good” Glass experience will be one of the more exciting undertakings the tech world has seen in a long while.

Google's Eric Schmidt certainly isn't hyping expectations as to how revolutionary this wearable tech will be: at AllThingsD he mentioned things like checking your messages on the go. That's an act that Glass, being wearable, makes much swifter and more accessible...but it's hardly a paradigm shift.

The feeling you may get from all this is that though Google is bringing the first wearable computer to the masses, it's being cautious about it. It's not leaping boldly, experimentally into the fray.


Augmented reality constantly in your vision may be a distraction rather than a boon, according to a surgeon who has actually experimented with the tech. This may be bad news for wearable AR tech like Google Glass, or at least represent a serious design and development challenge.

AR would seem to be one of the most promising aspects of wearable computing, thanks to its ability to overlay "augmented" information on your view of the world. This, of course, is likely to ultimately include things like coupons and real-time location-based ads (even if Google is forbidding them in first-gen Glass apps).

Head and neck surgeon Ben Dixon, who has used all sorts of augmented data systems, such as on-screen prompts during endoscopic procedures, assumed:

...head-up displays would be a valuable resource. In theory they would provide anatomical guidance ("cut here") while alerting me to potential problems ("avoid that"). This would lead to safer, more efficient and less demanding surgery.

But:

...it was not long before I hit a major problem: distraction. Trained surgeons were unable to efficiently complete tasks while being presented with additional stimuli.

In one study, published in Surgical Endoscopy, surgeons completed tasks on a realistic model with some salient, but unexpected, findings placed in their field of vision.

Just 41 per cent of surgeons recognised additional information using a standard display, such as a computer monitor. In a group using an augmented reality display the rate was even worse. Almost every member of the group completely missed the unexpected finding.

The issue is "inattentional blindness": when you're concentrating hard on a task, you can completely fail to notice an unexpected stimulus. This has all sorts of important implications for how developers should write apps: Will cyclists using Glass miss important navigation instructions as they weave through traffic? Or, on the other hand, if Glass' nav alerts are too bold or demanding, will cyclists fixate on the AR display and make an injury-threatening mistake?


Want to know more about the future of the user interface? Check out our ongoing coverage: Tracking Spatial Controls And "No UI".


[Image: Flickr user Linus Bohman]