2013-03-06

How Oakley Tested Their Wearable Computing Platform In The Mountains Of Argentina

Oakley’s Airwave goggle features a heads-up display running an extensible operating system. To test its performance, the company brought a team of pro skiers and riders to Cerro Catedral, Argentina’s mountaineering paradise. "We had nightly review sessions, then the Recon guys would work through the night—we'd have app revisions to test in the morning," says Oakley’s Ryan Saylor.



The Oakley Airwave goggle contains many of the sensors and radios that handheld devices have, and a few they don’t: an accelerometer, Bluetooth Low Energy, a temperature sensor, a barometer, GPS, and a gyroscope. The whole platform, built by Canadian company Recon, is open to developers, but creating apps for the mountain means testing them on the mountain—not exactly the typical milieu for a QA session.

The destination was San Carlos de Bariloche, Argentina, home to the world-class Cerro Catedral ski and mountaineering area. There, the team spent a week skiing and mountaineering in high winds, logging bugs during the day and transmitting them back to Recon HQ at night. The next day, the team would have a new build and a new list of priorities.

What did you learn about wearable device testing in the Argentina QA session?

Ryan Saylor, Director of Optics Technology at Oakley: UI and UX are the two key drivers. When you have a bunch of mechanical engineers and electronics engineers, a lot of things seem really easy. But my experience was: take the consumer perspective. Don't assume they know how to navigate screens, and don’t assume they know how to connect three devices over Bluetooth. That has to be intentional: dumbing down the process.

What specifically were you testing on the platform?

Mike Jensen, QA Tester and Snowboarder at Recon: Following our beta testing trip with Oakley at the beginning of the year, we had redesigned our GUI and navigation app, and achieved iOS integration for the first time.

How big was your team?

Saylor: We had seven people with a good understanding of what to look for: recording bugs, recording issues with the UI and OS, and quantifying feedback on the performance of the goggle.

How did each person contribute to testing?

Saylor: Each person on the trip documented their feedback through the day, then we'd have lunch talking about what we'd each found, so we could focus on other areas and see if we could reproduce bugs. At night we would consolidate the information in a round-table, looking for common themes and larger issues. We'd typically use something on a phone—notes or voice recordings. We'd use numbers (1 through 10) and assign categories of feedback across all the verticals: connectivity of device, phone, and remote; music control, which is new; navigation, which includes the compass calibration, mountain maps, and buddy tracker; dashboard, which shows metrics; and notifications, a screen where you can see if you've reached a new performance milestone.
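
As a rough illustration of the kind of log entry Saylor describes—a 1-to-10 rating plus one of the feedback verticals—here is a minimal sketch in Python. The class names, fields, and example entry are assumptions drawn from his description, not Oakley's or Recon's actual tooling.

```python
from dataclasses import dataclass
from enum import Enum

# Feedback verticals as Saylor lists them; the enum itself is hypothetical.
class Vertical(Enum):
    CONNECTIVITY = "connectivity"    # goggle, phone, and remote pairing
    MUSIC_CONTROL = "music_control"  # new in this build
    NAVIGATION = "navigation"        # compass calibration, maps, buddy tracker
    DASHBOARD = "dashboard"          # live metrics
    NOTIFICATIONS = "notifications"  # performance milestone alerts

@dataclass
class FeedbackEntry:
    tester: str
    vertical: Vertical
    rating: int                # 1 through 10, per the team's scale
    note: str
    reproduced: bool = False   # set True once another tester hits the same bug

    def __post_init__(self):
        if not 1 <= self.rating <= 10:
            raise ValueError("rating must be between 1 and 10")

# Example of an entry that would be consolidated at the nightly round-table.
entry = FeedbackEntry(
    tester="rider_3",
    vertical=Vertical.NAVIGATION,
    rating=4,
    note="Compass calibration drifts after a hard fall; buddy tracker lags.",
)
print(entry)
```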

What did you do once you logged the bugs?

Saylor: We had nightly review sessions, and the Recon guys would work through the night. We'd have app revisions to test in the morning.

Jensen: The testing group was made up of seven riders, so I had to ensure we had enough HUDs for testing and that they were updated daily, as I was constantly receiving software builds with new improvements from Recon's head office in Vancouver. To update the units more efficiently, a colleague of mine built a program that automatically updated multiple units at once from a USB hub, while showing the status of the update process. This really helped manage time.
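
Jensen's batch updater isn't public, but the idea—flash every goggle plugged into a USB hub in parallel and report each unit's status—can be sketched roughly as below. The `recon-flash` command, device glob, and firmware filename are placeholders, not Recon's real flashing utility.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Hypothetical device nodes for goggles attached to the USB hub; a real tool
# would enumerate the HUD units through the vendor's own flashing utility.
DEVICES = sorted(Path("/dev").glob("recon_hud*"))
FIRMWARE = "airwave_build_nightly.img"  # placeholder build name

def flash_unit(device: Path) -> str:
    """Flash one unit and return a one-line status for the operator."""
    # 'recon-flash' is a stand-in command, not a real CLI.
    result = subprocess.run(
        ["recon-flash", "--device", str(device), "--image", FIRMWARE],
        capture_output=True,
        text=True,
    )
    status = "OK" if result.returncode == 0 else f"FAILED ({result.returncode})"
    return f"{device.name}: {status}"

if __name__ == "__main__":
    # Update every attached unit at once, printing status as each finishes.
    with ThreadPoolExecutor(max_workers=len(DEVICES) or 1) as pool:
        for line in pool.map(flash_unit, DEVICES):
            print(line)
```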

How did Oakley and Recon divvy up the work?

Saylor: There were two pieces of content in the review sessions: the hardware/software/app component, where Recon was working hard to revise their brand-new iOS app, and the UI, which Oakley was heavily focused on. So Oakley was optimizing for the consumer experience and shaping how the app interfaces with the device and a whole host of other apps. Those were the two different areas of focus.

Jensen: While out in the field I would make sure to ride the chairlift with different members of the group throughout the day, and take notes about bugs and problems that my fellow testers had run into, ideas they had, or experiences they had enjoyed. All of this was relayed back to Recon through daily reports. The time difference [in Argentina] really helped this process, as we were four hours ahead of the Vancouver head office—when I would be finishing a ski day, the development teams would be in the office, taking my report and building a new software package based on the points testers raised. It was a lot of work for everyone, but it ended up playing to our advantage.

Did you segment the system and assign different people areas to test?

Saylor: Not at first. For the first day or two there was no segmentation, no one looking for any particular thing: just a general assessment. After we got a feel for the system architecture and performance, around days two and three we took a different tack, focusing our efforts individually over the next four days.

Why did you do this in such an awesome location?

Saylor: We rely on South America in the summer for foul weather: the lowest common denominator. We backed it up with a follow-up trip focusing on the iOS app, where we wanted to address issues in the UI and the functionality. We came back to California, then did a trip to Mt. Hood with the Recon team. Then another trip to Whistler, which is near Recon's office, not to ski but to capitalize on immersion and test the maps. We also rode mountain bikes in the goggle—it ended up being pretty productive.



