FLIPR, a project by the MIT Mobile Experience Lab and Turkish telecom Avea, creates an interface for capturing locations as they change over time.

The FLIPR project started out at Avea Labs, which explores how a mobile carrier can use technology to better engage people with their city. This sketch from the MIT lab's drawing board was the genesis for FLIPR.

Photographs taken in the same location can be stitched together to form one postcard "flip" over time.

Postcards need not be collaborative, and can merely document a moment in time, like this parade down Springfield Street in Somerville, MA.

2013-04-01

Co.Labs

How The Rebirth Of The GIF Screws Up Authorship

This MIT-built app creates crowdsourced, time-lapse animations from individual snapshots--results that aren't quite still images but aren't video, either. The renaissance of the GIF is also blowing up distinctions between time, place, and creator--all because you want to animate your cat.



Apps like Vine and Cinemagram have blown up what used to be a simple distinction between still images and video. Now an MIT app project has made the line even blurrier by using content created by different authors, officially enabling users to kick off a video-ish experience created by no one in particular, at no particular point in time.

These authorless, GIF-like animations are fast-moving "flipbook" sequences that show a single location through the eyes of many. Subsequent photos taken at the location add frames automatically, creating a collaborative record of the spot over time. While the playful photography is fun, the project's larger impact is to demonstrate an interface for collaboratively documenting spaces over time--without much deliberate action on anyone's part.

We caught up with MIT Mobile Experience Lab Systems Designer Steve Pomeroy to talk about the project, which is dubbed FLIPR.

Why do you think it took so long for animated GIFs to catch on? What challenges do you face in working with animated GIFs?

I think animated GIFs have come back due to a number of factors. Foremost, all the large players have been fighting over codecs for the longest time, and even with the HTML5 video tag being implemented in browsers, the codec wars continue. There have been attempts at truces (WebM, Vorbis), but the scare of submarine patents (as well as a plethora of other political reasons) has been enough to prevent full adoption of any one codec across all browsers and platforms. Animated GIFs, on the other hand, have had native support since the dawn of Netscape, in all their dithered 256-color ugliness. To make animated GIFs work for video, a number of factors need to be in place: high bandwidth, plenty of spare CPU cycles and memory, a desire to share video, and sites that make it trivial to share.

Are still images and motion picture beginning to merge?

It's funny--for our mobile app we actually render the animations as video, as a proper lossy video codec can shave an order of magnitude off the weight of the animation. Also, mobiles often struggle to play one-megabyte animated GIFs, in part due to a lack of native hardware decoding. I worry that one day I'll see animated GIF encoding/decoding become an SoC feature, right alongside MPEG-4. Many systems-on-a-chip have video encoding and decoding built in, and this is a huge feature: just look at the number of people using a Raspberry Pi as a media player. The "worry" is a bit sarcastic: it just represents a level of demand being met by the industry for something that's essentially the wrong tool for the job. For rendering the animated GIFs, we use ImageMagick; for our video processing, we use FFmpeg.
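As a rough illustration of that two-output pipeline, here is a minimal sketch; the frame naming, frame rates, and command flags are our assumptions, not the team's actual invocations:

```python
import glob
import subprocess

# Hypothetical frame directory produced by the capture step.
frames = sorted(glob.glob("frames/*.jpg"))

# Animated GIF via ImageMagick: ~10 fps, looping forever. Universally
# supported, but large and expensive to decode on mobile.
subprocess.run(
    ["convert", "-delay", "10", "-loop", "0", *frames, "flip.gif"],
    check=True,
)

# The same frames as H.264 video via FFmpeg: roughly an order of
# magnitude smaller, and mobile SoCs can decode it in hardware.
subprocess.run(
    ["ffmpeg", "-y", "-framerate", "10", "-i", "frames/frame_%03d.jpg",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "flip.mp4"],
    check=True,
)
```

The GIF path maximizes compatibility for sharing on the web; the H.264 path is what keeps playback cheap on phones.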

Do you view this as an improvement on Vine? What was the inspiration for FLIPR?

The project was conceptualized before Vine was released, so we can't claim to be inspired by it. Others have inspired us, though. We love the playfulness and ease-of-creation of Instagram, and Cinemagram introduced the notion of lightly animated images, which we greatly enjoy.

What's in the stack?

We wanted to expand that space into time-lapse/stop-motion animations. This project builds on top of our F/OSS Open Locast framework, which allows developers to create location-based media platforms easily. The framework consists of two major parts: a web side and a mobile side. Both are built upon an existing software stack--Django on the web side, Android on the mobile side--and are designed to feature-match each other.
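For a sense of what a feature-matched model on the web side might look like, here is a minimal, hypothetical Django sketch; the model and field names are our invention, not Open Locast's actual schema:

```python
# Hypothetical web-side model for a "flip"; the mobile side would
# mirror the same fields through its ContentProvider-backed ORM.
from django.contrib.gis.db import models


class Flip(models.Model):
    title = models.CharField(max_length=140)
    author = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    location = models.PointField()  # best fix from the first capture round
    is_collaborative = models.BooleanField(default=False)
    created = models.DateTimeField(auto_now_add=True)


class Frame(models.Model):
    flip = models.ForeignKey(Flip, related_name="frames",
                             on_delete=models.CASCADE)
    image = models.ImageField(upload_to="frames/")
    captured_at = models.DateTimeField()
```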

How do the layers interact?

In Open Locast, both the web and mobile stacks provide a data persistence layer: on Django we use its built-in ORM, and on Android we developed our own F/OSS ORM-ish library that ties into Android's Content Provider framework. The web side exposes a RESTful HTTP+JSON API, which is used for all interaction on both the mobile and the web front-end. The mobile side has a synchronization layer which can understand and speak to this API in a very generalized way. This allows us to build novel content-driven apps on top of Open Locast without having to constantly reimplement the data + communication + authentication parts of the stack. Additionally, a number of common features of content-driven apps come out of the box, such as free tagging, commenting, favoriting, and media uploading/processing.
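To make that generalized round trip concrete, here is a rough sketch of the kind of pull/push a sync client might perform; the endpoint paths, query parameters, and auth scheme are invented for illustration:

```python
import requests

BASE = "https://example.com/api"  # hypothetical Open Locast deployment


def pull_changes(resource, since):
    """Fetch server-side changes to any resource type since a timestamp.

    The sync layer only needs the resource name, not app-specific
    knowledge; that generality is what lets new content-driven apps
    reuse the data/communication/auth stack unchanged.
    """
    resp = requests.get(f"{BASE}/{resource}/",
                        params={"modified_since": since})
    resp.raise_for_status()
    return resp.json()


def push_item(resource, item, auth_token):
    """Publish a locally created item (e.g., a flip) as JSON."""
    resp = requests.post(
        f"{BASE}/{resource}/",
        json=item,
        headers={"Authorization": f"Bearer {auth_token}"},
    )
    resp.raise_for_status()
    return resp.json()
```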

How do you associate an image with its location?

Location is added automatically to each flip at the time of image capture and published with the flip. Since a flip could potentially span multiple locations, we take the best location discovered by the end of the first round of image capturing and use that. One of the unique features of FLIPR is that it allows a flip to be made collaborative. Once it's marked collaborative, it shows up differently on the map, and other users can post their own photos to it. One could imagine someone taking a photo of a public landmark, others adding their own takes on it, and the whole thing changing over time. As the author of the flip, you have control of it and can remove unwanted photos if they're added.
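One plausible reading of "best location" is the fix with the smallest reported accuracy radius seen during the first capture round. A sketch of that selection follows; this is our interpretation, not necessarily the app's exact heuristic:

```python
from dataclasses import dataclass


@dataclass
class Fix:
    lat: float
    lon: float
    accuracy_m: float  # uncertainty radius reported by the location provider


def best_fix(fixes):
    """Pick the most precise fix gathered during the first capture round."""
    if not fixes:
        return None
    return min(fixes, key=lambda f: f.accuracy_m)


# e.g., fixes collected while the user shoots the first frames:
fixes = [Fix(42.3601, -71.0942, 65.0), Fix(42.3602, -71.0941, 12.0)]
print(best_fix(fixes))  # the 12 m fix wins and is published with the flip
```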

What features do you want to add next?

We want to keep refining the interface so that going from zero to a high-quality flip you're excited to share is as quick as possible. For example, we're working on adding optional automatic image stabilization to minimize the effect of hand shake.
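For a sense of what such stabilization could involve--this is a generic translation-only approach, not the team's implementation--each new frame can be aligned to the first by estimating its shift, for example with OpenCV's phase correlation:

```python
import cv2
import numpy as np


def stabilize(frames):
    """Align frames to the first one by estimated (dx, dy) translation.

    Translation-only stabilization via phase correlation; real hand
    shake also includes rotation and scale, which this sketch ignores.
    """
    ref = np.float32(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY))
    out = [frames[0]]
    for frame in frames[1:]:
        gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        (dx, dy), _ = cv2.phaseCorrelate(ref, gray)
        h, w = frame.shape[:2]
        # Shift the frame back by the detected drift to match the reference.
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```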