Social media and smartphones have created the ultimate outlets for teenage angst. Last month, 12-year-old Rebecca Sedwick killed herself by jumping from a ledge. A bully she knew later published these words on Facebook: “Yes IK I bullied REBECCA nd she killed herself but IDGAF.” Kids are mean.
After the bully was arrested, a big debate broke out over whether the parents of a cyberbully should be legally liable for the crime. Much of the argument in the parents' defense was that they lacked the tools and time to monitor their child's social media and phone. The NSA's spying apparatus has its hands on all of our information, yet the average parent has no access to technology that can tell the difference between sexting, bullying, and harmless texting on their child's cell phone.
That’s because there’s no way for a computer to tell what’s in an image—it only sees bits and pixels. But one Texas company may be about to change all that.
It’s called ImageVision, and the company thinks it can solve this oh-so-nuanced social problem with brute-force software filtering. ImageVision built something it calls EyeGuardian, which lets parents monitor their child’s texts and Facebook account by flagging any illicit images or text. This is the story of how it works—and how teaching computers to “see” images could revolutionize the web.
At the beginning of our interview, ImageVision’s cofounder, Mitch Butler, made one simple statement: “We do not intend to dictate morality or parenting standards.”
In his southern accent, Butler explained the event that led him to build EyeGuardian—“An inappropriate text from, ehh, I’ll call him Johnny from school, to my daughter and about four or five other little girls which contained inappropriate text commentary. But also, a picture of Johnny’s Johnson.”
His daughter was in her last year of middle school at the time, four years ago. The average eighth grader is 13 or 14 years old. These teens send about 60 texts a day and spend around 7.5 hours online daily. On top of that, more than 22% of teens admit to sexting. ImageVision started out with one goal—trusting computers to keep an eye out for the kids by teaching machines to “see.” Here’s what that means for marketing and big data.
When Butler tried downloading an app to monitor his daughter’s phone, he found nothing. ImageVision differentiated itself from the rest of the industry by initially focusing on body parts.
“Essentially, what we do is we teach computers to see,” explains Butler, who was also attracted from a business standpoint. “We wanted to improve the way technology worked by automating the photo reviewing process.”
ImageVision’s image recognition software breaks down pictures at the pixel level and classifies them based on context, shape, texture, and color. Whether it decides to notify a parent of suspicious activity depends on the skin texture and skin tone in the images. If the machine thinks it’s seeing a lot of skin, it lets the parents know.
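To make that concrete, here is a minimal sketch of the general approach: skin-tone thresholding at the pixel level. The RGB rule below is a classic heuristic from the computer-vision literature, not ImageVision's proprietary classifier, and the 50% alert threshold is invented for illustration.

```python
def is_skin_pixel(r, g, b):
    # Classic RGB skin-tone heuristic: skin pixels tend to have
    # R > G > B, enough brightness, and clear R-G separation.
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels):
    """Fraction of pixels classified as skin tone."""
    skin = sum(1 for p in pixels if is_skin_pixel(*p))
    return skin / len(pixels)

# Toy 100-pixel "image": 70 skin-toned pixels, 30 blue ones.
photo = [(210, 150, 120)] * 70 + [(30, 60, 200)] * 30
ratio = skin_ratio(photo)
if ratio > 0.5:  # alert threshold is an assumption for illustration
    print(f"alert parent: {ratio:.0%} skin-toned pixels")
```

As Butler's team discovered, color alone misfires on things like wood tables, which is why a production system has to layer on texture, shape, and context analysis.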
Butler himself admits he has always been a no-man. “I am always saying, No don’t do that. No, don’t play in the street when you’re a toddler. No, don’t play with fire. No, don’t drive and text.” He likes to think of the application as an extra eye to help with that.
“We are so busy and social media travels at the speed of light. I have my own kids and I can’t keep up with everything they do—every Facebook post.”
Today’s average first-time smartphone owner is 11 years old. “We walk them into the wireless store, we hand them a smartphone and say ‘have fun’ and they’re off to the wild, wild, west. No training,” says Butler.
“We realized that the Facebooks of the world, the Instagrams, the Photobuckets, the Yahoos, are all going to be hosting an enormous amount of visual content. In fact, recent studies state that in the big data world, 89% of a corporation’s data is visual,” says Butler. Yet until recently, most of the big data movement has focused on words and numbers.
Social media’s greatest asset is the free supply of user-generated content. Most of the inappropriate photos circulating the web have either been scanned by the system before or are duplicate copies of this content.
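That makes duplicate detection a cheap first line of defense. One common technique (not necessarily ImageVision's) is perceptual hashing: reduce each image to a short fingerprint that survives resizing and re-compression, then compare fingerprints against a blocklist of known bad images. A toy version over an 8-pixel grayscale strip:

```python
def average_hash(gray_pixels):
    """Toy perceptual 'average hash': each bit records whether a pixel is
    brighter than the image's mean. Similar images get similar hashes."""
    mean = sum(gray_pixels) / len(gray_pixels)
    return tuple(1 if p > mean else 0 for p in gray_pixels)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical blocklist: hashes of previously flagged images.
flagged = {average_hash([10, 200, 30, 220, 15, 210, 25, 230])}

def is_known_duplicate(gray_pixels, max_distance=1):
    h = average_hash(gray_pixels)
    return any(hamming(h, f) <= max_distance for f in flagged)

# A slightly re-compressed copy of the flagged image still matches.
print(is_known_duplicate([12, 198, 28, 222, 14, 211, 27, 229]))  # prints True
```

Real systems use larger grids (e.g. 8×8) and smarter hashes, but the principle is the same: matching a fingerprint is vastly cheaper than re-analyzing every photo from scratch.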
“We’re just focused on training computers how to sort through visual content, after applying visual models that focus on contextual analysis,” says Butler.
For example, if you’re sitting at a light-colored wood conference table and someone takes a picture, previous generations of image-analysis software would flag it as skin, because of the similar colors and texture.
“We’re looking at that table and saying, wait a minute, let’s do a texture analysis. We’re not only using color analysis, texture palettes, and shape models. We’re using artificial intelligence,” explains Butler.
ImageVision did this using machine learning methods and multiple analysis algorithms that classify an image at the pixel level, based on features such as color and shade, texture, and shape. And now, a new application of ImageVision's decision tree can also detect and recognize the context of an image's environment.
This is important, because a big part of being able to tell what is in an image comes from the ability to differentiate where the photo was taken. This technology can recognize whether the image is in a bedroom, out in the woods, or at the beach.
ImageVision migrated to a Hadoop architecture this year in search of better scalability, processing efficiency, and workload flexibility. Hadoop provides the distributed storage and processing structure needed to handle big data—the large amounts of information extracted from websites rich with visual content.
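In a Hadoop deployment, that kind of work is typically expressed as map and reduce steps. Here's a toy example in the style of Hadoop Streaming—counting flagged images per user—with the shuffle phase simulated locally; the record format and the "flagged" label are hypothetical:

```python
from collections import defaultdict

def mapper(record):
    # Input line format (assumed): "<user_id>\t<image_id>\t<label>"
    user, _image, label = record.split("\t")
    if label == "flagged":
        yield user, 1

def reducer(user, counts):
    yield user, sum(counts)

records = [
    "alice\timg1\tflagged",
    "alice\timg2\tok",
    "bob\timg3\tflagged",
    "alice\timg4\tflagged",
]

# Simulate Hadoop's shuffle: group mapper output by key.
grouped = defaultdict(list)
for rec in records:
    for user, count in mapper(rec):
        grouped[user].append(count)

for user in sorted(grouped):
    for key, total in reducer(user, grouped[user]):
        print(key, total)
# prints:
# alice 2
# bob 1
```

In a real cluster the mapper and reducer run as separate processes across many machines, with Hadoop handling the grouping, sorting, and fault tolerance in between.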
Butler knows that anti-sexting software might not be a big seller, so to fund the project, ImageVision has been working with big companies, leveraging its technology to serve up better online advertisements.
While working with an image hosting service, ImageVision found pet selfies made up a large percentage of user-generated content—particularly cats. ImageVision aggregated this information for advertisers like Pedigree and Purina.
“A lot of my friends have gotten the ‘lose belly fat’ ad,” says Butler. “The problem is most of the time it’s a misdirected ad because they are active and in shape.” ImageVision deployed on a social networking site could ensure these ads only appear to users who, in their photos, appear to have a certain body shape. That means advertisers could tailor their spending more accurately. “It’s all about [showing] the thing they are looking for versus the thing they don’t need to see,” says Butler. The system could even be used to replace “flagging” systems on a network like Facebook.
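Once photos have been tagged with attributes, the targeting logic itself can be trivially simple. A toy sketch, with an invented ad inventory and attribute names:

```python
# Hypothetical ad inventory: each ad requires one detected photo attribute.
ads = {
    "pet_food": {"requires": "cat"},
    "belly_fat": {"requires": "overweight"},
}

def eligible_ads(user_photo_attributes):
    """Return ads whose required attribute appears in the user's photos."""
    return sorted(name for name, ad in ads.items()
                  if ad["requires"] in user_photo_attributes)

print(eligible_ads({"cat", "beach"}))   # prints ['pet_food']
print(eligible_ads({"gym", "running"}))  # prints []
```

The hard part, of course, is the attribute detection feeding this table—which is exactly what ImageVision is selling.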
Right now the company is working on silicon chips that embed its technology in other hardware, giving the software access to all the content on a device, not just certain apps. (“A very large Android OEM has engaged us for multiple products for different business projects,” says Butler.) Let’s hope it’s used to prevent innocent kids from suffering, and not for any other kind of censorship.
[Image: Flickr user Sadie Hernandez]