2013-08-06

An Anti-NSA Encryption Strategy Every Company Should Understand

A property of encryption systems called “perfect forward secrecy,” long valued in the security community, is rapidly gaining popularity now that government snooping abounds. Here’s how it works.



The problem with webmail is that it has to live somewhere. If it’s going to be accessible from any device, you have to be able to log in, which means your credentials are stored on a server. And those credentials are the key to reading even the most strongly encrypted emails.

“Perfect forward secrecy” works by sending messages between devices in real time, with decryption keys generated randomly on the fly. Instead of using the same key over and over to encrypt and decrypt messages, the way PGP (Pretty Good Privacy) does, secure messaging apps generate random, transient keys that exist and work only in a given moment, for a single message. A crucial requirement of this system, though, is that both parties be in the app and messaging in real time. It is this coordinated connection that differentiates these secure apps from “asynchronous messaging”--communications like email that can be sent at one time and read at another. If a message isn’t stored on a server until the recipient is ready to read it, there is no copy of it out there; it exists only locally, on the device it was sent to.
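To make the mechanics concrete, here’s a rough sketch in Python of the kind of ephemeral key exchange that underlies forward secrecy. It illustrates the general technique rather than TextSecure’s exact protocol, and it leans on the third-party cryptography package:

```python
# A minimal sketch of ephemeral key agreement, the core idea behind
# forward secrecy. Illustrative only; not any specific app's protocol.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a fresh, one-time key pair for this exchange alone.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# They swap only the public halves, and each derives the same shared secret.
alice_secret = alice_priv.exchange(bob_priv.public_key())
bob_secret = bob_priv.exchange(alice_priv.public_key())
assert alice_secret == bob_secret

# The raw secret is stretched into a message key, used once, then discarded.
message_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"one message only").derive(alice_secret)
```

Because the private keys above are thrown away after the exchange, someone who later steals a long-term credential still can’t decrypt a recording of the message.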

But asynchronous messaging is popular and convenient for obvious reasons: coordinating real-time (synchronous) messaging sessions can be difficult. So with competitors nipping at their heels, the makers of a tool called TextSecure are rolling out upgrades that don’t require real-time interaction but still preserve perfect forward secrecy.

With secure communication becoming a bigger, and trendier, priority since revelations about the reach and scope of NSA surveillance, discussion is ramping up about whether webmail can ever be secure. Services like Lavabit have already shut down rather than face the implications of this predicament, namely that if they were subpoenaed they would probably have to give the government the keys to your correspondence castle. Available encryption options, like PGP, have drawbacks and are too involved for the average user. So consumers are turning to encrypted smartphone messaging as an accessible avenue to inaccessible communication.

The Wall Street Journal reported last week that the popularity of encrypted messaging apps like Wickr for iOS and TextSecure for Android has increased dramatically in the last few months. Though neither app would release exact download figures, both are working on expanding to other mobile environments. Wickr is promising an Android app, and TextSecure, which is developed by the open source group Open Whisper Systems, has an iOS release slated for the end of September.

One of TextSecure’s developers, who goes by the name Moxie Marlinspike, has created a workaround using a concept called “prekeys.” In a blog post about the approach, Marlinspike says:

[In] synchronous messaging systems . . . if someone sends you a message after the app has been out of the foreground for two minutes, you just don’t get the message. This results in an awkward scenario. On the other hand, iOS apps like Threema and the proposed Hemlis support asynchronous messaging, but do not provide forward secrecy.

In the upcoming versions of TextSecure for Android and iOS, each user’s app generates a batch of 100 prekeys at install time and registers them with the server. Later, when someone sends a message to a user who isn’t currently online in TextSecure, the sender’s app fetches one of the recipient’s prekeys and uses it to derive a “shared secret” that travels with the message. When the recipient goes to retrieve the message, the package contains everything needed to decrypt it. Marlinspike explains, “Since the server never hands out the same prekey twice (and the client would never accept the same prekey twice), we are able to provide forward secrecy in a fully asynchronous environment.”
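The flow is easier to see in code. Here’s a drastically simplified sketch; the real protocol layers identity keys and authentication on top of this, and every name below is illustrative:

```python
# A drastically simplified prekey flow. The real TextSecure protocol adds
# identity keys and authentication; every name here is illustrative.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# 1. At install time, the recipient generates a batch of prekeys and
#    registers the public halves with the server.
recipient_prekeys = [X25519PrivateKey.generate() for _ in range(100)]
server_store = [(i, k.public_key()) for i, k in enumerate(recipient_prekeys)]

# 2. While the recipient is offline, a sender asks the server for one
#    prekey. The server hands each prekey out exactly once.
prekey_id, prekey_pub = server_store.pop(0)

# 3. The sender combines that prekey with a fresh ephemeral key to derive
#    a shared secret, and ships its public half (plus the prekey id)
#    alongside the encrypted message.
sender_eph = X25519PrivateKey.generate()
sender_secret = sender_eph.exchange(prekey_pub)

# 4. Whenever the recipient comes online, the bundle contains everything
#    needed to recompute the secret; the used prekey is then deleted.
recipient_secret = recipient_prekeys[prekey_id].exchange(sender_eph.public_key())
assert sender_secret == recipient_secret
```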

TextSecure has competition from Wickr, as well as apps like Silent Text. As the market for secure messaging apps grows, TextSecure’s asynchronous messaging with perfect forward secrecy will presumably create competitive pressure for other apps to build out their capabilities and offer increasingly secure environments combined with transparent code.



Why We’re Tracking The Bad Internet

People’s lives and decisions are complicated, and the more they live them online, the more ambiguity they introduce. But we’re not here to judge. This Bad Internet tracking story looks at offbeat or fringe Internet practices and at people who are just trying to do a thing online. It explores the black hat spectrum, everything from scraping to vulnerability exploitation, and highlights utilities that can serve both legitimate and dastardly ends.


Previous Updates


Why You Should Fear Your Ex-Boyfriend More Than The NSA

August 6, 2013

Security researcher and law student Brendan O’Connor realized recently that with just a few hundred dollars, some Wi-Fi adapters and sensors, and a Raspberry Pi he could create a system that compiled and visualized all the unencrypted data coming from his laptop, iPad, and smartphone. Appropriately, he named it the Creepy Distributed Object Locator, or creepyDOL.

If he was on public Wi-Fi, O’Connor could identify the OS and make of nearby devices, monitor their web browsing, or find their unique IDs and then track their locations when they queried servers for emails or instant messages. He was all up in his own business, testing how much data was available about him simply through his Internet-connected devices.
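It takes surprisingly little code to start down this road. Here’s a minimal sketch of the sort of passive listening creepyDOL automates, using the Python scapy library; the interface name is a stand-in, the Wi-Fi adapter has to already be in monitor mode, and, as the next paragraph notes, pointing this at anyone’s devices but your own is illegal:

```python
# A minimal sketch of passive Wi-Fi monitoring in the spirit of creepyDOL.
# Requires root and an adapter in monitor mode; run it only against your
# own devices. The interface name "wlan0mon" is a stand-in.
from scapy.all import Dot11, Dot11Elt, Dot11ProbeReq, sniff

def log_probe(pkt):
    # Probe requests broadcast a device's MAC address and, often, the
    # names of networks it has joined before--no login required to listen.
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2
        elt = pkt.getlayer(Dot11Elt)
        ssid = elt.info.decode(errors="replace") if elt and elt.ID == 0 else ""
        print(f"{mac} is looking for {ssid!r}")

sniff(iface="wlan0mon", prn=log_probe, store=False)
```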

Doing this to someone else violates the Computer Fraud and Abuse Act, but why? Who could possibly care about the mind-numbing minutiae of a person’s daily data? You browse Facebook and Twitter, shop on Amazon, read boring work emails, maybe watch some light porn and check the news. But I couldn’t help but wonder what it would be like to date a guy--a wily, more obsessive version of someone like O’Connor--with the technical know-how to bug your entire life. So I went looking for people who could tell me.

A Malwarebytes forum user posted this in February:

Dear all, I was wondering which are the best ways to protect my computer from my ex-boyfriend who is a professional hacker. He is a very nice and honest person, just a little crazy: he still hacks me, in spite of my filling several reports with the police. It's just a game for him, but it's getting a little excessive.

And an aptly titled MacRumors forum thread, “HELP, my ex hacked my iphone 4 and everything else,” includes suggestions from users like, “Restore your iPhone and computer. Get a new boyfriend. Someone that's not a psycho and u can trust.” And, “He probably sees that you are typing on this forum right now. You should just reformat your computer.”

Data protection has become such a blanket topic recently that it’s easy to forget the link between security and scale. Different types of data are vulnerable to different people or groups depending on their interests, and microdata attached to an identity is most useful to someone inside that microcosm. The more I do online, the more aware I am that my activity has value to specific people who may be motivated to find ways to see it. And the more I’ll check into the technical skills of the people I get involved with.


Are Secure Networks Finally Becoming Trendy?

Every now and then I used to joke about Facebook reading my messages for targeted advertising, but since the PRISM debacle, I’ve realized I need to consciously acknowledge the availability of my data within the services I use. And ideally I need to move some or all of my online interactions to more secure platforms (or at least to a platform that will share some of the revenue it makes on my data).

Perhaps I’m not the only one feeling that way--and it appears developers can smell it. They’re rushing to create products that fill the void, like the secure messaging app Hemlis. And after Skype’s sketchy involvement with the NSA, it makes sense that competing VoIP services would pop up claiming to be more secure.

One that looks promising is Tox, an encrypted messaging, phone, and video calling service that is currently in pre-alpha. Tox is largely being developed in the open, with input from Redditors and GitHub users, and promises total transparency, a necessity we’ve talked about before in this Tracking story. The big claim, pulled from the project’s marketing copy:

Tox is both free for you to use, and free for you to change. You are completely free to both use and modify Tox. Furthermore, Tox will never harass you with ads, or require you to pay for features.

But some developers are concerned that Tox is simply part of a larger fad: security as the new reusable shopping bag. In forums like Wilders Security, commenters are calling these types of products “privacy heroes” and point out that secure solutions with noble motives have been around since before it was cool. When discussing Tox, commenters point to Jitsi, an open source alternative to Skype that’s been around in some form since 2003. That’s a long time.

The conversation hasn’t really picked up on Twitter yet, but one frequent poster on Wilders Security put it this way:

I'm seeing a lot of "nobodies" coming out of the woodwork with promises that are harder to keep than most realize. I'm not at all saying this Tox or other providers are intentionally cashing in on the fear and distrust, but it's very easy to get sucked in . . .

That commenter may not be saying it, but I can: As long as people have strong concerns about their data security online, developers will make products to address the issue. Quality may vary, but price and transparency should be good indicators of whether a developer is trying to “cash in” or to solve this increasingly worrisome problem.


What Motivates Coders To Share Their Hacks?

July 30, 2013

A few weeks ago I was scraping data to make a map and I needed a tool that would merge some datasets. One set was in JSON while the rest were CSV files, meaning I needed a converter. Google found me thousands of options, which I quickly narrowed to three. The first two were unworkable--too many superfluous steps. The third one allowed me to upload a JSON file and then download its corresponding CSV version--just what I needed.
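The core of such a converter is small. Here’s a minimal sketch of the flat case in Python; the file names are placeholders, and a real tool would also have to flatten nested JSON:

```python
# A minimal JSON-to-CSV conversion, assuming the input is a flat array of
# objects. File names are placeholders.
import csv
import json

with open("records.json") as f:
    rows = json.load(f)  # e.g. [{"make": "Honda", "model": "Fit"}, ...]

# Collect every key that appears in any record so no column gets dropped.
fieldnames = sorted({key for row in rows for key in row})

with open("records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(rows)
```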

And then I started wondering who had made this fine little utility, and what compelled him to share it with the world. After all, developers build little hacks every day, but they don’t share most of them--whether out of fear of showing sloppy code, laziness, or lack of time. I decided to find out what motivated this developer to share his hacks. So I called him.

Dan Mandle is a Colorado-based developer who runs Things I’ve Figured Out, a blog where he posts tricks and guides drawn from whatever he’s working on. Many developers are active on Stack Overflow, IRC, or various other channels; I talked with Mandle about his motivation for running the blog, and about the sundry (and perhaps sketchy) ways people were applying his hacks.

I see lots of scripts and open source tools online, and I think it’s really interesting that this is sort of a do-good thing in the dev community. Is that something you’ve encountered too?

Dan Mandle: Yeah, it’s really cool. And the sole reason for my blog is just if it took me too long to figure something out because there wasn’t enough information available, I write it up to share with other people. And so the issue with the JSON converter was there were a handful of them out there but none of them were particularly good, and the good ones you had to pay for. And it wasn’t that difficult to code up, so I figured I’d make it available for everybody.

I saw that you put a donate button on the JSON to .csv converter. Did you do that from the start or only when it started to get really popular?

DM: I did that once it started to get popular. I’m up to 4,787 conversions and two donations since March 5 when I started running Google Analytics. But the reward is that I’m making something that other people find valuable and can use.

And what do you think they use it for? Something like your JSON converter was produced for legitimate work, but it could also be a step in some shady stuff. There’s this duality to tools that are out there.

DM: Actually, that is sort of the origins of how I figured out how to make the converter. One of my roommates works for a company, and they needed to find out what every car had for roof rack adapters. And so he was originally tasked with going and looking at either pictures or physical cars to figure out what kind of roof racks they had. And so with a little bit of digging, originally on Thule’s and then on Yakima’s site, Yakima actually has a JSON and JSONP feed that essentially we can stitch together to find every make and model and what type of roof rack adapter they have. So we ran this script that basically fetched each of the combinations overnight, which I think was something like 96,000 requests to their website. They have one JSON feed for the different models and the car code and then you would have to plug it in for a query string to find out what kind of rack options it had. So I stored it in a JSON feed and kicked it out as a .csv so they could work with it in Excel. So I had done that a few months before and then found this other legitimate need for the JSON to .csv converter at work. I needed to pull some data out of Amazon SimpleDB and this was the easiest way that I could get it to my coworkers.
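For the curious, the overnight scrape Mandle describes would look something like this sketch; the feed URLs, query parameter, and field names here are all hypothetical stand-ins:

```python
# A hedged sketch of the overnight scrape described above. The feed URLs,
# query parameter, and field names are hypothetical stand-ins.
import csv
import json
import time
from urllib.request import urlopen

BASE = "https://example.com/fit-guide"  # stand-in for the vendor's API

# One feed lists every make/model and its car code.
models = json.load(urlopen(f"{BASE}/vehicles.json"))

with open("roof_racks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["make", "model", "rack_options"])
    for m in models:
        # A second feed, keyed by car code in the query string, lists the
        # rack options for that vehicle.
        data = json.load(urlopen(f"{BASE}/racks.json?car={m['code']}"))
        writer.writerow([m["make"], m["model"], ";".join(data["racks"])])
        time.sleep(1)  # pace the requests; 96,000 of them is a lot to ask
```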

Yeah, there’s so much data up on the Internet that people might not want you to have and yet they’re the ones who made it available in some format.

DM: Yeah, it’s definitely a gray area in terms of is this actually proprietary information or not? Do I have the right to use this? Is it in the public domain?

So what else are you going to “figure out” on your blog? Or we don’t know yet?

DM: Yeah, we don’t know until I come up with a big issue I run into. My next project is gonna involve learning Ruby on Rails so I may run into some issues!


Why The New IP Address System Might Be A Spammer’s Worst Nightmare

July 25, 2013

Spam may be the bane of our cyber-existence, but there are geographic considerations that go into producing it. One way security companies guard clients against junk mail and other attacks is by blocking IP addresses where spam has been known to originate. When too many IP addresses get blocked in one place, spammers pack up and move to a neighboring country and keep going.
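In practice, many of these blocklists are published as DNS zones that any mail server can query on the fly. A minimal sketch, using the real Spamhaus ZEN list (127.0.0.2 is the conventional test address that DNSBLs report as listed):

```python
# Check an IPv4 address against a DNS-based blocklist (DNSBL). The query
# format--reversed octets prepended to the list's zone--is standard.
import socket

def is_listed(ip: str, dnsbl: str = "zen.spamhaus.org") -> bool:
    # 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl
    try:
        socket.gethostbyname(query)  # any A record back means "listed"
        return True
    except socket.gaierror:
        return False  # NXDOMAIN means the address isn't on the list

print(is_listed("127.0.0.2"))  # the standard always-listed test entry
```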

By looking at IP blacklist data, we can see one such dance taking place in Eastern Europe earlier this year. In January, only about 5% of IP addresses in Belarus were being blocked, a number that rose to almost 30% by May. The same study, produced by international message security company Cloudmark, points out that Romania currently has the most blocked IP addresses of any country. Spammers probably switched to using IP addresses in nearby Belarus and Russia to get around the problem, causing the spike in blocked Belarusian addresses. But then hosting companies in those countries wised up, implemented tighter restrictions, and forced the spammers back to Romania’s more permissive hosts, letting Belarusian IP blocks drift back toward normal levels.

It’s difficult to assess spam output because there are multiple ways to measure it: you can count how many spam messages are produced, how many IP addresses are blocked, or the percentage of blocked addresses in a given country, which controls for population. Many sources cite the three countries with the largest populations--China, India, and the U.S.--as the origin of the majority of spam. This makes some amount of sense, but it doesn’t tell the full story unless you adjust the data for population and the number of allocated IP addresses.

The security industry has operated on these measurements since email became a popular target for scammers, but the dynamics of spam are about to change. Now that all available IPv4 addresses have been allocated, security companies are beginning to turn their attention to what the spam environment will look like under IPv6. Once email providers move to IPv6, some fear that spammers will have an advantage: they will be able to take over huge numbers of IP addresses without worrying about the geographic constraints of a given country. But others point out that, for this very reason, an individual address’s “reputation” will no longer be a useful indicator of its credibility when there are so many addresses to go around, and that this will motivate the industry to discard IP blocking as a security strategy and adopt better methods.
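The scale problem is easy to quantify with back-of-the-envelope arithmetic:

```python
# Why per-address reputation breaks down under IPv6: a single routine /64
# end-user allocation dwarfs the entire IPv4 Internet.
ipv4_total = 2 ** 32        # every IPv4 address in existence: ~4.3 billion
one_ipv6_slash64 = 2 ** 64  # addresses in one ordinary IPv6 /64 subnet

print(f"All of IPv4:  {ipv4_total:,}")
print(f"One /64:      {one_ipv6_slash64:,}")
print(f"Ratio:        {one_ipv6_slash64 // ipv4_total:,}x")
# A spammer could send every message from a never-before-seen address,
# which is why filters would have to score prefixes or senders instead.
```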

Laura Atkins of Word to the Wise writes:

I don't expect IP reputation to become a complete non-issue. I think it's still valuable data for ISPs and filters to evaluate as part of the delivery decision process. That being said, IP reputation is so much less a guiding factor in good email delivery than it was 3 or 4 years ago. Just having an IP with a great reputation is not sufficient for inbox delivery. You have to have a good IP reputation and good content and good URLs.

As IPv6 rolls out among email providers and in general, the physical game of cat and mouse that spammers have been playing all over the world may morph into something different. It’s unclear whether this change will meaningfully affect how much spam we get every day, so until we know, keep those filters running.


Lots Of People Can Read Your Private Chats--Not Just The NSA

July 12, 2013

The PRISM frenzy has added significantly to an already-simmering discussion about the level of security protection on messaging apps like Apple’s iMessage. These services are so easy to use that most consumers don’t think about who might have access to their data. But usually, at a minimum, the company providing the service can parse messages and conversations, and often advertisers or investors have some access as well. Wanting to take advantage of now-basic digital communication shouldn’t preclude users from privacy, right? And probably anyone planning a bank heist already knows about these security holes.

Peter Sunde’s new messaging app, Hemlis, promises to emulate the ease of use that makes messaging apps so popular while also offering total anonymity from a data perspective. The company says it won’t sell ads or user data, and the plan is to fund Hemlis through donations and paid premium features. As the project’s pitch puts it:

All communications on today’s networks are being monitored by government agencies and private companies . . . That’s why we decided to build a messaging platform where no one can spy on you, not even us.

But the question is, what security strategies will Hemlis use? The extent of its security features will, at least in part, dictate who uses it--and how much shady business can be conducted over it. A lot of companies claim that messages sent via their apps live in encrypted fortresses. Even services with lousy track records, like iMessage, are touted as secure. Here’s Apple’s official line:

Conversations which take place over iMessage and FaceTime are protected by end-to-end encryption so no one but the sender and receiver can see or read them. Apple cannot decrypt that data.

But take a moment to think about how iMessage works and it’s clear that Apple is full of it. Messages must be accessible somehow if conversation histories are saved in iCloud for easy restoration on new devices, and if users keep continuous, uninterrupted access to those histories even after they change their handset or iCloud password. These concerns were clearly outlined in a blog post by Johns Hopkins cryptographer Matthew Green a few weeks ago. He wrote:

That's the problem with iMessage: Users don't suffer enough. The service is almost magically easy to use, which means Apple has made tradeoffs--or more accurately, they've chosen a particular balance between usability and security. And while there's nothing wrong with trade-offs, the particulars of their choices make a big difference when it comes to your privacy.

These trade-offs ultimately dictate whether a messaging service can be used for sensitive communication. If message histories are saved, even locally, the messages themselves are not secure; they can only be treated as secure if their meaning is transient and of no use to a later reader. A messaging system that works like Snapchat may sound like a better alternative, but it would run into similar problems: utilities that autosave received communications, and the ubiquity of devices capable of taking screencaps.

No matter how sweeping a company’s privacy statements, they always seem to turn out bogus. For example, in 2008 Skype claimed that it could not tap users’ calls no matter what entity (private, government, etc.) requested the data. Jennifer Caukin, Skype’s then-director of corporate communications, said, “Because of Skype's peer-to-peer architecture and encryption techniques, Skype would not be able to comply with such a request.” But it turns out this was never true, or at least wasn’t true by 2010, when a pre-Microsoft Skype signed on to provide the audio from calls for PRISM.

If Hemlis can deliver on its lofty privacy goals, there will be no reason, on principle, to use any other messaging app. But it seems the only way for a service like Hemlis to be trusted with intensely private communication is for its backend to be totally open to scrutiny and evaluation. Without complete transparency, it will be just another black box into which people subtly allude to tax fraud, unwisely share their bank PIN, or correspond with their pot dealer.

[Image: Flickr user Esther Gibbons]





