Encryption has been one of the hot topics in politics lately, with
Hillary Clinton,
John McCain, and
various Republican presidential candidates voicing support for adding a "back door" to encrypted systems, so that the government and law enforcement can access encrypted data. This is a
bad idea for
several reasons, the most important of which is that there's no such thing as a weakness that only the good guys can exploit. Tech companies are responding with the corporate equivalent of
"WTF, No."
I don't actually believe that laws will be passed requiring back doors to be built into encryption systems. However, the fact that we're even talking about this highlights the growing gap in understanding between the people who build and run information systems, and the government that oversees them. The incompetence of politicians in general when it comes to tech is starting to cause breakdowns: see the de-facto fast lanes allowed by a
loophole in the net neutrality regulations, or the UK government-mandated
porn filters that also blocked sex education websites and resources for victims of rape and abuse, or Spain's failed mandatory
link tax law.
The question for discussion is this: At what point do tech companies have the right - or even the responsibility - to refuse to follow laws intended to regulate them that are nonsensical or even harmful? How should the potential consequences of such disobedience (major perceived loss of government authority, for one thing) be weighed against the consequences of implementing a bad law? Discuss.
Posts
At the point where we can trust them to make that call for us.
So never.
They have the right to do so if a law were unconstitutional or an action taken by them would be illegal. That's pretty much it.
And that's the big thing - the tech industry doesn't have the best track record in this department. Again, at what point is it principled opposition, and at what point is it an attempt to evade legitimate regulation? This is an industry well known for not being fond of regulation of any kind, after all.
Yep. Email services have been shut down, for instance, for refusing to provide access to the government.
There are perhaps technical solutions to some of these problems, however, such as structuring things in such a way that the host company does not have keys to private user data.
The thing is, in the specific case of weakening encryption (though I'm sure we could come up with hypothetical others), I would find that unethical, even if the government were to require it. Complying with warrants from law enforcement for users' information is one thing; breaking encryption for everybody would be quite another.
At my job, I have occasionally been asked to implement things in software that were either illegal or "merely" unethical (in that they would have created unacceptable security risks for our users). In most cases it was because the people asking for the changes didn't understand the implications. In every case, we all sat down, explained why whatever it was was a bad idea, and hashed out a better solution. That doesn't always work with governments, as shown by the examples above.
I mean, really, you could apply the same question to any field. State governments imposing bad regulations on doctors have been terrible for women's health, for example. The best doctors either find ways around the laws (e.g., by giving government-mandated false information very quietly while playing music), or just ignore them outright. It seems to me the only difference is one of scale.
(For the record, I'm not an anarchist or libertarian or anything like that. The "perfect" solution would be getting policymakers up to speed on everything they need to know.)
In my opinion we're both too reliant on truly secure computer systems to risk adding weaknesses like backdoors, and the genie is out of the bottle on encrypting things yourself (the second bullet point from my other post). Math is kind of impossible to censor.
This assumes the government should only be reactive on this front, rather than preparing for the avenues a future situation might involve.
Which isn't to say that there isn't also a degree of wanting to do it regardless, but there are completely legitimate reasons to expand capabilities beyond just what was used in the latest attack.
I work for a state university. My life is awash in upholding FERPA, HIPAA, PCI, and all sorts of other regulation. Just keeping communications in my department secure is a major headache, before even getting to the other IT shops on campus. And then there are the academics, who are fine with all information on the internet being public and damn what might be done with it - oh, and they have fought every attempt to enforce password security over the last 10 years.
No. After watching PCI compliance regs morph, through misunderstanding, into something so ridiculous as to require an entirely separate physical network for anything that may touch a credit card, I don't want the government requiring us to weaken our security, because they will undoubtedly think they are smarter than those of us who know how encryption works.
That's a good point. I think there's an element of reactivity in a lot of our security that is ineffective - banning liquids on flights after an attempted attack using binary liquid explosives while still failing to catch most edged weapons, for instance. Preparing for the next threat is obviously a preferable solution.
This just happens to be a case in which an authoritarian solution is being proposed by a government, which warrants careful scrutiny. The recent French issue is a good example because it's full of new "state of emergency" powers expanding the ability to conduct house arrests while limiting freedom of association.
This isn't a nuanced technocratic response that includes encryption as part of an overall approach to disrupting known and predicted threat models, but rather one that includes measures which can be expected to be used to disrupt political dissidence as has been the case elsewhere.
This sort of thing- in the form of Apple's device encryption for iphones- is exactly what started the most recent round of encryption bitching. So not really a solution.
I have yet to see a proposal for an encryption backdoor that would actually work without fatally weakening security. And that's not a theoretical problem, either: FREAK was (indirectly) caused by attempting to block the export of high-grade encryption. Even years after the restriction became irrelevant - whoops, it's back to bite us in the ass.
Doc: That's right, twenty-five years into the future. I've always dreamed of seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five World Series.
...It seems like offering some sort of carte blanche privileges / expectations to tech companies doesn't really address the issue (that there are currently too many people in western governments that desperately want to believe it's still the 60s and refuse to acknowledge changes in culture, technology, the environment, etc), and I'm skeptical that this would offer much protection to the people that a hypothetical backdoor mandate would render vulnerable to law enforcement (I think it would mostly be minority kids who have cell phones?).
I'm not sure what a good solution might be. The most direct approach would be just to vote out people with those regressive & aggressively ignorant attitudes, but somehow this rarely appears to pan out.
Whether this works or not depends, I believe, on the scale you look at. In terms of US politics, it does seem that it's actually impossible to get elected if you have an understanding of the current environment.
However, on a wider scale, I'm genuinely hopeful that countries that enact such policies will start to fall behind as the tech and innovation moves to more sensible locales, which in turn will gradually reverse the bias that seems to be present in the US currently.
Of course the biggest risks to that are forced export of US policy, as is happening with copyright through the TPP and similar (though that's probably another thread), and consumer apathy. It does seem that the biggest revelation out of Snowden was that everyone was already assuming the NSA was surveilling everyone, and that somehow that was OK.
"We want you to build backdoors into all of your operating systems so that we can always spy on what black teenagers we've charged with petty crimes have been texting about protect the public from TERRORISM."
"...Hm. Well, that is highly unethical. We'll do it only if we can reserve the right to use that same backdoor to harvest advertiser-relevant data from our users and use them for draconian DRM shenanigans."
"Deal."
Ultimately this is the key thing - the only thing government action can actually do is weaken the security of normal activities. They can't actually make it easier to get at criminals/terrorists/etc. because nigh-unbreakable encryption schemes have been public for decades, and any bad actor with a modicum of sense would just grab source code circa today and bam.
Furthermore, since crypto algorithms are more or less public by default (because otherwise they can't be audited, and a secret algorithm is never going to be trusted by any expert), any bad actor with access to a programmer would just remove the backdoor.
Actually, I'm not convinced that you couldn't reverse engineer any master key to an algorithm from source, and it's not practical to make a unique duplicate key for every encryption that happens and save it with the government somewhere (actually, impossible: unplug from the internet, encrypt locally, send the file).
I am of the firm opinion that the current agency heads are either lying through their teeth to the public (and should be fired for it) or incompetent (and don't know it, and so should be fired for it). There is no excuse for them trying to sell the public on handing all our PII to China et al. on a silver platter.
3DS: 0473-8507-2652
Switch: SW-5185-4991-5118
PSN: AbEntropy
well I mean, what's the government going to do if google and apple refuse to comply? shut them down?
there'd be riots
I agree these guys need to be regulated, but what we're talking about here is the government mandating that everyone keep a key within 10 feet of all locked doors - it's nonsensically horrible
Take them over or hamstring them.
Bit of a tangent since I agree with your overall point, but it is absolutely possible to have a master key that's hard to reverse engineer from source, at least if you discount brute forcing the key.
If you're not trying to hide the back door it's easy, all you need to do is include the law enforcement public key, then encrypt everything with that as well as the destination public key. Law enforcement has the private key so can decrypt the message, just like the intended recipient can, but nobody else can as they don't have the key. The fact the public key is embedded in the source doesn't help reverse engineering the key as it was public anyway. Ultimately the private key could be brute forced, but it's basically just a matter of making it big enough to resist brute force.
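That overt approach can be sketched in a few lines. This is a toy illustration (NOT real crypto) using textbook RSA with tiny primes; real systems would use proper key sizes and padding, and typically encrypt the symmetric session key rather than the message itself. All names and numbers here are made up for the demo.

```python
# Toy sketch of key escrow: the sender encrypts the same session key under
# BOTH the recipient's public key and a law-enforcement "escrow" public key.

def toy_rsa_keypair(p, q, e=65537):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)
    return (e, n), (d, n)              # public key, private key

def rsa_encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def rsa_decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

# Independent keypairs for the intended recipient and the escrow agency.
recipient_pub, recipient_priv = toy_rsa_keypair(1000003, 1000033)
escrow_pub,    escrow_priv    = toy_rsa_keypair(1000037, 1000039)

session_key = 123456789  # would normally be a random 128/256-bit value

# The sender ships both ciphertexts along with the (symmetrically
# encrypted) message body.
ct_recipient = rsa_encrypt(session_key, recipient_pub)
ct_escrow    = rsa_encrypt(session_key, escrow_pub)

# Either private-key holder can recover the session key; nobody else can
# (short of brute force), even with the escrow key baked into the source.
assert rsa_decrypt(ct_recipient, recipient_priv) == session_key
assert rsa_decrypt(ct_escrow, escrow_priv) == session_key
```

The escrow key being embedded in the source reveals nothing, for exactly the reason stated above: it was public to begin with.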
If you're trying to be subtle about it things get a bit harder, but it's still possible. The key to the theorized Dual EC DRBG backdoor can't be easily reverse engineered, as it involves solving the same discrete log problem that EC crypto is based on in the first place. It's been shown that if you generate the constants a particular way you can backdoor the algorithm; there's no explanation of how the NIST constants were generated, and it matches some of the projects hinted at in the Snowden docs. So it's probably backdoored, but no researcher has found the key and proved that yet.
The big problem is not that the keys can be reverse engineered from source, it's that whoever the backdoor is targeted at has to have the key for the back door to be useful. History has shown that securing such a key is very problematic and it will leak eventually. Then you would need to have some way of updating all copies of the algorithm to use a new key, which is never actually going to happen.
As we know, software tends to live longer than expected, so any assumption about how big the key needs to be to resist brute forcing, or how long a process needs to live to allow updating in the event of disclosure of the master key, is bound to be wrong.
Oh dammit. I blame typing on an empty stomach for that.
Duuuhh... Does that first description (basically standard PKI, with an extra key pair) generate two cyphertexts, or does the one decrypt via either key? The latter is not my understanding of the normal function of asymmetric encryption.
It's basically encrypt twice, send the one encrypted using government public key to them. Defeated by unplugging the ethernet cord.
That is, like, 32 different flavors of pants on the head retarded.
There have been a few variants. With many PKI usages, the only thing actually encrypted using the asymmetric algorithm is a session key, the rest of the encryption is symmetric. So in those cases it's just the session key that's re-encrypted for the government.
And no, they don't send the government copy to the government; it's just sent along with the copy to the recipient, and they rely on existing interception mechanisms to get the cypher text. The point of the back door is just to allow them to decrypt it once they've got it, not to obtain the cypher text.
There are also good ways of stopping people from just stripping out the government-encrypted copy: specifically, you require that the decryption algorithm checks for it and validates it using a hash or similar. That way, if you remove the government copy the message is no longer considered valid, and so the intended recipient can't read it.
Of course, as was pointed out earlier in the thread, any competent adversary would just get a programmer to remove both bits, allowing secure encrypted communication. But in doing so, they are no longer able to communicate with standard clients. You protect against this with laws etc. - basically requiring companies like Apple to only ship the version of the algorithm that both adds and checks for the govt copy, and requiring it to be in tamper-resistant hardware. This doesn't prevent truly bad actors, as they just use hacked clients, but it does prevent regular users circumventing it, and so does allow for routine mass surveillance. As you said earlier, all the government can do is weaken encryption for regular users, not really prevent bad actors.
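The "check and validate with a hash" idea from a couple of posts up looks roughly like this. A toy sketch, not real crypto, with all function names hypothetical; as noted above, the check only works if it's enforced by tamper-resistant clients, since a hacked client could just skip it or recompute the binding.

```python
from hashlib import sha256

def seal(ciphertext: bytes, escrow_copy: bytes) -> dict:
    # The sender binds the escrow copy into the message with a digest.
    return {
        "ciphertext": ciphertext,
        "escrow_copy": escrow_copy,
        "binding": sha256(ciphertext + escrow_copy).hexdigest(),
    }

def open_message(msg: dict) -> bytes:
    # A compliant client recomputes the digest and refuses to decrypt if
    # the escrow copy was stripped or altered.
    expected = sha256(msg["ciphertext"] + msg["escrow_copy"]).hexdigest()
    if msg["binding"] != expected:
        raise ValueError("escrow copy missing or tampered with")
    return msg["ciphertext"]  # actual decryption omitted for brevity

msg = seal(b"encrypted-payload", b"session-key-for-escrow")
assert open_message(msg) == b"encrypted-payload"

# Stripping the escrow copy invalidates the message for compliant clients.
try:
    open_message(dict(msg, escrow_copy=b""))
except ValueError:
    pass  # rejected, as intended
```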
What I'm describing is basically the Clipper chip from the last round.
In terms of the relevance of Dual EC DRBG, this blog explains it much better than I would. But basically, the random number generator is backdoored, meaning that an attacker can predict what random numbers will come out. This often means the whole PKI crypto layer becomes irrelevant, as you can just guess the session key or similar.
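The predictable-RNG property can be shown with a toy analogue (a linear congruential generator, NOT Dual EC itself): whoever knows the generator's internal constants can recover its state from one observed output and predict everything that follows. All constants and seeds here are arbitrary illustrations.

```python
class ToyDRBG:
    # These constants play the role of the backdoor parameters.
    A, C, M = 1103515245, 12345, 2**31

    def __init__(self, seed):
        self.state = seed % self.M

    def next(self):
        self.state = (self.A * self.state + self.C) % self.M
        # A real DRBG truncates its output; emitting the full state
        # makes state recovery trivial, which keeps the demo short.
        return self.state

victim = ToyDRBG(seed=987654321)
observed = victim.next()  # attacker sees one "random" value on the wire

# Knowing A, C, M, the attacker clones the state and predicts the victim's
# future "random" numbers - e.g. the next session key it generates.
clone = ToyDRBG(seed=0)
clone.state = observed
assert clone.next() == victim.next()
assert clone.next() == victim.next()
```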
Another blog by Matthew Green gives an excellent summary of the different ways of building back doors.
Note: Still saying that this is a terrible idea, just explaining how it's done.
The inference being (IMO) quantum computing. Such hardware would be prohibitively expensive for all but large institutions for the medium term at least, and sufficiently developed quantum computing would render most encryption relatively trivial to decrypt. If possible - and as I understand it, this appears to be an engineering challenge rather than a question of possibility at this point - this would be the middle ground: allowing governments (especially if sufficiently large devices were regulated explicitly) the ability to legitimately access messages while retaining the general utility of encryption.
QEDMF xbl: PantsB G+
There's nothing in theory saying we can't just decrypt these messages; we just don't have a method yet. By definition, any message can be decrypted given a sufficient number of attempts. If we can use quantum computing to skip to the right answer (essentially), there may be a way for encryption to be secure against nearly everyone but not absolutely everyone.
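The "sufficient number of attempts" point, made concrete with made-up values: exhaustive search always succeeds eventually, and each extra key bit doubles the work. That doubling is the entire gap a quantum speedup would have to close.

```python
from hashlib import sha256

# A deliberately tiny 20-bit "key": 2**20 guesses is instant on any
# machine; 2**128 is not, which is the whole security argument.
KEY_BITS = 20
secret_key = 703710  # unknown to the attacker, < 2**KEY_BITS
tag = sha256(secret_key.to_bytes(4, "big")).hexdigest()

# The attacker has only `tag` and tries every possible key in order.
found = next(k for k in range(1 << KEY_BITS)
             if sha256(k.to_bytes(4, "big")).hexdigest() == tag)
assert found == secret_key
```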
Actual backdoors, particularly something with essentially a single key, are just pants-on-the-head crazy to me. I do information security, and over the last two years there has been a huge number of vulnerabilities tied to flawed, weak, or backwards-compatible-with-weak implementations of transport encryption. To intentionally create flaws and hope no one finds them, and that companies and end users can respond quickly, is ridiculous (even more so with mobile).
Laws are laws and companies operating in (providing services to users in) a country need to follow them, but it is irresponsible to the point of being unethical to expose users and a company to this sort of known risk. I can't help but think such a law would encourage companies to move offshore as much as is possible. I think it is sort of hard to overstate how much potential damage would result from hackers basically being able to break into any communication on the fly, which is sort of what we are talking about.
If the Clinton administration wants to invent quantum computers or prove p = np, well, those have the same problem as encryption: math is neutral. It'd be a paradigm shift but one I'd rather the US be ahead on. Though those are huge assumptions on what is actually possible to do in the nearish term.
This seems reasonable. A technological arms race in which exploits and other mechanisms are used, revealed, and/or expended by governments and other actors in accordance with their priorities (which we would hope are good, but acknowledge often are not), while their opponents (criminals, terrorists, political dissidents, corporate entities seeking to protect trade secrets) invest in a level of protection that is 'good enough'. It is problematic that a given actor could intercept encrypted data and just sit on it until a flaw is discovered or sufficient computing power is attained to crack it, but that at least affords the protection of timeliness.
Back doors, as you said, are not a good solution even if your perspective is that privacy rights are anachronistic.
It would seem that the solution to the problem of government overreach in this regard would be to educate the public both on the importance of privacy and encryption and the mechanisms by which they could make use of it. Surely we aren't going to get Craig in Accounting to use PGP when he can't even be made not to click phishing links in his email, but it may be possible to shift the tone of the conversation.
I have some contentions with number 1. The government doesn't need, nor is allowed, to open and read my mail. They're not allowed to break into my house and drill open my safe. They've never been able to just do these kinds of things with it warrants, and they've never had trouble staying in control of national security.
The invention of quantum computing (it's perhaps worth noting that quantum computing is strictly theoretical at this point in time, and some experts in computer engineering have begun to question whether or not it's even possible) isn't the same as the creation of atomic weapons, though, because atomic weapons are always necessarily limited to whomever can access both fissile material and a means to assemble that fissile material into a bomb (...though, even granted these restrictions, admittedly proliferation became a problem rather quickly).
Quantum computers would no doubt become mass market items a few years out from their creation, much like conventional computers. So, after a few years of tender & loving warrantless state surveillance, you're either back to square one, because now people can make sufficiently complex encryption to defeat quantum processing power, or encryption is now a thing of the past and any 4chan user can break into anything they like. So, which is worse for U.S. national security: the government being unable to decrypt some messages that (ostensibly) would allow police / military forces to intervene against TERRORISM!, or 4chan being able to break into / edit / exploit whatever they want, however they want, whenever they want?
I'm presuming that was supposed to read without warrants? Because warrants can authorize a lot of things and those are definitely both on the list.
But that's the whole point - the government can do all those things, provided that they demonstrate that there is a legitimate government interest to a court and acquire a warrant. Part of the issue here is that there are people who want to make even that functionally impossible.
If you go back some time, during the TorrentSpy case, there was a lot of criticism of the government penalizing them for interfering with the process of discovery, even though discovery rules are considered a legitimate use of government power.
It's been possible to communicate in ways beyond the reach of warrants since forever though. Cryptography isn't new. It's certainly easier, but it's not new. In-person meetings, documents destroyed after delivery... this is all centuries-old.