
The Growing [Surveillance State]

  • Xrdd Registered User regular
    Xrdd wrote: »
    Intentionally weakening stuff that is actually widely used like AES (or DES when that was the standard) would be incredibly stupid and I doubt that the NSA actually did that (although who the fuck knows what's up with that PRNG). @Schrodinger is saying that they should have.

    No, what I'm saying is:

    1) No one should be surprised and
    Of course they should be, because, as has been pointed out repeatedly, backdoors in widely used crypto standards are fucking stupid.
    2) We still don't know what this "sabotage" actually entails, and what sort of resources would be required in order to exploit it. Not just in terms of inside knowledge, but also in terms of time and computer power.

    Is this going to be like the movie Sneakers, where the NSA can crack whatever they want in a matter of seconds?

    Or is this a weakness that requires the resources of $TEXAS to utilize, and is too impractical to use on anyone who isn't already on the FBI's most wanted list?
    A backdoor that is only marginally more effective than brute force wouldn't actually be a backdoor. And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Goumindong wrote: »
    FISA did not and has never required targets be agents of foreign governments

    If you say so.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Schrodinger Registered User regular
    Xrdd wrote: »
    Xrdd wrote: »
    Intentionally weakening stuff that is actually widely used like AES (or DES when that was the standard) would be incredibly stupid and I doubt that the NSA actually did that (although who the fuck knows what's up with that PRNG). @Schrodinger is saying that they should have.

    No, what I'm saying is:

    1) No one should be surprised and
    Of course they should be, because, as has been pointed out repeatedly, backdoors in widely used crypto standards are fucking stupid.
    2) We still don't know what this "sabotage" actually entails, and what sort of resources would be required in order to exploit it. Not just in terms of inside knowledge, but also in terms of time and computer power.

    Is this going to be like the movie Sneakers, where the NSA can crack whatever they want in a matter of seconds?

    Or is this a weakness that requires the resources of $TEXAS to utilize, and is too impractical to use on anyone who isn't already on the FBI's most wanted list?
    A backdoor that is only marginally more effective than brute force wouldn't actually be a backdoor. And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.

    It depends on how long it would take to crack it brute force.

    For instance, if it takes several decades to crack it with brute force, but only several weeks on a $10 million computer to crack it with a back door, then the exploit is both hugely advantageous and completely impractical in most cases.

    If you can't even tell us how much time/computer power/inside knowledge would be necessary to exploit this problem, then how is anyone else actually expected to execute it?
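    The gap being described here can be put in rough numbers. A sketch, with an illustrative 128-bit keyspace and a hypothetical rig testing 10^12 keys per second (neither figure comes from the thread):

    ```python
    # Back-of-the-envelope brute-force times; key sizes and the 1e12 keys/sec
    # search rate are illustrative assumptions, not claims about any real system.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def years_to_search(keyspace_bits: int, keys_per_second: float = 1e12) -> float:
        """Expected years to find a key after searching half the keyspace."""
        return (2 ** keyspace_bits / 2) / keys_per_second / SECONDS_PER_YEAR

    print(f"{years_to_search(128):.2e} years")   # full 128-bit search: ~5.4e18 years
    days_62 = years_to_search(62) * 365.25
    print(f"{days_62:.0f} days")                 # 62 effective bits: ~27 days
    ```

    On those assumed numbers, a backdoor that cut the effective keyspace to around 62 bits would turn "several decades" into "several weeks," which is the shape of the trade-off being argued over.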

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Suppose you legally get your information from an informant. If the warlord learns of this, then the informant will be murdered.

    That would be a good reason for parallel constructionism.

    That would be a good reason for parallel construction!

    You know what wouldn't be a good reason for parallel construction? Concealing from the court whether evidence was acquired in an appropriate manner in the first place.

  • Xrdd Registered User regular
    Xrdd wrote: »
    Xrdd wrote: »
    Intentionally weakening stuff that is actually widely used like AES (or DES when that was the standard) would be incredibly stupid and I doubt that the NSA actually did that (although who the fuck knows what's up with that PRNG). @Schrodinger is saying that they should have.

    No, what I'm saying is:

    1) No one should be surprised and
    Of course they should be, because, as has been pointed out repeatedly, backdoors in widely used crypto standards are fucking stupid.
    2) We still don't know what this "sabotage" actually entails, and what sort of resources would be required in order to exploit it. Not just in terms of inside knowledge, but also in terms of time and computer power.

    Is this going to be like the movie Sneakers, where the NSA can crack whatever they want in a matter of seconds?

    Or is this a weakness that requires the resources of $TEXAS to utilize, and is too impractical to use on anyone who isn't already on the FBI's most wanted list?
    A backdoor that is only marginally more effective than brute force wouldn't actually be a backdoor. And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.

    It depends on how long it would take to crack it brute force.

    For instance, if it takes several decades to crack it with brute force, but only several weeks on a $10 million computer to crack it with a back door, then the exploit is both hugely advantageous and completely impractical in most cases.

    Yes, I'm sure foreign governments would find a $10 million price tag to be an insurmountable obstacle. And, in case you hadn't noticed, computers are actually getting faster. So that price tag is coming down quickly. Furthermore, nobody who has anything the US government would be willing to spend those resources on is going to use NIST-standardised stuff. So you've just made civilian systems a lot less secure for no clear gain, even though those can be pretty damn important for national security (critical infrastructure!).
    If you can't even tell us how much time/computer power/inside knowledge would be necessary to exploit this problem, then how is anyone else actually expected to execute it?

    Did you miss this part?
    And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.
    Or this part?
    Did you even read my earlier response to you or are we doing this thing again where you repeat the same point over and over again and ignore peoples' responses?

  • Schrodinger Registered User regular
    Feral wrote: »
    Suppose you legally get your information from an informant. If the warlord learns of this, then the informant will be murdered.

    That would be a good reason for parallel constructionism.

    That would be a good reason for parallel construction!

    You know what wouldn't be a good reason for parallel construction? Concealing from the court whether evidence was acquired in an appropriate manner in the first place.

    I take anything said by the Guardian with a grain of salt, given that they're still trying to save face after their initial accounts turned out to be completely overblown. That doesn't mean I'm going to reject anything they have to say outright, but it does mean reading between the lines.

    The article mentions that wiretaps may have been used. It doesn't mention whether the wiretaps were actually illegal.

    Here's a scenario: the NSA wiretaps a foreign citizen/terrorist on foreign soil. The terrorist offhandedly mentions a shipment of drugs, so the NSA notifies the DEA in the hopes that the drug dealers can be arrested, which will hopefully trace back to the organization.

    If the NSA reveals that they wiretapped a foreign citizen, then the organization will realize that the NSA is onto them. Ergo, parallel constructionism.

  • Xrdd Registered User regular
    edited October 2013
    Xrdd wrote: »
    The way I see it, it's basically like expecting Tony Stark not to have a contingency plan in case some bad guy manages to steal his armor.

    Only instead of "bad guy steals his armor," it's more like "bad guys can have open access to his armor whenever they want."

    Tony Stark owns his armor and can control who has access to it. The US government does not own and did not create most crypto algorithms and is in no position to control who has access to them. Putting a huge hole in his armor just in case someone steals it also seems out of character for Stark.

    EDIT: Did you even read my earlier response to you or are we doing this thing again where you repeat the same point over and over again and ignore peoples' responses?

    The US government does own the NIST standards. NIST is a US government agency.

    I mean you do realize it's a US government agency, and the Advanced Encryption Standard is a US government specification (which is used by everyone because if it's good enough for top secret, it's probably good enough for whatever you're doing).

    You do not seem to understand how encryption standards work.

    Missed this post earlier. I do understand how they work, but unlike you I also understand where the algorithms that are being standardised actually come from in the first place. Look at my post again, I said the US government does not own the algorithms, not the standards. Yes, AES is very, very obviously a US government standard. Rijndael is something some guys from Belgium came up with and very, very obviously not owned by the US government.

  • Schrodinger Registered User regular
    Xrdd wrote: »
    Xrdd wrote: »
    Xrdd wrote: »
    Intentionally weakening stuff that is actually widely used like AES (or DES when that was the standard) would be incredibly stupid and I doubt that the NSA actually did that (although who the fuck knows what's up with that PRNG). @Schrodinger is saying that they should have.

    No, what I'm saying is:

    1) No one should be surprised and
    Of course they should be, because, as has been pointed out repeatedly, backdoors in widely used crypto standards are fucking stupid.
    2) We still don't know what this "sabotage" actually entails, and what sort of resources would be required in order to exploit it. Not just in terms of inside knowledge, but also in terms of time and computer power.

    Is this going to be like the movie Sneakers, where the NSA can crack whatever they want in a matter of seconds?

    Or is this a weakness that requires the resources of $TEXAS to utilize, and is too impractical to use on anyone who isn't already on the FBI's most wanted list?
    A backdoor that is only marginally more effective than brute force wouldn't actually be a backdoor. And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.

    It depends on how long it would take to crack it brute force.

    For instance, if it takes several decades to crack it with brute force, but only several weeks on a $10 million computer to crack it with a back door, then the exploit is both hugely advantageous and completely impractical in most cases.

    Yes, I'm sure foreign governments would find a $10 million price tag to be an insurmountable obstacle. And, in case you hadn't noticed, computers are actually getting faster. So that price tag is coming down quickly. Furthermore, nobody who has anything the US government would be willing to spend those resources on is going to use NIST-standardised stuff. So you've just made civilian systems a lot less secure for no clear gain, even though those can be pretty damn important for national security (critical infrastructure!).
    If you can't even tell us how much time/computer power/inside knowledge would be necessary to exploit this problem, then how is anyone else actually expected to execute it?

    Did you miss this part?
    And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.
    Or this part?
    Did you even read my earlier response to you or are we doing this thing again where you repeat the same point over and over again and ignore peoples' responses?

    If you're going to talk about a hypothetical weakness, then you need to put it in real world terms how weak it actually is in order to put it in perspective.

  • Goumindong Registered User regular
    edited October 2013
    Feral wrote: »
    Goumindong wrote: »
    FISA did not and has never required targets be agents of foreign governments

    If you say so.
    For purposes of FISA, a U.S. person is a person or corporation that is in the United States lawfully. This includes citizens, permanent resident aliens, U.S. corporations, and associations substantially comprised of U.S. persons.

    Foreign powers and agents of foreign powers are the potential targets for FISA surveillance. Foreign powers are foreign governments and their representatives, factions of foreign nations, international terrorist groups, and entities directed and controlled by one or more foreign governments.

    An agent of a foreign power is basically anyone other than a U.S. person who acts in the United States on behalf of a foreign power. This includes officers of a foreign government, members of international terrorist groups, and visitors who are secretly gathering intelligence. Additionally, an agent of a foreign power is anyone, including a U.S. person, who engages in international terrorism activities or secretly gathers intelligence for a foreign power and may break the law by doing so.

    If you say so.

    The language of FISA is not restricted to whom we would typically call a "foreign agent" because a "foreign power" is basically any international organization and does not have to be affiliated with a State in the way you're implying.

    edit: And it's also important to note that they are talking about the specific search requirements (i.e. not the general-level/metadata-level intelligence collection like Pen Register data), so there is also that

  • Goumindong Registered User regular
    Question: You keep saying that breaking standard encryption would be "really stupid" but I am having a hard time figuring out why. If you know the backdoor then you likely know when other people are using it, can exploit it yourself, and generally give yourself a massive intelligence advantage over other foreign powers.

  • Xrdd Registered User regular
    Xrdd wrote: »
    Xrdd wrote: »
    Xrdd wrote: »
    Intentionally weakening stuff that is actually widely used like AES (or DES when that was the standard) would be incredibly stupid and I doubt that the NSA actually did that (although who the fuck knows what's up with that PRNG). @Schrodinger is saying that they should have.

    No, what I'm saying is:

    1) No one should be surprised and
    Of course they should be, because, as has been pointed out repeatedly, backdoors in widely used crypto standards are fucking stupid.
    2) We still don't know what this "sabotage" actually entails, and what sort of resources would be required in order to exploit it. Not just in terms of inside knowledge, but also in terms of time and computer power.

    Is this going to be like the movie Sneakers, where the NSA can crack whatever they want in a matter of seconds?

    Or is this a weakness that requires the resources of $TEXAS to utilize, and is too impractical to use on anyone who isn't already on the FBI's most wanted list?
    A backdoor that is only marginally more effective than brute force wouldn't actually be a backdoor. And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.

    It depends on how long it would take to crack it brute force.

    For instance, if it takes several decades to crack it with brute force, but only several weeks on a $10 million computer to crack it with a back door, then the exploit is both hugely advantageous and completely impractical in most cases.

    Yes, I'm sure foreign governments would find a $10 million price tag to be an insurmountable obstacle. And, in case you hadn't noticed, computers are actually getting faster. So that price tag is coming down quickly. Furthermore, nobody who has anything the US government would be willing to spend those resources on is going to use NIST-standardised stuff. So you've just made civilian systems a lot less secure for no clear gain, even though those can be pretty damn important for national security (critical infrastructure!).
    If you can't even tell us how much time/computer power/inside knowledge would be necessary to exploit this problem, then how is anyone else actually expected to execute it?

    Did you miss this part?
    And the reason we don't know any of that is because such sabotage is probably extremely rare and doesn't happen at all with widely used standards because it would be really fucking stupid.
    Or this part?
    Did you even read my earlier response to you or are we doing this thing again where you repeat the same point over and over again and ignore peoples' responses?

    If you're going to talk about a hypothetical weakness, then you need to put it in real world terms how weak it actually is in order to put it in perspective.

    I'm not, I'm talking about why any such hypothetical weakness would be a really fucking stupid idea. Again, something that is costly but practical for a government to exploit today is going to be feasible to exploit for a lot of other entities within a few years. You are also making the false assumption that a potential attacker (China, Russia...) isn't going to be able to dedicate similar resources to the attack even now. There is no possible scenario where a backdoor would be practical for the US government but impossible to exploit for anyone else. Which is why backdoors are stupid.
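    The "feasible within a few years" point is just compounding; a sketch assuming compute per dollar doubles roughly every two years (an illustrative rate, not a measured one):

    ```python
    # Illustrative only: if an attack costs $10M in compute today and compute
    # per dollar doubles every two years, the attack's cost halves on the
    # same schedule. All figures are assumptions for the sake of the sketch.
    def cost_after(years: float, initial_cost: float = 10e6,
                   doubling_years: float = 2.0) -> float:
        """Cost of the same computation after the given number of years."""
        return initial_cost / (2 ** (years / doubling_years))

    print(round(cost_after(10)))   # a $10M attack today: ~$312,500 in a decade
    ```

    Which is the argument in a nutshell: a backdoor sized to be "impractical" for anyone but a government today does not stay that way.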

  • Xrdd Registered User regular
    Goumindong wrote: »
    Question: You keep saying that breaking standard encryption would be "really stupid" but I am having a hard time figuring out why. If you know the backdoor then you likely know when other people are using it, can exploit it yourself, and generally give yourself a massive intelligence advantage over other foreign powers.

    No, the fact that you created the backdoor does not magically allow you to know when someone else has found it. That someone else would be just as capable of exploiting the backdoor as you are, making all systems using the standard you sabotaged potentially vulnerable to attacks by that someone else. The people who rely on your standards are probably not your enemies. The people who have found your backdoor might be. You have made the first group vulnerable to attacks by the second group.

  • Incenjucar VChatter Seattle, WA Registered User regular
    edited October 2013
    Goumindong wrote: »
    Question: You keep saying that breaking standard encryption would be "really stupid" but I am having a hard time figuring out why. If you know the backdoor then you likely know when other people are using it, can exploit it yourself, and generally give yourself a massive intelligence advantage over other foreign powers.

    A backdoor program isn't necessarily going to track people using it.

  • Schrodinger Registered User regular
    Xrdd wrote: »
    I'm not, I'm talking about why any such hypothetical weakness would be a really fucking stupid idea. Again, something that is costly but practical for a government to exploit today is going to be feasible to exploit for a lot of other entities within a few years. You are also making the false assumption that a potential attacker (China, Russia...) isn't going to be able to dedicate similar resources to the attack even now. There is no possible scenario where a backdoor would be practical for the US government but impossible to exploit for anyone else. Which is why backdoors are stupid.

    security.png

  • Xrdd Registered User regular
    Xrdd wrote: »
    I'm not, I'm talking about why any such hypothetical weakness would be a really fucking stupid idea. Again, something that is costly but practical for a government to exploit today is going to be feasible to exploit for a lot of other entities within a few years. You are also making the false assumption that a potential attacker (China, Russia...) isn't going to be able to dedicate similar resources to the attack even now. There is no possible scenario where a backdoor would be practical for the US government but impossible to exploit for anyone else. Which is why backdoors are stupid.

    security.png

    You are either posting an irrelevant comic, misunderstanding my post, misunderstanding the comic or misunderstanding both. You'll need to clarify which it is.

  • Schrodinger Registered User regular
    Xrdd wrote: »
    Goumindong wrote: »
    Question: You keep saying that breaking standard encryption would be "really stupid" but I am having a hard time figuring out why. If you know the backdoor then you likely know when other people are using it, can exploit it yourself, and generally give yourself a massive intelligence advantage over other foreign powers.

    No, the fact that you created the backdoor does not magically allow you to know when someone else has found it. That someone else would be just as capable of exploiting the backdoor as you are, making all systems using the standard you sabotaged potentially vulnerable to attacks by that someone else. The people who rely on your standards are probably not your enemies. The people who have found your backdoor might be. You have made the first group vulnerable to attacks by the second group.

    How do we know that the backdoor is something that can even be "found"?

    I'll admit I don't know the first thing about cryptography. But it doesn't seem like anyone is doing a very good job of explaining what these weaknesses entail.

    If the backdoor is a magic universal password that instantly unlocks everything, then yeah, I suppose it's possible for someone to guess it with enough trials.

    OTOH, suppose the "back door" is simply a super secret method to reduce the decryption time from several decades to several weeks. In that case, even if you manage to miraculously guess the formula, it would still take several weeks to find out if the formula actually works.

  • Goumindong Registered User regular
    Xrdd wrote: »
    Goumindong wrote: »
    Question: You keep saying that breaking standard encryption would be "really stupid" but I am having a hard time figuring out why. If you know the backdoor then you likely know when other people are using it, can exploit it yourself, and generally give yourself a massive intelligence advantage over other foreign powers.

    No, the fact that you created the backdoor does not magically allow you to know when someone else has found it. That someone else would be just as capable of exploiting the backdoor as you are, making all systems using the standard you sabotaged potentially vulnerable to attacks by that someone else. The people who rely on your standards are probably not your enemies. The people who have found your backdoor might be. You have made the first group vulnerable to attacks by the second group.
    I think this is wrong. Having a backdoor doesn't magically let you know who has found it (though it can give you a good idea using pretty standard counterintelligence methods, because knowing the parameters necessary to get in means you can look for them). You also have to weigh the probability that someone finds the backdoor (and can use it surreptitiously) against the value you get out of being able to get in.

    Question: How many people knew about this backdoor before Snowden leaked that it exists?

    Followup: What is this backdoor precisely?

    These go directly to the point about "weakening the systems for our allies".

  • Incenjucar VChatter Seattle, WA Registered User regular
    If there's enough money or fun to be had, you're going to get people looking for holes in a system. Tech companies and internet companies already get hacked every year.

  • Schrodinger Registered User regular
    Incenjucar wrote: »
    If there's enough money or fun to be had, you're going to get people looking for holes in a system. Tech companies and internet companies already get hacked every year.

    The holes are usually in the human side of the equation.

    Of course, that all goes out the window if you're dealing with foreign terrorists who don't want to be found in the first place, rather than simply breaking into the computer of a known person.

  • Xrdd Registered User regular
    Xrdd wrote: »
    Goumindong wrote: »
    Question: You keep saying that breaking standard encryption would be "really stupid" but I am having a hard time figuring out why. If you know the backdoor then you likely know when other people are using it, can exploit it yourself, and generally give yourself a massive intelligence advantage over other foreign powers.

    No, the fact that you created the backdoor does not magically allow you to know when someone else has found it. That someone else would be just as capable of exploiting the backdoor as you are, making all systems using the standard you sabotaged potentially vulnerable to attacks by that someone else. The people who rely on your standards are probably not your enemies. The people who have found your backdoor might be. You have made the first group vulnerable to attacks by the second group.

    How do we know that the backdoor is something that can even be "found"?

    I'll admit I don't know the first thing about cryptography. But it doesn't seem like anyone is doing a very good job of explaining what these weaknesses entail.

    If the backdoor is a magic universal password that instantly unlocks everything, then yeah, I suppose it's possible for someone to guess it with enough trials.

    OTOH, suppose the "back door" is simply a super secret method to reduce the decryption time from several decades to several weeks. In that case, even if you manage to miraculously guess the formula, it would still take several weeks to find out if the formula actually works.
    Don't take this the wrong way, but the bolded bit is really obvious. You don't find weaknesses by trial and error, you do it by cryptanalysis, and a backdoor would be discovered in the same way that unintentional vulnerabilities in cryptographic algorithms are discovered by researchers all the time.
    Goumindong wrote: »
    I think this is wrong. Having a backdoor doesn't magically let you know who has found it (though it can give you a good idea using pretty standard counterintelligence methods, because knowing the parameters necessary to get in means you can look for them). You also have to weigh the probability that someone finds the backdoor (and can use it surreptitiously) against the value you get out of being able to get in.
    Bolded part doesn't make sense to me. But what is the value you get out of being able to get in if the only people who are affected by your backdoor are people who trust you enough to use your encryption standards in the first place - that is, not your enemies?
    Question: How many people knew about this backdoor before Snowden leaked that it exists?

    Followup: What is this backdoor precisely?

    These go directly to the point about "weakening the systems for our allies".

    Plenty of people had their suspicions about Dual_EC_DRBG prior to the Snowden leaks. As far as anyone knows, no backdoors exist in standards anyone actually uses.

  • jmcdonald I voted, did you? DC(ish) Registered User regular
    I'm also curious as to what this "sabotaging" actually entails.

    http://www.youtube.com/watch?v=z5rRZdiu1UE

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    The way I see it, it's basically like expecting Tony Stark not to have a contingency plan in case some bad guy manages to steal his armor.

    Only instead of "bad guy steals his armor," it's more like "bad guys can have open access to his armor whenever they want."

    Also no one is actually certain that's what happened.

    Basically, everyone was convinced this is what the NSA did to DES in the 90s or whenever, when they were pushing for certain magic numbers for the algorithm, and arguing for a reduced key-size (56 bits).

    What people forget is that the computer hardware of the time was exponentially less powerful. There were really good reasons to want to keep the DES keys short, because DES is slow and hardware crypto-engines weren't common. The magic numbers turned out to be because the NSA had identified a differential cryptanalysis attack on DES, but didn't want to disclose that they'd done it, and the numbers they were pushing for initialized the algorithm such that differential cryptanalysis wouldn't work.

    All of which makes me enormously suspicious of the people who are thrown into hysterics every time you can fit "NSA" into the title of a news article, claiming that the NSA definitely tried to backdoor an algorithm. I'll believe it when someone shows me the memo or technical brief which says "the factors for Dual_EC_DRBG form the public part of a private key, giving us easy access to the random sequence".

    Because seriously, the citations on this point are incredibly weak and no primary evidence has been presented.

    As I understand the algorithm, the constants have to be the public part of a public-private keypair. The question is whether the public part was generated directly or from a known private key.
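    The "constants are the public half of a keypair" concern can be illustrated with a toy stand-in: a deterministic generator whose outputs hide the state under a public constant, where whoever derived that constant from a known private half can recover the state and predict the whole stream. This is a simplified RSA-style analogue with made-up numbers, not the actual elliptic-curve construction in Dual_EC_DRBG, and it is deliberately insecure:

    ```python
    # Toy trapdoor RNG (RSA-style analogue; NOT Dual_EC_DRBG, not secure).
    p, q = 1000003, 1000033            # small primes, illustration only
    n = p * q                          # the published "constant"
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))  # private half the designer may have kept

    def step(state: int) -> tuple[int, int]:
        """One generator step: emit the state hidden under the public
        constant, then advance the state."""
        output = pow(state, e, n)
        next_state = (state * state + 1) % n
        return output, next_state

    # An honest user draws two outputs.
    state = 123456789
    out1, state = step(state)
    out2, state = step(state)

    # The designer, holding d, unwraps out1 to recover the user's state...
    recovered = pow(out1, d, n)
    # ...advances it the same way, and predicts out2 without seeing any state.
    predicted, _ = step((recovered * recovered + 1) % n)
    assert predicted == out2
    ```

    In the real Dual_EC_DRBG question, the curve points P and Q play the role of the keypair: if Q was derived from P using a secret scalar, whoever holds that scalar can recover the generator's internal state from its output, which is exactly the "generated directly or from a known private key" distinction above.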

  • shryke Member of the Beast Registered User regular
    Feral wrote: »
    Suppose you legally get your information from an informant. If the warlord learns of this, then the informant will be murdered.

    That would be a good reason for parallel constructionism.

    That would be a good reason for parallel construction!

    You know what wouldn't be a good reason for parallel construction? Concealing from the court whether evidence was acquired in an appropriate manner in the first place.

    Why? That's like the perfect example of when you should REQUIRE parallel construction. As long as they can construct a fully legal chain of reasoning, what's the problem?

    Hell, there's like 5 dozen episodes of Law and Order about this shit. :p

  • Goumindong Registered User regular
    edited October 2013
    Xrdd wrote: »
    Bolded part doesn't make sense to me. But what is the value you get out of being able to get in if the only people who are affected by your backdoor are people who trust you enough to use your encryption standards in the first place - that is, not your enemies?
    Not sure if you've read the news, but we spy on allies too ;)
    Plenty of people had their suspicions about Dual_EC_DRBG prior to the Snowden leaks. As far as anyone knows, no backdoors exist in standards anyone actually uses.

    I am not sure that article says what you think it says. It basically says that people who use the same key over and over again are vulnerable if someone gets hold of that key.

    Essentially: all DRBGs are subject to being broken by solving their primary algorithm (or an instance of their primary algorithm). They're deterministic; that is what deterministic means.

    There is legitimately a threat of a backdoor here (if you use that RNG, you use its constants, and the NSA decided to generate the constants by the method they show and recorded it), but there is no more a threat of it being broken than of any RNG being broken (unless the algorithm itself is actually easier to break than the other DRBGs*)

    Essentially, finding the backdoor does not necessarily get you access; you have to actually be able to break in through the front door in order to get access to the backdoor so described.

    *which you might note they did not do.

    Edit: it's important to note that any set of constants used for the seed of a DRBG leaves you open to the same attack (which is why NIST suggests changing your seed).

  • Xrdd Registered User regular
    Goumindong wrote: »
    Xrdd wrote: »
    Bolded part doesn't make sense to me. But what is the value you get out of being able to get in if the only people who are affected by your backdoor are people who trust you enough to use your encryption standards in the first place - that is, not your enemies?
    Not sure if you've read the news, but we spy on allies too ;)
    Pretty sure US-friendly governments don't just blindly adopt US standards, either. So... such backdoors would allow you to spy on civilians who don't care enough to not use your shit?
    Plenty of people had their suspicions about Dual_EC_DRBG prior to the Snowden leaks. As far as anyone knows, no backdoors exist in standards anyone actually uses.

    I am not sure that article says what you think it says.
    No, it says exactly what I said it says: People realized that something was up with Dual_EC_DRBG before the leaks.
    It basically says that people who use the same key over and over again are vulnerable if someone gets hold of that key.

    Essentially: all DRBGs are subject to being broken by solving their primary algorithm (or an instance of their primary algorithm). They're deterministic; that is what deterministic means.

    There is legitimately a threat of a backdoor here (if you use that RNG, you use its constants, and the NSA decided to generate the constants by the method they show and recorded it), but there is no more a threat of it being broken than of any RNG being broken (unless the algorithm itself is actually easier to break than the other DRBGs*)

    Essentially, finding the backdoor does not necessarily get you access; you have to actually be able to break in through the front door in order to get access to the backdoor so described.

    *which you might note they did not do.

    As for the bolded, a standard with a backdoor is broken. As for the rest... you might want to re-read that article (or look at a better source than a fairly superficial blog post from 2007).

    Here, let me quote the relevant parts for you:
    This is how it works: There are a bunch of constants -- fixed numbers -- in the standard used to define the algorithm's elliptic curve. These constants are listed in Appendix A of the NIST publication, but nowhere is it explained where they came from.
    Huge red flag. There's a reason why this is a thing.
    What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
    That is not behaviour exhibited by a decent PRNG.
    The researchers don't know what the secret numbers are. But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.

    Of course, we have no way of knowing whether the NSA knows the secret numbers that break Dual_EC-DRBG. We have no way of knowing whether an NSA employee working on his own came up with the constants -- and has the secret numbers. We don't know if someone from NIST, or someone in the ANSI working group, has them. Maybe nobody does.

    We don't know where the constants came from in the first place. We only know that whoever came up with them could have the key to this backdoor. And we know there's no way for NIST -- or anyone else -- to prove otherwise.
    So, to sum up, there's a backdoor and the NSA (or someone else) may have the metaphorical key, therefore you may now be vulnerable to attacks by them or by anyone who manages to obtain it. By cryptographic standards, that's pretty damn broken.
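The skeleton-key structure Schneier describes can be modeled end-to-end on a toy curve. Everything below is made up for illustration (curve y² = x³ + 2x + 2 over GF(17), tiny constants, no output truncation); the real generator uses NIST P-256 and drops 16 bits per output, but the algebra is the same: whoever chose the published constants so that P = d·Q, and kept d, can predict every future output after seeing one.

```python
# Toy model of the alleged Dual_EC_DRBG trapdoor, on the tiny curve
# y^2 = x^3 + 2x + 2 over GF(17) (group order 19). All constants here
# are made up; the real generator uses NIST P-256 and truncates output.

p, a, b, n = 17, 2, 2, 19    # field prime, curve coefficients, group order

def add(A, B):
    """Standard affine point addition (None = point at infinity)."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, A):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, A)
        A = add(A, A)
        k >>= 1
    return R

Q = (5, 1)                   # published base point
d = 7                        # trapdoor scalar: known only to the designer
P = mul(d, Q)                # published constant; secretly P = d*Q

def step(s):
    """One generator round: emit x(s*Q), advance the state to x(s*P)."""
    return mul(s, Q)[0], mul(s, P)[0]

def lift(x):
    """Recover some curve point with this x-coordinate."""
    rhs = (x ** 3 + a * x + b) % p
    y = next(y for y in range(p) if y * y % p == rhs)
    return (x, y)

# An honest party clocks the generator twice:
s0 = 6
out1, s1 = step(s0)          # the attacker observes only out1
out2, s2 = step(s1)

# The attacker lifts out1 back onto the curve and applies the trapdoor:
A = lift(out1)               # A = +/-(s0*Q); the sign doesn't matter
s1_guess = mul(d, A)[0]      # x(d*s0*Q) = x(s0*P) = the next state
out2_guess = mul(s1_guess, Q)[0]

assert (s1_guess, out2_guess) == (s1, out2)
print("one observed output was enough to predict the next:", out2_guess)
```

The same recovery fails if Q is generated independently, so that nobody knows a d relating the two points, which is why "generate your own Q" keeps coming up in this thread.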


  • Goumindong Registered User regular
    edited October 2013
    I read both the linked article and the relevant papers the article linked (though frankly I don't understand how the second attack linked supposedly works, and I am slightly biased against such a result because of it). You don't need to quote shit back at me after you've linked it. You're reading the article incorrectly. Here is what the people who found the exploit said:
    Conclusion

    WHAT WE ARE NOT SAYING: NIST intentionally put a back door in this PRNG

    WHAT WE ARE SAYING:
    The prediction resistance of this PRNG (as presented in NIST SP800-90) is dependent on solving one instance of the elliptic curve discrete log problem. (And we do not know if the algorithm designer knew this before hand.)

    And the response is "yes, but so is every DRBG!" [That is not to say the work isn't valuable; it's just to say it says much less than you think it says.]

  • Xrdd Registered User regular
    Goumindong wrote: »
    I read both the linked article and the relevant papers the article linked (though frankly I don't understand how the second attack linked supposedly works, and I am slightly biased against such a result because of it). You don't need to quote shit back at me after you've linked it. You're reading the article incorrectly. Here is what the people who found the exploit said:
    Conclusion

    WHAT WE ARE NOT SAYING: NIST intentionally put a back door in this PRNG

    WHAT WE ARE SAYING:
    The prediction resistance of this PRNG (as presented in NIST SP800-90) is dependent on solving one instance of the elliptic curve discrete log problem. (And we do not know if the algorithm designer knew this before hand.)

    And the response is "yes, but so is every DRBG!" [That is not to say the work isn't valuable; it's just to say it says much less than you think it says.]

    No. This absolutely does not apply to every PRNG. It doesn't even apply to Dual_EC_DRBG if you're using a random Q. It does apply if you're using the constants NIST used to suggest (they now strongly recommend not using Dual_EC_DRBG). That's the whole reason people care about this! Maybe you should make an effort to actually understand the vulnerability in question before making statements like that. Here is a more detailed explanation.

  • Goumindong Registered User regular
    edited October 2013
    You do realize that that article said exactly what I just said, right?
    So what does this mean? In general, recovering the EC point shouldn't actually be a huge problem. In theory it could lead to a weakness -- say predicting future outputs -- but in a proper design you would still have to solve a discrete logarithm instance for each and every point in order to predict the next bytes output by the generator.

    And here is where things get interesting.

    [rest of paper = THE NSA COULD TOTALLY HAVE THE VALUES THAT LET THEM DO THIS!]

    Still does not indicate an issue.

    Edit: you also realize the NIST standard suggests using your own Q (and throwing away more bits, incidentally).

  • Rchanen Registered User regular
    The rest of this thread is going to be Xrdd and Goum arguing over encryption standards, isn't it?

    And I am cool with that.

  • Goumindong Registered User regular
    Maybe this will make sense, Xrdd.

    You're arguing that the NSA having backdoors to this RNG (or crypto in general) implies that other people can access those backdoors. However, the situation we are discussing is one in which, even if the NSA has a backdoor, in order for someone else to exploit it they would have to be capable of breaking the entire system. That is, accessing the back door is not as trivial as finding it (which itself is not trivial for non-math definitions of trivial).

    This implies that no, backdoors are not symmetric in the manner you're suggesting. And if they aren't, then the argument that we should not have backdoors because it makes our allies weaker goes out the window.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    The vulnerability in this case would be someone inside the NSA giving the backdoor to the attackers.

  • Polaritie Sleepy Registered User regular
    The way I see it, it's basically like expecting Tony Stark not to have a contingency plan in case some bad guy manages to steal his armor.

    Only instead of "bad guy steals his armor," it's more like "bad guys can have open access to his armor whenever they want."

    Also no one is actually certain that's what happened.

    Basically, everyone was convinced this is what the NSA did to DES in the 90s or whenever, when they were pushing for certain magic numbers for the algorithm, and arguing for a reduced key-size (56 bits).

    What people forget is that the computer hardware of the time was exponentially less powerful. There were really good reasons to want to keep the DES keys short, because DES is slow and hardware crypto-engines weren't common. The magic numbers turned out to be because the NSA had identified a differential cryptanalysis attack on DES, but didn't want to disclose that they'd done it, and the numbers they were pushing for initialized the algorithm such that differential cryptanalysis wouldn't work.

    All of which makes me enormously suspicious of the people who are thrown into hysterics every time you can fit "NSA" into the title of a news article, claiming that the NSA definitely tried to backdoor an algorithm. I'll believe it when someone shows me the memo or technical brief which says "the factors for Dual_EC_DRBG form the public part of a private key, giving us easy access to the random sequence".

    Because seriously, the citations on this point are incredibly weak and no primary evidence has been presented.

    The factors do form, essentially, a public key. That much has been mathematically proven. The question is whether the NSA has it (that is, whether they generated it from the private key, or the private key is just an unknown solution). The problem is that if they have it, it can be stolen. Additionally, the pressure placed on NIST to include the algorithm in the standard, while circumstantial, does suggest they have it. That the proof was only published recently barely mitigates this, since the NSA has historically been well ahead of everyone else in crypto (it is, after all, their job... though frankly I feel they should be reduced to tech support and have no power to actually task surveillance - roll them into the CIA, with its convenient legal prohibition from acting domestically)

    Steam: Polaritie
    3DS: 0473-8507-2652
    Switch: SW-5185-4991-5118
    PSN: AbEntropy
  • Goumindong Registered User regular
    Alternately, again from the Xrdd linked article
    The first three SP800-90 proposals used standard symmetric components like hash functions and block ciphers. Dual_EC_DRBG was the odd one out, since it employed mathematics more typically used to construct public-key cryptosystems. This had some immediate consequences for the generator: Dual-EC is slow in a way that its cousins aren't. Up to a thousand times slower.

    Now before you panic about this, the inefficiency of Dual_EC is not necessarily one of its flaws! Indeed, the inclusion of an algebraic generator actually makes a certain amount of sense. The academic literature includes a distinguished history of provably secure PRGs based on number-theoretic assumptions, and it certainly didn't hurt to consider one such construction for standardization. Most developers would probably use the faster symmetric alternatives, but perhaps a small number would prefer the added confidence of a provably-secure construction.

    Note that Dual_EC_DRBG is actually provably secure under two circumstances which the article lines up for you.

    I.e., you drop more than 16 bits and, if you're concerned about the NSA, you generate your own Q.

  • electricitylikesme Registered User regular
    Polaritie wrote: »
    The way I see it, it's basically like expecting Tony Stark not to have a contingency plan in case some bad guy manages to steal his armor.

    Only instead of "bad guy steals his armor," it's more like "bad guys can have open access to his armor whenever they want."

    Also no one is actually certain that's what happened.

    Basically, everyone was convinced this is what the NSA did to DES in the 90s or whenever, when they were pushing for certain magic numbers for the algorithm, and arguing for a reduced key-size (56 bits).

    What people forget is that the computer hardware of the time was exponentially less powerful. There were really good reasons to want to keep the DES keys short, because DES is slow and hardware crypto-engines weren't common. The magic numbers turned out to be because the NSA had identified a differential cryptanalysis attack on DES, but didn't want to disclose that they'd done it, and the numbers they were pushing for initialized the algorithm such that differential cryptanalysis wouldn't work.

    All of which makes me enormously suspicious of the people who are thrown into hysterics every time you can fit "NSA" into the title of a news article, claiming that the NSA definitely tried to backdoor an algorithm. I'll believe it when someone shows me the memo or technical brief which says "the factors for Dual_EC_DRBG form the public part of a private key, giving us easy access to the random sequence".

    Because seriously, the citations on this point are incredibly weak and no primary evidence has been presented.

    The factors do form, essentially, a public key. That much has been mathematically proven. The question is whether the NSA has it (that is, whether they generated it from the private key, or the private key is just an unknown solution). The problem is that if they have it, it can be stolen. Additionally, the pressure placed on NIST to include the algorithm in the standard, while circumstantial, does suggest they have it. That the proof was only published recently barely mitigates this, since the NSA has historically been well ahead of everyone else in crypto (it is, after all, their job... though frankly I feel they should be reduced to tech support and have no power to actually task surveillance - roll them into the CIA, with its convenient legal prohibition from acting domestically)

    This is the point of my little story about the history of DES. Because at the time it looked like exactly the same thing. Only it turned out it wasn't - the point of the magic numbers was to close up an attack vector for the US government, with the added benefit that anyone who thought this was "totally suspicious" would be likely to pick different numbers, which could then be cracked much more easily.

    That's win-win - your government is secure, anyone doing business with your government is secure, and your adversaries in their mistrust of you make themselves insecure.

    And it's my point about Dual_EC_DRBG: as Goum has been pointing out, there are good reasons to include it. There are possibly good reasons the NSA wanted it in the NIST standards, and it's entirely possible that the factors they've chosen have certain desirable properties which they don't want to disclose. In fact I would go so far as to argue that, were I the NSA, making it look like I've broken an algorithm to scare off people who would be suspicious of me, while very carefully making sure I hadn't, is a pretty good plan, because people who implement their own crypto usually screw it up.

  • [Tycho?] As elusive as doubt Registered User regular
    Xrdd wrote: »
    I'm not, I'm talking about why any such hypothetical weakness would be a really fucking stupid idea. Again, something that is costly but practical for a government to exploit today is going to be feasible to exploit for a lot of other entities within a few years. You are also making the false assumption that a potential attacker (China, Russia...) isn't going to be able to dedicate similar resources to the attack even now. There is no possible scenario where a backdoor would be practical for the US government but impossible to exploit for anyone else. Which is why backdoors are stupid.

    [xkcd #538, "Security": the $5 wrench comic]

    Jesus, and this is why I can't participate in these threads. Let's spend our time arguing about legal and technical minutiae which we clearly don't understand (as evidenced by this reply), while constantly obfuscating real discussion about the various concerns people may have with the NSA spying on [subject of the week].

  • electricitylikesme Registered User regular
    edited October 2013
    Xrdd wrote: »
    Xrdd wrote: »
    The way I see it, it's basically like expecting Tony Stark not to have a contingency plan in case some bad guy manages to steal his armor.

    Only instead of "bad guy steals his armor," it's more like "bad guys can have open access to his armor whenever they want."

    Tony Stark owns his armor and can control who has access to it. The US government does not own and did not create most crypto algorithms and is in no position to control who has access to them. Putting a huge hole in his armor just in case someone steals it also seems out of character for Stark.

    EDIT: Did you even read my earlier response to you or are we doing this thing again where you repeat the same point over and over again and ignore peoples' responses?

    The US government does own the NIST standards. It's a US government agency.

    I mean you do realize it's a US government agency, and the Advanced Encryption Standard is a US government specification (which is used by everyone because if it's good enough for top secret, it's probably good enough for whatever you're doing).

    You do not seem to understand how encryption standards work.

    Missed this post earlier. I do understand how they work, but unlike you I also understand where the algorithms that are being standardised actually come from in the first place. Look at my post again, I said the US government does not own the algorithms, not the standards. Yes, AES is very, very obviously a US government standard. Rijndael is something some guys from Belgium came up with and very, very obviously not owned by the US government.

    No, you clearly don't. Rijndael isn't AES. AES is AES, which is based (1:1) on Rijndael. The US government can't backdoor a standard they don't release, because the Belgians can simply say "well, we don't like those changes; this is the one we think you should use". Do you trust Belgium?

    Hell, this is why Twofish came into existence - it was a dissenting submission for the AES standard which its authors felt was better than the ones being offered (its predecessor Blowfish dates from before the AES process). Many people implemented them as a means by which people could choose a non-NIST standard with a strong advocate (but, you know, at a cost: Blowfish has known attacks and is considered outdated, which is why no one bothers with it).

    So again, arguing about who owns the algorithm makes as much sense as arguing about the dimensions of a square circle.

    In fact the US by and large can't even stop people from writing their own crypto algorithms, and there's a good argument they should never bother trying, because people usually fail at just implementing known ones in hilariously insecure ways.

    EDIT: Annnnd this is also exactly what I was ranting about previously. This is not an intelligent discussion about surveillance; it does not deal with reality in a manner which, you know, acknowledges it. Instead it's an argument about minutiae which most people don't understand but which sounds dramatic and scary, because the mundane side of the story - that the NSA would have a big database of common implementation flaws and exploits (similar to those freely available from any number of hacker/security-researcher paper sites) - isn't exciting or scary; it's exactly what you'd expect a signals-intercept agency to have.

  • Schrodinger Registered User regular
    [Tycho?] wrote: »
    Xrdd wrote: »
    I'm not, I'm talking about why any such hypothetical weakness would be a really fucking stupid idea. Again, something that is costly but practical for a government to exploit today is going to be feasible to exploit for a lot of other entities within a few years. You are also making the false assumption that a potential attacker (China, Russia...) isn't going to be able to dedicate similar resources to the attack even now. There is no possible scenario where a backdoor would be practical for the US government but impossible to exploit for anyone else. Which is why backdoors are stupid.

    [xkcd #538, "Security": the $5 wrench comic]

    Jesus, and this is why I can't participate in these threads. Let's spend our time arguing about legal and technical minutiae which we clearly don't understand (as evidenced by this reply), while constantly obfuscating real discussion about the various concerns people may have with the NSA spying on [subject of the week].

    Xrdd wants me to care about this awful backdoor that any foreign hacker could supposedly exploit, but he can't actually tell me the nature of this backdoor, how much time and computing power it would take to exploit it, or how much inside knowledge they would require before they even knew where to look for a potential backdoor.

    Instead, all he's really said is that mathematically, it might be possible. Which is technically true, but pretty much useless for the sake of the discussion.

    The XKCD comic is amusing because it goes over this delusion that encryption needs to be absolutely unbreakable, even to people who are willing to invest in multi-million dollar clusters. Which, let's face it, the NSA is not going to bother with just for the sake of hacking your twitter account or whatever.

    It's also relevant in response to the comments of, "I'm sure the Chinese can find and exploit this backdoor, since big companies get hacked all the time!" When big companies get hacked, it's usually some sort of failure at the human level. Hence, $5 wrench.

  • Kaputa Registered User regular
    edited October 2013
    One advantage to instituting surveillance programs and legislation in secret is that it changes the framing of the debate entirely. If the administration or some group of Senators had openly proposed, say, PRISM and phone record collection, proponents would have had to persuade others of the necessity and efficacy of such programs. When passed in secret, these laws and programs are debated in terms of whether or not they should be repealed/abolished, rather than whether or not they should be instituted. Opponents of the law are left attempting to prove beyond a shadow of a doubt that it is harmful, and defenders don't even have to defend the law on its merits.

    If a surveillance apparatus of this scale is to be tolerated, proponents need to show that it is necessary or at least desirable, and that it is effective in its goals. I read most of the last thread and don't really feel that anyone did so, though it was pretty long so I may be mistaken.

    Kaputa on
  • Goumindong Registered User regular
    1) PRISM et al. was not passed in secret. While you can argue that the original PATRIOT Act was passed in bad faith, it has been repeatedly reauthorized, which renders that issue moot.

    2) The power of the government to search with a warrant is not under question. This is literally one of the reasons why we have courts. This isn't even a figurative literally. We fucking wrote it down; this is why we have courts.

  • Kaputa Registered User regular
    edited October 2013
    Goumindong wrote: »
    1) PRISM et all was not passed in secret. While you can argue that the original patriot act was passed in bad faith it has been repeatedly reauthorized which renders that issue moot.

    2) the power of the government to search with a warrant is not under question. This is literally one of the reasons why we have courts. This isn't even a figuratively literally. We fucking wrote it down, this is why we have courts.
    The degree of power and funding we allow the government's searching apparatus, and the legal framework governing it, are under question. The current surveillance architecture entails a large amount of both. Do you feel that the current degree of funding and capability is necessary or desirable? Would you be opposed to repealing some of the post-2001 laws, such as the PATRIOT Act, which lay the foundation for the modern surveillance policies?
