
The Growing [Surveillance State]


  • Xrdd Registered User regular
    This might be a valid point if Xrdd's position was generally uncontroversial within the field. However, we seem to have a lot of people in this thread who are familiar with the math who insist, "No, dude, that's not what the math implies." So how do I know who to believe?

    It should be noted that this isn't a cryptography thread. This isn't a math thread. It's a thread on general surveillance policy.

    It is. There is one person here who disagrees, and I still very much doubt he fully understands the problem (see the completely unfounded claims that it also applies to other PRNGs, or the bad analogies involving conventional public-key crypto or OTPs). I'm not aware of a single expert who disagrees that Dual_EC_DRBG is broken.

  • Goumindong Registered User regular
    edited October 2013
    Xrdd wrote: »
    No, finding or having the backdoor does get you access. What it was saying is, essentially: if you broke down the front door to a single house, you would be able to find the backdoor to every house.

    Is there an actual source to back this up?

    The presentation by Shumow and Ferguson (the original source) or the articles by Schneier or Green, all of which were linked in this thread. Breaking down the front door in this case amounts to solving an elliptic curve discrete logarithm problem (or someone leaking the solution...), which gets you the e for which e*Q = P. This allows you to determine the internal state of any Dual_EC_DRBG instance using this P and Q from just 32 bytes of output, which allows you to predict all future outputs, completely and thoroughly breaking the PRNG.

    So the short answer is "no." This problem is effectively unsolvable if someone doesn't know d (or e) [I mean, besides the standard attacks, which can break roughly anything given enough time].

    Here is exactly what they said

    a) If an attacker knows d such that d*P = Q, then they can easily compute e such that e*Q = P (invert d mod the group order)

    b) If an attacker knows e, then they can determine a small number of possibilities for the internal state of the Dual_EC PRNG and predict future outputs

    c) We do not know how the point Q was chosen, so we don't know if the algorithm designer knows d or e

    It is important to realize that they did not just break the system and find e or d for the given constants [and no one has in the six years since]. In their experimental verification, they generated Q from a random d.

    The single-instance solvability described in the conclusion refers to the attack if you know e or d, NOT to finding e or d.
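    To make step (a) concrete, here is a toy sketch on a tiny textbook curve (y^2 = x^3 + 2x + 2 over F_17, with P = (5, 1) generating a subgroup of prime order 19; these are illustrative parameters only, nothing to do with the actual NIST constants): if you know the d with d*P = Q, the backdoor value e is just the modular inverse of d.

```python
# Toy check of step (a) on a tiny textbook curve: y^2 = x^3 + 2x + 2 over
# F_17, with P = (5, 1) generating a subgroup of prime order n = 19.
# (Illustrative parameters only -- NOT the NIST P-256 constants.)
p, a, n = 17, 2, 19
P = (5, 1)

def inv_mod(x, m):
    return pow(x, -1, m)

def ec_add(A, B):
    """Point addition; None represents the point at infinity."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * inv_mod(2 * y1, p) if A == B
           else (y2 - y1) * inv_mod(x2 - x1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

d = 7                      # the (hypothetical) designer's secret
Q = ec_mul(d, P)           # the published "random-looking" point
e = inv_mod(d, n)          # step (a): invert d mod the group order...
assert ec_mul(e, Q) == P   # ...and e*Q = P, the backdoor relation
```

    The point is that going from d to e is trivial; the hard part (absent a leak) is finding either one in the first place.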

    Edit: It's important to realize what this entails and what you're trying to do to break the system. SP800-90A (pg. 77) gives us a bit of insight into the generation process:
    The security of Dual_EC_DRBG requires that the points P and Q be properly generated. To avoid using potentially weak points, the points specified in Appendix A.1 should be used. However, an implementation may use different pairs of points, provided that they are verifiably random, as evidenced by the use of the procedure specified in Appendix A.2.1 below, and the self-test procedure in Appendix A.2.2. An implementation that uses alternative points generated by this approved method shall have them “hard-wired” into its source code, or hardware, as appropriate, and loaded into the working_state at instantiation.

    To conform to this Recommendation, alternatively generated points shall use the procedure given in Appendix A.2.1, and verify their generation using Appendix A.2.2.

    A.2.1 Generating Alternative P, Q

    The curve shall be one of the NIST curves from [FIPS 186] that is specified in Appendix A.1 of this Recommendation, and shall be appropriate for the desired security_strength, as specified in Table 4 in Section 10.3.1.

    The points P and Q shall be valid base points for the selected elliptic curve that are generated to be verifiably random using the procedure specified in [X9.62]. The following input is required for each point:

    An elliptic curve E = (Fp, a, b), cofactor h, prime n, a bit string domain_parameter_seed, and hash function Hash(). The definition of these parameters is provided in Appendix A.1 of this Recommendation. The domain_parameter_seed shall be different for each point, and the minimum length m of each domain_parameter_seed shall conform to Section 10.3.1, Table 4, under “Seed length”. The bit length of the domain_parameter_seed may be larger than m. The hash function for generating P and Q shall be SHA-512 in all cases.

    The domain_parameter_seed shall be different for each point P and Q. A domain parameter seed shall not be the seed used to instantiate a DRBG. The domain parameter seed is an arbitrary value that may, for example, be determined from the output of a DRBG.

    If the output from the generation procedure in [X9.62] is “failure”, a different domain_parameter_seed shall be used for the point being generated.
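    For what it's worth, the general shape of a "verifiably random point" derivation is roughly "hash a public seed until you land on a curve point." Here is a hedged toy sketch of that idea; this is NOT the actual X9.62 routine, just the general pattern such procedures follow, on a toy curve y^2 = x^3 + 2x + 2 over F_17:

```python
# Hedged sketch of deriving a curve point verifiably from a public seed:
# hash the seed (plus a counter) to a candidate x-coordinate and retry
# until it lands on the curve. NOT the paywalled X9.62 procedure -- only
# an illustration of the general shape, on a toy curve over F_17.
import hashlib

p, a, b = 17, 2, 2

def point_from_seed(seed: bytes):
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        x = int.from_bytes(digest, "big") % p
        rhs = (x**3 + a * x + b) % p
        for y in range(p):           # brute-force square root; fine for F_17
            if y * y % p == rhs:
                return (x, y), counter
        counter += 1                 # no point with this x; hash again

Q, tries = point_from_seed(b"public, auditable seed")
# Anyone can re-run the derivation from the seed and check Q, which makes it
# much harder to have chosen Q with a known trapdoor relation to some P.
assert (Q[1] ** 2 - (Q[0] ** 3 + a * Q[0] + b)) % p == 0
```

    The auditability comes from the seed being public: if Q falls out of a hash, whoever published it plausibly never knew a d with d*P = Q.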

    Unfortunately, American National Standard X9.62-2005 costs $100, so I can't/won't verify the Q and P generation process. (Though they do not appear to be "magic numbers," despite the claims made by Bruce Schneier.)
    Kaputa wrote: »
    Goumindong wrote: »
    Kaputa wrote: »
    shryke wrote: »
    Kaputa wrote: »
    shryke wrote: »
    Kaputa wrote: »
    Kaputa wrote: »
    One advantage to instituting surveillance programs and legislation in secret is that it changes the framing of the debate entirely. If the administration or some group of Senators had openly proposed, say, PRISM and phone record collection, proponents would have had to persuade others of the necessity and efficacy of such programs. When passed in secret, these laws and programs are debated in terms of whether or not they should be repealed/abolished, rather than whether or not they should be instituted. Opponents of the law are left attempting to prove beyond a shadow of a doubt that it is harmful, and defenders don't even have to defend the law on its merits.

    If a surveillance apparatus of this scale is to be tolerated, proponents need to show that it is necessary or at least desirable, and that it is effective in its goals. I read most of the last thread and don't really feel that anyone did so, though it was pretty long so I may be mistaken.

    And this is the sort of logic that resulted in a $24 billion government shutdown earlier this month.
    Not really? The ACA is and has been a far more transparent phenomenon than the growth of NSA surveillance, and proponents of health care reform have clearly stated their case for necessity and desirability of government action.

    edit: Honestly, I'm not really seeing how the two relate
    Goumindong wrote: »
    The short answer is "well we had that discussion and you lost"
    Who is the "we" here? The Bush administration led that "discussion" after 9/11 and the ensuing propaganda whipped the nation into a frenzy. "We" (mostly left-wing Democrats at the time) lost, but in our opinion the merits of the PATRIOT Act were never adequately espoused. We lost on Afghanistan and Iraq too, and for similar reasons; unfortunately those decisions cannot be reversed.

    And you can do that. Legislatively. The same way the argument was won the first time.
    So are we in agreement then? I'm advocating a repeal of post-9/11 legislation enabling the expansion of US government surveillance.

    OK, well is there any particular reason why you're advocating for that?

    Because the more I keep reading about the issue, the more I become convinced that the security apparatus that was in place before the PATRIOT Act was insufficient to deal with the expanding space in which people operate.

    Simply put, I am not comfortable creating a space in which the United States has no police authority. Prior to the provisions in the PATRIOT Act which expanded wiretapping laws and the FISA courts to cover types of electronic surveillance other than landline telephones, we were most definitely in such a space for sophisticated enough users. Without that updating, we would almost assuredly be in a space today where the vast majority of the population had the sophistication to carry out whatever illegal activities they wanted and leave the evidence in an effectively unsearchable space.

    This is especially problematic when we are talking about the national security apparatus, because we expect the people we have to work against to be more sophisticated in general. I don't find that the additional metadata searches (roughly equivalent to noting who comes and goes from a particular location and making a list of known acquaintances) are either particularly invasive or threatening with regard to some theoretical Orwellian state.
    In theory, I am comfortable creating a space in which the US has no police authority. In order for me to be uncomfortable with this, I have to be convinced that that space is being used or is very likely to be used to endanger my person or other people, and that giving police authority over that space will be a significant impediment to such endangerment. The rationale for police patrolling neighborhoods is not that a neighborhood without police is bad in and of itself, but that, without police, worse evils like murder and theft will become rampant.

    The collection of phone metadata works fine as an example case. If the government did not have the technical or legal ability to acquire such data from cell phone corporations, would my life or the lives of others be in danger to an appreciably greater degree? Has this capability been demonstrated to significantly make us safer and to be worth the expense and possibilities for abuse?

    I disagree with the notion that government access to data on people's locations/movement and who they associate with is not invasive or potentially threatening. You don't think access to such information would make it easier to crack down on the activities of, say, radical labor unions, environmentalist groups, or anti-war movements? How about Muslims in general? The NYPD has already demonstrated its willingness to hold Muslim-Americans to a highly invasive degree of surveillance, and IIRC was coordinating such efforts with the federal government. Most of this applies fairly well to PRISM as well.

    When we're discussing these programs in the context of a country which imprisons a higher percentage of its population than any other nation for which data is available, and whose justice system is notoriously biased against racial minorities and the poor, a higher degree of skepticism is warranted when the state attempts to expand its police abilities. I'm not opposed to the expansion of surveillance on the grounds that our government may one day become oppressive, but because it historically has been and currently is oppressive, with minorities disproportionately bearing the brunt of that oppression.

    1) Location metadata is explicitly not part of these orders. You've been informed on this point before; please stop repeating the falsehood.

    2) The NYPD did that without access to any of the technologies we are discussing, and did not coordinate with the federal government (if I remember the article correctly, the Feds were always like "whoa, you can't do that").

    3) It's actually a lot easier to crack down on radical labor unions, environmental groups, and anti-war movements in the traditional manner. But I disagree that domestic political activity should be exempt from investigation. As already mentioned, it is very difficult to determine whether a group is a Greenpeace or an ELF without investigating them. Generally these things have public meetings, and, well, you can just send an officer to them.

    4) Our racial discrimination and systemic problems with regard to policing have nothing to do with pen register metadata searches.

    edit: added links so that it's easy to find what I am referencing

  • Quid Definitely not a banana Registered User regular
    edited October 2013
    spool32 wrote: »
    engaging in a massive domestic spying program and lying about it to Congress and the American people...

    Repeating this over and over still hasn't made it true.

    Edit:
    I do think the people defending NSA are overly curt and dismissive in these threads though, and that bothers me.

    And, enlightened, while I wholly understand that you might feel this way, that is not a helpful statement.

  • Xrdd Registered User regular
    Goumindong wrote: »
    Xrdd wrote: »
    No, finding or having the backdoor does get you access. What it was saying is, essentially: if you broke down the front door to a single house, you would be able to find the backdoor to every house.

    Is there an actual source to back this up?

    The presentation by Shumow and Ferguson (the original source) or the articles by Schneier or Green, all of which were linked in this thread. Breaking down the front door in this case amounts to solving an elliptic curve discrete logarithm problem (or someone leaking the solution...), which gets you the e for which e*Q = P. This allows you to determine the internal state of any Dual_EC_DRBG instance using this P and Q from just 32 bytes of output, which allows you to predict all future outputs, completely and thoroughly breaking the PRNG.

    So the short answer is "no." This problem is effectively unsolvable if someone doesn't know d (or e) [I mean, besides the standard attacks, which can break roughly anything given enough time].

    Here is exactly what they said

    a) If an attacker knows d such that d*P = Q, then they can easily compute e such that e*Q = P (invert d mod the group order)

    b) If an attacker knows e, then they can determine a small number of possibilities for the internal state of the Dual_EC PRNG and predict future outputs

    c) We do not know how the point Q was chosen, so we don't know if the algorithm designer knows d or e

    It is important to realize that they did not just break the system and find e or d for the given constants [and no one has in the six years since]. In their experimental verification, they generated Q from a random d.

    The single-instance solvability described in the conclusion refers to the attack if you know e or d, NOT to finding e or d.

    I am aware of all this. It says so in the post you are responding to. The whole problem is that, with a constant P and Q, an attacker only needs to find that one e (solving one instance of ECDLP or getting someone to leak it) and everyone using those constants is compromised. This is fundamentally different from a properly designed system, where you would at the very least need to solve ECDLP or a similarly hard problem for every single instance of the PRNG you want to attack.

    No one was ever suggesting that Shumow and Ferguson found a new, efficient way of solving discrete log problems on elliptic curves. That wouldn't be a backdoor (although it would pose huge problems for a number of cryptosystems). The problem here is that you apparently don't understand what criteria cryptographic algorithms need to fulfill to be deemed secure. Your posts on this are akin to someone saying "well, winters are still cold here so clearly global climate change is a lie".

  • Schrodinger Registered User regular
    Polaritie wrote: »
    Fruit of the poisonous tree basically means that warrantless wiretaps, and all evidence obtained directly as a result of them (that wouldn't have been found but for them), are inadmissible in court.

    Fruit of the poisonous tree applies to the chain of evidence. It is not the legal equivalent of a memory wipe for police officers.
    When the NSA dials the DEA and says "so and so will have drugs in their truck at 10:00 PM tomorrow on highway such and such", and the DEA performs a "random" stop and "just happens" to discover drugs, that evidence would normally be suppressed under the doctrine, because the stop would not have happened but for illegally obtained evidence.

    Yes, if the stop was actually random and for no reason whatsoever, and if the cops searched the car without cause and without a warrant, then the case would get thrown out. Because there's no chain of evidence.

    Here's a parallel constructionism scenario: Cops know that there are a hundred different pimps in the area. But they don't bother to bust these pimps, because they know that someone else will take their place, so it's better to maintain the status quo so that they at least know who all the pimps are.

    Then one day, they get an anonymous tip to investigate a specific pimp. So they go ahead and make a lawful arrest. Afterwards, they get a warrant and discover that the pimp has a child sex slavery operation going on.

    If the case goes to trial, the cops will say, "We arrested him for pimping, and then discovered his child sex slavery ring." And the evidence will show that, clearly, the pimp was pimping in broad daylight, which was grounds for arrest.

    But according to you, that entire case needs to be thrown out because the only reason they arrested this specific pimp for pimping is because of an anonymous tip, and we don't know if the anonymous tipper broke the fourth amendment, so we should let the child sex slavery operation resume just to be on the safe side.
    Parallel construction is performed for the sole purpose of lying to the court and the defense about where evidence came from.

    If an innocent man is arrested for pimping and the cops use false accusations of pimping to search his house, and then the cops plant heroin in that house, then that would be lying about where the evidence came from.

    If an actual pimp is arrested for pimping, and the cops use a legit accusation to search his house, and the cops find heroin that was actually owned by the pimp, then the cops are telling the truth about where the evidence came from.
    There is no conceivable reason to construct a fictitious account

    You seem to assume that parallel constructionism requires perjury. This is not the case.

  • Polaritie Sleepy Registered User regular
    Polaritie wrote: »
    Fruit of the poisonous tree basically means that warrantless wiretaps, and all evidence obtained directly as a result of them (that wouldn't have been found but for them), are inadmissible in court.

    Fruit of the poisonous tree applies to the chain of evidence. It is not the legal equivalent of a memory wipe for police officers.
    When the NSA dials the DEA and says "so and so will have drugs in their truck at 10:00 PM tomorrow on highway such and such", and the DEA performs a "random" stop and "just happens" to discover drugs, that evidence would normally be suppressed under the doctrine, because the stop would not have happened but for illegally obtained evidence.

    Yes, if the stop was actually random and for no reason whatsoever, and if the cops searched the car without cause and without a warrant, then the case would get thrown out. Because there's no chain of evidence.

    Here's a parallel constructionism scenario: Cops know that there are a hundred different pimps in the area. But they don't bother to bust these pimps, because they know that someone else will take their place, so it's better to maintain the status quo so that they at least know who all the pimps are.

    Then one day, they get an anonymous tip to investigate a specific pimp. So they go ahead and make a lawful arrest. Afterwards, they get a warrant and discover that the pimp has a child sex slavery operation going on.

    If the case goes to trial, the cops will say, "We arrested him for pimping, and then discovered his child sex slavery ring." And the evidence will show that, clearly, the pimp was pimping in broad daylight, which was grounds for arrest.

    But according to you, that entire case needs to be thrown out because the only reason they arrested this specific pimp for pimping is because of an anonymous tip, and we don't know if the anonymous tipper broke the fourth amendment, so we should let the child sex slavery operation resume just to be on the safe side.
    Parallel construction is performed for the sole purpose of lying to the court and the defense about where evidence came from.

    If an innocent man is arrested for pimping and the cops use false accusations of pimping to search his house, and then the cops plant heroin in that house, then that would be lying about where the evidence came from.

    If an actual pimp is arrested for pimping, and the cops use a legit accusation to search his house, and the cops find heroin that was actually owned by the pimp, then the cops are telling the truth about where the evidence came from.
    There is no conceivable reason to construct a fictitious account

    You seem to assume that parallel constructionism requires perjury. This is not the case.

    You seem adept at strawman arguments.

  • Goumindong Registered User regular
    edited October 2013
    Xrdd wrote: »

    I am aware of all this. It says so in the post you are responding to. The whole problem is that, with a constant P and Q, an attacker only needs to find that one e (solving one instance of ECDLP or getting someone to leak it) and everyone using those constants is compromised. This is fundamentally different from a properly designed system, where you would at the very least need to solve ECDLP or a similarly hard problem for every single instance of the PRNG you want to attack.

    No one was ever suggesting that Shumow and Ferguson found a new, efficient way of solving discrete log problems on elliptic curves. That wouldn't be a backdoor (although it would pose huge problems for a number of cryptosystems). The problem here is that you apparently don't understand what criteria cryptographic algorithms need to fulfill to be deemed secure. Your posts on this are akin to someone saying "well, winters are still cold here so clearly global climate change is a lie".

    So why don't they have e?

    Eh, screw it, I'll just quote SP800-90A at you again:
    The security of Dual_EC_DRBG requires that the points P and Q be properly generated. To avoid using potentially weak points, the points specified in Appendix A.1 should be used. However, an implementation may use different pairs of points, provided that they are verifiably random, as evidenced by the use of the procedure specified in Appendix A.2.1 below, and the self-test procedure in Appendix A.2.2. An implementation that uses alternative points generated by this approved method shall have them “hard-wired” into its source code, or hardware, as appropriate, and loaded into the working_state at instantiation.


  • Schrodinger Registered User regular
    edited October 2013
    Polaritie wrote: »
    Polaritie wrote: »
    Fruit of the poisonous tree basically means that warrantless wiretaps, and all evidence obtained directly as a result of them (that wouldn't have been found but for them), are inadmissible in court.

    Fruit of the poisonous tree applies to the chain of evidence. It is not the legal equivalent of a memory wipe for police officers.
    When the NSA dials the DEA and says "so and so will have drugs in their truck at 10:00 PM tomorrow on highway such and such", and the DEA performs a "random" stop and "just happens" to discover drugs, that evidence would normally be suppressed under the doctrine, because the stop would not have happened but for illegally obtained evidence.

    Yes, if the stop was actually random and for no reason whatsoever, and if the cops searched the car without cause and without a warrant, then the case would get thrown out. Because there's no chain of evidence.

    Here's a parallel constructionism scenario: Cops know that there are a hundred different pimps in the area. But they don't bother to bust these pimps, because they know that someone else will take their place, so it's better to maintain the status quo so that they at least know who all the pimps are.

    Then one day, they get an anonymous tip to investigate a specific pimp. So they go ahead and make a lawful arrest. Afterwards, they get a warrant and discover that the pimp has a child sex slavery operation going on.

    If the case goes to trial, the cops will say, "We arrested him for pimping, and then discovered his child sex slavery ring." And the evidence will show that, clearly, the pimp was pimping in broad daylight, which was grounds for arrest.

    But according to you, that entire case needs to be thrown out because the only reason they arrested this specific pimp for pimping is because of an anonymous tip, and we don't know if the anonymous tipper broke the fourth amendment, so we should let the child sex slavery operation resume just to be on the safe side.
    Parallel construction is performed for the sole purpose of lying to the court and the defense about where evidence came from.

    If an innocent man is arrested for pimping and the cops use false accusations of pimping to search his house, and then the cops plant heroin in that house, then that would be lying about where the evidence came from.

    If an actual pimp is arrested for pimping, and the cops use a legit accusation to search his house, and the cops find heroin that was actually owned by the pimp, then the cops are telling the truth about where the evidence came from.
    There is no conceivable reason to construct a fictitious account

    You seem to assume that parallel constructionism requires perjury. This is not the case.

    You seem adept at strawman arguments.

    You claiming that parallel constructionism means pulling someone over and searching his car for no reason or submitting fictitious evidence is a strawman.

    Me pointing out that this isn't what parallel constructionism actually means is not a strawman at all. That's a correction.

  • Xrdd Registered User regular
    Goumindong wrote: »
    Xrdd wrote: »

    I am aware of all this. It says so in the post you are responding to. The whole problem is that, with a constant P and Q, an attacker only needs to find that one e (solving one instance of ECDLP or getting someone to leak it) and everyone using those constants is compromised. This is fundamentally different from a properly designed system, where you would at the very least need to solve ECDLP or a similarly hard problem for every single instance of the PRNG you want to attack.

    No one was ever suggesting that Shumow and Ferguson found a new, efficient way of solving discrete log problems on elliptic curves. That wouldn't be a backdoor (although it would pose huge problems for a number of cryptosystems). The problem here is that you apparently don't understand what criteria cryptographic algorithms need to fulfill to be deemed secure. Your posts on this are akin to someone saying "well, winters are still cold here so clearly global climate change is a lie".

    So why don't they have e?

    Eh, screw it, I'll just quote SP800-90A at you again:
    The security of Dual_EC_DRBG requires that the points P and Q be properly generated. To avoid using potentially weak points, the points specified in Appendix A.1 should be used. However, an implementation may use different pairs of points, provided that they are verifiably random, as evidenced by the use of the procedure specified in Appendix A.2.1 below, and the self-test procedure in Appendix A.2.2. An implementation that uses alternative points generated by this approved method shall have them “hard-wired” into its source code, or hardware, as appropriate, and loaded into the working_state at instantiation.

    The NSA presumably does have e. For all you know, someone else might have it as well.

    And the part of the standard you quoted is from Appendix A.2. Let me just quote Appendix A.1 at you:
    The Dual_EC_DRBG requires the specifications of an elliptic curve and two points on the elliptic curve. One of the following NIST approved curves with associated points shall be used in applications requiring certification under [FIPS 140]. More details about these curves may be found in [FIPS 186]. If alternative points are desired, they shall be generated as specified in Appendix A.2.
    This is followed by 3 sets of curve parameters, including fixed Qs. That's the main problem, as I have repeatedly stated: the standard does not require you to generate your own points. To meet specific certification requirements, you have to use the backdoored version of the algorithm. Therefore, the standard is broken. If it actually required implementations to use randomly generated points, it would still be a shitty, flawed PRNG, but it would not have a backdoor. I have said as much several times already in this thread. I'm seriously beginning to wonder if you lack basic reading comprehension.

  • electricitylikesme Registered User regular
    edited October 2013
    Xrdd wrote: »
    This is followed by 3 sets of curve parameters, including fixed Qs. That's the main problem, as I have repeatedly stated: the standard does not require you to generate your own points. To meet specific certification requirements, you have to use the backdoored version of the algorithm. Therefore, the standard is broken. If it actually required implementations to use randomly generated points, it would still be a shitty, flawed PRNG, but it would not have a backdoor. I have said as much several times already in this thread. I'm seriously beginning to wonder if you lack basic reading comprehension.

    So again, let's revisit this: the standard doesn't require you to generate your own points. But you can also comply with it by generating your own points, and the standard suggests how this should be done. Doing it in this way would give you random P and Q parameters.

    You might want to avoid questioning the reading comprehension of others.

    EDIT: Or, you know, lay off this point, because by any metric the evidence that anything nefarious has happened here is incredibly weak, except by the standards of cryptonerds.

  • Goumindong Registered User regular
    1) No. "Should" is not "shall". There is no requirement to use the specific points. The shall in your quotes is referring to the curves, not the points.

    To put it to you again
    The security of Dual_EC_DRBG requires that the points P and Q be properly generated. To avoid using potentially weak points, the points specified in Appendix A.1 should be used. However, an implementation may use different pairs of points, provided that they are verifiably random, as evidenced by the use of the procedure specified in Appendix A.2.1 below, and the self-test procedure in Appendix A.2.2. An implementation that uses alternative points generated by this approved method shall have them “hard-wired” into its source code, or hardware, as appropriate, and loaded into the working_state at instantiation.

    2) No. It would not be a shitty PRNG if you used a randomly generated set. To quote your sources again
    Just like on the graph at right, an elliptic curve point is a pair (x, y) that satisfies an elliptic curve equation. In general, both x and y are elements of a finite field, which for our purposes means they're just large integers.

    The main operation of the PRNG is to apply mathematical operations to points on the elliptic curve, in order to generate new points that are pseudorandom -- i.e., are indistinguishable from random points in some subgroup.

    And the good news is that Dual-EC seems to do this first part beautifully! In fact Brown and Gjøsteen even proved that this part of the generator is sound provided that the Decisional Diffie-Hellman problem is hard in the specific elliptic curve subgroup. This is a well studied hardness assumption so we can probably feel pretty confident in this proof.

    There are two flaws in the NIST SP 800-90A implementation: (a) too many bits are output per step, and (b) a potential backdoor, if the NSA decided to plant one and you decide not to use your own Q.

    I am totally OK with the backdoor and even then see this PRNG as the only provably secure PRNG you can use, provided you drop a few more bits. (The low number of dropped bits was chosen to increase efficiency, which is what everyone besides Green is complaining about.)
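    To make the P/Q relationship concrete, here's a toy Python sketch of the Dual_EC_DRBG state update over a deliberately tiny curve. Every parameter here is invented for illustration (real deployments use the NIST curves like P-256, and only a truncated x-coordinate is output), but the algebra of the backdoor is the same:

    ```python
    # Toy sketch of the Dual_EC_DRBG state update and its backdoor, over a
    # deliberately tiny curve (y^2 = x^3 + 2x + 3 mod 97). All parameters
    # here are invented for illustration; real implementations use the
    # NIST curves, and outputs are truncated x-coordinates, so a real
    # attacker must also guess the dropped bits.

    P_MOD = 97  # field prime (toy size)
    A = 2       # curve coefficient a in y^2 = x^3 + a*x + b

    def ec_add(p1, p2):
        # Affine point addition; None represents the point at infinity.
        if p1 is None:
            return p2
        if p2 is None:
            return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return None
        if p1 == p2:
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
        x3 = (lam * lam - x1 - x2) % P_MOD
        return (x3, (lam * (x1 - x3) - y1) % P_MOD)

    def ec_mul(k, pt):
        # Double-and-add scalar multiplication.
        acc = None
        while k:
            if k & 1:
                acc = ec_add(acc, pt)
            pt = ec_add(pt, pt)
            k >>= 1
        return acc

    # The designer picks Q and a secret e, then publishes P = e*Q.
    Q = (3, 6)
    e = 7                 # the backdoor scalar: only the designer knows it
    P = ec_mul(e, Q)      # P and Q are the published "fixed points"

    def dual_ec_step(s):
        # One generator step: new state is x(s*P), output comes from s*Q.
        s_next = ec_mul(s, P)[0]
        out_point = ec_mul(s, Q)
        return s_next, out_point

    s = 11                               # secret internal state
    s_next, out_point = dual_ec_step(s)

    # Anyone who knows e and sees the output point recovers the next state:
    # e*(s*Q) = s*(e*Q) = s*P, whose x-coordinate is exactly s_next.
    recovered = ec_mul(e, out_point)[0]
    assert recovered == s_next
    ```

    In the real generator only a truncated x-coordinate is emitted, so the attacker first has to try the small number of candidate points consistent with the output, which is exactly why dropping so few bits matters.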

  • Xrdd Registered User regular
    Xrdd wrote: »
    This is followed by 3 sets of curve parameters including fixed Qs. That's the main problem, as I have repeatedly stated: The standard does not require you to generate your own points. To meet specific certification requirements, you have to use the backdoored version of the algorithm. Therefore, the standard is broken. If it actually required implementations to use randomly generated points, it would still be a shitty, flawed PRNG, but it would not have a backdoor. I have said as much several times already in this thread. I'm seriously beginning to wonder if you lack basic reading comprehension.

    So again, let's revisit this: the standard doesn't require you to generate your own points. But you can also comply with it by generating your own points, and the standard suggests how this should be done. Doing it in this way would give you random P and Q parameters.

    You might want to avoid questioning the reading comprehension of others.

    Bolded part is the only bit that matters, as it means that the standard covers (arguably even suggests, and for FIPS 140 certification, requires) variants of the algorithm that contain a backdoor. The standard is therefore broken.

  • Goumindong Registered User regular
    Provided that all the other systems have no issues, are provably secure, and you care about an NSA backdoor.

  • jmcdonald I voted, did you? DC(ish) Registered User regular
    Xrdd wrote: »
    Xrdd wrote: »
    This is followed by 3 sets of curve parameters including fixed Qs. That's the main problem, as I have repeatedly stated: The standard does not require you to generate your own points. To meet specific certification requirements, you have to use the backdoored version of the algorithm. Therefore, the standard is broken. If it actually required implementations to use randomly generated points, it would still be a shitty, flawed PRNG, but it would not have a backdoor. I have said as much several times already in this thread. I'm seriously beginning to wonder if you lack basic reading comprehension.

    So again, let's revisit this: the standard doesn't require you to generate your own points. But you can also comply with it by generating your own points, and the standard suggests how this should be done. Doing it in this way would give you random P and Q parameters.

    You might want to avoid questioning the reading comprehension of others.

    Bolded part is the only bit that matters, as it means that the standard covers (arguably even suggests, and for FIPS 140 certification, requires) variants of the algorithm that contain a backdoor. The standard is therefore broken.

    I'm not super conversant in this topic, but from my layman's perspective this seems to be a shifting of the goalposts. Even if one were to grant you the position that the standard were "broken" in one specific scenario, how does that make it broken for all?

  • Xrdd Registered User regular
    Goumindong wrote: »
    1) No. "Should" is not "shall". There is no requirement to use the specific points. The shall in your quotes is referring to the curves, not the points.

    The problem is that they are provided in the first place and that there is no requirement not to use them. How difficult is that to understand?
    2) No. It would not be a shitty PRNG if you used a randomly generated set.

    It would still be a shitty PRNG because it is hilariously slow and produces biased output (yes, I know there is a workaround).
    I am totally OK with the backdoor and even then see this PRNG as the only provably secure PRNG you can use, provided you drop a few more bits. (The low number of dropped bits was chosen to increase efficiency, which is what everyone besides Green is complaining about.)

    Might want to re-read Green on this:
    Let me spell this out as clearly as I can. In the course of proposing this complex and slow new PRG where the only damn reason you'd ever use the thing is for its security reduction, NIST forgot to provide one. This is like selling someone a Mercedes and forgetting to attach the hood ornament.
    Also, these two papers.

    And saying that you are cool with the backdoor is basically saying that you are willing to make the security of your systems contingent on nobody leaking (or selling to the highest bidder) the keys to the backdoor, which seems to be a questionable assumption in light of recent history.

  • Goumindong Registered User regular
    Green's objection isn't that it's not provably secure (he claims it is provably secure) but that they didn't provide the security reduction in the NIST document (which, I would note, provides no assurances for any of the standards in it)

    I would think that yes, the NSA is far more secure than my company is

  • Xrdd Registered User regular
    edited October 2013
    jmcdonald wrote: »
    Xrdd wrote: »
    Xrdd wrote: »
    This is followed by 3 sets of curve parameters including fixed Qs. That's the main problem, as I have repeatedly stated: The standard does not require you to generate your own points. To meet specific certification requirements, you have to use the backdoored version of the algorithm. Therefore, the standard is broken. If it actually required implementations to use randomly generated points, it would still be a shitty, flawed PRNG, but it would not have a backdoor. I have said as much several times already in this thread. I'm seriously beginning to wonder if you lack basic reading comprehension.

    So again, let's revisit this: the standard doesn't require you to generate your own points. But you can also comply with it by generating your own points, and the standard suggests how this should be done. Doing it in this way would give you random P and Q parameters.

    You might want to avoid questioning the reading comprehension of others.

    Bolded part is the only bit that matters, as it means that the standard covers (arguably even suggests, and for FIPS 140 certification, requires) variants of the algorithm that contain a backdoor. The standard is therefore broken.

    I'm not super conversant in this topic, but from my layman's perspective this seems to be a shifting of the goalposts. Even if one were to grant you the position that the standard were "broken" in one specific scenario, how does that make it broken for all?
    It's not shifting the goalposts at all. Any standard that covers algorithms known to be insecure is broken because at that point compliance with the standard doesn't tell you anything about the actual security of the system anymore. Furthermore, in this case, the broken version is easier to implement and also required for FIPS 140 certification, so it's almost certain to be more common than the non-broken version.
    Goumindong wrote: »
    Green's objection isn't that it's not provably secure (he claims it is provably secure) but that they didn't provide the security reduction in the NIST document (which, I would note, provides no assurances for any of the standards in it)
    He makes no such claim. He says that PRNGs like it can be provably secure if they are designed in such a way that breaking them would imply breaking ECDLP or a similarly hard problem. This is not the case for Dual_EC_DRBG. Take a look at the papers I linked above.
    I would think that yes, the NSA is far more secure than my company is

    1. That doesn't matter, the risk of a leak at the NSA doesn't replace the risk of a leak at your company, it is an additional risk.

    2. If you used a properly designed PRNG, there would be no set of constants anyone at your company could leak that would allow an attacker to break any instance of that PRNG.

    Xrdd on
  • Soralin Registered User regular
    Goumindong wrote: »
    2) No. It would not be a shitty PRNG if you used a randomly generated set. To quote your sources again
    It's not a shitty random number generator, even though it requires you to first generate random numbers for it, before you can use it to generate random numbers? :) What program are you using to generate those random numbers? And why, if you already have such a program, aren't you just using that directly, and skip using this one completely? (especially since it's reported to be much slower, and obviously, less secure)

  • Knuckle Dragger Explosive Ovine Disposal Registered User regular
    Polaritie wrote: »
    Archangle wrote: »
    Soralin wrote: »
    shryke wrote: »
    Feral wrote: »
    Suppose you legally get your information from an informant. If the warlord learns of this, then the informant will be murdered.

    That would be a good reason for parallel constructionism.

    That would be a good reason for parallel construction!

    You know what wouldn't be a good reason for parallel construction? Concealing from the court whether evidence was acquired in an appropriate manner in the first place.

    Why? That's like the perfect example of when you should REQUIRE parallel construction. As long as they can construct a fully legal chain of reasoning, what's the problem?

    Hell, there's like 5 dozen episodes of Law and Order about this shit. :p
    https://en.wikipedia.org/wiki/Fruit_of_the_poisonous_tree
    Fruit of the poisonous tree is a legal metaphor in the United States used to describe evidence that is obtained illegally.[1] The logic of the terminology is that if the source of the evidence or evidence itself (the "tree") is tainted, then anything gained from it (the "fruit") is tainted as well. The term fruit of the poisonous tree was first used in Silverthorne Lumber Co. v. United States, 251 U.S. 385 (1920).[2][3]

    Such evidence is not generally admissible in court.[4] For example, if a police officer conducted an unconstitutional (Fourth Amendment) search of a home and obtained a key to a train station locker, and evidence of a crime came from the locker, that evidence would most likely be excluded under the fruit of the poisonous tree legal doctrine. The discovery of a witness is not evidence in itself because the witness is attenuated by separate interviews, in-court testimony and his or her own statements.

    The doctrine is an extension of the exclusionary rule, which, subject to some exceptions, prevents evidence obtained in violation of the Fourth Amendment from being admitted in a criminal trial. Like the exclusionary rule, the fruit of the poisonous tree doctrine is intended to deter police from using illegal means to obtain evidence.
    The whole point of this is to prevent law enforcement from using illegal searches, by harshly preventing them from gaining any benefit from it. Parallel construction to hide illegal searches means that they can break the law and get away with it.

    Are you arguing that illegal searches aren't inherently a problem in and of themselves? Or disagree that there should be limitations on what the state is allowed to do, with regard to searches? Because if not, it would seem that finding a way to bypass the primary enforcement method against them would be a problem.
    Wait, what? No-one is arguing that illegal searches aren't a problem, nor that there should be no limitations on what the state is allowed to do, with regard to searches.

    There is a clearly defined process to searches - that is, if it involves a person's privacy then a court warrant is needed. This process is true even for parallel construction - if evidence is illegally obtained, and therefore that line of investigation deemed inadmissible under the Fruits of the Poisonous Tree doctrine, then a second line of investigation built on information that is either publicly available or gained through the warrant process is required.

    The warrant process addresses your "limitations on what the state is allowed to do" question, and the Office of Inspector General addresses your "are illegal searches inherently a problem in and of themselves" question. The direct answers are "the state still needs to follow due process" and "yes", respectively (at least, as in the order that I presented the questions).

    Parallel construction does not bypass the primary enforcement method against illegal searches. The two common arguments regarding parallel construction seen in previous versions of this thread are:

    1 - "It weakens due process". Contrary to popular opinion, neither warrants nor probable cause to gain a warrant grow on trees (unless the probable cause IS a tree...) Just because a law enforcement officer "knows where to look" from an illegal search does not necessarily mean they can satisfy the requirements for a legal warrant search; the probable cause must still either be publicly available or gained from witness statements. While it is true that "anonymous tips" can be used to obtain a warrant, they are the exception rather than the rule - in general, law enforcement officers must vouch for the reliability of both the information and the witness submitting the information. If a law enforcement individual or office systematically attempts to obtain warrants based on anonymous tips, they are likely to be subject to investigation. Which leads me to the second argument...

    2 - "Law enforcement officers are incentivised to perform illegal searches to obtain convictions". While it's true that arrests and convictions play an important part in law enforcement careers, there are a number of disincentives as well. Chief among them being that if you are caught performing illegal searches or falsifying evidence to obtain a warrant, you face suspension, fines, loss of your job, or even civil or criminal prosecution. While it's true that if it's thought that the infringement was unintentional or was believed to be legal (or was legislated as legal at the time of the investigation, such as with Patriot Act warrantless wiretaps) then the result is likely to be a slap on the wrist, the abovementioned Office of Inspector General takes a dim view of systematic 4th Amendment violations. And when it comes to internal auditing, even the NSA provides accounts of when its own investigations were found to have been improper or even unconstitutional. There may be a "boys club" for one officer covering up for another, but it is not systematic in the respect that internal audits still identify and take appropriate action for violations.

    ETA: The one thing that isn't clear is if the subject of an illegal search is required to be notified that their 4th amendment rights were violated. Certainly in the Warrantless Wiretap article linked earlier the subjects weren't notified, but the sentiment seems to be trending towards notifying the subject in addition to taking internal action against the law enforcement officers involved.

    Um... yes, it does. That's the entire point of it.

    Fruit of the poisonous tree basically means that warrantless wiretaps, and all evidence obtained directly as a result of them (that wouldn't have been found but for them), are inadmissible in court.

    When the NSA dials the DEA and says "so and so will have drugs in their truck at 10:00 PM tomorrow on highway such and such", and the DEA performs a "random" stop and "just happens" to discover drugs, that evidence would normally be suppressed under the doctrine, because the stop would not have happened but for illegally obtained evidence. Any evidence obtained from warrants as a result of the stop is suppressed, because it would not have been obtained but for the stop, which has been suppressed under the 4th. This could easily destroy a case altogether, and possibly result in a criminal walking free under double jeopardy.

    In a parallel construction case, the "random" stop and search has to be able to stand on its own. That is the whole point of parallel construction. It is not the fruit of the poisonous tree, because if the illegal evidence is excluded, it does not remove the legal justification of the stop and search, which was developed independent of and parallel to the NSA information.

    Let not any one pacify his conscience by the delusion that he can do no harm if he takes no part, and forms no opinion.

    - John Stuart Mill
  • Goumindong Registered User regular
    edited October 2013
    Xrdd wrote: »

    2. If you used a properly designed PRNG, there would be no set of constants anyone at your company could leak that would allow an attacker to break any instance of that PRNG.

    So then no PRNG ever, because every PRNG has this problem as a function of its design [since they need inputs]
    Soralin wrote: »
    Goumindong wrote: »
    2) No. It would not be a shitty PRNG if you used a randomly generated set. To quote your sources again
    It's not a shitty random number generator, even though it requires you to first generate random numbers for it, before you can use it to generate random numbers? :) What program are you using to generate those random numbers? And why, if you already have such a program, aren't you just using that directly, and skip using this one completely? (especially since it's reported to be much slower, and obviously, less secure)

    A True RNG.

    Functionally all DRBGs are bad in some way [biased, predictable, etc.]: what they do is take true random noise and stretch it out a lot longer. It is impossible for a DRBG not to have these issues [if it did not, it would not be a DRBG] because at the end of the day they're still not random.

    The reason you use DRBGs is that the amount of bits you get from a true random number generator is very small and not suitable for many/most applications.
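    A toy sketch of that stretching idea (this is not any NIST-approved construction, just an illustration; os.urandom stands in for a hardware TRNG):

    ```python
    import hashlib
    import os

    # Minimal sketch of "stretch a little true randomness": a toy
    # hash-based DRBG. A few bytes of true entropy seed a deterministic
    # generator that can then emit far more output bits.

    class ToyHashDRBG:
        def __init__(self, seed: bytes):
            # internal state is derived from, and only as good as, the seed
            self.state = hashlib.sha256(seed).digest()

        def generate(self, n_bytes: int) -> bytes:
            out = b""
            while len(out) < n_bytes:
                # output block and next state use different hash inputs,
                # so output alone does not directly reveal the state
                out += hashlib.sha256(self.state + b"out").digest()
                self.state = hashlib.sha256(self.state + b"next").digest()
            return out[:n_bytes]

    seed = os.urandom(32)          # small amount of "true" randomness
    drbg = ToyHashDRBG(seed)
    stream = drbg.generate(4096)   # stretched into many pseudorandom bytes

    # Determinism: the same seed always yields the same stream, which is
    # exactly why the seed (and any baked-in constants) must be trustworthy.
    assert ToyHashDRBG(seed).generate(4096) == stream
    ```

    The point of the sketch is just the shape of the trade: the output is only ever as unpredictable as the seed material plus the secrecy of the state, never "truly" random.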

    Goumindong on
  • [Tycho?] As elusive as doubt Registered User regular
    [Tycho?] wrote: »
    [Tycho?] wrote: »
    Xrdd wrote: »
    I'm not, I'm talking about why any such hypothetical weakness would be a really fucking stupid idea. Again, something that is costly but practical for a government to exploit today is going to be feasible to exploit for a lot of other entities within a few years. You are also making the false assumption that a potential attacker (China, Russia...) isn't going to be able to dedicate similar resources to the attack even now. There is no possible scenario where a backdoor would be practical for the US government but impossible to exploit for anyone else. Which is why backdoors are stupid.

    security.png

    Jesus, and this is why I can't participate in these threads. Lets spend our time arguing about legal and technical minutiae which we clearly don't understand (as evidenced by this reply), while constantly obfuscating real discussion about the various concerns people may have with the NSA spying on [subject of the week].

    Xrdd wants me to care about this awful back door that any foreign hacker could supposedly exploit, but he can't actually tell me the nature of this backdoor or how much time and computing power it would take to exploit this backdoor, or how much inside knowledge they would require before they even knew where to look for a potential backdoor.

    Instead, all he's really said is that mathematically, it might be possible. Which is technically true, but pretty much useless for the sake of the discussion.

    The XKCD comic is amusing because it goes over this delusion that encryption needs to be absolutely unbreakable, even to people who are willing to invest in multi-million dollar clusters. Which, let's face it, the NSA is not going to bother with just for the sake of hacking your twitter account or whatever.

    It's also relevant in response to the comments of, "I'm sure the Chinese can find and exploit this backdoor, since big companies get hacked all the time!" When big companies get hacked, it's usually some sort of failure at the human level. Hence, $5 wrench.

    So you demand Xrdd to give you technical details about secret backdoors placed in software, while admitting you know nothing at all about the subject and are completely unable to parse the technical details that you ask for.

    Any policy change advocacy will have to go through congress, which means being able to translate these ideas to plain English and putting them into some sort of meaningful perspective. If you can't do that, then that's a failure on your part.

    Suppose a libertarian automaker insists that government standards have made cars less secure and less safe. How does the government respond? Well, chances are, they would ask for these safety concerns to be put in perspective, in real world terms.
    You say most errors occur at the human level, as though this somehow means that cryptography isn't important. That these backdoors aren't a big deal after all, because most problems are at the human level, right? It must be true, you read an xkcd comic about it.

    Look at the bitcoining world, where you have a lot of crypto nerds absolutely convinced that they have the most secure form of currency ever known to man. That doesn't stop them from getting ripped off and scammed on a regular basis, however.
    Obfuscate, obfuscate, obfuscate. You demand minutiae about top secret programs buried in laws a mile long. How in the world is anyone here going to know the computing power required to exploit a backdoor in cryptographic software? Without these minutiae, there can be no valid arguments, which means the status quo is and must be acceptable. Which is why this thread and its previous iteration are jokes.

    You're trying to argue a point on an unknown unknown.

    How exactly does someone refute an unknown unknown in meaningful discussion?
    Instead of demanding information from others, try reading a book on the subject. You'll learn stuff.

    This might be a valid point if Xrdd's position was generally uncontroversial within the field. However, we seem to have a lot of people in this thread who are familiar with the math who insist, "No, dude, that's not what the math implies." So how do I know who to believe?

    It should be noted that this isn't a cryptography thread. This isn't a math thread. It's a thread on general surveillance policy.

    Dude, you were the one asking for the technical details, and admitting you know nothing about the cryptography, and saying that Xrdd's technical examples are uncontroversial in the field of cryptography - of which you know nothing. Don't tell me what this thread is and isn't about.

    And the rest of your post is fucked. If it's a failure on my part not to structure my arguments as a congressional policy change, then ok. I'm also not structuring my argument as something a libertarian automaker would make to a government. I'm talking on a forum here, to people. Unknown unknown? Who are you, Donald Rumsfeld? And why should I care about bitcoin scams? You don't know how common "human level" problems are, because as I said already people don't like admitting they got broken into, and anyway you don't know anything about the subject. You read xkcd and post in the bitcoin thread.
    I'll admit I don't know the first thing about cryptography.

    Read. A. Book. Try this one. You'll understand public key cryptography by the end of it. It also talks about how hard the NSA fought to keep strong encryption off the internet in the 90s. How they wanted to put backdoors in software. How they had invented public key cryptography many years previous and absolutely didn't want anyone else to have it. How the NSA was eventually defeated in court, and couldn't throw some of the inventors in jail.

    You proclaim complete ignorance for yourself, and require every little detail to be explained to you. Once you learn about the subject, it will be ok for you to demand every little technical detail, and require it be presented to you in perfect legalese that would carry a majority decision of the supreme court. It will still be childish, but at least you'll understand the responses you get.

    Seriously. You post on this subject non-fucking stop, and you claim to know nothing about cryptography, after all this time? Why the fuck is anyone still bothering to explain anything to you when you don't do any research for yourself?

  • Soralin Registered User regular
    edited October 2013
    Goumindong wrote: »
    Xrdd wrote: »

    2. If you used a properly designed PRNG, there would be no set of constants anyone at your company could leak that would allow an attacker to break any instance of that PRNG.

    So then no PRNG ever, because every PRNG has this problem as a function of its design [since they need inputs]
    Soralin wrote: »
    Goumindong wrote: »
    2) No. It would not be a shitty PRNG if you used a randomly generated set. To quote your sources again
    It's not a shitty random number generator, even though it requires you to first generate random numbers for it, before you can use it to generate random numbers? :) What program are you using to generate those random numbers? And why, if you already have such a program, aren't you just using that directly, and skip using this one completely? (especially since it's reported to be much slower, and obviously, less secure)

    A True RNG.

    Functionally all DRBGs are bad in some way [biased, predictable, etc.]: what they do is take true random noise and stretch it out a lot longer. It is impossible for a DRBG not to have these issues [if it did not, it would not be a DRBG] because at the end of the day they're still not random.

    The reason you use DRBGs is that the amount of bits you get from a true random number generator is very small and not suitable for many/most applications.
    And this one requires that you do it twice, once for the algorithm values, and then again for the seed. And then it's still slower and potentially more predictable than other RNGs (if a third party is involved, can you be sure that they didn't generate the values in such a way as to let them decrypt your communications? If this became widespread and was used in a default encryption protocol in your internet browser, would you have been aware of it? Would you have been able to change those values in that case, even if you were aware of it?). So how is it not shitty?

    edit: All of which I realize is distracting from the point. It doesn't matter how good or how bad the algorithm is, because it's compromised, untrustworthy. And intentionally and maliciously so. (And yes, I already provided support for that claim)

    Soralin on
  • Goumindong Registered User regular
    I am not sure it's more predictable than other RNGs. There are two possible implementation vulnerabilities, and no discussion of how relatively secure the other methods are. We have already gone over why a protocol like this would have been included.

  • Archangle Registered User regular
    edited October 2013
    Polaritie wrote: »
    Archangle wrote: »
    Soralin wrote: »
    shryke wrote: »
    Feral wrote: »
    Suppose you legally get your information from an informant. If the warlord learns of this, then the informant will be murdered.

    That would be a good reason for parallel constructionism.

    That would be a good reason for parallel construction!

    You know what wouldn't be a good reason for parallel construction? Concealing from the court whether evidence was acquired in an appropriate manner in the first place.

    Why? That's like the perfect example of when you should REQUIRE parallel construction. As long as they can construct a fully legal chain of reasoning, what's the problem?

    Hell, there's like 5 dozen episodes of Law and Order about this shit. :p
    https://en.wikipedia.org/wiki/Fruit_of_the_poisonous_tree
    Fruit of the poisonous tree is a legal metaphor in the United States used to describe evidence that is obtained illegally.[1] The logic of the terminology is that if the source of the evidence or evidence itself (the "tree") is tainted, then anything gained from it (the "fruit") is tainted as well. The term fruit of the poisonous tree was first used in Silverthorne Lumber Co. v. United States, 251 U.S. 385 (1920).[2][3]

    Such evidence is not generally admissible in court.[4] For example, if a police officer conducted an unconstitutional (Fourth Amendment) search of a home and obtained a key to a train station locker, and evidence of a crime came from the locker, that evidence would most likely be excluded under the fruit of the poisonous tree legal doctrine. The discovery of a witness is not evidence in itself because the witness is attenuated by separate interviews, in-court testimony and his or her own statements.

    The doctrine is an extension of the exclusionary rule, which, subject to some exceptions, prevents evidence obtained in violation of the Fourth Amendment from being admitted in a criminal trial. Like the exclusionary rule, the fruit of the poisonous tree doctrine is intended to deter police from using illegal means to obtain evidence.
    The whole point of this is to prevent law enforcement from using illegal searches, by harshly preventing them from gaining any benefit from it. Parallel construction to hide illegal searches means that they can break the law and get away with it.

    Are you arguing that illegal searches aren't inherently a problem in and of themselves? Or disagree that there should be limitations on what the state is allowed to do, with regard to searches? Because if not, it would seem that finding a way to bypass the primary enforcement method against them would be a problem.
    Wait, what? No-one is arguing that illegal searches aren't a problem, nor that there should be no limitations on what the state is allowed to do, with regard to searches.

    There is a clearly defined process to searches - that is, if it involves a person's privacy then a court warrant is needed. This process is true even for parallel construction - if evidence is illegally obtained, and therefore that line of investigation deemed inadmissible under the Fruits of the Poisonous Tree doctrine, then a second line of investigation built on information that is either publicly available or gained through the warrant process is required.

    The warrant process addresses your "limitations on what the state is allowed to do" question, and the Office of Inspector General addresses your "are illegal searches inherently a problem in and of themselves" question. The direct answers are "the state still needs to follow due process" and "yes", respectively (at least, as in the order that I presented the questions).

    Parallel construction does not bypass the primary enforcement method against illegal searches. The two common arguments regarding parallel construction seen in previous versions of this thread are:

    1 - "It weakens due process". Contrary to popular opinion, neither warrants nor probable cause to gain a warrant grow on trees (unless the probable cause IS a tree...) Just because a law enforcement officer "knows where to look" from an illegal search does not necessarily mean they can satisfy the requirements for a legal warrant search; the probable cause must still either be publicly available or gained from witness statements. While it is true that "anonymous tips" can be used to obtain a warrant, they are the exception rather than the rule - in general, law enforcement officers must vouch for the reliability of both the information and the witness submitting the information. If a law enforcement individual or office systematically attempts to obtain warrants based on anonymous tips, they are likely to be subject to investigation. Which leads me to the second argument...

    2 - "Law enforcement officers are incentivised to perform illegal searches to obtain convictions". While it's true that arrests and convictions play an important part in law enforcement careers, there are a number of disincentives as well. Chief among them: if you are caught performing illegal searches or falsifying evidence to obtain a warrant, you face suspension, fines, loss of your job, or even civil or criminal prosecution. While it's true that if the infringement is thought to have been unintentional or believed to be legal (or was legislated as legal at the time of the investigation, as with Patriot Act warrantless wiretaps) then the result is likely to be a slap on the wrist, the above-mentioned Office of Inspector General takes a dim view of systematic 4th Amendment violations. And when it comes to internal auditing, even the NSA provides accounts of when its own investigations were found to have been improper or even unconstitutional. There may be a "boys club" of one officer covering up for another, but it is not systematic, in the respect that internal audits still identify and take appropriate action for violations.

    ETA: The one thing that isn't clear is whether the subject of an illegal search is required to be notified that their 4th amendment rights were violated. Certainly in the Warrantless Wiretap article linked earlier the subjects weren't notified, but the sentiment seems to be trending towards notifying the subject in addition to taking internal action against the law enforcement officers involved.

    Um... yes, it does. That's the entire point of it.

    Fruit of the poisonous tree basically means that warrantless wiretaps, and all evidence obtained directly as a result of them (that wouldn't have been found but for them), are inadmissible in court.

    When the NSA dials the DEA and says "so and so will have drugs in their truck at 10:00 PM tomorrow on highway such and such", and the DEA performs a "random" stop and "just happens" to discover drugs, that evidence would normally be suppressed under the doctrine, because the stop would not have happened but for illegally obtained evidence. Any evidence obtained from warrants as a result of the stop is suppressed, because it would not have been obtained but for the stop, which has been suppressed under the 4th. This could easily destroy a case altogether, and possibly result in a criminal walking free under double jeopardy.

    In a parallel construction case, the "random" stop and search has to be able to stand on its own. That is the whole point of parallel construction. It is not the fruit of the poisonous tree, because if the illegal evidence is excluded, it does not remove the legal justification of the stop and search, which was developed independent of and parallel to the NSA information.
    Also, the DEA still needs a warrant to search the truck involved in a "random stop". Here's the point of contention - the DEA (or whatever agency/officer involved) cannot just "just happen" to do a search unless they have a warrant or the criminal violation is in plain view. Parallel Construction does not magically give them access to computers or the legal authority to search vehicles. The best it does is tell them that information may be found - they still need to follow due process to actually find it.

    (Edits) Evidence being inadmissible in court is not the primary enforcement method against illegal searches - it removes the benefit of an illegal search, but does not prevent illegal searches occurring in the first place, nor does it take corrective action. The Office of Inspector General and Internal Affairs are the primary enforcement method against illegal searches. As mentioned previously in the thread, all parallel construction does in the situation you mentioned is prevent immunity from prosecution where the first investigation violated the subject's rights - otherwise, if someone gets picked up 10 years later with 10kg of cocaine in plain view on the passenger seat, they need only say "You only knew to stop me because of previous violations - this evidence is inadmissible". In most cases, parallel construction is used to protect a source who gained information legally but who may be endangered if revealed in court - so a second case is constructed that stands without relying on the initial source. We can't emphasize enough that the second case must still follow due process from square 1.

    ETA:
    Polaritie wrote: »
    The NSA has a history of avoiding any court scrutiny of the legality of its actions and then claiming they are justified by laws. This in itself is appalling. Parallel construction is just more of the same crap.
    This is misleading. NSA must still obtain warrants through courts (that would be the C in "FISC"), and the most recent internal NSA report explicitly identifies cases where their investigations were NOT justified by laws, and in some cases were unconstitutional.

    Archangle on
  • Options
    electricitylikesmeelectricitylikesme Registered User regular
    Goumindong wrote: »
    I am not sure its more predictable than other RNG's. There are two possible implementation vulnerabilities and no discussions of how relatively secure the other methods are. We have already gone over why a protocol like this would have been included

    Also this thread has been super-interesting because I didn't know Dual_EC_DRBG had that property (that it's provably secure, whereas the others are not).

    I find it super-interesting that no one winding themselves up about it has ever included that detail in describing it.

  • Options
    SoralinSoralin Registered User regular
    Goumindong wrote: »
    I am not sure its more predictable than other RNG's. There are two possible implementation vulnerabilities and no discussions of how relatively secure the other methods are. We have already gone over why a protocol like this would have been included
    There are generally no discussions about how secure other methods are, because they've been secure for ages.

    Here's a simple one for example, not quite 30 years old: https://en.wikipedia.org/wiki/Blum_Blum_Shub
    The generator is not appropriate for use in simulations, only for cryptography, because it is very slow. However, there is a proof reducing its security to the computational difficulty of the Quadratic residuosity problem. Since the only known way to solve that problem requires factoring the modulus, the difficulty of Integer factorization is generally regarded as providing security. When the primes are chosen appropriately, and O(log log M) lower-order bits of each x_n are output, then in the limit as M grows large, distinguishing the output bits from random should be at least as difficult as factoring M.

    If integer factorization is difficult (as is suspected) then B.B.S. with large M should have an output free from any nonrandom patterns that can be discovered with any reasonable amount of calculation. Thus it appears to be as secure as other encryption technologies tied to the factorization problem, such as RSA encryption.
    Not particularly fast, but as secure as RSA is.
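    For the curious, the core of Blum Blum Shub is only a few lines. Here's a sketch with toy parameters (real use needs large, secret primes p and q, each congruent to 3 mod 4, and a random seed coprime to M; the tiny numbers here are purely illustrative):

```python
# Blum Blum Shub sketch with toy parameters -- illustration only.
# Security comes from keeping the factorization of M = p*q secret
# and choosing p, q as large random primes congruent to 3 mod 4.
p, q = 11, 23
M = p * q  # 253 here; real moduli are thousands of bits

def bbs_bits(seed, count):
    """Generate `count` pseudorandom bits from `seed`."""
    x = seed % M
    bits = []
    for _ in range(count):
        x = (x * x) % M      # the entire state update: square mod M
        bits.append(x & 1)   # emit the least-significant bit
    return bits

print(bbs_bits(3, 8))
```

    Recovering the internal state from those output bits is believed to require factoring M, which is the whole security argument quoted above.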

    And here's an example of a quick and dirty random number generator, not intended for use in cryptography: https://en.wikipedia.org/wiki/Mersenne_twister
    The algorithm in its native form is not suitable for cryptography (unlike Blum Blum Shub). Observing a sufficient number of iterations (624 in the case of MT19937, since this is the size of the state vector from which future iterations are produced) allows one to predict all future iterations.
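    As a concrete illustration of that quote: Python's own random module uses MT19937, and an observer can clone it from 624 outputs by inverting the tempering step and loading the recovered words as a new generator's state. A sketch (the setstate tuple layout is CPython-specific):

```python
import random

def undo_xor_rshift(y, shift):
    # Invert y ^= y >> shift for a 32-bit word.
    result = y
    for _ in range(32 // shift):
        result = y ^ (result >> shift)
    return result

def undo_xor_lshift_mask(y, shift, mask):
    # Invert y ^= (y << shift) & mask for a 32-bit word.
    result = y
    for _ in range(32 // shift):
        result = y ^ ((result << shift) & mask)
    return result

def untemper(y):
    # Undo MT19937's output tempering, recovering the raw state word.
    y = undo_xor_rshift(y, 18)
    y = undo_xor_lshift_mask(y, 15, 0xEFC60000)
    y = undo_xor_lshift_mask(y, 7, 0x9D2C5680)
    y = undo_xor_rshift(y, 11)
    return y

victim = random.Random(1234)  # generator we can only observe
observed = [victim.getrandbits(32) for _ in range(624)]

clone = random.Random()
state = tuple(untemper(y) for y in observed) + (624,)  # 624 words + index
clone.setstate((3, state, None))                       # CPython state format

# The clone now predicts the victim's future output exactly.
assert [clone.getrandbits(32) for _ in range(10)] == \
       [victim.getrandbits(32) for _ in range(10)]
```

    This is why MT is fine for simulations but useless for cryptography: its output leaks its entire state.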

    Basically, if you don't have a mathematical proof showing that it's essentially impossible to break by any means, it's not secure. When you have mathematicians develop encryption, they're generally not going to stop until they reach (this will take longer than the age of the universe to crack) levels, if possible. :)

    https://en.wikipedia.org/wiki/Transcomputational_problem
    https://en.wikipedia.org/wiki/Bremermann's_limit
    Bremermann's Limit, named after Hans-Joachim Bremermann, is the maximum computational speed of a self-contained system in the material universe. It is derived from Einstein's mass-energy equivalency and the Heisenberg uncertainty principle, and is c²/h ≈ 1.36 × 10^50 bits per second per kilogram. This value is important when designing cryptographic algorithms, as it can be used to determine the minimum size of encryption keys or hash values required to create an algorithm that could never be cracked by a brute-force search.

    For example, a computer with the mass of the entire Earth operating at the Bremermann's limit could perform approximately 10^75 mathematical computations per second. If we assume that a cryptographic key can be tested with only one operation, then a typical 128 bit key could be cracked in under 10^−36 seconds. However, a 256 bit key (which is already in use in some systems) would take about two minutes to crack. Using a 512 bit key would increase the cracking time to approaching 10^72 years, without increasing the time for encryption by more than a constant factor (depending on the encryption algorithms used).
    That's for a symmetric key, at least; asymmetric keys tend to use larger values, since they're a bit easier to break, although that can be handled rather easily just by increasing the key size a bit (thus the 4096-bit RSA from the xkcd comic, as essentially something that is impossible to crack by conventional computers). Quantum computing may cause problems with cryptography based on integer factorization (mainly asymmetric key encryption; symmetric key encryption would be just fine, just double the size of the key), but solutions to that have already been created and are being worked on: https://en.wikipedia.org/wiki/Post-quantum_cryptography
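    Those quoted figures are easy to sanity-check on the back of an envelope; Earth's mass of roughly 5.97e24 kg is the only number added here:

```python
# Sanity-check the Bremermann's-limit numbers quoted above.
BREMERMANN = 1.36e50           # bits per second per kilogram
EARTH_MASS_KG = 5.97e24
SECONDS_PER_YEAR = 3.15e7

ops_per_second = BREMERMANN * EARTH_MASS_KG   # ~8e74, i.e. about 10^75

def brute_force_seconds(key_bits):
    # Worst case, assuming one operation tests one candidate key.
    return 2.0 ** key_bits / ops_per_second

print(brute_force_seconds(128))                     # well under 1e-36 s
print(brute_force_seconds(256))                     # a couple of minutes
print(brute_force_seconds(512) / SECONDS_PER_YEAR)  # roughly 10^72 years
```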

  • Options
    XrddXrdd Registered User regular
    Goumindong wrote: »
    Xrdd wrote: »

    2. If you used a properly designed PRNG, there would be no set of constants anyone at your company could leak that would allow an attacker to break any instance of that PRNG.

    So then no PRNG ever. Because every PRNG has this problem by function of their design [since they need inputs]

    This is complete and utter bullshit. What do you think that number would be for HMAC_DRBG? Every PRNG requires a seed, but a seed isn't a constant, it should be a true random value, different for each instance. P and Q aren't seeds.


    Goumindong wrote: »
    I am not sure its more predictable than other RNG's. There are two possible implementation vulnerabilities and no discussions of how relatively secure the other methods are. We have already gone over why a protocol like this would have been included

    Also this thread has been super-interesting because I didn't know Dual_EC_DRBG had that property (that it's provably secure, whereas the others are not).

    I find it super-interesting that no one winding themselves up about it has ever included that detail in describing it.

    Because it isn't. Look at the 2 papers I linked above. Designs like it can be provably secure, if you can show that breaking them would imply breaking ECDLP or a similarly hard problem. No such security reduction exists for Dual_EC_DRBG.

  • Options
    electricitylikesmeelectricitylikesme Registered User regular
    edited October 2013
    Xrdd wrote: »
    Goumindong wrote: »
    I am not sure its more predictable than other RNG's. There are two possible implementation vulnerabilities and no discussions of how relatively secure the other methods are. We have already gone over why a protocol like this would have been included

    Also this thread has been super-interesting because I didn't know Dual_EC_DRBG had that property (that it's provably secure, whereas the others are not).

    I find it super-interesting that no one winding themselves up about it has ever included that detail in describing it.

    Because it isn't. Look at the 2 papers I linked above. Designs like it can be provably secure, if you can show that breaking them would imply breaking ECDLP or a similarly hard problem. No such security reduction exists for Dual_EC_DRBG.

    You realize provably secure only means "provably secure against some standard"?

    The potential ability to formulate a PRNG such that it could be broken with a key doesn't make it insecure, it just means it's only as secure as the relative difficulty of recovering the key. If the key isn't known, then hey - it's still hard to break.

    All of which still doesn't support your original contention:
    However, Appendix A.2 of the NIST document, which describes the weakness, does contain a method of generating a new key pair which will mitigate the backdoor if it exists.

    Seriously. The standard includes a complete description of the exact weakness.

    electricitylikesme on
  • Options
    XrddXrdd Registered User regular
    Xrdd wrote: »
    Goumindong wrote: »
    I am not sure its more predictable than other RNG's. There are two possible implementation vulnerabilities and no discussions of how relatively secure the other methods are. We have already gone over why a protocol like this would have been included

    Also this thread has been super-interesting because I didn't know Dual_EC_DRBG had that property (that it's provably secure, whereas the others are not).

    I find it super-interesting that no one winding themselves up about it has ever included that detail in describing it.

    Because it isn't. Look at the 2 papers I linked above. Designs like it can be provably secure, if you can show that breaking them would imply breaking ECDLP or a similarly hard problem. No such security reduction exists for Dual_EC_DRBG.

    You realize provably secure only means "provably secure against some standard"?

    No, once again: Provably secure means that you can show that breaking it would imply breaking ECDLP or a similarly hard problem (security reduction). That is not the case for Dual_EC_DRBG. Just look at the papers I linked.
    The potential ability to formulate a PRNG such that it could be broken with a key doesn't make it insecure, it just means it's only as secure as the relative difficulty of recovering the key. If the key isn't known, then hey - it's still hard to break.
    Oh look, someone else who doesn't understand what criteria a cryptographic algorithm needs to fulfill to be considered secure.
    All of which still doesn't support your original contention:
    However, Appendix A.2 of the NIST document, which describes the weakness, does contain a method of generating a new key pair which will mitigate the backdoor if it exists.

    Seriously. The standard includes a complete description of the exact weakness.

    Appendix A.2 doesn't describe or even acknowledge the weakness, which is that the person who generates P and Q can generate the second point in such a way that they know e. It describes the process you should follow to generate alternative P, Q if you so desire. It also states this:
    The security of Dual_EC_DRBG requires that the points P and Q be properly generated. To avoid using potentially weak points, the points specified in Appendix A.1 should be used.
    Using the points from A.1 is what introduces the backdoor.
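    To make the P/Q relationship concrete, here's a toy sketch using plain modular arithmetic as a stand-in for the elliptic curve group (all numbers are invented for illustration; the real construction uses ~256-bit curve points):

```python
# Toy analogue of the Dual_EC_DRBG constant relationship.
# Whoever picks the constants can pick them so they know the trapdoor.
n = 101               # group order (prime); stands in for the curve order
P = 7                 # public "base point"
d = 53                # secret scalar known only to the constants' author
Q = (d * P) % n       # published constant, looks innocent on its own

# Knowing d, the author computes e with e*Q = P by inverting mod n:
e = pow(d, -1, n)
assert (e * Q) % n == P   # the relation from the Shumow/Ferguson attack
```

    An outside attacker would have to solve a discrete logarithm to find d or e; the person who generated the constants simply knows it.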

    Look, the actual standard has been linked here before, here it is again. You don't need to drag shit from Wikipedia in here. If you think the standard actually acknowledges the attack (it doesn't), show me where it says so (you can't).

  • Options
    spool32spool32 Contrary Library Registered User regular
    Goumindong wrote: »
    Green's objection isn't that it's not provably secure (he claims it is provably secure) but that they didn't provide a proof in the NIST document (which, I would note, provides no assurances for any of the standards in it)

    I would think that yes, the NSA is far more secure than my company is

    So you guys have had more than one high-profile whistleblower exposing secrets to everyone in the world?

  • Options
    electricitylikesmeelectricitylikesme Registered User regular
    edited October 2013
    Xrdd wrote: »
    ...a bunch of debatable crypto-stuff

    You know, I'm not a cryptography expert, so I had a big post typed up on this but then I realized that I'm falling into the trap I accuse others of: none of this matters.

    No one's forced to use a particular standard, and none of the people yelling the loudest about this have provided any primary documentation on where they're drawing their claims from - there are no slides, scans or internal memos, just the NYT and Guardian articles.

    And while we could go round in circles forever on this, it has nothing to do with the legal system, the judicial dimensions and necessity of surveillance, what legal protections someone might have against disclosure, or how far personal info can be disclosed before it's considered public - so I'm bored by it.

    We live in an age where there are possibly practical quantum computers. Whether an algorithm is strong or not isn't going to matter in about 5 minutes because you can off-the-shelf something from D-Wave (assuming they can get it to work for the hard mathematical problems).

    electricitylikesme on
  • Options
    FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    Archangle wrote: »
    This is misleading. NSA must still obtain warrants through courts (that would be the C in "FISC"), and the most recent internal NSA report explicitly identifies cases where their investigations were NOT justified by laws, and in some cases were unconstitutional.

    Can you link this report?

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Options
    joshofalltradesjoshofalltrades Class Traitor Smoke-filled roomRegistered User regular
    I got way too invested and passionate in the last thread, so I'm reading this thread and will chime in if I feel like it will actually matter. But I do want to ask if we can stop using phrases like "cryptonerds". Because to me, it comes across as dismissive of somebody who actually may know more about cryptography than you do. It might not be what you mean, but it just isn't a useful term to throw out there. It's like the Surveillance State Thread equivalent of "neckbeard".

  • Options
    electricitylikesmeelectricitylikesme Registered User regular
    spool32 wrote: »
    Goumindong wrote: »
    Green's objection isn't that it's not provably secure (he claims it is provably secure) but that they didn't provide a proof in the NIST document (which, I would note, provides no assurances for any of the standards in it)

    I would think that yes, the NSA is far more secure than my company is

    So you guys have had more than one high-profile whistleblower exposing secrets to everyone in the world?

    Yeah I bet no company has ever had that happen.

  • Options
    spool32spool32 Contrary Library Registered User regular
    spool32 wrote: »
    Goumindong wrote: »
    Green's objection isn't that it's not provably secure (he claims it is provably secure) but that they didn't provide a proof in the NIST document (which, I would note, provides no assurances for any of the standards in it)

    I would think that yes, the NSA is far more secure than my company is

    So you guys have had more than one high-profile whistleblower exposing secrets to everyone in the world?

    Yeah I bet no company has ever had that happen.

    Yes but has his company?

    :bz

  • Options
    PhyphorPhyphor Building Planet Busters Tasting FruitRegistered User regular
    There are crypto algorithms for which quantum computers don't seem to help much. Hashes and symmetric crypto are still good, just need size increases, and there are some algorithms to replace asymmetric crypto (though each has problems - typically very slow or excessive key sizes)

  • Options
    ArchangleArchangle Registered User regular
    Feral wrote: »
    Archangle wrote: »
    This is misleading. NSA must still obtain warrants through courts (that would be the C in "FISC"), and the most recent internal NSA report explicitly identifies cases where their investigations were NOT justified by laws, and in some cases were unconstitutional.

    Can you link this report?
    http://apps.washingtonpost.com/g/page/national/nsa-report-on-privacy-violations-in-the-first-quarter-of-2012/395/

  • Options
    AngelHedgieAngelHedgie Registered User regular
    I got way too invested and passionate in the last thread, so I'm reading this thread and will chime in if I feel like it will actually matter. But I do want to ask if we can stop using phrases like "cryptonerds". Because to me, it comes across as dismissive of somebody who actually may know more about cryptography than you do. It might not be what you mean, but it just isn't a useful term to throw out there. It's like the Surveillance State Thread equivalent of "neckbeard".

    Sorry, but no. The reason I use the term is to point out that these people aren't just acting as experts, but they have an underlying ideology that is driving their position. In that sense, the analogy to neckbeard is quite on the mark, as the same mentality is in play.

    Besides, I don't look highly on trying to redefine the playing field.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Options
    electricitylikesmeelectricitylikesme Registered User regular
    I got way too invested and passionate in the last thread, so I'm reading this thread and will chime in if I feel like it will actually matter. But I do want to ask if we can stop using phrases like "cryptonerds". Because to me, it comes across as dismissive of somebody who actually may know more about cryptography than you do. It might not be what you mean, but it just isn't a useful term to throw out there. It's like the Surveillance State Thread equivalent of "neckbeard".

    Sorry, but no. The reason I use the term is to point out that these people aren't just acting as experts, but they have an underlying ideology that is driving their position. In that sense, the analogy to neckbeard is quite on the mark, as the same mentality is in play.

    Besides, I don't look highly on trying to redefine the playing field.

    I don't care for either term, because neither is conducive to constructive discourse.

    I do agree with the observation that there are a lot of people in this debate using one area they're an expert in to make broad-ranging claims about a lot of areas they're not.

    Everyone thinks they're an expert in international relations for some reason. Now sure, you don't have to be, but the number of times I'm seeing a country's government talked about as though it's a literal single human being is just ridiculous.

  • Options
    GoumindongGoumindong Registered User regular
    On my phone so no quoting as it's a pain. Just some things to note

    1) NIST does not provide security assurances for any of the PRNGs listed in SP 800-90A. You can read it yourself if you don't believe me.

    2) as already linked by Xrdd, there is a security proof for Dual_EC_DRBG. It requires solving the ECDLP, which is hard if another problem widely regarded as hard is hard (I don't recall which one).

    There are two potential problems, both of which are discussed - along with how to avoid them - in NIST SP 800-90A.

    3) all PRNGs are subject to the type of attack that Xrdd is ranting about. Having d or e in this instance is equivalent to having the hash for a hash-based design. The hash is equivalent to the key, so if you have it you have the keys to the kingdom just as much as you do with e or d in Dual_EC_DRBG.
