
Lies, Damned Lies, and Statistics

ElJeffeElJeffe Registered User, ClubPA regular
edited June 2011 in Debate and/or Discourse
So here's a conversation I recently held with a friend of mine, paraphrased:


Her: "Hey, guess what the worst food is?"
Me: "The worst food?"
Her: "Yeah, it's the worst by 57%."
Me: "Umm. 57% of what?"
Her: "It's 57% worse than the next worst one."
Me: "I don't know what that means, though. It's 57% higher in calories? Higher in fat? 57% more likely to give you cancer, or makes you 57% fatter...?"
Her: "What difference does it make? Just guess. It's 57% worse."
Me: "But that doesn't mean anything! You need to have some metric by which it's 57% worse, or else that number is useless! It's like saying a car is 57% better than every other car!"
Her: "God, stop being so analytical."
Me: "Okay, fine. Umm... pasta with alfredo sauce."
Her: "Nope, it's french fries!"
Me: "So fries are 57% worse than the next worse food. What's the next worse food?"
Her: "I don't know, but isn't that crazy?"
Me: *stabbing self in brain to hopefully end the pain*


Statistics are truly the worst thing. In fact, they are 83% worse than the next worse thing. So let's discuss statistics! How are they misused? And can we make it better? What will it take to teach people that reducing something to a single number almost never provides any useful information, because almost any statement containing a single number can be made true by clever use of metrics?

(Note: This is not the place to discuss food, or french fries, or whatnot. We are talking about statistics here, and if you don't do it right I am 100% likely to punch you.)

I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
ElJeffe on

Posts

  • TaramoorTaramoor Storyteller Registered User regular
    edited June 2011
    78% of all statistics are made up on the spot.

    Give or take 18%.

    Taramoor on
  • Captain CarrotCaptain Carrot Alexandria, VARegistered User regular
    edited June 2011
    In the last century, no incumbent president with an unemployment rate between 7.5% and 23.5% has failed to win reelection.
    That applies only to FDR and Reagan, IIRC

    Captain Carrot on
  • AtomikaAtomika Live fast and get fucked or whatever Registered User regular
    edited June 2011
    Without meaningless and empty statistics, all of our sports analysts would be out of a job.


    "How can you trade Thompson?! He leads the team in 8th-inning doubles in the month of July!"

    "Watch out for Ramirez this season. He's never lost when his team has been up by 21 or more points going into the fourth quarter."


    Also, the use of statistics should have made professional sports drafts worthless, yet they persist.

    "We're picking this shitty kid for QB with our first pick, despite the fact that we already have a quarterback and this kid is really, really shitty, in a really obvious way. But what can we do? He went to Notre Dame!"

    Atomika on
  • FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    edited June 2011
    I was thinking about making a statistics thread! I love wacky statistics!

    I like the birthday problem. (I know, I used it in an OP on DNA evidence last year.) How many people do you need to have in a room for there to be a 50% probability of any two people having the same birthday?
    23.

    And 99%?
    57.
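Those thresholds are easy to verify by multiplying out the probability that all birthdays are distinct; a quick Python sketch (assuming 365 equally likely birthdays and ignoring leap days):

```python
from math import prod

def shared_birthday_prob(n: int) -> float:
    # P(at least two of n people share a birthday) = 1 - P(all n are distinct)
    return 1.0 - prod((365 - k) / 365 for k in range(n))

# Smallest group sizes clearing the 50% and 99% thresholds
n50 = next(n for n in range(1, 366) if shared_birthday_prob(n) >= 0.50)  # 23
n99 = next(n for n in range(1, 366) if shared_birthday_prob(n) >= 0.99)  # 57
```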

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Captain CarrotCaptain Carrot Alexandria, VARegistered User regular
    edited June 2011
    On that subject, Silas Brown, Thomamelas, and I are all regulars in [chat]. Silas and I were born on the same day, same year, and Thom is eight days, eight years older.

    Captain Carrot on
  • themightypuckthemightypuck MontanaRegistered User regular
    edited June 2011
    I love statistics. Without them we are wandering blind in our biases.

    themightypuck on
    “Reject your sense of injury and the injury itself disappears.”
    ― Marcus Aurelius

    Path of Exile: themightypuck
  • TomantaTomanta Registered User regular
    edited June 2011
    Taramoor wrote: »
    78% of all statistics are made up on the spot.

    Give or take 18%.

    And it's getting worse. In 1960 only 55% of statistics were made up on the spot (+- 20%).

    Further, today only 15% of people understand statistics which is down from 25%.

    Related to statistics, I really hate that people do not understand correlation != causation. Yes, as ice cream sales increase so do drowning deaths. But ice cream does not cause you to drown. Although drowning in ice cream sounds like a fun way to go.

    Tomanta on
  • KakodaimonosKakodaimonos Code fondler Helping the 1% get richerRegistered User regular
    edited June 2011
    Feral wrote: »
    I was thinking about making a statistics thread! I love wacky statistics!

    I like the birthday problem. (I know, I used it in an OP on DNA evidence last year.) How many people do you need to have in a room for there to be a 50% probability of any two people having the same birthday?
    23.

    And 99%?
    57.

    Combinatorial probability is always interesting. Do we just want to discuss statistics? Or more advanced stuff? Serial correlation always bites people in the ass when they're investing. LTCM is a classic example.

    A classic investment problem is:

    Which investment is better? A 10% return on capital with 20% variance? Or 50% return on capital with 100% variance?

    Or EGARCH models? Which account for the clustering of volatility. Some of the more interesting research is showing that the market "remembers" negative events in the volatility models much longer than it "remembers" positive events.
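A toy illustration of why the second option can be worse than it looks, reading the quoted "variance" loosely as an annual standard deviation (an assumption; all numbers are made up): over many compounding periods, volatility drags the typical (median) growth rate below the arithmetic mean return.

```python
import math
import random

def median_annual_growth(mean_ret: float, stdev: float,
                         years: int = 30, trials: int = 20_000) -> float:
    """Median annualized growth from compounding lognormal yearly returns.
    mu_log is chosen so the arithmetic mean simple return equals mean_ret;
    stdev is used as the log-return volatility (a modeling choice)."""
    mu_log = math.log(1.0 + mean_ret) - stdev ** 2 / 2.0
    rng = random.Random(0)
    finals = sorted(
        math.exp(sum(rng.gauss(mu_log, stdev) for _ in range(years)))
        for _ in range(trials)
    )
    return finals[trials // 2] ** (1.0 / years) - 1.0

low_vol = median_annual_growth(0.10, 0.20)    # ~ +8%/yr: close to its 10% mean
high_vol = median_annual_growth(0.50, 1.00)   # ~ -9%/yr: the 50% mean is a mirage
```

If the quoted figures really are variances (so the standard deviation is their square root), the drag is milder, but the qualitative point stands: high volatility erodes compounded returns.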

    Kakodaimonos on
  • FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    edited June 2011
    Kakodaimonos wrote: »
    Combinatorial probability is always interesting. Do we just want to discuss statistics? Or more advanced stuff? Serial correlation always bites people in the ass when they're investing. LTCM is a classic example.

    A classic investment problem is:

    Which investment is better? A 10% return on capital with 20% variance? Or 50% return on capital with 100% variance?

    Or EGARCH models? Which account for the clustering of volatility. Some of the more interesting research is showing that the market "remembers" negative events in the volatility models much longer than it "remembers" positive events.

    I would like to understand this, but I am caught on the financial buzzwords. I'm also caught on the use of the word 'variance' to refer to a percentage. Does "variance" in investing mean the same thing as it does in stats?

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • DrezDrez Registered User regular
    edited June 2011
    I'm going to be 100% cliche here and make a lame statistics joke, 20% of which will contain percentages.

    Drez on
    Switch: SW-7690-2320-9238 | Steam/PSN/Xbox: Drezdar
  • emnmnmeemnmnme Registered User regular
    edited June 2011
    I want to know how the right wingers can conjure up one set of statistics to support their claims and the left wingers can conjure up another set of statistics to support their claims. Math is supposed to be impartial.

    emnmnme on
  • BullioBullio Registered User regular
    edited June 2011
    Atomika wrote: »
    Without meaningless and empty statistics, all of our sports analysts would be out of a job.


    "How can you trade Thompson?! He leads the team in 8th-inning doubles in the month of July!"

    "Watch out for Ramirez this season. He's never lost when his team has been up by 21 or more points going into the fourth quarter."


    Also, the use of statistics should have made professional sports drafts worthless, yet they persist.

    "We're picking this shitty kid for QB with our first pick, despite the fact that we already have a quarterback and this kid is really, really shitty, in a really obvious way. But what can we do? He went to Notre Dame!"

    Yes, my brain would hurt a lot less after watching a game if the announcers would stop throwing out so many pointless stats. I don't care what Sabathia's ERA is in the 1st with runners in scoring position during day games in the month of June.
    emnmnme wrote: »
    I want to know how the right wingers can conjure up one set of statistics to support their claims and the left wingers can conjure up another set of statistics to support their claims. Math is supposed to be impartial.

    Depends on the thinktank responsible.

    Bullio on
  • TomantaTomanta Registered User regular
    edited June 2011
    I have a question for stats people.

    I know that a completely random survey will give representative results at, say, 1500 responses or so. That is usually when there is a small number of possible answers.

    But does this hold up when there are hundreds of possible answers to a given question?

    In other words, how much can we trust Nielsen TV ratings / what would their sample size need to be?

    Tomanta on
  • FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    edited June 2011
    Oh, I've got another one.

    Let's say you have a medical test for an exotic disease and you're testing the general population. If the likelihood of a false positive on the test is higher than the prevalence of that disease, then a positive result is more likely to be wrong than it is right.

    In other words, let's say there's a new disease called Fabricatosis. 1% of the people in the world have fabricatosis; there's a blood test with a false positive rate of 5% and a false negative rate of 5%. If we test 10,000 people, we would expect:

    95 true positives
    495 false positives
    5 false negatives
    9,405 true negatives

    There are a couple of ways (that I know of) to get around this.

    1) Don't bother testing random people. Only test people who have symptoms or have been exposed. The prevalence of fabricatosis in the general population might be 1%, but we can presume that the prevalence is actually much higher among the subset of people who show the symptoms of fabricatosis. This is what we do with mononucleosis.

    2) Develop a different test and use the two tests together. If the tests are independent, the false positive rates multiply (5% × 5% = 0.25%). This is what we do for HIV.
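The arithmetic behind the fabricatosis counts is a quick Bayes'-rule exercise (counting the 5% false positives among the 9,900 unaffected people), and it also shows how much a second independent test helps:

```python
prevalence, fp_rate, fn_rate, n = 0.01, 0.05, 0.05, 10_000

sick = n * prevalence          # 100 people actually have fabricatosis
healthy = n - sick             # 9,900 do not
tp = sick * (1 - fn_rate)      # 95 true positives
fn = sick * fn_rate            # 5 false negatives
fp = healthy * fp_rate         # 495 false positives
tn = healthy * (1 - fp_rate)   # 9,405 true negatives

# Positive predictive value: a positive result is right only ~1 time in 6
ppv = tp / (tp + fp)           # ~0.16

# With a second independent test, false-positive rates multiply (5% * 5% = 0.25%)
ppv_two = (tp * (1 - fn_rate)) / (tp * (1 - fn_rate) + healthy * fp_rate ** 2)  # ~0.78
```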

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Apothe0sisApothe0sis Have you ever questioned the nature of your reality? Registered User regular
    edited June 2011
    I think one of the most important things we should do is provide both relative and absolute percentages when dealing with risk, especially in reporting and advertising.

    "Neglecting Vitamin J correlated with a 300% increase in risk of toenail cancer!"

    This is misleading compared to "The risk of toenail cancer in the population of those who neglect Vitamin J within their diet is 0.03% (or 3 in 10,000 <this should also be specified and be noted as per year or per death, however it's measured>) compared to 0.01% for those with sufficiently Vitamin J filled diets."
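With the post's hypothetical Vitamin J numbers, the two framings come out like this (strictly, going from 0.01% to 0.03% is a tripling, i.e. a 200% increase, though headlines round such things freely):

```python
baseline_risk = 0.0001  # 0.01% absolute risk with enough Vitamin J (hypothetical)
exposed_risk = 0.0003   # 0.03% absolute risk when Vitamin J is neglected (hypothetical)

relative_increase = (exposed_risk - baseline_risk) / baseline_risk  # 2.0 -> "200% increase"
absolute_increase = exposed_risk - baseline_risk                    # 0.0002 -> 0.02 percentage points

# "Number needed to harm": people who must neglect Vitamin J for one extra case
nnh = 1 / absolute_increase  # 5,000
```

The relative number makes a scary headline; the absolute numbers tell you whether to care.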

    Apothe0sis on
  • furlionfurlion Riskbreaker Lea MondeRegistered User regular
    edited June 2011
    I am a fan of statistics and using them to convey information in a useful manner. It really pisses me off when people use them to trick other people though. For instance while my wife was pregnant she saw an article in a magazine saying that women who had a mother or sister with preterm delivery were 50% more likely to have a preterm delivery. She was quite upset about it since her sister had two. Upon carefully perusing the article I noticed at the bottom in very very small print it said the incidence went from 2.7% to 4%. So yes, a roughly 50% increase, but that is not the number they should have emphasized. Taking advantage of people's ignorance like that is disgusting.

    furlion on
    Gamertag: KL Retribution
    PSN:Furlion
  • kuhlmeyekuhlmeye Registered User regular
    edited June 2011
    Apothe0sis wrote: »
    I think one of the most important things we should do is provide both relative and absolute percentages when dealing with risk, especially in reporting and advertising.

    "Neglecting Vitamin J correlated with a 300% increase in risk of toenail cancer!"

    This is misleading compared to "The risk of toenail cancer in the population of those who neglect Vitamin J within their diet is 0.03% (or 3 in 10,000 <this should also be specified and be noted as per year or per death, however it's measured>) compared to 0.01% for those with sufficiently Vitamin J filled diets."

    But that way I can't scare people into buying Vitamin J!

    kuhlmeye on
    PSN: the-K-flash
  • Apothe0sisApothe0sis Have you ever questioned the nature of your reality? Registered User regular
    edited June 2011
    furlion wrote: »
    I am a fan of statistics and using them to convey information in a useful manner. It really pisses me off when people use them to trick other people though. For instance while my wife was pregnant she saw an article in a magazine saying that women who had a mother or sister with preterm delivery were 50% more likely to have a preterm delivery. She was quite upset about it since her sister had two. Upon carefully perusing the article I noticed at the bottom in very very small print it said the incidence went from 2.7% to 4%. So yes, a roughly 50% increase, but that is not the number they should have emphasized. Taking advantage of people's ignorance like that is disgusting.

    This is exactly what I'm talking about - without the base rates or absolute percentages, the relative increases mean nothing.

    Apothe0sis on
  • FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    edited June 2011
    furlion wrote: »
    I am a fan of statistics and using them to convey information in a useful manner. It really pisses me off when people use them to trick other people though. For instance while my wife was pregnant she saw an article in a magazine saying that women who had a mother or sister with preterm delivery were 50% more likely to have a preterm delivery. She was quite upset about it since her sister had two. Upon carefully perusing the article I noticed at the bottom in very very small print it said the incidence went from 2.7% to 4%. So yes, a roughly 50% increase, but that is not the number they should have emphasized. Taking advantage of people's ignorance like that is disgusting.

    How dare she put her baby at risk.

    How dare she.

    She should rip out her ovaries and give them to somebody who is fit to be a mother.

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • ShivahnShivahn Unaware of her barrel shifter privilege Western coastal temptressRegistered User, Moderator mod
    edited June 2011
    Feral wrote: »
    There are a couple of ways (that I know of) to get around this.

    1) Don't bother testing random people. Only test people who have symptoms or have been exposed. The prevalence of fabricatosis in the general population might be 1%, but we can presume that the prevalence is actually much higher among the subset of people who show the symptoms of fabricatosis. This is what we do with mononucleosis.

    2) Develop a different test and use the two tests together. This reduces the false positive rate exponentially. This is what we do for HIV.

    You can also, depending on the test, run it twice. Sometimes the false positive rate has to do with the test more than some quirk of whoever's getting it.

    Shivahn on
  • FeralFeral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉRegistered User regular
    edited June 2011
    BTW, I hope this isn't off-topic, but I am selling tiger-repelling rocks and I take Paypal.

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Captain CarrotCaptain Carrot Alexandria, VARegistered User regular
    edited June 2011
    Tiger-repelling rock? How can I lose!

    Captain Carrot on
  • Fuzzy Cumulonimbus CloudFuzzy Cumulonimbus Cloud Registered User regular
    edited June 2011

    Statistical Mechanics let me do science! They are witchcraft and no matter how much we pretend we know it, there are only like 10 dudes who really do, and they are probably witches!

    Fuzzy Cumulonimbus Cloud on
  • DarkPrimusDarkPrimus Registered User regular
    edited June 2011
    At work, they display the secret shopper score ratings on a chart next to the punch clock. The thing is that they start the ratings at 70% and end the ratings at 120%. There is no way for the secret shopper score to go past 100%. If they ended the chart at 100% the difference between the various scores would be all the more apparent.

    DarkPrimus on
  • ShivahnShivahn Unaware of her barrel shifter privilege Western coastal temptressRegistered User, Moderator mod
    edited June 2011
    Feral wrote: »
    BTW, I hope this isn't off-topic, but I am selling tiger-repelling rocks and I take Paypal.

    Can I just mail you hundred dollar bills instead?

    Shivahn on
  • Hahnsoo1Hahnsoo1 Make Ready. We Hunt.Registered User, Moderator, Administrator admin
    edited June 2011
    Feral wrote: »
    Oh, I've got another one.

    Let's say you have a medical test for an exotic disease and you're testing the general population. If the likelihood of a false positive on the test is higher than the prevalence of that disease, then a positive result is more likely to be wrong than it is right.

    In other words, let's say there's a new disease called Fabricatosis. 1% of the people in the world have fabricatosis; there's a blood test with a false positive rate of 5% and a false negative rate of 5%. If we test 10,000 people, we would expect:

    95 true positives
    495 false positives
    5 false negatives
    9,405 true negatives

    There are a couple of ways (that I know of) to get around this.

    1) Don't bother testing random people. Only test people who have symptoms or have been exposed. The prevalence of fabricatosis in the general population might be 1%, but we can presume that the prevalence is actually much higher among the subset of people who show the symptoms of fabricatosis. This is what we do with mononucleosis.

    2) Develop a different test and use the two tests together. If the tests are independent, the false positive rates multiply (5% × 5% = 0.25%). This is what we do for HIV.
    You're missing the one that actually is used in the medical technology market. You fudge the numbers and play up the ones that look favorable while sending attractive-looking representatives to inundate hospital staff with graphs and free food. The next step, of course, is profit.

    In theory, sensitivity and specificity of any given medical test should determine what is used for therapy, but in practice, there are a lot of logistical problems. For example, whether or not a test is available in a treatment setting, the cost of the test (compared to other tests), and the benefit of treatment (why bother doing the test if the treatment is going to be the same?).
    Shivahn wrote: »
    Feral wrote: »
    BTW, I hope this isn't off-topic, but I am selling tiger-repelling rocks and I take Paypal.

    Can I just mail you hundred dollar bills instead?
    But there's this new thing that I just discovered! It's called the internet. Apparently, you don't have to use mail to send him money! Just put all of your personal information right here...

    Hahnsoo1 on
  • ronyaronya Arrrrrf. the ivory tower's basementRegistered User regular
    edited June 2011
    emnmnme wrote: »
    I want to know how the right wingers can conjure up one set of statistics to support their claims and the left wingers can conjure up another set of statistics to support their claims. Math is supposed to be impartial.

    In the absence of plausibly controlled experiments, a lot of statistics as applied to the social sciences rely on theory to fill in the gaps. Theory varies by ideology.

    ronya on
  • ronyaronya Arrrrrf. the ivory tower's basementRegistered User regular
    edited June 2011
    Kakodaimonos wrote: »
    Or EGARCH models? Which account for the clustering of volatility. Some of the more interesting research is showing that the market "remembers" negative events in the volatility models much longer than it "remembers" positive events.

    One hypothesis: disentangling the financial issues caused by unexpected negative shocks (i.e., at least someone out there is going to be overleveraged) is itself costly - paying unexpected opportunity costs is a lot easier than paying unexpected costs.

    ronya on
  • kaleeditykaleedity Sometimes science is more art than science Registered User regular
    edited June 2011
    God I love statistics, but people are so bad at understanding them. So bad. There's even a study out there somewhere (my googling is failing me) that showed that roughly half of doctors in a given sample (sampling was done well iirc! This should be the first thing that is checked in every study!) failed a test on statistics concerning prescription drugs. I'd argue that statistical thinking, or the basic concepts of statistics at least, need to be taught so much earlier in school. Statistics class should not be a class that is nearly reviled.

    For one thing, it'd help if the texts were done a bit better. At least in my second semester college course, there was actually a homework problem assigned that involved comparing the rate of different types of paint drying. I am not sure if the statisticians were trying to troll students or something, but god man, really?
    Tomanta wrote: »
    I have a question for stats people.

    I know that a completely random survey will give representative results at, say, 1500 responses or so. That is usually when there is a small number of possible answers.

    But does this hold up when there are hundreds of possible answers to a given question?

    In other words, how much can we trust Nielsen TV ratings / what would their sample size need to be?
    You can use information like that, but it's going to depend a lot on how you define language and what you're trying to figure out. You're not going to be able to prove specific points this way; you'd have to be going for something fairly general. Most likely, if you're trying to make decisions on information like this, you should be figuring out a better way to sample, or a better methodology of questioning that can provide more discrete information. Or you can figure out ways to mine that data, but that's again going to heavily depend on what you have and what you're looking for. I don't know too much about Nielsen ratings other than extremely basic stuff so it's hard to comment.

    kaleedity on
  • edited June 2011
    This content has been removed.

  • Anarchy Rules!Anarchy Rules! Registered User regular
    edited June 2011
    DarkPrimus wrote: »
    At work, they display the secret shopper score ratings on a chart next to the punch clock. The thing is that they start the ratings at 70% and end the ratings at 120%. There is no way for the secret shopper score to go past 100%. If they ended the chart at 100% the difference between the various scores would be all the more apparent.

    Presumably that's one instance where percentages above 100 are actually useful? I.e. a 120% score being above the level expected of the employee/shop?

    Whilst I'm here can I recommend Bad Science by Ben Goldacre and Irrationality by Stuart Sutherland. Both cover statistics (amongst other things) and their abuses.

    Anarchy Rules! on
  • SimpsonsParadoxSimpsonsParadox Registered User regular
    edited June 2011
    ElJeffe wrote: »
    So let's discuss statistics! How are they misused?

    Misused stats usually fall into two categories. The first is when a critical piece of information that explains the statistic is being withheld, like in the absolute vs relative percentage discussion above, but the number isn't exactly wrong just misleading. The other is bad data from the start. If I poll 5 Americans on their views on gay marriage, it doesn't matter how perfect my study and questions were because the data is fundamentally screwy (not a big enough sample if I'm trying to estimate for all of America).
    ElJeffe wrote: »
    And can we make it better? What will it take to teach people that reducing something to a single number almost never provides any useful information, because almost any statement containing a single number can be made true by clever use of metrics?

    I'd love for there to be a basic "Surviving in the Real World" class that you have to take during your Senior year at HS that covers subjects like basic statistics, basic borrowing/debt, etc.

    But that's boring/unfunny stuff! I'm currently studying for a degree in Statistics and the sheer amount of hilarity I run into (from professors, other students, and, I'll admit, my own ineptitude) is surprising.

    For example: in one of my stat courses we had an end-of-the-year project to gather up data, come up with a hypothesis about that data, and run a bunch of tests. Its real purpose was to make sure we had a basic understanding of how to run the tests and knew how to get the computer to run them for us. My group - being composed of two Economics majors and myself - decided to gather up a three-year stretch of housing prices from both before and after the recession and create a model to estimate that end housing price based off of a bunch of factors (gross lot size, number of beds and baths, etc). We'd then compare them to see if, say, a bathroom was worth more or less pre-recession or post-recession.

    After a few days of pain and tears as we manually trudged through a list of central Florida housing prices and manually transcribed 10ish variables from a few different sources into a big Excel worksheet, our data was finally collected. One Sunday afternoon spent yelling at a computer later, we finally had our model! ...and promptly found that it suggested that adding a bathroom to your house, both pre- and post-recession, took 200,000 dollars off of your house's sale price. In fact, our model predicted that the only things that would actually increase the price of your house were gross lot size and heated lot size. In non-housing terms, our several days of work ended up stating that a house the exact size of its lot, without any bedrooms or bathrooms, was the best thing since sliced bread, at least price-wise.

    Needless to say, we didn't end up suggesting that anyone go out and use that model to predict housing prices.

    In a different stats course, our professor randomly asked us to raise our hands if we had smoked marijuana recently. No one actually raised their hands, which led him to confess that he thought that *all* Americans smoked weed whenever they were depressed or stressed or anything. He was certainly an odd one.


    -edit-
    kaleedity wrote: »
    For one thing, it'd help if the texts were done a bit better. At least in my second semester college course, there was actually a homework problem assigned that involved comparing the rate of different types of paint drying. I am not sure if the statisticians were trying to troll students or something, but god man, really?

    I have no doubt in my mind that my intro stats book was trolling students. After finding the probability that Chad guesses Mary's birthday perfectly on the first date, the very next question asked what the probability of a second date would be if Mary found out that Chad already knew her birthday and was just trying to impress her.

    SimpsonsParadox on
  • ronyaronya Arrrrrf. the ivory tower's basementRegistered User regular
    edited June 2011
    SimpsonsParadox wrote: »
    After a few days of pain and tears as we manually trudged through a list of central Florida housing prices and manually transcribed 10ish variables from a few different sources into a big Excel worksheet, our data was finally collected. One Sunday afternoon spent yelling at a computer later, we finally had our model! ...and promptly found that it suggested that adding a bathroom to your house, both pre- and post-recession, took 200,000 dollars off of your house's sale price. In fact, our model predicted that the only things that would actually increase the price of your house were gross lot size and heated lot size. In non-housing terms, our several days of work ended up stating that a house the exact size of its lot, without any bedrooms or bathrooms, was the best thing since sliced bread, at least price-wise.

    A house with fewer rooms has larger rooms, and it seems quite plausible that differing layouts like these are correlated with other factors (geography, proximity to desirable features?) that may be correlated with house price.

    i.e., omitted variable?

    just a guess, anyway
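ronya's omitted-variable guess is easy to demonstrate with synthetic data (all numbers invented): give an unobserved "location desirability" factor a big positive effect on price and a negative correlation with bathroom count, and the bathroom coefficient flips sign when location is left out of the regression.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
location = rng.normal(size=n)                              # unobserved desirability (synthetic)
baths = 2.0 - location + rng.normal(0, 0.5, n)             # desirable areas here have fewer baths
price = 10 * baths + 100 * location + rng.normal(0, 5, n)  # true bathroom effect: +10

# Short regression (location omitted): the bathroom "effect" comes out around -70
short_slope = np.polyfit(baths, price, 1)[0]

# Long regression (location included) recovers roughly +10 and +100
X = np.column_stack([np.ones(n), baths, location])
_, bath_coef, loc_coef = np.linalg.lstsq(X, price, rcond=None)[0]
```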

    ronya on
  • Skoal CatSkoal Cat Registered User regular
    edited June 2011
    To be fair, the central Florida housing market never made much sense in the first place

    Skoal Cat on
  • SimpsonsParadoxSimpsonsParadox Registered User regular
    edited June 2011
    ronya wrote: »
    A house with fewer rooms has larger rooms, and it seems quite plausible that differing layouts like these are rrelated with other factors (geography, proximity to desirable features?) that may be correlated with house price.

    i.e., omitted variable?

    just a guess, anyway

    In the end, we came to the same conclusion you did: we just didn't have enough variables, mostly because we were running off of publicly available data and just didn't have that many variables to work with. The story, thankfully for my grade, has a happy ending. Our model presentation was pretty funny due to the silly numbers we kept getting and, perhaps more importantly, we only took a few minutes after a long string of twenty- to thirty-minute presentations by people who were trying far too hard to be funny. We all ended up getting an A on the project.

    SimpsonsParadox on
  • devCharlesdevCharles Gainesville, FLRegistered User regular
    edited June 2011
    The crazy thing about sample size is that you can get statistically significant data from a pretty tiny sample size as long as it fits a few rules (granted, the rules are pretty intense.) Once you get over 30, you're in very significant territory. As long as you follow the basic rules, you'll be surprised the kind of trends that can exist.

    The rules according to my statistics textbook:

    It must be a random sample. This means that every potential thing that could be under observation has an equal chance of being picked.

    It must have parameters. Basically, you have to gather the data from the relevant population. If you're judging how people will vote and you ask 12-year-olds, it won't be that useful.

    Variables have to be defined properly. If you use a nebulous question in a poll, like asking whether people "like taxes," obviously everyone will say no. If you ask about taxes on a specific group, that group will say no and the rest will be more mixed. Misleading variable definitions are usually the "damned lies" part of statistics. You can see really dishonest stuff here.

    Now, if you go with a sample size of 3, your confidence interval (the range you can be, say, 95% confident further observations will fall into) could be pretty wide, with a high margin of error, but you could also get a distribution that is surprisingly close to a normal distribution, depending on what you're testing for.

    Sometimes I'll see people on these forums dismiss anecdotes with a "lol sample size," but, actually, that's very rarely the problem. If they have 3 or more people, they actually have enough to make an inference. What you should say is "lol non-random sample."
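    To make the margin-of-error point concrete, here's a quick sketch (my own toy numbers, not from the thread) of how the half-width of a confidence interval for a mean shrinks with sample size, using the normal approximation:

    ```python
    import math
    from statistics import NormalDist

    def margin_of_error(std_dev, n, confidence=0.95):
        """Half-width of a CI for a mean, normal approximation."""
        # z-score for the desired two-sided confidence level (~1.96 for 95%)
        z = NormalDist().inv_cdf(0.5 + confidence / 2)
        return z * std_dev / math.sqrt(n)

    for n in (3, 30, 300):
        print(n, round(margin_of_error(10, n), 2))
    # 3 -> 11.32, 30 -> 3.58, 300 -> 1.13
    ```

    (Strictly, with n as small as 3 you'd use a t-interval, which is even wider: t ≈ 4.30 at 2 degrees of freedom versus z ≈ 1.96, which rather supports the "tiny samples are shaky" intuition.)
    
    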

    devCharles on
    Xbox Live: Hero Protag
    SteamID: devCharles
    twitter: https://twitter.com/charlesewise
  • KakodaimonosKakodaimonos Code fondler Helping the 1% get richerRegistered User regular
    edited June 2011
    Feral wrote: »
    Combinatorial probability is always interesting. Do we just want to discuss statistics? Or more advanced stuff? Serial correlation always bites people in the ass when they're investing. LTCM is a classic example.

    A classic investment problem is:

    Which investment is better? A 10% return on capital with 20% variance? Or 50% return on capital with 100% variance?

    Or EGARCH models, which account for the clustering of volatility? Some of the more interesting research shows that the market "remembers" negative events in the volatility models much longer than it "remembers" positive ones.

    I would like to understand this, but I am caught on the financial buzzwords. I'm also caught on the use of the word 'variance' to refer to a percentage. Does "variance" in investing mean the same thing as it does in stats?

    Yeah, variance is the average squared distance from the mean. It's usually calculated off of the expected return, where the expected return is your first moment/population mean.

    Volatility is simply the price variation in an instrument/stock. There are a lot of different ways to calculate it, and different types. Usually, unless someone says otherwise, it's the annualized (yearly) volatility.

    Kakodaimonos on
  • SanderJKSanderJK Crocodylus Pontifex Sinterklasicus Madrid, 3000 ADRegistered User regular
    edited June 2011
    Apothe0sis wrote: »
    furlion wrote: »
    I am a fan of statistics and using them to convey information in a useful manner. It really pisses me off when people use them to trick other people though. For instance while my wife was pregnant she saw an article in a magazine saying that women who had a mother or sister with preterm delivery were 50% more likely to have a preterm delivery. She was quite upset about it since her sister had two. Upon carefully perusing the article I noticed at the bottom in very very small print it said the incidence went from 2.7% to 4%. So yes, a roughly 50% increase, but that is not the number they should have emphasized. Taking advantage of people's ignorance like that is disgusting.

    This is exactly what I'm talking about - without the base rates or absolute percentages, the relative increases mean nothing.

    One of the better things my newspaper has in its overall pretty mediocre science section is a quarter-page examining whatever the "hyped science news of the week" was. Statistics comes up a lot, and this week it was a claim that sleeping on your left side during the latter months of pregnancy reduces stillbirth. The small-sample study produced a rate decrease from 0.37% to 0.19%. It apparently made headlines around the world as "Sleeping on left side halves stillbirth!"

    Edit: Most of the mentions of it imply a clear causal relation, where the researchers found only a correlation.
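    The two headline numbers from this post and furlion's can be run through the same arithmetic, which shows how a big relative change can coexist with a tiny absolute one:

    ```python
    def changes(base, new):
        """Return (absolute change in percentage points, relative change)."""
        absolute = new - base
        relative = (new - base) / base
        return absolute, relative

    # Preterm delivery: 2.7% -> 4% ("50% more likely")
    print(changes(0.027, 0.040))   # ~ (+0.013, i.e. 1.3 points; ~+48% relative)

    # Stillbirth: 0.37% -> 0.19% ("halves stillbirth")
    print(changes(0.0037, 0.0019)) # ~ (-0.0018, i.e. 0.18 points; ~-49% relative)
    ```

    Same "roughly 50%" relative change in both stories; the absolute risk moved by less than 1.5 percentage points in one and less than a fifth of a point in the other.
    
    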

    SanderJK on
    Steam: SanderJK Origin: SanderJK
  • furlionfurlion Riskbreaker Lea MondeRegistered User regular
    edited June 2011
    Oh man that is another huge one. Correlation and causation. Even if people have no idea what statistics is, how to do it, or what it means, they should just remember that correlation does not equal causation. I bet I honestly say that once every few days to my wife or friends over something or other on the news or in the newspaper.

    furlion on
    Gamertag: KL Retribution
    PSN:Furlion
  • themightypuckthemightypuck MontanaRegistered User regular
    edited June 2011
    Much of statistics is about how to make causal inferences from correlations. Also, statistics people sometimes forget about the real world. A classic example is the claim that super-effective stock pickers are doing no better than random: out of 100K financial advisers, one or two should have a ten-year hot streak by pure luck. This is true, but from the perspective of an investor, the lucky picker who beat the market ten years running is still a better bet than a random pick of the pickers.
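    The back-of-envelope version of the lucky-streak argument: if each adviser independently beats the market in a given year with probability p, the expected number of ten-year streaks among 100,000 advisers is just 100,000 × p^10. The values of p below are my assumptions (the post gives none), and the answer is extremely sensitive to them:

    ```python
    def expected_streaks(advisers, years, p):
        """Expected number of advisers with a `years`-long hot streak,
        assuming independent per-year beat probability p."""
        return advisers * p ** years

    for p in (0.5, 0.35, 0.3):
        print(p, expected_streaks(100_000, 10, p))
    # p=0.5  -> ~98 lucky streaks; p around 0.3-0.35 -> "one or two"
    ```

    So "one or two" lucky streaks implicitly assumes beating the market in any given year is well under a coin flip, which is plausible once fees and costs are counted.
    
    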

    themightypuck on
    “Reject your sense of injury and the injury itself disappears.”
    ― Marcus Aurelius

    Path of Exile: themightypuck