
How "future-proof" are your views? (thread comes with a bonus quiz!)

135 Posts

  • surrealitycheck lonely, but not unloved dreaming of faulty keys and latches Registered User regular
    edited January 2011
    If we wind up modeling the first AIs as some sort of neural network based on the human brain, I'd imagine the very first would be less intelligent than humans. If we're basically trying to mimic humans, the first few stabs will probably be imperfect. I'd imagine them as probably on the level of a fairly slow human being.

    The difference is an AI has a level of introspection simply not available to an ordinary human. For example, we have the capacity to perform very hard second-degree differentiation - we do it every time we catch something - but this facility is completely blocked off to us. We have no idea how it works, we simply know that it does. We can perform incredibly complicated feats of facial recognition, but have no idea how it works. We have memory limitations that a computer simply would not have - we are terribly terribly slow at learning things like pictures and information, an AI would not be.

    We are jury-rigged; a thin shell of cerebral cortex over an awful lot of very old systems that have been deemed by evolution to be too important to tamper with. An AI would be so superior in many ways simply by dint of the capacity for self-reflection. I don't think many people realise how crippled human minds are by the necessities of evolution.
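The hidden "second-degree differentiation" being described can be made concrete: estimate velocity and acceleration from a few position samples by finite differences, then extrapolate. A toy sketch, purely illustrative (nobody claims the brain literally runs code like this):

```python
# Toy numerical version of the implicit calculus involved in catching:
# estimate velocity and acceleration from three position samples by
# finite differences, then extrapolate the trajectory.

def predict_position(samples, dt, t_ahead):
    """Extrapolate position t_ahead seconds past the last sample."""
    y0, y1, y2 = samples[-3], samples[-2], samples[-1]
    v = (y2 - y0) / (2 * dt)        # central-difference velocity (middle sample)
    a = (y2 - 2 * y1 + y0) / dt**2  # second derivative: the acceleration
    tau = t_ahead + dt              # time from the middle sample to the target
    return y1 + v * tau + 0.5 * a * tau**2

# A thrown ball: y(t) = 10t - 4.9t^2, sampled every 0.1 s
dt = 0.1
ys = [10 * (i * dt) - 4.9 * (i * dt) ** 2 for i in range(5)]
print(predict_position(ys, dt, 0.5))  # ~5.031, the true height at t = 0.9 s
```

For a purely quadratic trajectory like this, the central differences recover the exact velocity and acceleration, so the extrapolation is exact up to floating-point error.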

    surrealitycheck on
  • Jephery Registered User regular
    edited January 2011
    We are jury-rigged; a thin shell of cerebral cortex over an awful lot of very old systems that have been deemed by evolution to be too important to tamper with. An AI would be so superior in many ways simply by dint of the capacity for self-reflection. I don't think many people realise how crippled human minds are by the necessities of evolution.

But what if whatever sentient AI we end up making cannot self-reflect in such a way? Maybe whatever AI we create will be similar to us in that it doesn't know what's going on deeper inside its mind. When it calls on its arithmetic function to calculate two plus two, or its calculus function to solve a second-order derivative, will it know how those functions work, or will it simply use them? Like how a human instinctively knows how to suckle a teat, or how to catch a ball after being trained to.

    Edit: Sorry, I completely altered my post to get the point of it across better.

    Jephery on
    "Orkses never lose a battle. If we win we win, if we die we die fightin so it don't count. If we runs for it we don't die neither, cos we can come back for annuver go, see!".
  • electricitylikesme Registered User regular
    edited January 2011
    Jephery wrote: »
    We are jury-rigged; a thin shell of cerebral cortex over an awful lot of very old systems that have been deemed by evolution to be too important to tamper with. An AI would be so superior in many ways simply by dint of the capacity for self-reflection. I don't think many people realise how crippled human minds are by the necessities of evolution.

Something I've been thinking about is: why wouldn't an AI be limited in the same way, or a similar way, as a human is in this capacity?

What you described with a human catching something is similar to a function call. When one function calls another, it doesn't know the inner workings of the function being called; all it knows is the input it's giving and the output it gets back. In the case of the human, the ball-catching function has been created through constant training such that it becomes instinct; in the case of an AI, the function may be written by a programmer, or maybe it's also been formulated through training. In this way, programs are abstracted in a way very similar to the human mind. This is obviously a result of the way people have designed high-level languages, in order to make them intuitive to the average person trying to program, but any AI will likely be written in a high-level language.

    An AI's mind doesn't need to know how its lower level algorithms work in order to do its job. If the AI's role is to be a player in a baseball game, the sentient AI shouldn't need such a high level of self reflection. Why should it care about the inner workings of its ball catching algorithm when it should be focusing on the overall ball game?
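The function-call picture above can be sketched directly. Everything here (the function names, the look-ahead constant) is invented for illustration:

```python
# Sketch of the abstraction described above: the high-level "mind" calls
# a ball-catching routine as an opaque black box. Whether that routine
# was hand-coded or trained, the caller sees only inputs and outputs.

def catch_ball(ball_position, ball_velocity):
    """Opaque low-level skill: returns where to put the glove.

    The caller has no visibility into these internals; they could be
    closed-form physics (as here), a lookup table, or a trained network.
    """
    x, y = ball_position
    vx, vy = ball_velocity
    t = 1.0  # look-ahead time: an internal detail the caller never sees
    return (x + vx * t, y + vy * t - 0.5 * 9.8 * t**2)

def play_baseball(ball_position, ball_velocity):
    # The "sentient" layer reasons about the game, not about calculus.
    x, y = catch_ball(ball_position, ball_velocity)
    return f"moving glove to ({x:.2f}, {y:.2f})"

print(play_baseball((0.0, 2.0), (3.0, 4.0)))  # moving glove to (3.00, 1.10)
```

Swapping the body of `catch_ball` for a trained model would leave `play_baseball` unchanged, which is the point: the caller never needs the internals.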

The main advantages an AI has over a human are that all the information regarding its method of operation is easily accessible and copyable. It has the potential to take a snapshot of its own operating state and dissect that to learn how it works, with no permanent damage to itself.

Getting a complete map of a human brain is substantially more difficult.

    electricitylikesme on
  • valiance Registered User regular
    edited January 2011
    Modern Man wrote: »
    Your assumption seems to be that history and society are inevitably moving in a particular direction. That's a dangerous assumption, and there are a lot of historical examples to show that "progress" isn't really a given.

    A bad economic depression, a serious world war and/or some type of natural disaster might very well change the societal status of homosexuals, women, minorities and so on. We only recently (in a historic sense) eliminated slavery in the Western world, for example, but it's not impossible to see a future where the institution is revived.

    Similarly, women's rights are, in a lot of ways, a luxury that we can afford due to our economic prosperity and lack of real outside threats. But if some sort of societal calamity occurred where law and order broke down significantly, you'd see a serious regression in women's rights (look at places like Somalia and Afghanistan, where women are limited in their freedom due to a number of reasons, including safety).


    As for homosexual rights, it's only due to the decline of the influence of religion that we're seeing this development. Imagine a scenario where Islam becomes the dominant religion in France or Sweden. What do you think would happen to gay rights then?

Weirdly enough, it also works the other way around: increasing women's rights in third-world nations is causally linked with increasing economic prosperity. Of course, historically, the US and most OECD countries followed the order you suggested: gain economic prosperity, then implement women's rights; but I see no reason it would have to be that way. I don't think Somalia and Afghanistan are too poor to implement women's rights; they're simply too socially backwards. Saudi Arabia is plenty rich and also plenty backwards.

    valiance on
  • electricitylikesme Registered User regular
    edited January 2011
    valiance wrote: »
    Modern Man wrote: »
    Your assumption seems to be that history and society are inevitably moving in a particular direction. That's a dangerous assumption, and there are a lot of historical examples to show that "progress" isn't really a given.

    A bad economic depression, a serious world war and/or some type of natural disaster might very well change the societal status of homosexuals, women, minorities and so on. We only recently (in a historic sense) eliminated slavery in the Western world, for example, but it's not impossible to see a future where the institution is revived.

    Similarly, women's rights are, in a lot of ways, a luxury that we can afford due to our economic prosperity and lack of real outside threats. But if some sort of societal calamity occurred where law and order broke down significantly, you'd see a serious regression in women's rights (look at places like Somalia and Afghanistan, where women are limited in their freedom due to a number of reasons, including safety).


    As for homosexual rights, it's only due to the decline of the influence of religion that we're seeing this development. Imagine a scenario where Islam becomes the dominant religion in France or Sweden. What do you think would happen to gay rights then?

Weirdly enough, it also works the other way around: increasing women's rights in third-world nations is causally linked with increasing economic prosperity. Of course, historically, the US and most OECD countries followed the order you suggested: gain economic prosperity, then implement women's rights; but I see no reason it would have to be that way. I don't think Somalia and Afghanistan are too poor to implement women's rights; they're simply too socially backwards. Saudi Arabia is plenty rich and also plenty backwards.

    I agree with everything you just said. It also is agreeable with logic.

    electricitylikesme on
  • ElJeffe Not actually a mod. Roaming the streets, waving his gun around. Moderator, ClubPA mod
    edited January 2011
    If we wind up modeling the first AIs as some sort of neural network based on the human brain, I'd imagine the very first would be less intelligent than humans. If we're basically trying to mimic humans, the first few stabs will probably be imperfect. I'd imagine them as probably on the level of a fairly slow human being.

    The difference is an AI has a level of introspection simply not available to an ordinary human. For example, we have the capacity to perform very hard second-degree differentiation - we do it every time we catch something - but this facility is completely blocked off to us. We have no idea how it works, we simply know that it does. We can perform incredibly complicated feats of facial recognition, but have no idea how it works. We have memory limitations that a computer simply would not have - we are terribly terribly slow at learning things like pictures and information, an AI would not be.

No, we don't. A four-year-old is not performing linear analysis on a system of differential equations every time he catches a ball. He is just using his past experiences to judge where the ball will be. Which is why you get better at catching a ball with practice, and not by learning to solve differential equations: you're training your muscles to recognize situations more accurately and react accordingly.
    We are jury-rigged; a thin shell of cerebral cortex over an awful lot of very old systems that have been deemed by evolution to be too important to tamper with. An AI would be so superior in many ways simply by dint of the capacity for self-reflection. I don't think many people realise how crippled human minds are by the necessities of evolution.

    Given that we don't fully understand how the brain works, I don't take it as a given that the first AI routines will just hack off all the unnecessary bits and suddenly be streamlined. And given the complexity of such a brain, I'm also not convinced that they will be all that awesome at self-reflection. I mean, isn't it impossible for a system to completely and accurately model or analyze itself? Sounds sort of like trying to perfectly model every particle in the universe - it's logically impossible.

    Sure, the AI could analyze subsets of its own thinking, but then so can particularly introspective humans.

    ElJeffe on
I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
  • electricitylikesme Registered User regular
    edited January 2011
ElJeffe: conversely, though, it's as I said: an AI can snapshot its own mental states, given enough memory. Humans don't have that capability. While I'm sure at some point we may have the technology, at this point in time it seems more likely that AIs will be able to do it before we will.
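Snapshotting program state is already mundane for ordinary software; a minimal sketch using Python's standard pickle module (the `Agent` class and its contents are invented for illustration):

```python
import pickle

# Minimal sketch of "snapshot your own mental state": serialize internal
# state, keep running, and later inspect the frozen copy without
# disturbing the live one.

class Agent:
    def __init__(self):
        self.memories = []

    def learn(self, fact):
        self.memories.append(fact)

    def snapshot(self):
        # Serialize a frozen, independent copy of the current state;
        # the live agent keeps changing without affecting it.
        return pickle.dumps(self.__dict__)

    @staticmethod
    def inspect(blob):
        # Reconstruct the frozen state for dissection, with no
        # "permanent damage" to the running agent.
        return pickle.loads(blob)

agent = Agent()
agent.learn("balls fall in parabolas")
frozen = agent.snapshot()
agent.learn("faces have two eyes")      # live state keeps changing

past_self = Agent.inspect(frozen)
print(past_self["memories"])            # ['balls fall in parabolas']
```

The snapshot is a value like any other, so the agent can diff past and present states, copy them, or hand them to another process, which is the asymmetry with brains being pointed out here.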

    electricitylikesme on
  • surrealitycheck lonely, but not unloved dreaming of faulty keys and latches Registered User regular
    edited January 2011
    He is just using his past experiences to judge where the ball will be.

    "just"? And with one bound Jack was free. What do you think this process consists of?
    Given that we don't fully understand how the brain works, I don't take it as a given that the first AI routines will just hack off all the unnecessary bits and suddenly be streamlined. And given the complexity of such a brain, I'm also not convinced that they will be all that awesome at self-reflection. I mean, isn't it impossible for a system to completely and accurately model or analyze itself? Sounds sort of like trying to perfectly model every particle in the universe - it's logically impossible.

    That's why it isn't even remotely necessary for it to completely analyse itself for anything I suggested.
    Sure, the AI could analyze subsets of its own thinking, but then so can particularly introspective humans.

    No, not to the degree I am suggesting. Why do you think facial recognition software is so bad?

It's because we don't know how our own facial recognition works, and no thinker, no matter how introspective, can find out how they are doing it through introspection.

    surrealitycheck on
  • tinwhiskers Registered User regular
    edited January 2011
    Sure, the AI could analyze subsets of its own thinking, but then so can particularly introspective humans.

    No, not to the degree I am suggesting. Why do you think facial recognition software is so bad?

It's because we don't know how our own facial recognition works, and no thinker, no matter how introspective, can find out how they are doing it through introspection.

Honestly, this entire argument is pointless. Every argument against the AI is refuted with 'NO NO NO, that's not how my AI will work'.

That there's no answer to 'how do you program a machine to do something you don't know how to do' is just ignored.

    tinwhiskers on
  • ElJeffe Not actually a mod. Roaming the streets, waving his gun around. Moderator, ClubPA mod
    edited January 2011
That there's no answer to 'how do you program a machine to do something you don't know how to do' is just ignored.

    Duh, the program will teach itself. Because it's super-smart!

    ElJeffe on
  • Modern Man Registered User regular
    edited January 2011
    emnmnme wrote: »
As far as Americans go, the Europeans are always one step ahead of us socially, right? Europeans had democratic tenets before us, they got rid of slavery before us, had cross-continental trade agreements before us, let openly gay servicemen be a part of the military before us, etc., etc.

    If the Swedes start marching in favor of polygamists and their rights tomorrow, we'll have that push for equal rights in America twenty years following.
    No. Not unless you're limiting your comment to a handful of countries in northern and western Europe. In places like Russia and Serbia, gay rights marches get attacked and the police stand by and do nothing. Assuming those marches can get a permit in the first place. And in terms of democracy and freedom, even in places like Germany and Italy, those things are a few generations old, at most.
Spain was a dictatorship until the 1970s. There are only a relative handful of European nations with a longer history of democracy than the US.

    And, you know, the Holocaust.

    Modern Man on
    Aetian Jupiter - 41 Gunslinger - The Old Republic
    Rigorous Scholarship

  • ElJeffe Not actually a mod. Roaming the streets, waving his gun around. Moderator, ClubPA mod
    edited January 2011
    He is just using his past experiences to judge where the ball will be.

    "just"? And with one bound Jack was free. What do you think this process consists of?

I'm not saying it's not a really awesome and handy ability. But the manner in which it occurs bears no resemblance at all to what you were describing. People do not have a built-in ability to solve complex mathematical equations. Rather, they have a built-in ability to categorize and sort information, then bring it back up at will. Well, sort of at will, except when they have brain farts.

Actually, is anyone here familiar with sorting and searching algorithms, and how they compare with how the human brain operates? How long would it take a modern algorithm to sort or search through the volume of data our brain processes and retains, compared to how fast we can do it?

    ElJeffe on
  • Ego Registered User regular
    edited January 2011
I'd actually really like to see polygamy become legitimized, if only because it would force a reevaluation of how and why society provides benefits to people who marry.

    Ego on
    Erik
  • Pony Registered User regular
    edited January 2011
    I started to do this quiz but gave up when I saw the poor way it was written. I can't stand poorly written polls or quizzes full of loaded questions.

    Pony on
  • Ego Registered User regular
    edited January 2011
    Pony wrote: »
    I started to do this quiz but gave up when I saw the poor way it was written. I can't stand poorly written polls or quizzes full of loaded questions.

    http://www.youtube.com/watch?v=3gMcZic1d4U

It's been on the forum plenty of times, but it's always worth another watch when loaded questions come up. ;)

    Ego on
  • DevoutlyApathetic Registered User regular
    edited January 2011
    Pony wrote: »
    I started to do this quiz but gave up when I saw the poor way it was written. I can't stand poorly written polls or quizzes full of loaded questions.
    Agreed.

    Would have been better if an AI wrote it....

    DevoutlyApathetic on
    Nod. Get treat. PSN: Quippish
  • Alfred J. Kwak is it because you were insulted when I insulted your hair? Registered User regular
    edited January 2011
I didn't agree with any of these views, which means I'm 100% future-proof (or at least until 2100, if the quiz result is to be believed). I for one welcome a world with genetically altered super furries caught up in a ménage à trois with sentient homosexual robots.

    Alfred J. Kwak on
  • Moridin Registered User regular
    edited January 2011
    ElJeffe wrote: »
    He is just using his past experiences to judge where the ball will be.

    "just"? And with one bound Jack was free. What do you think this process consists of?

I'm not saying it's not a really awesome and handy ability. But the manner in which it occurs bears no resemblance at all to what you were describing. People do not have a built-in ability to solve complex mathematical equations. Rather, they have a built-in ability to categorize and sort information, then bring it back up at will. Well, sort of at will, except when they have brain farts.

Actually, is anyone here familiar with sorting and searching algorithms, and how they compare with how the human brain operates? How long would it take a modern algorithm to sort or search through the volume of data our brain processes and retains, compared to how fast we can do it?

    This question is actually incredibly complicated.

    Our brains are absolutely horrible at doing certain types of calculations, while they're quite good at doing others.

Facial recognition, for example: we haven't yet gotten computers that can do facial pattern matching as well as we can.

    Sorting a list? Computers have been better at that than us for decades.

    The point is that we are incredibly specialized in what we're good at. And we aren't even aware of how good we are at the things we're good at. We're subconsciously good at things. But we also have built in fuzziness, which is how any halfway decent superintelligent AI will probably be built.

Again, consider facial recognition. Probably everyone here has looked at a face and thought, "I know I've seen that person before, but I can't quite place where." Our face-sorting algorithm relies on fuzziness, and something like a confidence value for our mental calculations. This is basically how holographic memory storage works. Our neurons are hardwired to each other in an incredibly convoluted way. We don't do things exactly at all. Our brain just kind of chews on an image for a fraction of a second and informs the internal monologue of its decision.

It's why we're absolutely terrible at arithmetic. It takes a hell of a lot of training to be able to do multi-digit multiplication in our heads. The people who are preternaturally good at it are often synesthetes, and aren't even aware of how they arrive at answers. Their brain just spits it out, similar to how our brains simply recognize faces or don't.

So, to actually answer your question: computers are incredibly fast at a certain type of search/sort, but our brains are the current best examples of fuzzy pattern-matching. And even the computers that can do pattern matching are incredibly specialized. You could, say, train a neural network on what a blue beach ball looks like and feed it tons of images that may or may not contain blue beach balls. The network would probably beat a human in finding candidate images the fastest, but it wouldn't be nearly as accurate as a human... yet.
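The "confidence value" idea can be sketched as nearest-neighbour matching over feature vectors, with a middle band of scores that produces exactly the "seen it, can't place it" answer. All vectors and thresholds here are invented:

```python
import math

# Toy sketch of fuzzy recognition with a confidence value: compare a
# query "face" (feature vector) against stored ones and report a match,
# a vague familiarity, or a blank, depending on similarity.

KNOWN_FACES = {
    "alice": [0.9, 0.1, 0.4],
    "bob":   [0.2, 0.8, 0.7],
}

def similarity(a, b):
    """Cosine similarity, in [0, 1] for non-negative feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recognize(query, sure=0.99, familiar=0.9):
    name, score = max(
        ((n, similarity(query, v)) for n, v in KNOWN_FACES.items()),
        key=lambda item: item[1],
    )
    if score >= sure:
        return f"that's {name}"
    if score >= familiar:
        return "I know I've seen that face, but I can't place it"
    return "never seen them"

print(recognize([0.88, 0.12, 0.42]))  # that's alice
print(recognize([0.5, 0.5, 0.5]))     # I know I've seen that face, but I can't place it
```

The middle band between the two thresholds is the "confidence value" doing its work: the best match is strong enough to feel familiar, but not strong enough to name.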

    Moridin on
  • surrealitycheck lonely, but not unloved dreaming of faulty keys and latches Registered User regular
    edited January 2011
    I'm not saying it's not a really awesome and handy ability. But the manner in which it occurs bears no resemblance at all to what you were describing. People do not have a built-in ability to solve complex mathematical equations. Rather, they have a built in ability to categorize and sort information, then bring it back up at will. Well, sort of at-will, except when they have brain farts.

    You don't learn to catch balls by learning to categorise and sort information. Seriously. Think about what catching a ball entails.

    I think you're confusing what your brain consciously does with what its modules do in a more specialised sense.
it's why we're absolutely terrible at arithmetic. It takes a hell of a lot of training to be able to do multi-digit multiplication in our heads. The people who are preternaturally good at it are often synesthetes, and aren't even aware of how they arrive at answers. Their brain just spits it out, similar to how our brains simply recognize faces or don't.

    Indeed, synaesthetes cheat and make sure the ultra-slow higher regions of the brain don't do the processing ;)

    There's some very interesting research suggesting you can, to a degree, train people to be synaesthetic and improve their performance in number-matching tasks.
Honestly, this entire argument is pointless. Every argument against the AI is refuted with 'NO NO NO, that's not how my AI will work'.

    If somebody says "x will fail because y is how it must work", then the response "that is not necessarily true" is more than adequate.

    surrealitycheck on
  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    edited January 2011
    zerg rush wrote: »
    CasedOut wrote: »
I am going to agree with Mike here. It sounds like your anti-AI spiel is more about you and less about AI. I mean, you're basically saying that you would fuck the world over if you had the ability to, and that's, uh, pretty bad, man. I mean, thinking about it for 5 minutes like you say: if I had 10 times the processing speed/power, I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

I wouldn't intentionally fuck over the world, and you wouldn't either. I think that most people are decent, and the first thing any person with the power of an AI would do is cure cancer, cure AIDS, cure world hunger, end war and disease, etc. The problem comes when you realize the utter power disparity between whoever the AI is and the rest of the human race. For our own good, I couldn't stop myself from taking away the power of Somali warlords and Afghani drug/militia leaders. If you knew antivaxxers were wrong and you had the power to do something about it, could you stop yourself from saving their children against their will? What about overhauling the transportation system of cars that harms millions and kills hundreds of thousands every year? Hell, a strong AI should have the power to figure out how to build a perfect society where everyone agrees with the government and people are always happy. If that doesn't terrify you, then it should.

Even with the best intentions in the world, a lot of people are going to suddenly find themselves in hell. And that's with a benevolent human. God knows what would happen if you gave the average human infinite power.

    Even if we grant your assumption upon assumption (for example, that a strong AI is necessarily of godlike intelligence) I fail to see how the capacity to solve a whole range of problems inevitably leads to people being in hell?

    What's the timeline here?

    A) AIDS is solved. Hooray!
    B) No one goes hungry. Wooo!
    C) Transportation issues solved - no more fossil fuels, commute times are tiny, efficiency and comfort for all!
    D) Vaccines are given to everyone. Diseases are no more. Anti-Vaxxers are mad! OH NOES.
    E) EVERYONE IS VIVISECTED INTO A GIANT ORGANISM WITH ELECTRODES CONSTANTLY STIMULATING THE ORGASM CENTRES. BUT ONE MAN IS NOT HAPPY WITH THIS WORLD. HE BREAKS FREE AND DOES KUNG FU. THIS IS TERRIBLE.

    Apothe0sis on
  • Apothe0sis Have you ever questioned the nature of your reality? Registered User regular
    edited January 2011
    japan wrote: »
    Feral wrote: »
    Hear that feral?

    Stop being so backwards.

    Good point though.

    I oppose poly marriage, actually.

    Which amuses me to no end, because I end up being the polyamorous guy arguing against poly marriage, versus a bunch of monogamous people arguing for it.

    :rotate:

I always have a weird time with those conversations, for two reasons: the first being that nobody can ever decide how poly marriage would actually work (since it isn't possible to just analogise to an existing social structure, as with gay marriage), and the second being that it doesn't seem to be a thing that anyone actually wants, or at least cares enough about to do something about.


He's called Robert Heinlein; he wrote on the subject occasionally. There are also some informative Wikipedia articles. There was a book called Stranger in a Strange Land; it rather shocked a lot of the world some decades back.
    I don't think Stranger in a Strange Land even begins to address the issue of how polyamory would work on either a societal or personal level*.

* Obviously it DOES work fine for some people. But the book is pretty much just "Everyone starts having sex as a religious sacrament. Mike makes this more philosophically profound. His friends have all sorts of sex as well." It more or less just assumes a society in which polyamory** is the emotional norm and runs with it.

    ** Or at least a lack of monogamy.

    Apothe0sis on
  • Shanadeus Registered User regular
    edited January 2011
    Pony wrote: »
    I started to do this quiz but gave up when I saw the poor way it was written. I can't stand poorly written polls or quizzes full of loaded questions.

    Quizzes are surprisingly hard to make :v:

    Shanadeus on
  • Moridin Registered User regular
    edited January 2011
    Apothe0sis wrote: »
    zerg rush wrote: »
    CasedOut wrote: »
I am going to agree with Mike here. It sounds like your anti-AI spiel is more about you and less about AI. I mean, you're basically saying that you would fuck the world over if you had the ability to, and that's, uh, pretty bad, man. I mean, thinking about it for 5 minutes like you say: if I had 10 times the processing speed/power, I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

I wouldn't intentionally fuck over the world, and you wouldn't either. I think that most people are decent, and the first thing any person with the power of an AI would do is cure cancer, cure AIDS, cure world hunger, end war and disease, etc. The problem comes when you realize the utter power disparity between whoever the AI is and the rest of the human race. For our own good, I couldn't stop myself from taking away the power of Somali warlords and Afghani drug/militia leaders. If you knew antivaxxers were wrong and you had the power to do something about it, could you stop yourself from saving their children against their will? What about overhauling the transportation system of cars that harms millions and kills hundreds of thousands every year? Hell, a strong AI should have the power to figure out how to build a perfect society where everyone agrees with the government and people are always happy. If that doesn't terrify you, then it should.

Even with the best intentions in the world, a lot of people are going to suddenly find themselves in hell. And that's with a benevolent human. God knows what would happen if you gave the average human infinite power.

    Even if we grant your assumption upon assumption (for example, that a strong AI is necessarily of godlike intelligence) I fail to see how the capacity to solve a whole range of problems inevitably leads to people being in hell?

    What's the timeline here?

    A) AIDS is solved. Hooray!
    B) No one goes hungry. Wooo!
    C) Transportation issues solved - no more fossil fuels, commute times are tiny, efficiency and comfort for all!
    D) Vaccines are given to everyone. Diseases are no more. Anti-Vaxxers are mad! OH NOES.
    E) EVERYONE IS VIVISECTED INTO A GIANT ORGANISM WITH ELECTRODES CONSTANTLY STIMULATING THE ORGASM CENTRES. BUT ONE MAN IS NOT HAPPY WITH THIS WORLD. HE BREAKS FREE AND DOES KUNG FU. THIS IS TERRIBLE.

    Also, it's silly to think that we wouldn't have figured out how to augment our own intelligences by the time that A-D are done.

    The only thing really worth talking about with advanced technology is the inevitable class disparity (see, uh, any cyberpunk fiction in the last 30 years I guess?). But I have a feeling that this thread is getting further and further away from the stated original topic.

    Moridin on
  • ElJeffe Not actually a mod. Roaming the streets, waving his gun around. Moderator, ClubPA mod
    edited January 2011
    I'm not saying it's not a really awesome and handy ability. But the manner in which it occurs bears no resemblance at all to what you were describing. People do not have a built-in ability to solve complex mathematical equations. Rather, they have a built in ability to categorize and sort information, then bring it back up at will. Well, sort of at-will, except when they have brain farts.

    You don't learn to catch balls by learning to categorise and sort information. Seriously. Think about what catching a ball entails.

    "Okay, a ball is coming at me. It looks like it's going pretty far; based on the last few balls that looked to go this far, I should probably run out to this general region. Okay, now I'm out here. The ball looks like it's going to land a few feet to my left, I should move over there. Okay, okay, it's getting close... maybe a bit further back, okay, based on the last few times this happened I should probably put my hand up here, getting closer, okay, maybe here, closer, closer, clooooosssseeeerrrrrr, okaywaitadjustHERE."

    *catch*

    Basically an ongoing sequence of fine tuning based on prior experience, largely happening subconsciously and relying on muscle memory. What do you think catching a ball entails?
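That fine-tuning loop is easy to caricature in code (purely illustrative; the gain and step count are made-up numbers, not a model of motor control):

```python
# A caricature of catch-by-correction: never solve the trajectory,
# just repeatedly shrink the gap between hand and ball.
def catch(landing_estimate, hand=0.0, gain=0.5, steps=20):
    """Each step, move the hand a fixed fraction of the remaining error."""
    for _ in range(steps):
        error = landing_estimate - hand  # where the ball looks like it'll land
        hand += gain * error             # "okay, maybe a bit further..."
    return hand

catch(10.0)  # homes in on 10.0 without ever writing down the physics
```

No equation of motion anywhere, just repeated "adjust based on what I see now" - which is the whole point.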

    ElJeffe on
    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set!If you'd like to see and support my submission, follow this link.
    RobmanRobman Registered User regular
    edited January 2011
    You know, societies that didn't have perfect sexual and income equality sent their young men to die in battle so they wouldn't complain about the rich old dudes taking on a third or fourth teenage bride

    Robman on
    electricitylikesmeelectricitylikesme Registered User regular
    edited January 2011
    Robman wrote: »
    You know, societies that didn't have perfect sexual and income equality sent their young men to die in battle so they wouldn't complain about the rich old dudes taking on a third or fourth teenage bride

    Nowadays they just dump them out in the desert.

    electricitylikesme on
    Modern ManModern Man Registered User regular
    edited January 2011
    Robman wrote: »
    You know, societies that didn't have perfect sexual and income equality sent their young men to die in battle so they wouldn't complain about the rich old dudes taking on a third or fourth teenage bride
    The prevalence of polygamy even in places where it is legal is pretty overstated. I've seen figures that polygamy occurs in less than 5% of marriages in Saudi Arabia. In poorer societies, it's just not terribly economically viable for the average guy.

    Modern Man on
    Aetian Jupiter - 41 Gunslinger - The Old Republic
    Rigorous Scholarship

    surrealitychecksurrealitycheck lonely, but not unloved dreaming of faulty keys and latchesRegistered User regular
    edited January 2011
    ElJeffe, that's a grotesque simplification that manages to describe a process without actually analysing how any of the elements work. You're using an analysis which is way too high level. Read up on how some of the modules in the visual system work, and then you might begin to catch my drift.

    surrealitycheck on
    Styrofoam SammichStyrofoam Sammich WANT. normal (not weird)Registered User regular
    edited January 2011
    ElJeffe wrote: »
    What do you think catching a ball entails?

    *insert gay joke here*

    Styrofoam Sammich on
    ElJeffeElJeffe Not actually a mod. Roaming the streets, waving his gun around.Moderator, ClubPA mod
    edited January 2011
    ElJeffe, that's a grotesque simplification that manages to describe a process without actually analysing how any of the elements work. You're using an analysis which is way too high level. Read up on how some of the modules in the visual system work, and then you might begin to catch my drift.

    Tell you what, rather than asking how I think things work and then answering with psh, why don't you just explain to me how catching a ball involves a person solving high-level math problems in their head? Bonus points if you can explain why being able to actually solve high-level math problems in your head doesn't actually make you any better at catching a ball.

    I mean, it's really wonderful that you're so smart that you don't need to actually explain what you're talking about, but please humor this poor, grossly-simplifying soul.

    ElJeffe on
    MrMisterMrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in".Registered User regular
    edited January 2011
    ElJeffe wrote: »
    Tell you what, rather than asking how I think things work and then answering with psh, why don't you just explain to me how catching a ball involves a person solving high-level math problems in their head? Bonus points if you can explain why being able to actually solve high-level math problems in your head doesn't actually make you any better at catching a ball.

    I mean, it's really wonderful that you're so smart that you don't need to actually explain what you're talking about, but please humor this poor, grossly-simplifying soul.

    What surreality is emphasizing is that cognitive psychology holds that there are functional structures in the mind which almost certainly are performing higher mathematics when you do something like catch a ball. Heck, there's something in our brains constantly doing triangulation as part of our ability to perceive depth.

    Of course "a structure in the mind" is not the same thing as "me." My sub-processes can be great at calculus without me being even passable. This is why you are right to insist that what a person does when they catch a ball involves no calculus whatsoever--because what a person does is distinct from what his functional structures are doing in order to make that happen.

    Surreality, however, seems to be saying that an AI would have access to its own functional systems in a way that we do not: that it would be able to look under the hood and see the oodles of math that go into an intuitive judgment, of, say, depth. I am unsure, however, why we should assume this to be the case, especially if the only way we are able to construct AIs is to explicitly model them off of human cognition.
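For a concrete example of the kind of math being attributed to our sub-processes: binocular depth perception can be idealized as pinhole-camera triangulation, where depth falls out of the disparity between the two images (a textbook simplification, not a claim about actual neural wiring):

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Triangulated depth for an idealized stereo pair: Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# A feature seen 20 px apart between views, "eyes" 0.06 m apart, f = 800 px:
stereo_depth(800, 0.06, 20)  # -> 2.4 (meters)
```

You judge that distance instantly and effortlessly, yet you have no conscious access to anything resembling this division.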

    MrMister on
    tbloxhamtbloxham Registered User regular
    edited January 2011
    CasedOut wrote: »
    zerg rush wrote: »
    CasedOut wrote: »
    I am going to agree with mike here. It sounds like your anti-AI spiel is more about you and less about AI. I mean, you basically are saying that you would fuck the world over if you had the ability to, and that's uh pretty bad man. I mean, thinking about it for 5 minutes like you say, if I had 10 times the processing speed/power I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

    I wouldn't intentionally fuck over the world, and you wouldn't either. I think that most people are decent, and the first thing any person with the power of an AI would do is cure cancer, cure AIDS, cure world hunger, end war and disease, etc. The problem comes when you realize the utter power disparity between whoever the AI is and the rest of the human race, and the temptation to use that power for our own good. I couldn't stop myself from taking away the power of Somali warlords and Afghani drug/militia leaders. If you knew antivaxxers were wrong and you had the power to do something about it, could you stop yourself from saving their children against their will? What about overhauling the transportation system of cars that harms millions and kills hundreds of thousands every year? Hell, a strong AI should have the power to figure out how to build a perfect society where everyone agrees with the government and people are always happy. If that doesn't terrify you, then it should.

    Even with the best intentions in the world, a lot of people are going to suddenly find themself in hell. And that's with a benevolent human. God knows what would happen if you gave the average human infinite power.

    See again, total fucking power trip man. Live and let live, I say; let the anti-vaxxers make their own decisions, even if they are wrong. I would not be a controlling dick of an AI. I personally don't believe AI would be either; they would leave us to our own devices as long as it didn't hurt them. They have no reason to get involved and control us.

    Also, if the AI is truly a strong, caring and adaptive intelligence smarter than anyone else could possibly be who never makes mistakes then it should be in charge. If AI god truly existed, and was always right, then I'd have no problem with him telling us what to do. Because if AI was god he would know if telling us 'have a democracy and make your own decisions' was the right call.

    tbloxham on
    "That is cool" - Abraham Lincoln
    Modern ManModern Man Registered User regular
    edited January 2011
    tbloxham wrote: »
    Also, if the AI is truly a strong, caring and adaptive intelligence smarter than anyone else could possibly be who never makes mistakes then it should be in charge. If AI god truly existed, and was always right, then I'd have no problem with him telling us what to do. Because if AI was god he would know if telling us 'have a democracy and make your own decisions' was the right call.
    This idea has been around since Plato's (?) concept of philosopher-kings ruling over the unenlightened and found a religious home in the idea of the divine right of kings to rule the rest of us plebs.

    It's still as bad an idea as it always has been. Painting it up in scientific gloss doesn't change that.

    Modern Man on
    Styrofoam SammichStyrofoam Sammich WANT. normal (not weird)Registered User regular
    edited January 2011
    Modern Man wrote: »
    tbloxham wrote: »
    Also, if the AI is truly a strong, caring and adaptive intelligence smarter than anyone else could possibly be who never makes mistakes then it should be in charge. If AI god truly existed, and was always right, then I'd have no problem with him telling us what to do. Because if AI was god he would know if telling us 'have a democracy and make your own decisions' was the right call.
    This idea has been around since Plato's (?) concept of philosopher-kings ruling over the unenlightened and found a religious home in the idea of the divine right of kings to rule the rest of us plebs.

    It's still as bad an idea as it always has been. Painting it up in scientific gloss doesn't change that.

    Well, absolute rule isn't bad in and of itself so much as it's bad because ultimately you have to give that power to a person who is still only human, with all the weaknesses that go along with it.

    Styrofoam Sammich on
    RobmanRobman Registered User regular
    edited January 2011
    Electing a politician != giving an AI supreme control of our lives

    Robman on
    ElJeffeElJeffe Not actually a mod. Roaming the streets, waving his gun around.Moderator, ClubPA mod
    edited January 2011
    MrMister wrote: »
    ElJeffe wrote: »
    Tell you what, rather than asking how I think things work and then answering with psh, why don't you just explain to me how catching a ball involves a person solving high-level math problems in their head? Bonus points if you can explain why being able to actually solve high-level math problems in your head doesn't actually make you any better at catching a ball.

    I mean, it's really wonderful that you're so smart that you don't need to actually explain what you're talking about, but please humor this poor, grossly-simplifying soul.

    What surreality is emphasizing is that cognitive psychology holds that there are functional structures in the mind which almost certainly are performing higher mathematics when you do something like catch a ball. Heck, there's something in our brains constantly doing triangulation as part of our ability to perceive depth.

    Of course "a structure in the mind" is not the same thing as "me." My sub-processes can be great at calculus without me being even passable. This is why you are right to insist that what a person does when they catch a ball involves no calculus whatsoever--because what a person does is distinct from what his functional structures are doing in order to make that happen.

    Surreality, however, seems to be saying that an AI would have access to its own functional systems in a way that we do not: that it would be able to look under the hood and see the oodles of math that go into an intuitive judgment, of, say, depth. I am unsure, however, why we should assume this to be the case, especially if the only way we are able to construct AIs is to explicitly model them off of human cognition.

    Okay, all that kind of makes sense (and, incidentally, I apologize to surreality for my snark; I have an entire lobe in my brain dedicated to being an asshole).

    I wonder, then, how well we could integrate a human-modeled neural-network sort of thing with a more traditional mathematically-based computing structure. Instead of having the "brain" determine the midpoint of a line via some complex intuition-based approximation handled by a dedicated structure, just throw out "((x1+x2)/2, (y1+y2)/2)" and there you go.

    I suspect there are reasons why that would be problematic, and this is why I'm somewhat suspicious of the viability of early systems. Seems we'd be effectively using complex mathematical systems to model a complicated structure used to calculate rough approximations to very simple math problems. A little like having your PC run a simulation of a modern supercomputer emulating an NES. Thing is, it seems in trying to make your computer into a human-like intelligence, you're throwing a lot of resources at making it stupid. And while it seems counter-intuitive, I've yet to see any reason to suppose it wouldn't also be necessary.
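A toy contrast of the two approaches (hypothetical, just to make the "throwing resources at making it stupid" point vivid): one line of arithmetic versus an iterative guess-and-refine loop that converges on the same midpoint without ever using the formula.

```python
import math

def midpoint_exact(p1, p2):
    # The direct route: ((x1 + x2) / 2, (y1 + y2) / 2).
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

def midpoint_approx(p1, p2, steps=100, gain=0.25):
    # Stand-in for an "intuition module": nudge a guess along the segment
    # until it sits equally far from both endpoints. Assumes p1 != p2.
    gx, gy = p1
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm  # unit vector from p1 toward p2
    for _ in range(steps):
        err = math.hypot(gx - p1[0], gy - p1[1]) - math.hypot(gx - p2[0], gy - p2[1])
        gx -= gain * err * ux  # closer to p1 -> err < 0 -> step toward p2
        gy -= gain * err * uy
    return (gx, gy)

midpoint_exact((0, 0), (4, 2))   # (2.0, 1.0)
midpoint_approx((0, 0), (4, 2))  # ~(2.0, 1.0), after 100 corrections
```

Both land in the same place; one costs a single addition and division per coordinate, the other a hundred iterations with a couple of square roots each.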

    ElJeffe on
    ElJeffeElJeffe Not actually a mod. Roaming the streets, waving his gun around.Moderator, ClubPA mod
    edited January 2011
    Modern Man wrote: »
    tbloxham wrote: »
    Also, if the AI is truly a strong, caring and adaptive intelligence smarter than anyone else could possibly be who never makes mistakes then it should be in charge. If AI god truly existed, and was always right, then I'd have no problem with him telling us what to do. Because if AI was god he would know if telling us 'have a democracy and make your own decisions' was the right call.
    This idea has been around since Plato's (?) concept of philosopher-kings ruling over the unenlightened and found a religious home in the idea of the divine right of kings to rule the rest of us plebs.

    It's still as bad an idea as it always has been. Painting it up in scientific gloss doesn't change that.

    Well, absolute rule isn't bad in and of itself so much as it's bad because ultimately you have to give that power to a person who is still only human, with all the weaknesses that go along with it.

    Right. Being supremely ruled by ostensibly well-meaning dictators doesn't work because a single person cannot manage anything beyond a small village with any degree of success; there's simply too much to manage. We address this by increasing the number of representatives per person, so that we get closer to every man having a say, except this introduces all the problems inherent in a modern democracy.

    If the benevolent dictator was effectively omniscient, the problem of human limitations goes away, and we (in theory) wind up with a system in which society benefits everyone to the greatest degree possible. Right now, people vote based on what they think will result in what they want. The imperfection lies in politicians being liars, or inept, or getting caught up in politics at the expensive of governance, or people not really knowing what they want. Just because someone thinks they want a tax cut, for example, doesn't mean they would actually be happier with a world in which that tax cut was realized. Because at the end of the day, he probably doesn't want the tax cut so much as he wants extra time and more money to spend and so on, and he believes the tax cut to be the best road to that end. Add to that the fact that what he's really doing is voting for the guy who says he will push for a tax cut, when said guy could have no intention of doing that at all, and you wind up with a clunky Rube Goldberg system of governance (which, at the end of the day, is still the best we've come up with).

    In theory, a system of omniscient AI routines could factor in the base-level wants, the desires for more time and money and whatnot, and implement a system based on that, fine-tuning it in real-time as data comes in regarding the efficacy of current set-up. The AI could thus create a system in which the most people were the happiest given a certain list of "inalienable rights" and whatnot. We could basically grant ourselves the best possible end result of what we strive for with modern democracy.

    In practice, there are probably about 2.4 zillion reasons why this would fail. But in principle, it sounds kinda cool.

    ElJeffe on
    mythagomythago Registered User regular
    edited January 2011
    Modern Man wrote: »
    The prevalence of polygamy even in places where it is legal is pretty overstated. I've seen figures that polygamy occurs in less than 5% of marriages in Saudi Arabia. In poorer societies, it's just not terribly economically viable for the average guy.

    Much like slavery in the pre-Civil War states where slavery was legal. Even though most whites supported the institution and believed it was proper, most didn't own slaves; they couldn't afford to. Same with polygyny; even if a man has the right to another wife, he still has obligations to his current wife - and will to any future wife - and may not be able to afford it.

    Heinlein was, unsurprisingly, not very good at explaining implementation of his ideas other than "they just work and everybody is happy". Line marriage is a fine idea when a) you don't have to work out the details and b) you like the idea of being able to marry a 20-year-old when you're three times her age. Not so much in real life, though.

    mythago on
    Three lines of plaintext:
    obsolete signature form
    replaced by JPEGs.
    ShanadeusShanadeus Registered User regular
    edited January 2011
    mythago wrote: »
    Modern Man wrote: »
    The prevalence of polygamy even in places where it is legal is pretty overstated. I've seen figures that polygamy occurs in less than 5% of marriages in Saudi Arabia. In poorer societies, it's just not terribly economically viable for the average guy.

    Much like slavery in the pre-Civil War states where slavery was legal. Even though most whites supported the institution and believed it was proper, most didn't own slaves; they couldn't afford to. Same with polygyny; even if a man has the right to another wife, he still has obligations to his current wife - and will to any future wife - and may not be able to afford it.

    Heinlein was, unsurprisingly, not very good at explaining implementation of his ideas other than "they just work and everybody is happy". Line marriage is a fine idea when a) you don't have to work out the details and b) you like the idea of being able to marry a 20-year-old when you're three times her age. Not so much in real life, though.

    The idea behind line marriages is pretty interesting, though. You're taking the notion of a family that you feel loyalty to and solidarity with, and extending it so that it ends up applying to a huge number of people (potentially, everyone in a country might be interconnected in some way or another through line marriages, though that might be improbable), to achieve a more connected country, because you end up marrying two people instead of one.

    The Neanderthal trilogy by J. Sawyer covers a form of line marriage, where you marry a person of the same sex as well as a person of the opposite sex, which I found interesting.

    Shanadeus on
    jclastjclast Registered User regular
    edited January 2011
    What's with people not understanding how poly marriages would work in modern society when we have examples of how they work in modern society? They might not be for you (or legal), but fundamentalist Mormons make poly marriage work all the time.

    Yes, it's a sub-set of poly marriage (polygamy instead of generic group marriage), but it's in modern society.

    jclast on