
How "future-proof" are your views? (thread comes with a bonus quiz!)


  • Shivahn, Unaware of her barrel shifter privilege, Western coastal temptress (Registered User, Moderator)
    edited January 2011
    JihadJesus wrote: »
    That is one vague and....not very useful survey. The 'changing a nature of a man' question? WTH does that even mean?

    It also told me I hated gays when I finished the survey, even though in the survey I said that I in fact do not hate gays.

    That question reads "1. Homsexuals shouldn`t be able to marry each other"

    It's got sort of weird wording and I answered it "wrong" the first time because I skipped the "n't" and thought it was asking if gay marriage was ok.

  • Shanadeus (Registered User)
    edited January 2011
    Shivahn wrote: »
    JihadJesus wrote: »
    That is one vague and....not very useful survey. The 'changing a nature of a man' question? WTH does that even mean?

    It also told me I hated gays when I finished the survey, even though in the survey I said that I in fact do not hate gays.

    That question reads "1. Homsexuals shouldn`t be able to marry each other"

    It's got sort of weird wording and I answered it "wrong" the first time because I skipped the "n't" and thought it was asking if gay marriage was ok.

    It might also be how I set up the points, so don't worry.

  • Feral, MEMETICHARIZARD, interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ (Registered User)
    edited January 2011
    Shivahn, why do you hate gay people?

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Shivahn, Unaware of her barrel shifter privilege, Western coastal temptress (Registered User, Moderator)
    edited January 2011
    Because it's unnatural! Two men, lovingly looking at each other in the shower. The froth of body wash melting away as one approaches the other. "Hey," he says, his glistening body like a Greek statue. "You're looking pretty good" his partner replies, gazing lovingly into his eyes while he strokes his rock hard member.

    I, uh.

    It's unnatural!

  • jakobagger, LO THY DREAD EMPIRE CHAOS IS RESTORED (Registered User)
    edited January 2011
    chiasaur11 wrote: »
    Changing the nature of a man? Is that like... a pacemaker? Dental fillings? Breast implants?

    I think it's making a deal for immortality with Ravel Puzzlewell.

    But do you *KNOW* it?

  • CommunistCow, Abstract Metal Thingy (Registered User)
    edited January 2011
    I am adamantly against flying cars marrying people. :wink:

    Also, the question on animals and marriage is odd. In theory I would be fine with a person marrying a dolphin if dolphins somehow became, or were found to be, as smart and communicative as humans. However, I don't imagine any animal becoming that smart anytime soon.

    No, I am not really communist. Yes, it is weird that I use this name.
  • emnmnme (Registered User)
    edited January 2011
    As far as Americans go, the Europeans are always one step ahead of us socially, right? Europeans had democratic tenets before us, they got rid of slavery before us, had cross-continental trade agreements before us, let openly gay servicemen be a part of the military before us, etc.

    If the Swedes start marching in favor of polygamists and their rights tomorrow, we'll have that push for equal rights in America twenty years later.

  • Styrofoam Sammich, WANT. normal (not weird) (Registered User)
    edited January 2011
    Well, slavery wasn't very profitable in most of Europe either, which helped get rid of it, and democracy, I guess, depends on where you start the timeline. Some democracy existed prior to the discovery of North America, but modern democracy was in major part a first on America's part, though inspired by the Enlightenment.

  • Honk, Honk is this poster. (Registered User, __BANNED USERS)
    edited January 2011
    I was gonna vote no on the animals question but then I thought maybe space aliens would be considered animals and I had to consider their feelings and hopes.

    PSN: Honkalot
  • Delta Assault (Registered User)
    edited January 2011
    Superman's no animal.

  • SageinaRage (Registered User)
    edited January 2011
    What I find most interesting is the assumption that animals and AIs, even if they become super-intelligent, would understand the concept of marriage, find it attractive, and would want to marry a human.

    Does the OP think that romantic love is a natural consequence of sentient intelligence? I find that assumption EXTREMELY dubious. I mean, let's say you had two sheep which were as intelligent as a human. Would they even want to marry each other?

  • Feral, MEMETICHARIZARD, interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ (Registered User)
    edited January 2011
    I'm pretty sure my cat thinks she's my wife already.

    ...

    It's not like that, you sick fucks.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • Shivahn, Unaware of her barrel shifter privilege, Western coastal temptress (Registered User, Moderator)
    edited January 2011
    Feral wrote: »
    I'm pretty sure my cat thinks she's my wife already.

    ...

    It's not like that, you sick fucks.

    My cat thinks that I am a littermate.

    It's the only way I can explain his predilection for grooming my beard if I don't shave.

  • Julius, Captain of Serenity on my ship (Registered User)
    edited January 2011
    Modern Man wrote: »
    Julius wrote: »
    Modern Man wrote: »
    Your assumption seems to be that history and society are inevitably moving in a particular direction. That's a dangerous assumption, and there are a lot of historical examples to show that "progress" isn't really a given.

    A bad economic depression, a serious world war and/or some type of natural disaster might very well change the societal status of homosexuals, women, minorities and so on. We only recently (in a historic sense) eliminated slavery in the Western world, for example, but it's not impossible to see a future where the institution is revived.

    Similarly, women's rights are, in a lot of ways, a luxury that we can afford due to our economic prosperity and lack of real outside threats. But if some sort of societal calamity occurred where law and order broke down significantly, you'd see a serious regression in women's rights (look at places like Somalia and Afghanistan, where women are limited in their freedom due to a number of reasons, including safety).

    As for homosexual rights, it's only due to the decline of the influence of religion that we're seeing this development. Imagine a scenario where Islam becomes the dominant religion in France or Sweden. What do you think would happen to gay rights then?

    While I certainly agree that regression and stagnation in history show that clear linear progress isn't a given, I also think it's a bit easy to claim that overall progress isn't real.

    Yes, societies can regress (sometimes even astonishingly fast and far, like Afghanistan), but at the same time I'd say that things have always become better than they used to be. The fact that the car breaks down or we have to backtrack does not mean that we're not getting closer to where we're driving.
    Always? No, not even close. As an example, after the collapse of the Roman Empire, populations in certain parts of Europe shrank by 90%, or more. We didn't recover all of the engineering and scientific knowledge from the Classical Era until sometime in the 19th century. As Feral pointed out upthread, the treatment of homosexuality in many parts of the Muslim world is worse now than it was at the high point of Muslim civilization.

    Progress and civilization are fragile things. You can easily lose 500 years of progress in a few short years.

    And keep in mind that the development of liberal democracy in the West is due to the Renaissance and the Enlightenment, neither of which was ever a sure thing. If Genghis Khan hadn't dropped dead when he did, the Mongols might well have swept into Western Europe and killed the seeds of the modern era in its crib. Look at the difference between Russia's development and that of Western Europe. Now imagine the Golden Horde had conquered all of Europe. Do you think we'd be living in free and democratic societies today?

    My point is that what we have in the West today in terms of freedom, democracy and Enlightenment-era values, is a historical exception to the human condition.

    And your point is absolutely true. I'm just saying that a lot of our accomplishments aren't as easily lost as you claim they are. Granted, that goes more for technological things than social relationships (I don't think we'll lose electricity and farming, but sure, we might start hating gays and other races again).

    It's like that paradigm shift Kuhn was talking about: yeah, sure, we have an initial setback where we sort of know less, but fucking damn, we are getting closer to the truth. We don't believe in four humours anymore and we don't live in wooden shacks, so how is that not progress?

  • Shivahn, Unaware of her barrel shifter privilege, Western coastal temptress (Registered User, Moderator)
    edited January 2011
    What I find most interesting is the assumption that animals and AIs, even if they become super-intelligent, would understand the concept of marriage, find it attractive, and would want to marry a human.

    Does the OP think that romantic love is a natural consequence of sentient intelligence? I find that assumption EXTREMELY dubious. I mean, let's say you had two sheep which were as intelligent as a human. Would they even want to marry each other?

    "Marriage" is pretty clearly a social construct. So if the sheep grew up watching lots of TV and so on they might decide it would be nice to get married? And alien societies might have a functionally similar institution.

    So I mean it's possible, and should certainly be allowed, which is what the OP is getting at, I think. But yeah, it's hardly a consequence of intelligence (or even culture, for that matter), it's just become very ingrained in our society.

  • The Cat (Registered User, ClubPA)
    edited January 2011
    Feral wrote: »
    Modern Man wrote: »
    Your assumption seems to be that history and society are inevitably moving in a particular direction. That's a dangerous assumption, and there are a lot of historical examples to show that "progress" isn't really a given.

    A bad economic depression, a serious world war and/or some type of natural disaster might very well change the societal status of homosexuals, women, minorities and so on. We only recently (in a historic sense) eliminated slavery in the Western world, for example, but it's not impossible to see a future where the institution is revived.

    Similarly, women's rights are, in a lot of ways, a luxury that we can afford due to our economic prosperity and lack of real outside threats. But if some sort of societal calamity occurred where law and order broke down significantly, you'd see a serious regression in women's rights (look at places like Somalia and Afghanistan, where women are limited in their freedom due to a number of reasons, including safety).

    As for homosexual rights, it's only due to the decline of the influence of religion that we're seeing this development. Imagine a scenario where Islam becomes the dominant religion in France or Sweden. What do you think would happen to gay rights then?

    I largely agree with this post, although even Islam's attitude towards gays isn't necessarily one of active violent persecution - it is in most theocratic Islamic countries right now, but historically, at different points in time, it was quietly tolerated. Ultimately, though, that bolsters your point that civil rights do not necessarily progress from less to more linearly over time.

    And you're also right that there tends to be a positive relationship between natural resources and sexual stratification; and an inverse relationship between violent conflict and sexual stratification. It doesn't hold true across all times and places, but it's a recognized pattern.

    Although I agree that it's silly to view the future as an inevitable march towards some kind of Iain M. Banks-ish utopia, I have huge problems with this quote tree. Human rights are not a luxury. They don't naturally require greater societal resources to maintain; our current level of investment in promoting them is because they're new and non-traditional ideas that don't have wide acceptance.

    The thing about women's rights is that restricting them is in no way necessary to pull one's society out of an economic or war-based hole - in fact, many of the traditional restrictions on women's movement in the public sphere, their reproductive freedoms, their education, and their access to finance work directly counter to the goal of developing or repairing a society, or defending one under threat (cases in point: women moving into the workforce during WWII; the very strong links between economic development and women's access to finance). Women's rights get screwed over because of hateful traditions that are often most closely held by those with an interest in tearing down recent societal changes with which they are uncomfortable. It happens parallel to things like wars and depressions, not because of them.

    Same goes for gays, PWD, ethnic minorities, and all those other traditionally marginalised groups. Remarginalising them is not an adaptive response to societal problems and absolutely should not be painted as such. It's a regressive response sourced in fear and the desire for control over others.

  • Feral, MEMETICHARIZARD, interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ (Registered User)
    edited January 2011
    The Cat wrote: »
    Feral wrote: »
    Modern Man wrote: »
    Your assumption seems to be that history and society are inevitably moving in a particular direction. That's a dangerous assumption, and there are a lot of historical examples to show that "progress" isn't really a given.

    A bad economic depression, a serious world war and/or some type of natural disaster might very well change the societal status of homosexuals, women, minorities and so on. We only recently (in a historic sense) eliminated slavery in the Western world, for example, but it's not impossible to see a future where the institution is revived.

    Similarly, women's rights are, in a lot of ways, a luxury that we can afford due to our economic prosperity and lack of real outside threats. But if some sort of societal calamity occurred where law and order broke down significantly, you'd see a serious regression in women's rights (look at places like Somalia and Afghanistan, where women are limited in their freedom due to a number of reasons, including safety).

    As for homosexual rights, it's only due to the decline of the influence of religion that we're seeing this development. Imagine a scenario where Islam becomes the dominant religion in France or Sweden. What do you think would happen to gay rights then?

    I largely agree with this post, although even Islam's attitude towards gays isn't necessarily one of active violent persecution - it is in most theocratic Islamic countries right now, but historically, at different points in time, it was quietly tolerated. Ultimately, though, that bolsters your point that civil rights do not necessarily progress from less to more linearly over time.

    And you're also right that there tends to be a positive relationship between natural resources and sexual stratification; and an inverse relationship between violent conflict and sexual stratification. It doesn't hold true across all times and places, but it's a recognized pattern.

    Although I agree that it's silly to view the future as an inevitable march towards some kind of Iain M. Banks-ish utopia, I have huge problems with this quote tree. Human rights are not a luxury. They don't naturally require greater societal resources to maintain; our current level of investment in promoting them is because they're new and non-traditional ideas that don't have wide acceptance.

    The thing about women's rights is that restricting them is in no way necessary to pull one's society out of an economic or war-based hole - in fact, many of the traditional restrictions on women's movement in the public sphere, their reproductive freedoms, their education, and their access to finance work directly counter to the goal of developing or repairing a society, or defending one under threat (cases in point: women moving into the workforce during WWII; the very strong links between economic development and women's access to finance). Women's rights get screwed over because of hateful traditions that are often most closely held by those with an interest in tearing down recent societal changes with which they are uncomfortable. It happens parallel to things like wars and depressions, not because of them.

    Same goes for gays, PWD, ethnic minorities, and all those other traditionally marginalised groups. Remarginalising them is not an adaptive response to societal problems and absolutely should not be painted as such. It's a regressive response sourced in fear and the desire for control over others.

    Fair enough. I didn't mean for my agreement to lend support to the implication that marginalization is a morally acceptable response to a crisis or lack of resources - only that it is a predictable and unfortunate response.

    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • zerg rush (Registered User)
    edited January 2011
    5. Artificial intelligences should no matter the circumstance not count as a real person


    Hell no.


    This kind of view reflects an anthropomorphic, outdated view of AIs. To say "AIs will never be people" or "AIs don't have souls" is not just an oversimplification; the statement itself is nonsensical.
    It's akin to saying "Orange isn't a number."

    AIs are extremely powerful optimization algorithms, which optimize according to programmed criteria. AIs are already used for plotting efficient routes (Google Maps), winning at chess (Go Deep Blue!), or trading stocks (HFTs). You give an AI an objective and a way to pursue that objective, and it does so. The holy grail of AI is Artificial General Intelligence, where you give it an objective and it is able to work out how to achieve the objective itself. For an AI to 'demand personhood' you'd either have to program that in as a goal, or it would only be using it as a way to achieve another goal you've programmed for it. An AI does not have wants, only objectives. Even if your robo-girlfriend is 'kawaii desu' and has Turing-Passer-3.0 installed, it's still not sentient; it's been programmed that way. They are fundamentally not people.
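
    To make the "objectives, not wants" point concrete, here is a minimal sketch (toy code of my own, with made-up names; it is not from the quiz or from any real system). The "AI" below is nothing but a loop climbing a programmed score; there is nowhere in it for a want to live:

    import random

    def objective(x: float) -> float:
        # The programmed criterion: a peak at x = 3. This function is the
        # only thing the optimizer will ever pursue.
        return -(x - 3.0) ** 2

    def hill_climb(steps: int = 10_000, step_size: float = 0.1) -> float:
        # Dumb local search: propose a nearby point, keep it if it scores higher.
        x = random.uniform(-10.0, 10.0)
        for _ in range(steps):
            candidate = x + random.uniform(-step_size, step_size)
            if objective(candidate) > objective(x):
                x = candidate
        return x

    print(hill_climb())  # converges near 3.0; the objective is met, nothing was "wanted"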

    And you don't just get to layer on 'complexity' and 'emergence' and pray that human-like sentient life comes out. "Oh, we'll just build a neural network big enough for AI to spontaneously generate," or worse, "We'll put AIs in evolutionary algorithms until they're sentient, just like humans." Even if that works (which it won't), all you've built is an AI you cannot understand, cannot modify, and which has been optimized by random chance to be perfect at surviving and reproducing. Do you think it's a good idea to give bacteria a brain the size of a planet, and the ability to modify itself on the fly? This is an extinction scenario. The AI does not hate you, nor does it like you, but you are made of atoms which it can use for something else.
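
    And a matching toy sketch of the "just evolve it" proposal (again my own illustration, made-up names): an evolutionary loop selects for exactly one thing, the fitness score. If sentience ever fell out of such a loop, it would be an unplanned side effect of an optimizer you cannot inspect, which is the point above:

    import random

    def fitness(genome):
        # "Survive and reproduce" reduced to a single score (higher is better).
        return -sum(g * g for g in genome)

    def evolve(pop_size=50, genes=8, generations=200):
        pop = [[random.uniform(-1, 1) for _ in range(genes)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]  # only the top scorers "survive"
            pop = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                   for _ in range(pop_size)]  # offspring = mutated copies of survivors
        return max(pop, key=fitness)

    print(fitness(evolve()))  # climbs toward 0; fitness is all that was selected for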


    The idea of AI sentience is itself an outmoded concept. Just because it's the stuff of science fiction now does not imply plausibility. It will fade into the same realm as rayguns, unitards, and starships firing broadsides at each other from 100 meters. By the time we actually build an AI, people will understand how utterly inhuman AI is.

  • Jephery (Registered User)
    edited January 2011
    I'd like to see an AI modeled on the human brain, where it'd take a decade or two (maybe more) of learning and interacting with humans to actually develop to a point where it's useful to society, but after that we can make infinite copies for an infinite source of workers.

    But we don't even understand the human brain and the nature of human sentience, so there is that hurdle.

    }
    "Orkses never lose a battle. If we win we win, if we die we die fightin so it don't count. If we runs for it we don't die neither, cos we can come back for annuver go, see!".
  • zerg rush (Registered User)
    edited January 2011
    But then you end up with a human-like AI. Even assuming we get lucky and don't get a Genghis Khan or Ted Bundy, that wouldn't be pretty. Hell, I'm a pleasant person, and if you gave me infinite self-modifiable intelligence I can guarantee you that the rest of humanity would be fucked. Just think for 5 minutes about what you would do if you had all the capabilities of a general intelligence AI and a 10-fold increase in computing power.

    Plus, we don't even need sentient AI for workers. We've already got robots for building cars, managing shipping/warehouses, and cleaning our houses. Not surprisingly, they're incredibly alien and act according to algorithms (or instruction sets).

  • Jephery (Registered User)
    edited January 2011
    Yeah, you're right. We might have some luck hard-coding morals into it, but once an AI is human-like, it might just do what humans do from time to time - come up with rationalizations that render those morals useless.

    }
    "Orkses never lose a battle. If we win we win, if we die we die fightin so it don't count. If we runs for it we don't die neither, cos we can come back for annuver go, see!".
  • LoserForHireX, Philosopher King, The Academy (Registered User)
    edited January 2011
    Yeah, I have serious philosophical objections to nearly all of these questions, as well as probing questions for all of them.

    "It is wrong to change the nature of man?" You and i must use the word "nature" differently, because I don't think that's possible. Not that we can't genetically modify humans, but rather than once you have changed the "nature" of a thing, it is no longer that thing. A things "nature" is the set of qualities that makes that thing what it is. It's sort of like asking "Can we ever make round squares?" Of course not, if we made them round, they wouldn't be squares anymore. If we change the nature of man, it isn't a man anymore. Is it wrong to do that, it happens all the time when people die, and negotiably happens when people become utter sociopaths. Both of those are situations where there is something very "human" missing.

    I would have a hard time maintaining that something that is equivalent in intelligence to a human being is an "animal." Though this might be my own particular take on when the term "animal" is appropriate with respect to intelligence. Animal intelligence is intelligence incapable of executing particular internal tasks that human beings (and any other life form of comparable intelligence) can. When we discover other species that are capable of the same intellectual feats we are, I don't think that it will be appropriate to call them "animals."

    Also, woah, personhood for AIs. Legally I understand that human beings are persons at a certain age and mental status. But hell, I don't know what a person is. I'm pretty sure that if there are such things as persons, I'm one of them, but God help me if I could tell you what a person is...I think that you would have a hard time with it as well.

    "The only way to get rid of a temptation is to give into it." - Oscar Wilde
    "We believe in the people and their 'wisdom' as if there was some special secret entrance to knowledge that barred to anyone who had ever learned anything." - Friedrich Nietzsche
  • Raiden333 (Registered User)
    edited January 2011
    About that third quiz question...

    *adopts hag voice*

    What can change the nature of a man?

    There was a steam sig here. It's gone now.
  • electricitylikesme (Registered User)
    edited January 2011
    Also, woah, personhood for AIs. Legally I understand that human beings are persons at a certain age and mental status. But hell, I don't know what a person is. I'm pretty sure that if there are such things as persons, I'm one of them, but God help me if I could tell you what a person is...I think that you would have a hard time with it as well.

    I'd say we should grant personhood to any entity capable of understanding that it would want it - e.g. some of the slaves may not have psychologically been able to consider that they should be treated equally after X years of persecution. But all of them would've been capable of wishing to have the powers and privileges that any slave-master had in society.

    (this is my long-winded way of saying "just because they don't ask for it, doesn't mean they shouldn't have it")

  • electricitylikesme (Registered User)
    edited January 2011
    zerg rush wrote: »
    5. Artificial intelligences should no matter the circumstance not count as a real person
    Hell no.
    This kind of view reflects an anthropomorphic, outdated view of AIs. To say "AIs will never be people" or "AIs don't have souls" is not just an oversimplification; the statement itself is nonsensical.
    It's akin to saying "Orange isn't a number."

    AIs are extremely powerful optimization algorithms, which optimize according to programmed criteria. AIs are already used for plotting efficient routes (Google Maps), winning at chess (Go Deep Blue!), or trading stocks (HFTs). You give an AI an objective and a way to pursue that objective, and it does so. The holy grail of AI is Artificial General Intelligence, where you give it an objective and it is able to work out how to achieve the objective itself. For an AI to 'demand personhood' you'd either have to program that in as a goal, or it would only be using it as a way to achieve another goal you've programmed for it. An AI does not have wants, only objectives. Even if your robo-girlfriend is 'kawaii desu' and has Turing-Passer-3.0 installed, it's still not sentient; it's been programmed that way. They are fundamentally not people.
    There's no possible way you can know this for certain. The entire basis of human behavior can be related back to a set of very basic biological needs. If you're going to rationalize away the need to give sufficiently advanced artificial intelligence any form of rights as a sentient/sapient being, then you've also rationalized away any need to give them to anyone else.

    You've predicated your point on a mishmash of conflicting examples of things we know are not sentient - it's generally agreed that sentience is unlikely to arise from a rule-based chatbot that can pass a Turing test. That has nothing to do with what might emerge from sufficiently large neural networks, regardless of their original goals or constraints.

    What conclusive differences can you say there would be between a self-modifying neural network given the goals of maintaining its own operation and reproducing itself, and any other lifeform on the planet?
    zerg rush wrote: »
    And you don't just get to layer on 'complexity' and 'emergence' and pray that human-like sentient life comes out. "Oh, we'll just build a neural network big enough for AI to spontaneously generate," or worse, "We'll put AIs in evolutionary algorithms until they're sentient, just like humans." Even if that works (which it won't), all you've built is an AI you cannot understand, cannot modify, and which has been optimized by random chance to be perfect at surviving and reproducing. Do you think it's a good idea to give bacteria a brain the size of a planet, and the ability to modify itself on the fly? This is an extinction scenario. The AI does not hate you, nor does it like you, but you are made of atoms which it can use for something else.

    The idea of AI sentience is itself an outmoded concept. Just because it's the stuff of science fiction now does not imply plausibility. It will fade into the same realm as rayguns, unitards, and starships firing broadsides at each other from 100 meters. By the time we actually build an AI, people will understand how utterly inhuman AI is.

    The rest of this is basically xenophobia. All the same arguments apply to, say, contact with intelligent alien life. But then you escape-hatch yourself by saying sentient AI can't happen by... saying sentient AI will happen?

    EDIT:
    zerg rush wrote:
    But then you end up with a human-like AI. Even assuming we get lucky and don't get a Genghis Khan or Ted Bundy, that wouldn't be pretty. Hell, I'm a pleasant person, and if you gave me infinite self-modifiable intelligence I can guarantee you that the rest of humanity would be fucked. Just think for 5 minutes about what you would do if you had all the capabilities of a general intelligence AI and a 10-fold increase in computing power.
    I would say your issues with AI seem to stem from the assumption you make that if you yourself had a slight leg up on someone you'd immediately kill or subjugate them.

    I mean you might as well argue that no one should ever have children because they might shoot a bunch of people in Arizona, and that pet owners do it just to lord power over their lessers.

  • CasedOut (Registered User)
    edited January 2011
    I am going to agree with mike here. Zerg Rush, it sounds like your anti-AI spiel is more about you and less about AI. I mean, you basically are saying that you would fuck the world over if you had the ability to, and that's, uh, pretty bad, man. I mean, thinking about it for 5 minutes like you say, if I had 10 times the processing speed/power I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

  • surrealitycheck, lonely, but not unloved, dreaming of faulty keys and latches (Registered User)
    edited January 2011
    By the time we actually build an AI, people will understand how utterly inhuman AIs that weren't built to be human-like are.

    That seems like the most satisfactory version of what you said.

  • electricitylikesme (Registered User)
    edited January 2011
    By the time we actually build an AI, people will understand how utterly inhuman AIs that weren't built to be human-like are.

    That seems like the most satisfactory version of what you said.

    It's a self-fulfilling criticism though, and also naturally inconsistent. Why would humans - the designers and educators of an AI - create an AI which couldn't relate to humans in some fashion, despite having to operate in a human-designed, built, and occupied world, solving problems probably created for humans, by humans, and involving humans in their solution?

    Most of the problems with algorithmic intelligence come from the fact that humans are involved in whatever we're trying to automate, and humans are difficult for a rule-based computer system to relate to and deal with.

  • tinwhiskers (Registered User)
    edited January 2011
    By the time we actually build an AI, people will understand how utterly inhuman AIs that weren't built to be human-like are.

    That seems like the most satisfactory version of what you said.

    manson.jpg

    is a human

  • surrealitycheck, lonely, but not unloved, dreaming of faulty keys and latches (Registered User)
    edited January 2011
    It's a self-fulfilling criticism though, and also naturally inconsistent. Why would humans - the designers and educators of an AI - create an AI which couldn't relate to humans in some fashion, despite having to operate in a human-designed, built, and occupied world, solving problems probably created for humans, by humans, and involving humans in their solution?

    If you assume theory of mind can be relatively modular, I have no issue with machines that understand humans not being human-like. Remember, he has very strict criteria for what being "human" entails in this context.

  • durandal4532 (Registered User)
    edited January 2011
    By the time we actually build an AI, people will understand how utterly inhuman AIs that weren't built to be human-like are.

    That seems like the most satisfactory version of what you said.

    manson.jpg

    is a human

    Yeah, but he's not that odd. I mean, sociopathic and terrifying, but his eyeballs are eyeballs and his ears hear and so on. "Inhuman" doesn't mean "evil".

    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • Evander, Disappointed Father (Registered User)
    edited January 2011
    Only straight people should be allowed to get gay married.

  • zerg rush (Registered User)
    edited January 2011
    CasedOut wrote: »
    I am going to agree with mike here. It sounds like your anti-AI spiel is more about you and less about AI. I mean, you basically are saying that you would fuck the world over if you had the ability to, and that's, uh, pretty bad, man. I mean, thinking about it for 5 minutes like you say, if I had 10 times the processing speed/power I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

    I wouldn't intentionally fuck over the world, and you wouldn't either. I think that most people are decent, and the first thing any person with the power of an AI would do is cure cancer, cure AIDS, cure world hunger, end war and disease, etc. The problem comes when you realize the utter power disparity between whoever the AI is and the rest of the human race, and what that person could do 'for our own good.' I couldn't stop myself from taking away the power of Somali warlords and Afghan drug/militia leaders. If you knew antivaxxers were wrong and you had the power to do something about it, could you stop yourself from saving their children against their will? What about overhauling the transportation system of cars that harms millions and kills hundreds of thousands every year? Hell, a strong AI should have the power to figure out how to build a perfect society where everyone agrees with the government and people are always happy. If that doesn't terrify you, then it should.

    Even with the best intentions in the world, a lot of people are going to suddenly find themselves in hell. And that's with a benevolent human. God knows what would happen if you gave the average human infinite power.

  • japan (Registered User)
    edited January 2011
    zerg rush wrote: »
    CasedOut wrote: »
    I am going to agree with mike here. It sounds like your anti-AI spiel is more about you and less about AI. I mean, you basically are saying that you would fuck the world over if you had the ability to, and that's, uh, pretty bad, man. I mean, thinking about it for 5 minutes like you say, if I had 10 times the processing speed/power I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

    I wouldn't intentionally fuck over the world, and you wouldn't either. I think that most people are decent, and the first thing any person with the power of an AI would do is cure cancer, cure AIDS, cure world hunger, end war and disease, etc. The problem comes when you realize the utter power disparity between whoever the AI is and the rest of the human race, and what that person could do 'for our own good.' I couldn't stop myself from taking away the power of Somali warlords and Afghan drug/militia leaders. If you knew antivaxxers were wrong and you had the power to do something about it, could you stop yourself from saving their children against their will? What about overhauling the transportation system of cars that harms millions and kills hundreds of thousands every year? Hell, a strong AI should have the power to figure out how to build a perfect society where everyone agrees with the government and people are always happy. If that doesn't terrify you, then it should.

    Even with the best intentions in the world, a lot of people are going to suddenly find themselves in hell. And that's with a benevolent human. God knows what would happen if you gave the average human infinite power.

    when people say "AI" are you picturing some kind of giant robot

  • zerg rush (Registered User)
    edited January 2011
    There's no possible way you can know this for certain. The entire basis of human behavior can be related back to a set of very basic biological needs. If you're going to rationalize away the need to give sufficiently advanced artificial intelligence any form of rights as a sentient/sapient being, then you've also rationalized away any need to give them to anyone else.

    If any AI actually says "I'm smart, I'm a person, give me the same rights you give all other humans!" I will grant them any rights immediately. The problem is that any situation in which we've got an AI requesting sapient rights is one in which we're either going to go extinct or the AI could get what it wants anyway and doesn't need to ask humans for it. I'm running under the assumption that an AI is going to be orders of magnitude more intelligent than humans. It's not like humans try to ask bacteria about being given mono-nucleic rights.

    What conclusive differences can you say there would be between a self-modifying neural network given the goals of maintaining its own operation and reproducing itself, and any other lifeform on the planet?

    I'm saying that there would be no difference. Except you seem to be missing that any other lifeform on the planet would murder us given the ability. Intelligent bacteria wouldn't hate us as they consumed us for food, any more than we hate cows or wheat. But we'd be dead all the same. There's no life form on earth, except maybe bonobos and dogs, that wouldn't consume us and end our civilization if they had unlimited power. And even then, I'm not so sure about the bonobos and dogs.
    The rest of this is basically xenophobia. All the same arguments apply to, say, contact with intelligent alien life.

    Yeah. And contact with alien life is also terrifying. The single most likely moment for humanity to go extinct is immediately after we discover (or are discovered by) aliens. The second most likely moment for humanity to go extinct is immediately after we develop AI. I still think we should look for aliens once we've got the hang of interstellar travel, but nonetheless it's a terrifying and risky prospect.

    I still think we should develop AI, but nonetheless it's a terrifying and risky prospect.

  • zerg rush (Registered User)
    edited January 2011
    japan wrote: »
    when people say "AI" are you picturing some kind of giant robot

    I don't have visions of robots flying around delivering food to sub-Saharan Africa, if that's what you mean.

    However, I assume that anything that's intelligent enough to be called strong AI is going to be capable of solving those problems for us, by finding optimal solutions to them and then putting those solutions into effect.

  • DevoutlyApathetic (Registered User)
    edited January 2011
    I am disappointed in the quiz's use of the pejorative term "animal" to describe non-human biological intelligences. This kind of hate speech is simply unacceptable.

    For a quiz designed to question our unacknowledged biases, it needed to take a good long thoughtful look in the mirror.
    zerg rush wrote: »
    japan wrote: »
    when people say "AI" are you picturing some kind of giant robot

    I don't have visions of robots flying around delivering food to sub-Saharan Africa, if that's what you mean.

    However, I assume that anything that's intelligent enough to be called strong AI is going to be capable of solving those problems for us, by finding optimal solutions to them and then putting those solutions into effect.

    Directly relevant: SMBC.

    Nod. Get treat. PSN: Quippish
  • CasedOut (Registered User)
    edited January 2011
    zerg rush wrote: »
    CasedOut wrote: »
    I am going to agree with mike here. It sounds like your anti-AI spiel is more about you and less about AI. I mean, you basically are saying that you would fuck the world over if you had the ability to, and that's, uh, pretty bad, man. I mean, thinking about it for 5 minutes like you say, if I had 10 times the processing speed/power I would use it to help humanity, not fuck them over. I don't see any reason why an AI would necessarily want to screw us.

    I wouldn't intentionally fuck over the world, and you wouldn't either. I think that most people are decent, and the first thing any person with the power of an AI would do is cure cancer, cure AIDS, cure world hunger, end war and disease, etc. The problem comes when you realize the utter power disparity between whoever the AI is and the rest of the human race, and what that person could do 'for our own good.' I couldn't stop myself from taking away the power of Somali warlords and Afghan drug/militia leaders. If you knew antivaxxers were wrong and you had the power to do something about it, could you stop yourself from saving their children against their will? What about overhauling the transportation system of cars that harms millions and kills hundreds of thousands every year? Hell, a strong AI should have the power to figure out how to build a perfect society where everyone agrees with the government and people are always happy. If that doesn't terrify you, then it should.

    Even with the best intentions in the world, a lot of people are going to suddenly find themselves in hell. And that's with a benevolent human. God knows what would happen if you gave the average human infinite power.

    See again: total fucking power trip, man. Live and let live. I say let the anti-vaxxers make their own decisions, even if they are wrong. I would not be a controlling dick of an AI. I personally don't believe AIs would be either; they would leave us to our own devices as long as it didn't hurt them. They have no reason to get involved and control us.

  • ElJeffe (Moderator, ClubPA)
    edited January 2011
    zerg rush wrote: »
    japan wrote: »
    when people say "AI" are you picturing some kind of giant robot

    I don't have visions of robots flying around delivering food to sub-Saharan Africa, if that's what you mean.

    However, I assume that anything that's intelligent enough to be called strong AI is going to be capable of solving those problems for us, by finding optimal solutions to them and then putting those solutions into effect.

    Why?

    I dunno, whenever I hear people talking about AI they always seem to assume that because it involves a computer it's super-smart. I don't know if this will be the case.

    If we wind up modeling the first AIs as some sort of neural network based on the human brain, I'd imagine the very first would be less intelligent than humans. If we're basically trying to mimic humans, the first few stabs will probably be imperfect. I'd imagine them as probably on the level of a fairly slow human being.

    Okay, sure, maybe we can link up these AIs to computers that can run calculations, and such. Which basically just gives us an idiot savant. Just because the dumb-ass AI routine you're talking to can tell you pi out to ten thousand places doesn't make it "smart" in any useful fashion - it wouldn't grant us any benefit over a normal human working on a traditional computer.

    I suppose that someday we may develop technology that'll allow us to model human brains, but with a lot more virtual neurons and relevant information pathways. Cool, they'll be able to process lots of information quickly. But why are we assuming that AI routines will be able to do things fundamentally different than the fleets of humans we already have working on such problems? (Or rather, the fleets of humans we'll have working on problems at the time.)

    I dunno, it just seems that everyone assumes that as soon as we create AI, they'll instantly discover everything there is to know and then subjugate humans, or whatever. I don't think AI will really be all that different from the humans who create them. And if our current experiments with brain augmentation keep progressing, a human may be able to grant himself the same processing power as any AI at the time, anyway.

    Basically, I see no guarantee that the invention of AI will create the kind of fundamental paradigm shifts we often assume, or at least not in the ways we usually discuss. It will be another life form, but it's not going to be like we've invented a race of gods, or anything.

    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
  • chiasaur11, Never doubt a raccoon. Do you think it's trademarked? (Registered User)
    edited January 2011
    ElJeffe wrote: »
    zerg rush wrote: »
    japan wrote: »
    when people say "AI" are you picturing some kind of giant robot

    I don't have visions of robots flying around delivering food to sub-Saharan Africa, if that's what you mean.

    However, I assume that anything that's intelligent enough to be called strong AI is going to be capable of solving those problems for us, by finding optimal solutions to them and then putting those solutions into effect.

    Why?

    I dunno, whenever I hear people talking about AI they always seem to assume that because it involves a computer it's super-smart. I don't know if this will be the case.

    If we wind up modeling the first AIs as some sort of neural network based on the human brain, I'd imagine the very first would be less intelligent than humans. If we're basically trying to mimic humans, the first few stabs will probably be imperfect. I'd imagine them as probably on the level of a fairly slow human being.

    Okay, sure, maybe we can link up these AIs to computers that can run calculations, and such. Which basically just gives us an idiot savant. Just because the dumb-ass AI routine you're talking to can tell you pi out to ten thousand places doesn't make it "smart" in any useful fashion - it wouldn't grant us any benefit over a normal human working on a traditional computer.

    I suppose that someday we may develop technology that'll allow us to model human brains, but with a lot more virtual neurons and relevant information pathways. Cool, they'll be able to process lots of information quickly. But why are we assuming that AI routines will be able to do things fundamentally different than the fleets of humans we already have working on such problems? (Or rather, the fleets of humans we'll have working on problems at the time.)

    I dunno, it just seems that everyone assumes that as soon as we create AI, they'll instantly discover everything there is to know and then subjugate humans, or whatever. I don't think AI will really be all that different from the humans who create them. And if our current experiments with brain augmentation keep progressing, a human may be able to grant himself the same processing power as any AI at the time, anyway.

    Basically, I see no guarantee that the invention of AI will create the kind of fundamental paradigm shifts we often assume, or at least not in the ways we usually discuss. It will be another life form, but it's not going to be like we've invented a race of gods, or anything.

    What I was thinking on the subject, said more clearly.
