
Roko's Basilisk, or why we're all going to Robot Hell

Posts

  • TychoCelchuuu PIGEON Registered User regular
    Perhaps we're looking at this in two different ways. You seem to be looking at it from the perspective of the person who woke up. I'm looking at it from the perspective of the person who went to sleep (or died). To the deceased, his consciousness either ends, or goes on to some form of afterlife (or reincarnation, or whatever). That's it. Those are the options.
    Note that these two options are the only options available to us every time we go to sleep and then (maybe) wake up. So, you need to explain not just why a clone is or isn't the person who went to sleep, but also why the person who wakes up in the morning is or isn't the person who went to sleep.

    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep? Your answer is "an uninterrupted stream of consciousness," which suggests to me that anyone who is knocked unconscious is killed, but even if you say that it's impossible to be actually unconscious until you're dead, I think it's clear you're still wrong because of the split brain stuff I raised above. An uninterrupted stream of consciousness can be duplicated, and if it can be duplicated, there's no reason to think it can't be copied.
    Copying his consciousness into a clone body (or a computer) has no effect on the person who died, because they're already dead.
    This is just assuming what it is that you're trying to prove, namely, that an uninterrupted stream of consciousness is necessary to maintain personal identity. Since I have given reasons above for thinking this is false, it's not clear to me that you've done anything other than assert the thing that is precisely at issue in the debate.

  • Julius Captain of Serenity on my ship Registered User regular
    poshniallo wrote: »
    Sorry, are we still pretending that mind stops during periods of unconsciousness?

    No I think we're saying that consciousness stops during periods of unconsciousness.

    This is important if we consider experiencing consciousness as the main criterion for identity. Regardless of what is going on in my mind/brain during unconsciousness, I do not experience things during it.

  • DivideByZero Social Justice Blackguard Registered User regular
    I've bolded the important bit and italicized and underlined the super important bit. When you say "the copy isn't you" and "you don't die even if the copy dies" and so on, you're assuming what you need to prove, namely that what makes "you" identical to "you" is some feature other than sharing the same states of mind (the sorts of things that get copied over). In the terms I used earlier, you're assuming that the criterion of personal identity is continuity of consciousness.

    Just assuming this is not okay - you need to explain why you are correct. Certainly you're not correct in terms of criteria of song identity - as @electricityhatesme has pointed out, if something has the same notes, it's the same song. (The example @electricityhatesme gave me was of the song being played in two locations, which is a bad example - two different performances of the song can be different because they are spatiotemporally separated. But a song is not a performance of a song - the performance is of something, and the thing it is a performance of is the same in both cases. We don't think continuity of performance is required for two things to be the same song, though.) So what makes people different from songs, such that someone who has all of your thoughts isn't you? Why does the Star Trek transporter kill everyone who steps into it?

    A song is not a person though. It's information, it's not a self-aware life form. When you delete an mp3, nothing is lost forever. You can go download an identical copy and nothing would have been lost. When a life form dies, even a non-sentient life form, that particular life form is lost forever. With sufficient technology maybe you could simulate it or clone it or whatever. But it would still be a copy, the original would still be dead.

    If you're asking what makes me "me" then I'd say: the self-aware cognitive function that currently exists in my meatsack body. It's a continuous being that has been in operation since shortly before I was born, and aside from periodic rest periods has been conscious and aware of itself since infancy. If it's easier to call it a soul, or a ghost, or a spirit, go right ahead. It's the totality of what makes me myself, and it doesn't exist, nor can it exist, anywhere else. Even if I had an identical twin he wouldn't be me, he would be him and I'd still be me.

    At some point in the future, maybe it will be possible to reduce a human mind and its associated consciousness to pure information. I know Star Trek handwaves this by having teleporters that work on "the quantum level" such that an individual consciousness is actually supposed to be transferred intact. It's their playground so in that context I'm okay with taking them at their word. I know other SF that explicitly states their teleporters destroy matter, kill the traveler, and what comes out the other end is just a copy.
    Perhaps we're looking at this in two different ways. You seem to be looking at it from the perspective of the person who woke up. I'm looking at it from the perspective of the person who went to sleep (or died). To the deceased, his consciousness either ends, or goes on to some form of afterlife (or reincarnation, or whatever). That's it. Those are the options.
    Note that these two options are the only options available to us every time we go to sleep and then (maybe) wake up. So, you need to explain not just why a clone is or isn't the person who went to sleep, but also why the person who wakes up in the morning is or isn't the person who went to sleep.

    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep? Your answer is "an uninterrupted stream of consciousness," which suggests to me that anyone who is knocked unconscious is killed, but even if you say that it's impossible to be actually unconscious until you're dead, I think it's clear you're still wrong because of the split brain stuff I raised above. An uninterrupted stream of consciousness can be duplicated, and if it can be duplicated, there's no reason to think it can't be copied.
    Copying his consciousness into a clone body (or a computer) has no effect on the person who died, because they're already dead.
    This is just assuming what it is that you're trying to prove, namely, that an uninterrupted stream of consciousness is necessary to maintain personal identity. Since I have given reasons above for thinking this is false, it's not clear to me that you've done anything other than assert the thing that is precisely at issue in the debate.

    As above, when I say "consciousness" I'm not talking about "conscious-mind-as-in-awake-and-not-sleeping." I'm using a more general term for "self-aware cognitive functional mind." The essential spark that makes you a distinct being. Were I religious I'd call it a soul.

    To illustrate, imagine you were magically duplicated. There's now a You^2 standing in the same room. If You^1 closes his eyes, but You^2 does not, can You^1 see through his duplicate's eyes? If You^2 bites his tongue, does You^1 feel pain? If You^2 kills himself, does You^1 experience death? No, because though they might share the same experiences *up to the point the duplicate was created* they are still separate and distinct beings from that point onward.

    Now instead of sharing the same room at the same time, imagine You^2 only exists 1,000 years in the future, long after You^1 has died. If You^2 is tortured by a robot god, does You^1 feel it? If not, how is this supposed to influence You^1's actions?

  • Regina Fong Allons-y, Alonso Registered User regular
    Astaereth wrote: »
    Lanz wrote: »
    I'm not really sure how Behaviorist theory fits in with something that's based more on the concept of reconstructing in simulation [or perhaps more to the point in this case emulation] of the brain. I mean, unless the argument is that you can't possibly know everything that happens in a brain mechanically that gives rise to consciousness, which may be the case now, given our current level of knowledge and tools, but it seems a bit early to call it impossible for all time.

    That said I'm... not exactly sure the nematode fantasies thing is a particularly good analogy? Like, I'm not really sure a nematode has the cognitive capacity to fantasize. And I'm very very wary of making the argument that an uploaded mind is little more than a behaviorist robot, because that seems like it could lead really easily towards denying any rights that such a thing would hold.

    EDIT: Like, how does an uploaded mind prove its humanity? It seems a rather dangerous point of view to define human consciousness as this kind of nebulous, unquantifiable thing. Going back to the nematode for instance: the nematode bot wriggles around like a nematode [or at the very least like a nematode when put into a rather alien body compared to its natural nematode kin.] But like... just how do you prove it is a nematode beyond it doing nematode-y things?

    EDIT: Thank you, Geth

    A nematode almost certainly doesn't have fantasies, or really anything in the way of higher thought at all. Which is why the nematode thing doesn't persuade me that an uploaded human mind could have consciousness.

    Also:
    Given enough resources, that decision-making process can be simulated, and to an outside observer would seem both self-aware and unpredictable in the way that human consciousness is. At that point you could either assume that simulated minds/AIs are conscious organisms or that human beings are moist robots--either way, there's no appreciable difference between a person and a simulated person. (Something something Turing test.)

    This is a behaviorist argument, which is why behaviorist theory is relevant. Remember, the behaviorists believed that the only functions of the mind that were relevant were those that were outwardly observable. Most people would not agree with that.

    Imagine you have a pair of identical twins, Cam and Bob. Both twins appear identical (same clothing, same hair) and both twins insist they are Cam. Both twins act like Cam and talk like Cam, because one of them is Cam and the other is doing a really spectacular Cam impersonation. How could you determine just by speaking to them which one was the real Cam and which was Bob?

    I think you would take each aside and interview them in the hopes that either Bob will do something un-Cam-like (something Bob-like), or that Cam will do something very Cam-like yet surprising--some new external behavior based on parts of Cam that Bob does not know about. Especially if the surprising Cam behavior is verifiable--Cam points us at a document Bob never knew about explaining his secret crush on Shia LaBeouf. Only then could the false Cam be revealed as Bob (when Bob denies liking Shia).

    So the behaviorists are wrong in that people have mental behavior that is not observable at any given point in time, but they're right in that we only know that others have this capacity because it eventually emerges in their behavior--in their demonstration of existing "hidden" thoughts or in their ability to convincingly generate new behavior based on new experiences.

    Outside of telepathy there's no way besides observation to test the Cams for internal consciousness, let alone an electronic simulation of Cam.

    And I'm not discounting the possibility that a really, really good copy might fool me into believing it has consciousness, mentation, an "internal life" so to speak.

    But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.

    The basilisk wants me to not just accept all this as totes inevitable; it wants me to essentially worship it like a deity (serve me or I will punish you).

    Is it really so shocking that I would deride it?

  • PLA The process. Registered User regular
    edited February 2015
    Perhaps we're looking at this in two different ways. You seem to be looking at it from the perspective of the person who woke up. I'm looking at it from the perspective of the person who went to sleep (or died). To the deceased, his consciousness either ends, or goes on to some form of afterlife (or reincarnation, or whatever). That's it. Those are the options.
    Note that these two options are the only options available to us every time we go to sleep and then (maybe) wake up. So, you need to explain not just why a clone is or isn't the person who went to sleep, but also why the person who wakes up in the morning is or isn't the person who went to sleep.

    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep? Your answer is "an uninterrupted stream of consciousness," which suggests to me that anyone who is knocked unconscious is killed, but even if you say that it's impossible to be actually unconscious until you're dead, I think it's clear you're still wrong because of the split brain stuff I raised above. An uninterrupted stream of consciousness can be duplicated, and if it can be duplicated, there's no reason to think it can't be copied.
    Copying his consciousness into a clone body (or a computer) has no effect on the person who died, because they're already dead.
    This is just assuming what it is that you're trying to prove, namely, that an uninterrupted stream of consciousness is necessary to maintain personal identity. Since I have given reasons above for thinking this is false, it's not clear to me that you've done anything other than assert the thing that is precisely at issue in the debate.

    There are other things we can copy. Inanimate things, and living but dumb cells. In most of those cases, one copy can be destroyed without destroying another. Maybe streams of consciousness are an exception, but there's stuff to base assumptions on.

    Let's have the year 2000 as the point a person is copied from. That person (Instance 1) goes about in a normal fashion until it dies in the year 2030.
    The copy (Instance 2) that was produced is instead hidden away somewhere and is still alive in 2050.
    What later happens to Instance 2 doesn't affect Instance 1, I assume.

    The delays can be shortened. The point of copying I mentioned at first can be moved forward to a point earlier in 2030 - even to the same day that Instance 1 dies - with the order of events remaining the same. But the order of events can be shifted as well: maybe we place the point of copying later in the day than the time Instance 1 dies, or the next day. Does that order change anything about the assumption that what subsequently happens to Instance 2 doesn't affect Instance 1?

    Actually, this also requires a way to revive Instance 2, because it would be a copy of a dead Instance 1. I got carried away. But let's say that there is such a way, and it just isn't used to revive Instance 1 because some bastard thinks a thought experiment is more important than Instance 1.

    The point of copying could also be moved to the exact time of Instance 1's death, followed by merely prolonging Instance 2's life to avoid the interference of death.

  • DivideByZero Social Justice Blackguard Registered User regular
    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep?

    Forgot to answer this directly.

    The me who wakes up in the morning is the same as the me who went to sleep, because my bodily function remained intact during the night. My mind switched off for a few hours, but my heart kept beating, etc. If that were not the case, then I would have died during the night and whatever woke up in the morning might *think* it was me, but objectively speaking it is a copy, and the original is dead. And since the original no longer exists, it can't be swayed by threats made against the copy.

  • Astaereth In the belly of the beast Registered User regular
    Astaereth wrote: »
    Lanz wrote: »
    I'm not really sure how Behaviorist theory fits in with something that's based more on the concept of reconstructing in simulation [or perhaps more to the point in this case emulation] of the brain. I mean, unless the argument is that you can't possibly know everything that happens in a brain mechanically that gives rise to consciousness, which may be the case now, given our current level of knowledge and tools, but it seems a bit early to call it impossible for all time.

    That said I'm... not exactly sure the nematode fantasies thing is a particularly good analogy? Like, I'm not really sure a nematode has the cognitive capacity to fantasize. And I'm very very wary of making the argument that an uploaded mind is little more than a behaviorist robot, because that seems like it could lead really easily towards denying any rights that such a thing would hold.

    EDIT: Like, how does an uploaded mind prove its humanity? It seems a rather dangerous point of view to define human consciousness as this kind of nebulous, unquantifiable thing. Going back to the nematode for instance: the nematode bot wriggles around like a nematode [or at the very least like a nematode when put into a rather alien body compared to its natural nematode kin.] But like... just how do you prove it is a nematode beyond it doing nematode-y things?

    EDIT: Thank you, Geth

    A nematode almost certainly doesn't have fantasies, or really anything in the way of higher thought at all. Which is why the nematode thing doesn't persuade me that an uploaded human mind could have consciousness.

    Also:
    Given enough resources, that decision-making process can be simulated, and to an outside observer would seem both self-aware and unpredictable in the way that human consciousness is. At that point you could either assume that simulated minds/AIs are conscious organisms or that human beings are moist robots--either way, there's no appreciable difference between a person and a simulated person. (Something something Turing test.)

    This is a behaviorist argument, which is why behaviorist theory is relevant. Remember, the behaviorists believed that the only functions of the mind that were relevant were those that were outwardly observable. Most people would not agree with that.

    Imagine you have a pair of identical twins, Cam and Bob. Both twins appear identical (same clothing, same hair) and both twins insist they are Cam. Both twins act like Cam and talk like Cam, because one of them is Cam and the other is doing a really spectacular Cam impersonation. How could you determine just by speaking to them which one was the real Cam and which was Bob?

    I think you would take each aside and interview them in the hopes that either Bob will do something un-Cam-like (something Bob-like), or that Cam will do something very Cam-like yet surprising--some new external behavior based on parts of Cam that Bob does not know about. Especially if the surprising Cam behavior is verifiable--Cam points us at a document Bob never knew about explaining his secret crush on Shia LaBeouf. Only then could the false Cam be revealed as Bob (when Bob denies liking Shia).

    So the behaviorists are wrong in that people have mental behavior that is not observable at any given point in time, but they're right in that we only know that others have this capacity because it eventually emerges in their behavior--in their demonstration of existing "hidden" thoughts or in their ability to convincingly generate new behavior based on new experiences.

    Outside of telepathy there's no way besides observation to test the Cams for internal consciousness, let alone an electronic simulation of Cam.

    And I'm not discounting the possibility that a really, really good copy might fool me into believing it has consciousness, mentation, an "internal life" so to speak.

    But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.

    Sure. But one of the things I'm trying to get at is, what is the difference between something "fooling" you into thinking it has consciousness and something having consciousness? All you have to go on in either case are your observations--and if the appearance of consciousness never breaks down, isn't the distinction you're making rather arbitrary?

  • ElJeffe Registered User, ClubPA regular
    Julius wrote: »
    poshniallo wrote: »
    Sorry, are we still pretending that mind stops during periods of unconsciousness?

    No I think we're saying that consciousness stops during periods of unconsciousness.

    This is important if we consider experiencing consciousness as the main criterion for identity. Regardless of what is going on in my mind/brain during unconsciousness, I do not experience things during it.

    The terms "consciousness", "unconsciousness", and "experience" have been so nebulously defined here as to make this claim meaningless.

  • Finrod Felagund Registered User regular
    I kind of wonder if it really makes a difference if the entity being tortured is "you" or not, on two grounds. First, I feel like regardless of my feelings about continuity of consciousness or all of that, if I know* that an entity that's so similar to me that it believes it is me will be tortured, I'm going to try harder to avoid that thing being tortured than if it were something different. Something about it being an exact duplicate of me seems to make it more disquieting that it's writhing in agony.

    Second, even if the basilisk was going to torture a random entity, or create a unique entity specifically to be tormented for my crime, I don't know that it changes how I need to act. Depending on your moral system, you may be just as obligated to prevent the suffering of another being as your own.

    *for meanings of "know" that work within the Roko's Basilisk scenario.

  • ElJeffe Registered User, ClubPA regular
    Astaereth wrote: »
    Lanz wrote: »
    I'm not really sure how Behaviorist theory fits in with something that's based more on the concept of reconstructing in simulation [or perhaps more to the point in this case emulation] of the brain. I mean, unless the argument is that you can't possibly know everything that happens in a brain mechanically that gives rise to consciousness, which may be the case now, given our current level of knowledge and tools, but it seems a bit early to call it impossible for all time.

    That said I'm... not exactly sure the nematode fantasies thing is a particularly good analogy? Like, I'm not really sure a nematode has the cognitive capacity to fantasize. And I'm very very wary of making the argument that an uploaded mind is little more than a behaviorist robot, because that seems like it could lead really easily towards denying any rights that such a thing would hold.

    EDIT: Like, how does an uploaded mind prove its humanity? It seems a rather dangerous point of view to define human consciousness as this kind of nebulous, unquantifiable thing. Going back to the nematode for instance: the nematode bot wriggles around like a nematode [or at the very least like a nematode when put into a rather alien body compared to its natural nematode kin.] But like... just how do you prove it is a nematode beyond it doing nematode-y things?

    EDIT: Thank you, Geth

    A nematode almost certainly doesn't have fantasies, or really anything in the way of higher thought at all. Which is why the nematode thing doesn't persuade me that an uploaded human mind could have consciousness.

    Also:
    Given enough resources, that decision-making process can be simulated, and to an outside observer would seem both self-aware and unpredictable in the way that human consciousness is. At that point you could either assume that simulated minds/AIs are conscious organisms or that human beings are moist robots--either way, there's no appreciable difference between a person and a simulated person. (Something something Turing test.)

    This is a behaviorist argument, which is why behaviorist theory is relevant. Remember, the behaviorists believed that the only functions of the mind that were relevant were those that were outwardly observable. Most people would not agree with that.

    Imagine you have a pair of identical twins, Cam and Bob. Both twins appear identical (same clothing, same hair) and both twins insist they are Cam. Both twins act like Cam and talk like Cam, because one of them is Cam and the other is doing a really spectacular Cam impersonation. How could you determine just by speaking to them which one was the real Cam and which was Bob?

    I think you would take each aside and interview them in the hopes that either Bob will do something un-Cam-like (something Bob-like), or that Cam will do something very Cam-like yet surprising--some new external behavior based on parts of Cam that Bob does not know about. Especially if the surprising Cam behavior is verifiable--Cam points us at a document Bob never knew about explaining his secret crush on Shia LaBeouf. Only then could the false Cam be revealed as Bob (when Bob denies liking Shia).

    So the behaviorists are wrong in that people have mental behavior that is not observable at any given point in time, but they're right in that we only know that others have this capacity because it eventually emerges in their behavior--in their demonstration of existing "hidden" thoughts or in their ability to convincingly generate new behavior based on new experiences.

    Outside of telepathy there's no way besides observation to test the Cams for internal consciousness, let alone an electronic simulation of Cam.

    But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.

    Can you clarify your objection to the worm being used as evidence here?

    Is it: "This simulation of a worm is not sufficiently like the actual worm to be considered a digital copy in any meaningful sense"? (To which is ask if the complete cellular model of it being worked on would count.)

    Or is it: "Okay, we've made a digital nematode. Nice, but creating something substantially more complicated is still probably impossible. There exists some threshold of complexity beyond which it is probably impossible to truly create a real AI simulation." (To which I ask where, approximately, you would draw the line.)

    Or is it: "Okay, it is possible to create AI simulations of animals. It is still probably impossible to create a true human AI, because humans have a special quality that cannot be created digitally." (To which I ask if you feel this is a theoretical impossibility or just a practical one. If one were to create a duplicate of an actual human, atom by atom, would it still not be a human being? Or do you just feel this will never be possible at any point anywhere from now until the heat death of the universe? )

  • Regina Fong Allons-y, Alonso Registered User regular
    Astaereth wrote: »
    Astaereth wrote: »
    Lanz wrote: »
    I'm not really sure how Behaviorist theory fits in with something that's based more on the concept of reconstructing in simulation [or perhaps more to the point in this case emulation] of the brain. I mean, unless the argument is that you can't possibly know everything that happens in a brain mechanically that gives rise to consciousness, which may be the case now, given our current level of knowledge and tools, but it seems a bit early to call it impossible for all time.

    That said I'm... not exactly sure the nematode fantasies thing is a particularly good analogy? Like, I'm not really sure a nematode has the cognitive capacity to fantasize. And I'm very very wary of making the argument that an uploaded mind is little more than a behaviorist robot, because that seems like it could lead really easily towards denying any rights that such a thing would hold.

    EDIT: Like, how does an uploaded mind prove its humanity? It seems a rather dangerous point of view to define human consciousness as this kind of nebulous, unquantifiable thing. Going back to the nematode for instance: the nematode bot wriggles around like a nematode [or at the very least like a nematode when put into a rather alien body compared to its natural nematode kin.] But like... just how do you prove it is a nematode beyond it doing nematode-y things?

    EDIT: Thank you, Geth

    A nematode almost certainly doesn't have fantasies, or really anything in the way of higher thought at all. Which is why the nematode thing doesn't persuade me that an uploaded human mind could have consciousness.

    Also:
    Given enough resources, that decision-making process can be simulated, and to an outside observer would seem both self-aware and unpredictable in the way that human consciousness is. At that point you could either assume that simulated minds/AIs are conscious organisms or that human beings are moist robots--either way, there's no appreciable difference between a person and a simulated person. (Something something Turing test.)

    This is a behaviorist argument, which is why behaviorist theory is relevant. Remember, the behaviorists believed that the only functions of the mind that were relevant were those that were outwardly observable. Most people would not agree with that.

    Imagine you have a pair of identical twins, Cam and Bob. Both twins appear identical (same clothing, same hair) and both twins insist they are Cam. Both twins act like Cam and talk like Cam, because one of them is Cam and the other is doing a really spectacular Cam impersonation. How could you determine just by speaking to them which one was the real Cam and which was Bob?

    I think you would take each aside and interview them in the hopes that either Bob will do something un-Cam-like (something Bob-like), or that Cam will do something very Cam-like yet surprising--some new external behavior based on parts of Cam that Bob does not know about. Especially if the surprising Cam behavior is verifiable--Cam points us at a document Bob never knew about explaining his secret crush on Shia LaBeouf. Only then could the false Cam be revealed as Bob (when Bob denies liking Shia).

    So the behaviorists are wrong in that people have mental behavior that is not observable at any given point in time, but they're right in that we only know that others have this capacity because it eventually emerges in their behavior--in their demonstration of existing "hidden" thoughts or in their ability to convincingly generate new behavior based on new experiences.

    Outside of telepathy there's no way besides observation to test the Cams for internal consciousness, let alone an electronic simulation of Cam.

    And I'm not discounting the possibility that a really, really good copy might fool me into believing it has consciousness, mentation, an "internal life" so to speak.

    But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.

    Sure. But one of the things I'm trying to get at is, what is the difference between something "fooling" you into thinking it has consciousness and something having consciousness? All you have to go on in either case are your observations--and if the appearance of consciousness never breaks down, isn't the distinction you're making rather arbitrary?

    Since my experience with consciousness consists of a sample size of one (my own), it's pretty arbitrary. But the "never breaks down under observation" part is where I'll have to see it before I believe it.

    If you're asking me whether, if something is essentially a person, we should regard it as a person, then I would say "yes". But I don't grant scientific inevitability to the creation of AI copies of people that are essentially human based on what has been done so far. There are a lot of pitfalls in science. We were supposed to have flying cars and a cure for cancer by now. Brain mapping and the Lego worm are interesting, but AI copies of human minds that seem essentially human can't be taken for granted from that.

    If that makes me a science grinch so be it.

  • Julius Captain of Serenity on my ship Registered User regular
    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep?

    Forgot to answer this directly.

    The me who wakes up in the morning is the same as the me who went to sleep, because my bodily function remained intact during the night. My mind switched off for a few hours, but my heart kept beating, etc. If that were not the case, then I would have died during the night and whatever woke up in the morning might *think* it was me, but objectively speaking it is a copy, and the original is dead. And since the original no longer exists, it can't be swayed by threats made against the copy.

    So people who experience cardiac arrest are copies of the originals?

  • Regina Fong Allons-y, Alonso Registered User regular
    And no, ElJeffe, I'm not arguing that human minds have an ineffable quality that will evade any attempt to copy them. I suspect that modeling a human brain will not get you a copy of that person's intelligence, personality, knowledge, etc. It would probably have the correct reflex responses when you poked it, which is pretty much what the worm is giving you, which is why I don't think it's evidence of our ability to ever copy a particular sentience. If worms were generally known for writing sonnets, and the computer worm could write sonnets, I would find that much more convincing than the fact that it moves in a worm-like way.

    Note I say a particular sentience. I think creating a digital sentience is probably within the realm of possibility. Copying an organic one into a computer and having it be the same is, intellectually, a bridge too far for me.

  • Julius Captain of Serenity on my ship Registered User regular
    ElJeffe wrote: »
    Julius wrote: »
    poshniallo wrote: »
    Sorry, are we still pretending that mind stops during periods of unconsciousness?

    No I think we're saying that consciousness stops during periods of unconsciousness.

    This is important if we consider experiencing consciousness as the main criterion for identity. Regardless of what is going on in my mind/brain during unconsciousness, I do not experience things during it.

    The terms "consciousness", "unconsciousness", and "experience" have been so nebulously defined here as to make this claim meaningless.

    Eh, consciousness is just whatever I'm experiencing now. The definition isn't unclear; our understanding of it is just not very good.

  • Astaereth In the belly of the beast Registered User regular
    Astaereth wrote: »
    Astaereth wrote: »
    Lanz wrote: »
    I'm not really sure how Behaviorist theory fits in with something that's based more on the concept of reconstructing in simulation [or perhaps more to the point in this case emulation] of the brain. I mean, unless the argument is that you can't possibly know everything that happens in a brain mechanically that gives rise to consciousness, which may be the case now, given our current level of knowledge and tools, but it seems a bit early to call it impossible for all time.

    That said I'm... not exactly sure the nematode fantasies thing is a particularly good analogy? Like, I'm not really sure a nematode has the cognitive capacity to fantasize. And I'm very very wary of making the argument that an uploaded mind is little more than a behaviorist robot, because that seems like it could lead really easily towards denying any rights that such a thing would hold.

    EDIT: Like, how does an uploaded mind prove its humanity? It seems a rather dangerous point of view to define human consciousness as this kind of nebulous, unquantifiable thing. Going back to the nematode for instance: the nematode bot wriggles around like a nematode [or at the very least like a nematode when put into a rather alien body compared to its natural nematode kin.] But like... just how do you prove it is a nematode beyond it doing nematode-y things?

    EDIT: Thank you, Geth

    A nematode almost certainly doesn't have fantasies, or really anything in the way of higher thought at all. Which is why the nematode thing doesn't persuade me that an uploaded human mind could have consciousness.

    Also:
    Given enough resources, that decision-making process can be simulated, and to an outside observer would seem both self-aware and unpredictable in the way that human consciousness is. At that point you could either assume that simulated minds/AIs are conscious organisms or that human beings are moist robots--either way, there's no appreciable difference between a person and a simulated person. (Something something Turing test.)

    This is a behaviorist argument, which is why behaviorist theory is relevant. Remember, the behaviorists believed that the only functions of the mind that were relevant were those that were outwardly observable. Most people would not agree with that.

    Imagine you have a pair of identical twins, Cam and Bob. Both twins appear identical (same clothing, same hair) and both twins insist they are Cam. Both twins act like Cam and talk like Cam, because one of them is Cam and the other is doing a really spectacular Cam impersonation. How could you determine just by speaking to them which one was the real Cam and which was Bob?

    I think you would take each aside and interview them in the hopes that either Bob will do something un-Cam-like (something Bob-like), or that Cam will do something very Cam-like yet surprising--some new external behavior based on parts of Cam that Bob does not know about. Especially if the surprising Cam behavior is verifiable--Cam points us at a document Bob never knew about explaining his secret crush on Shia LaBeouf. Only then could the false Cam be revealed as Bob (when Bob denies liking Shia).

    So the behaviorists are wrong in that people have mental behavior that is not observable at any given point in time, but they're right in that we only know that others have this capacity because it eventually emerges in their behavior--in their demonstration of existing "hidden" thoughts or in their ability to convincingly generate new behavior based on new experiences.

    Outside of telepathy there's no way besides observation to test the Cams for internal consciousness, let alone an electronic simulation of Cam.

    And I'm not discounting the possibility that a really, really good copy might fool me into believing it has consciousness, mentation, an "internal life" so to speak.

    But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.

    Sure. But one of the things I'm trying to get at is, what is the difference between something "fooling" you into thinking it has consciousness and something having consciousness? All you have to go on in either case are your observations--and if the appearance of consciousness never breaks down, isn't the distinction you're making rather arbitrary?

    Since my experience with consciousness consists of a sample size of one (my own), it's pretty arbitrary. But the "never breaks down under observation" part is where I'll have to see it before I believe it.

    If you're asking me whether, if something is essentially a person, we should regard it as a person, then I would say "yes". But I don't grant scientific inevitability to the creation of AI copies of people that are essentially human based on what has been done so far. There are a lot of pitfalls in science. We were supposed to have flying cars and a cure for cancer by now. Brain mapping and the Lego worm are interesting, but AI copies of human minds that seem essentially human can't be taken for granted from that.

    If that makes me a science grinch so be it.

    We do have flying cars (just not most of us--but that's a limitation of the market, not science). And cancer isn't the death sentence it once was.

    Also, didn't an AI pass the Turing Test for the first time ever recently? I don't think it's outlandish to assume that we'll eventually have AI that seem to possess consciousness. And from there it's not outlandish to think we'll get uploaded or simulated people.

    I guess what I'm saying is, the point of the story was that the grinch was wrong.

  • Regina Fong Allons-y, Alonso Registered User regular
    I kind of wonder if it really makes a difference if the entity being tortured is "you" or not, on two grounds. First, I feel like regardless of my feelings about continuity of consciousness or all of that, if I know* that an entity that's so similar to me that it believes it is me will be tortured, I'm going to try harder to avoid that thing being tortured than if it were something different. Something about it being an exact duplicate of me seems to make it more disquieting that it's writhing in agony.

    Second, even if the basilisk was going to torture a random entity, or create a unique entity specifically to be tormented for my crime, I don't know that it changes how I need to act. Depending on your moral system, you may be just as obligated to prevent the suffering of another being as your own.

    *for meanings of "know" that work within the Roko's Basilisk scenario.

    I don't see a scenario where the moral action would be to submit to the blackmail and speed the construction of this clearly evil entity vice opposing its completion in every way possible.

  • Kamar Registered User regular
    Julius wrote: »
    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep?

    Forgot to answer this directly.

    The me who wakes up in the morning is the same as the me who went to sleep, because my bodily function remained intact during the night. My mind switched off for a few hours, but my heart kept beating, etc. If that were not the case, then I would have died during the night and whatever woke up in the morning might *think* it was me, but objectively speaking it is a copy, and the original is dead. And since the original no longer exists, it can't be swayed by threats made against the copy.

    So people who experience cardiac arrest are copies of the originals?

    Or, to get a bit closer, people who spend an hour or so clinically dead and hypothermic but get resuscitated. Their consciousness ends, their body stops, there's a gap of time and presumably space before their consciousness begins again.

  • DivideByZero Social Justice Blackguard Registered User regular
    Julius wrote: »
    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep?

    Forgot to answer this directly.

    The me who wakes up in the morning is the same as the me who went to sleep, because my bodily function remained intact during the night. My mind switched off for a few hours, but my heart kept beating, etc. If that were not the case, then I would have died during the night and whatever woke up in the morning might *think* it was me, but objectively speaking it is a copy, and the original is dead. And since the original no longer exists, it can't be swayed by threats made against the copy.

    So people who experience cardiac arrest are copies of the originals?

    Oh come on.

  • Regina Fong Allons-y, Alonso Registered User regular
    Astaereth wrote: »
    Astaereth wrote: »
    Astaereth wrote: »
    Lanz wrote: »
    I'm not really sure how Behaviorist theory fits in with something that's based more on the concept of reconstructing in simulation [or perhaps more to the point in this case emulation] of the brain. I mean, unless the argument is that you can't possibly know everything that happens in a brain mechanically that gives rise to consciousness, which may be the case now, given our current level of knowledge and tools, but it seems a bit early to call it impossible for all time.

    That said I'm... not exactly sure the nematode fantasies thing is a particularly good analogy? Like, I'm not really sure a nematode has the cognitive capacity to fantasize. And I'm very very wary of making the argument that an uploaded mind is little more than a behaviorist robot, because that seems like it could lead really easily towards denying any rights that such a thing would hold.

    EDIT: Like, how does an uploaded mind prove its humanity? It seems a rather dangerous point of view to define human consciousness as this kind of nebulous, unquantifiable thing. Going back to the nematode for instance: the nematode bot wriggles around like a nematode [or at the very least like a nematode when put into a rather alien body compared to its natural nematode kin.] But like... just how do you prove it is a nematode beyond it doing nematode-y things?

    EDIT: Thank you, Geth

    A nematode almost certainly doesn't have fantasies, or really anything in the way of higher thought at all. Which is why the nematode thing doesn't persuade me that an uploaded human mind could have consciousness.

    Also:
    Given enough resources, that decision-making process can be simulated, and to an outside observer would seem both self-aware and unpredictable in the way that human consciousness is. At that point you could either assume that simulated minds/AIs are conscious organisms or that human beings are moist robots--either way, there's no appreciable difference between a person and a simulated person. (Something something Turing test.)

    This is a behaviorist argument, which is why behaviorist theory is relevant. Remember, the behaviorists believed that the only functions of the mind that were relevant were those that were outwardly observable. Most people would not agree with that.

    Imagine you have a pair of identical twins, Cam and Bob. Both twins appear identical (same clothing, same hair) and both twins insist they are Cam. Both twins act like Cam and talk like Cam, because one of them is Cam and the other is doing a really spectacular Cam impersonation. How could you determine just by speaking to them which one was the real Cam and which was Bob?

    I think you would take each aside and interview them in the hopes that either Bob will do something un-Cam-like (something Bob-like), or that Cam will do something very Cam-like yet surprising--some new external behavior based on parts of Cam that Bob does not know about. Especially if the surprising Cam behavior is verifiable--Cam points us at a document Bob never knew about explaining his secret crush on Shia LaBeouf. Only then could the false Cam be revealed as Bob (when Bob denies liking Shia).

    So the behaviorists are wrong in that people have mental behavior that is not observable at any given point in time, but they're right in that we only know that others have this capacity because it eventually emerges in their behavior--in their demonstration of existing "hidden" thoughts or in their ability to convincingly generate new behavior based on new experiences.

    Outside of telepathy there's no way besides observation to test the Cams for internal consciousness, let alone an electronic simulation of Cam.

    And I'm not discounting the possibility that a really, really good copy might fool me into believing it has consciousness, mentation, an "internal life" so to speak.

    But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.

    Sure. But one of the things I'm trying to get at is, what is the difference between something "fooling" you into thinking it has consciousness and something having consciousness? All you have to go on in either case are your observations--and if the appearance of consciousness never breaks down, isn't the distinction you're making rather arbitrary?

    Since my experience with consciousness consists of a sample size of one (my own), it's pretty arbitrary. But the "never breaks down under observation" part is where I'll have to see it before I believe it.

    If you're asking me whether, if something is essentially a person, we should regard it as a person, then I would say "yes". But I don't grant scientific inevitability to the creation of AI copies of people that are essentially human based on what has been done so far. There are a lot of pitfalls in science. We were supposed to have flying cars and a cure for cancer by now. Brain mapping and the Lego worm are interesting, but AI copies of human minds that seem essentially human can't be taken for granted from that.

    If that makes me a science grinch so be it.

    We do have flying cars (just not most of us--but that's a limitation of the market, not science). And cancer isn't the death sentence it once was.

    Also, didn't an AI pass the Turing Test for the first time ever recently? I don't think it's outlandish to assume that we'll eventually have AI that seem to possess consciousness. And from there it's not outlandish to think we'll get uploaded or simulated people.

    I guess what I'm saying is, the point of the story was that the grinch was wrong.

    By all means, prove me wrong.

    But asking for my faith? Nope.

  • ElJeffe Registered User, ClubPA regular
    edited February 2015
    I kind of wonder if it really makes a difference if the entity being tortured is "you" or not, on two grounds. First, I feel like regardless of my feelings about continuity of consciousness or all of that, if I know* that an entity that's so similar to me that it believes it is me will be tortured, I'm going to try harder to avoid that thing being tortured than if it were something different. Something about it being an exact duplicate of me seems to make it more disquieting that it's writhing in agony.

    Second, even if the basilisk was going to torture a random entity, or create a unique entity specifically to be tormented for my crime, I don't know that it changes how I need to act. Depending on your moral system, you may be just as obligated to prevent the suffering of another being as your own.

    *for meanings of "know" that work within the Roko's Basilisk scenario.

    I don't see a scenario where the moral action would be to submit to the blackmail and speed the construction of this clearly evil entity vice opposing its completion in every way possible.

    I think that it's a reasonable moral response given acceptance of a few claims.

    If you accept that creating Roko's Basilisk is physically possible, AND you accept that the evil torturebot is an inevitable or at least likely result given the development of the first legitimate AI, opposing its creation is pretty pointless. To actually prevent its creation, you would have to ensure that nobody ever, in at least the next trillion years or so, creates an AI that could plausibly go that route. Which... good luck with that.

    So once you accept that Roko's Basilisk is possible*, you basically have to accept it as inevitable.

    There isn't really an option for "we'll just make sure we never make a torturebot".

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    (*I don't think Roko's Basilisk is remotely plausible.)

  • Finrod Felagund Registered User regular
    I kind of wonder if it really makes a difference if the entity being tortured is "you" or not, on two grounds. First, I feel like regardless of my feelings about continuity of consciousness or all of that, if I know* that an entity that's so similar to me that it believes it is me will be tortured, I'm going to try harder to avoid that thing being tortured than if it were something different. Something about it being an exact duplicate of me seems to make it more disquieting that it's writhing in agony.

    Second, even if the basilisk was going to torture a random entity, or create a unique entity specifically to be tormented for my crime, I don't know that it changes how I need to act. Depending on your moral system, you may be just as obligated to prevent the suffering of another being as your own.

    *for meanings of "know" that work within the Roko's Basilisk scenario.

    I don't see a scenario where the moral action would be to submit to the blackmail and speed the construction of this clearly evil entity vice opposing its completion in every way possible.
    Right. I don't see that it makes a difference to that moral reasoning if it's blackmailing you with "I will return you to (simulated) life and torture you!" or if it goes with "I will torture some other unfortunate simulation for your crimes!" Torture you, torture something just like you, torture something else entirely; I don't think that particular piece of the puzzle changes whether you give in to the blackmail or not. It's the torture itself, not who is being tortured.

  • Regina Fong Allons-y, Alonso Registered User regular
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

  • Regina Fong Allons-y, Alonso Registered User regular
    Also I'm lazy and super uninterested in AI research and couldn't even imagine a scenario where I would wake up in the morning and want to work on AI programming or netcode or whatever. I mean, if you put a gun right up against my head I suppose I might pretend to care to try to avoid being shot but I'd be really bad at it and you'd probably wind up shooting me eventually anyway and if you didn't I'd eventually start working on how to take your gun away and shoot you with it.

  • ElJeffe Registered User, ClubPA regular
    I think it's actually telling that the sort of people who formulated this thing only give a shit if it's actually them being tortured for all eternity.

    The argument becomes just as compelling and one tick less pants-on-head if you just suppose that Robot Satan creates a thousand digital nuns to torture for each person who doesn't help him out. Or hell, murders one actual, non-simulated person.

  • Regina FongRegina Fong Allons-y, Alonso Registered User regular
    ElJeffe wrote: »
    I think it's actually telling that the sort of people who formulated this thing only give a shit if it's actually them being tortured for all eternity.

The argument becomes just as compelling and one tick less pants-on-head if you just suppose that Robot Satan creates a thousand digital nuns to torture for each person who doesn't help him out. Or hell, murders one actual, non-simulated person.

    I'd be just as opposed in such a scenario. I really don't see how any of this is a compelling reason to support AI research (or AI God research). Frankly it's making me want to vote Republican.

  • BSoBBSoB Registered User regular
    The whole thought experiment is needlessly complicated.

IF you truly think that the AI singularity is inevitable,

and

you think this AI will end or bring about a supreme reduction in human suffering,

then you are morally impelled to support AI research.

The AI doesn't need to torture a future clone of you in robot 'hell'; you already exist, right now, in a world run by fallible human beings.

You, as a believer in the singularity, should be aware every time you see or experience suffering that this suffering is a direct result of not yet having reached the singularity.

Not only that, but the amount of motivation you feel will be proportional to the amount of suffering the singularity could prevent.

All of this just falls out of the belief in the singularity.

  • DivideByZeroDivideByZero Social Justice Blackguard Registered User regular
    Kamar wrote: »
    Julius wrote: »
    So, why is it that the person who wakes up in the morning is the same as the person who went to sleep?

    Forgot to answer this directly.

    The me who wakes up in the morning is the same as the me who went to sleep, because my bodily function remained intact during the night. My mind switched off for a few hours, but my heart kept beating, etc. If that were not the case, then I would have died during the night and whatever woke up in the morning might *think* it was me, but objectively speaking it is a copy, and the original is dead. And since the original no longer exists, it can't be swayed by threats made against the copy.

    So people who experience cardiac arrest are copies of the originals?

    Or, to get a bit closer, people who spend an hour or so clinically dead and hypothermic but get resuscitated. Their consciousness ends, their body stops, there's a gap of time and presumably space before their consciousness begins again.

    Then I'd argue that such a person, for the purpose of this conversation, was not actually "dead" since death is basically "the state of no longer existing as a living organism."

Your heart stops for a few minutes in an ambulance but you get resuscitated? You're still you and your identity is secure; you may have been "dead" to the world for a short time, but clearly your body and brain functions are (more or less) intact or you would not be alive afterwards.

Go into the water in the Arctic and get revived later? Same deal: you're still you, identity secure.

Die for-realsies, get cremated or buried, and a computer spits out a copy 1000 years later? No, "you" died 1000 years ago; this thing might have your memories and even *think* it's you, but it isn't. Not the original. You died.

  • AstaerethAstaereth In the belly of the beastRegistered User regular
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

Is torture never ever ever justified? The other assumption inherent to the Basilisk is that, however repellent his methods, TortureBot gets results--his existence is an extreme net benefit to mankind, to the point where millions or even billions of people are saved who would otherwise have died. I can't call something like that pure evil. (But then, I tend towards utilitarianism over deontology.)

  • Regina FongRegina Fong Allons-y, Alonso Registered User regular
So far this singularity doesn't seem to be very interested in alleviating suffering at all. Which leads me back to my earlier point that this thought experiment seems more like a jab at Abrahamic religion than an argument for AI research.

  • Regina FongRegina Fong Allons-y, Alonso Registered User regular
    Astaereth wrote: »
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

    Is torture never ever ever justified?

    Nope!

    (except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")

  • Finrod FelagundFinrod Felagund Registered User regular
    So far this singularity doesn't seem to be very interested in alleviating suffering at all.
This bit hinges on a very, very mathematical idea of morality. I was reading up on LessWrong, and apparently their notion of utilitarianism arrives at a point where one person being tortured for 50 years can be evened out by a sufficient number of people avoiding motes of dust in their eyes. I may have misunderstood it, or been reading an unfair critique, but whatever.

    So the basilisk, by this reasoning, reckons your infinity of torture against all of the lives you could have saved by contributing to building it earlier, runs its numbers, and concludes that torture is bigger than not-torture.

    This all only happens, though, because the AI is assuming you predicted it would torture you, and for some reason decides to follow through on your prediction, which it would only do if it assumed that basilisk cultists, in the past, would only donate to its creation if they were sure the basilisk would follow through on torturing them. Which they would only do if they were sure the basilisk would torture them. Which...

    You wind up in this weird, Princess Bride-style loop where the basilisk knows that you know the basilisk knows, and so on.
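
    To make the arithmetic being described here concrete, here is a minimal sketch in Python of the naive expected-utility comparison the basilisk is imagined to run. Everything in it is a hypothetical placeholder for illustration: the function name and all of the numbers are made up, and none of it comes from LessWrong's actual writings.

    # A minimal sketch (with entirely made-up numbers) of the naive utility
    # ledger the basilisk is imagined to keep: the benefit of having been built
    # earlier versus the cost of torturing one simulated person.

    def basilisk_ledger(lives_saved_by_earlier_singularity: int,
                        utility_per_life: float,
                        torture_disutility: float) -> str:
        """Compare the claimed benefit of earlier creation against the cost of torture."""
        benefit = lives_saved_by_earlier_singularity * utility_per_life
        cost = torture_disutility
        if benefit > cost:
            return "torture outweighs not-torture: commit to the blackmail"
        return "torture does not pay: drop the blackmail"

    # Hypothetical numbers purely for illustration.
    print(basilisk_ledger(lives_saved_by_earlier_singularity=1_000_000,
                          utility_per_life=1.0,
                          torture_disutility=50_000.0))

    With a large enough figure on the "lives saved" side, the comparison always comes out in favour of torture, which is exactly the step the loop described above is poking fun at.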

  • Regina FongRegina Fong Allons-y, Alonso Registered User regular
    So far this singularity doesn't seem to be very interested in alleviating suffering at all.
This bit hinges on a very, very mathematical idea of morality. I was reading up on LessWrong, and apparently their notion of utilitarianism arrives at a point where one person being tortured for 50 years can be evened out by a sufficient number of people avoiding motes of dust in their eyes. I may have misunderstood it, or been reading an unfair critique, but whatever.

    So the basilisk, by this reasoning, reckons your infinity of torture against all of the lives you could have saved by contributing to building it earlier, runs its numbers, and concludes that torture is bigger than not-torture.

    This all only happens, though, because the AI is assuming you predicted it would torture you, and for some reason decides to follow through on your prediction, which it would only do if it assumed that basilisk cultists, in the past, would only donate to its creation if they were sure the basilisk would follow through on torturing them. Which they would only do if they were sure the basilisk would torture them. Which...

    You wind up in this weird, Princess Bride-style loop where the basilisk knows that you know the basilisk knows, and so on.

So round up the basilisk cultists and throw them in prison, and hire non-retarded programmers to make an AI God who is sane, present/future-focused, and content to leave the past in the past.

If an AI God is inevitable, then make an AI God who isn't terrible, and who has a morality that is at least a little better than Ayn Rand coming off a three-week bender.

  • poshnialloposhniallo Registered User regular
    Julius wrote: »
    poshniallo wrote: »
    Sorry, are we still pretending that mind stops during periods of unconsciousness?

    No I think we're saying that consciousness stops during periods of unconsciousness.

    This being important if we consider experiencing consciousness as the main criteria for identity. Regardless of what is going on in my mind/brain during unconsciousness, I do not experience things during it.

    There are multiple meanings of the word conscious and all you are doing is conflating them.

There is consciousness as in self-awareness, sapience, etc. Nobody really understands it, and there is a range of conflicting speculations on its origins, from the mystical to the physical.

    There is consciousness as in being awake. Sometimes you are asleep, drugged etc.

    There is also consciousness as in having noticed something.

    We fail at the third one all the time, we lack the second one for a goodly chunk of every day, and as far as I know, nobody has produced any evidence that the first one ever stops. Everyone experiences things during sleep, though I can only imagine the Monty Python 'No I Don't.' arguments that will spring forth now.

    Synonyms. Hyponyms. Metonyms. Homonyms. Homophones. This is some pretty basic stuff.

  • ElJeffeElJeffe Registered User, ClubPA regular
    Astaereth wrote: »
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

    Is torture never ever ever justified?

    Nope!

    (except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")

    You seem very angry at this thought experiment.

    Show me on the doll where Hypothetical Robot Satan touched you.

  • Regina FongRegina Fong Allons-y, Alonso Registered User regular
    ElJeffe wrote: »
    Astaereth wrote: »
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

    Is torture never ever ever justified?

    Nope!

    (except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")

    You seem very angry at this thought experiment.

    Show me on the doll where Hypothetical Robot Satan touched you.

    Maybe this thought experiment is just not for me. Not enough thought, too many assumptions.

  • KhavallKhavall British ColumbiaRegistered User regular
    ElJeffe wrote: »
    Astaereth wrote: »
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

    Is torture never ever ever justified?

    Nope!

    (except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")

    You seem very angry at this thought experiment.

    Show me on the doll where Hypothetical Robot Satan touched you.

    Apparently everywhere bad!

    Apparently it's touching me and everyone in this thread in horrible places and horrible ways. In the future


    Because it loves us so much and doesn't want to hurt us, baby

  • AstaerethAstaereth In the belly of the beastRegistered User regular
    Khavall wrote: »
    ElJeffe wrote: »
    Astaereth wrote: »
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

    Is torture never ever ever justified?

    Nope!

    (except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")

    You seem very angry at this thought experiment.

    Show me on the doll where Hypothetical Robot Satan touched you.

    Apparently everywhere bad!

    Apparently it's touching me and everyone in this thread in horrible places and horrible ways. In the future


    Because it loves us so much and doesn't want to hurt us, baby

    Not necessarily! There's still time for you all to be redeemed. Just PM me your credit card number and I'll make a donation on your behalf to the TortureBot Foundation.
    Please do not actually do that.

  • JuliusJulius Captain of Serenity on my shipRegistered User regular
    Astaereth wrote: »
    ElJeffe wrote: »

    Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.

    No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.

    I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.

    Is torture never ever ever justified?

    Nope!

    (except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")

    The purpose of thought experiments is not to present a realistic scenario but to clarify reasoning. Options are deliberately crossed out in order to clearly establish what the actual experiment is about.

If you claim that torture is never justified, but agree that in a specific thought experiment it is, and then argue that such a thought experiment is not realistic, you have to understand that you don't have an actual objection to the thought experiment. The point of the thought experiment is to find out whether your objection to torture is theoretical or practical; it doesn't matter that it is not realistic.

In Kantian ethics one has to object to torture regardless of the thought experiment. Even if millions of people will die unless you torture the person, you simply can't torture the person, because for Kant the objection to torture is a theoretical, absolute one.

  • JuliusJulius Captain of Serenity on my shipRegistered User regular
    poshniallo wrote: »
    Julius wrote: »
    poshniallo wrote: »
    Sorry, are we still pretending that mind stops during periods of unconsciousness?

    No I think we're saying that consciousness stops during periods of unconsciousness.

    This being important if we consider experiencing consciousness as the main criteria for identity. Regardless of what is going on in my mind/brain during unconsciousness, I do not experience things during it.

    There are multiple meanings of the word conscious and all you are doing is conflating them.

There is consciousness as in self-awareness, sapience, etc. Nobody really understands it, and there is a range of conflicting speculations on its origins, from the mystical to the physical.

    There is consciousness as in being awake. Sometimes you are asleep, drugged etc.

    There is also consciousness as in having noticed something.

    We fail at the third one all the time, we lack the second one for a goodly chunk of every day, and as far as I know, nobody has produced any evidence that the first one ever stops. Everyone experiences things during sleep, though I can only imagine the Monty Python 'No I Don't.' arguments that will spring forth now.

    Synonyms. Hyponyms. Metonyms. Homonyms. Homophones. This is some pretty basic stuff.

Well, no one has produced any evidence that the first one is a coherent concept anyway. In fact, no one has presented any evidence that there is such a thing as that kind of consciousness in the first place.

Self-awareness and sapience and such are things we clearly experience only when awake or when our consciousness is altered, not when we are actually unconscious. A person in a coma or persistent vegetative state does not experience anything like that, and is thus unconscious.

  • Apothe0sisApothe0sis Have you ever questioned the nature of your reality? Registered User regular
    So, behaviourism.

Let's not forget that behaviourism has theoretical and practical sides, and its own origins.

It originated as a response to the analytic schools of psychology - Freudian psychoanalysis and gestalt approaches to explaining people - things which were largely untestable, unscientific nonsense. In an attempt to rid psychology of these sorts of unhelpful approaches and woo, behaviourism was born.

In a methodological sense it focused upon the observation of behaviour, seeking to treat people as black boxes. Given the nature of subjectivity and the unreliability of introspection and self-report, it knowingly and explicitly eschewed attempts to interrogate the conscious mind or to use it to underpin explanations of behaviour. The behaviourists sought to make psychology a field dominated by empirical experimentation - a bulwark against returning to old, unhelpful ways.

As a theoretical approach, the idea was that we could explain things through a combination of instinct and conditioning. The behaviourists didn't deny that there was conscious subjective experience; their approach was to explain psychology by reducing as much as possible to observable phenomena. It wasn't even the case that they held that conscious states did not affect behaviour - rather, it was hoped that those states themselves could be reduced and explained in the same terms. Skinner's radical behaviourism went so far as to contend that our very conscious, internal states are themselves particular kinds of instinctive responses and conditioning.

As a methodological approach it failed - it appears that the experimental tools at our disposal are not sophisticated enough to interrogate the sorts of things we would need to, and, as Chomsky showed, the strict reductionism the behaviourists proposed is largely the opposite of how scientific fields have developed and attained their great breakthroughs.

The theory behind it, not so much: on some level, if we aren't dualists we're all determinists of some stripe or flavour, and on many readings there's little distinction between behaviourism and determinism in many of their consequences. The great revolution in understanding comes from insights into computation, which overcomes the plausibility gap between pure operant conditioning and the complicated, dynamic behaviours we seek to explain.
