Roko's Basilisk, or why we're all going to Robot Hell
So, why is it that the person who wakes up in the morning is the same as the person who went to sleep? Your answer is "an uninterrupted stream of consciousness," which suggests to me that anyone who is knocked unconscious is killed, but even if you say that it's impossible to be actually unconscious until you're dead, I think it's clear you're still wrong because of the split brain stuff I raised above. An uninterrupted stream of consciousness can be duplicated, and if it can be duplicated, there's no reason to think it can't be copied.
This is just assuming what it is that you're trying to prove, namely, that an uninterrupted stream of consciousness is necessary to maintain personal identity. Since I have given reasons above for thinking this is false, it's not clear to me that you've done anything other than assert the thing that is precisely at issue in the debate.
No I think we're saying that consciousness stops during periods of unconsciousness.
This being important if we consider experiencing consciousness as the main criterion for identity. Regardless of what is going on in my mind/brain during unconsciousness, I do not experience things during it.
A song is not a person though. It's information, it's not a self-aware life form. When you delete an mp3, nothing is lost forever. You can go download an identical copy and nothing would have been lost. When a life form dies, even a non-sentient life form, that particular life form is lost forever. With sufficient technology maybe you could simulate it or clone it or whatever. But it would still be a copy, the original would still be dead.
If you're asking what makes me "me" then I'd say: the self-aware cognitive function that currently exists in my meatsack body. It's a continuous being that has been in operation since shortly before I was born, and aside from periodic rest periods has been conscious and aware of itself since infancy. If it's easier to call it a soul, or a ghost, or a spirit, go right ahead. It's the totality of what makes me myself, and it doesn't exist, nor can it exist, anywhere else. Even if I had an identical twin he wouldn't be me, he would be him and I'd still be me.
At some point in the future, maybe it will be possible to reduce a human mind and its associated consciousness to pure information. I know Star Trek handwaves this by having teleporters that work on "the quantum level" such that an individual consciousness is actually supposed to be transferred intact. It's their playground so in that context I'm okay with taking them at their word. I know other SF that explicitly states their teleporters destroy matter, kill the traveler, and what comes out the other end is just a copy.
As above, when I say "consciousness" I'm not talking about "conscious-mind-as-in-awake-and-not-sleeping." I'm using a more general term for "self-aware cognitive functional mind." The essential spark that makes you a distinct being. Were I religious I'd call it a soul.
To illustrate, imagine you were magically duplicated. There's now a You^2 standing in the same room. If You^1 closes his eyes, but You^2 does not, can You^1 see through his duplicate's eyes? If You^2 bites his tongue, does You^1 feel pain? If You^2 kills himself, does You^1 experience death? No, because though they might share the same experiences *up to the point the duplicate was created* they are still separate and distinct beings from that point onward.
Now instead of sharing the same room at the same time, imagine You^2 only exists 1,000 years in the future, long after You^1 has died. If You^2 is tortured by a robot god, does You^1 feel it? If not, how is this supposed to influence You^1's actions?
And I'm not discounting the possibility that a really, really good copy might fool me into believing it has consciousness, mentation, an "internal life" so to speak.
But that worm isn't even close, and I don't grant the rest as inevitable, so... "get back to me when you have something" shouldn't come as such a surprising statement.
The basilisk wants me to not just accept all this as totes inevitable, it wants me to essentially worship it like a deity (serve me or I will punish you).
Is it really so shocking that I would deride it?
There are other things we can copy. Inanimate things, and living but dumb cells. In most of those cases, one copy can be destroyed without destroying another. Maybe streams of consciousness are an exception, but there's stuff to base assumptions on.
Let's have the year 2000 as the point a person is copied from. That person (Instance 1) goes about in a normal fashion until it dies in the year 2030.
The copy (Instance 2) that was produced is instead hidden away somewhere and is still alive in 2050.
What later happens to Instance 2 doesn't affect Instance 1, I assume.
The delays can be shortened. The point of copying I mentioned at first can be moved forward to 2030, earlier on the same day that Instance 1 dies, with the order of events remaining the same. But the order of events can be shifted as well. Maybe we place the point of copying later in the day than the time Instance 1 dies, or the next day. Does that order change anything about the assumption that what subsequently happens to Instance 2 doesn't affect Instance 1?
Actually, this also requires a way to revive Instance 2, because it would be a copy of a dead Instance 1. I got carried away. But let's say that there is such a way, and it just isn't used to revive Instance 1 because some bastard thinks a thought experiment is more important than Instance 1.
The point of copying could also be moved to the exact time of Instance 1's death, followed by merely prolonging Instance 2's life to avoid the interference of death.
Forgot to answer this directly.
The me who wakes up in the morning is the same as the me who went to sleep, because my bodily function remained intact during the night. My mind switched off for a few hours, but my heart kept beating, etc. If that were not the case, then I would have died during the night and whatever woke up in the morning might *think* it was me, but objectively speaking it is a copy, and the original is dead. And since the original no longer exists, it can't be swayed by threats made against the copy.
Sure. But one of the things I'm trying to get at is, what is the difference between something "fooling" you into thinking it has consciousness and something having consciousness? All you have to go on in either case are your observations--and if the appearance of consciousness never breaks down, isn't the distinction you're making rather arbitrary?
The terms "consciousness", "unconsciousness", and "experience" have been so nebulously defined here as to make this claim meaningless.
Second, even if the basilisk was going to torture a random entity, or create a unique entity specifically to be tormented for my crime, I don't know that it changes how I need to act. Depending on your moral system, you may be just as obligated to prevent the suffering of another being as your own.
*for meanings of "know" that work within the Roko's Basilisk scenario.
Can you clarify your objection to the worm being used as evidence here?
Is it: "This simulation of a worm is not sufficiently like the actual worm to be considered a digital copy in any meaningful sense"? (To which is ask if the complete cellular model of it being worked on would count.)
Or is it: "Okay, we've made a digital nematode. Nice, but creating something substantially more complicated is still probably impossible. There exists some threshold of complexity beyond which it is probably impossible to truly create a real AI simulation." (To which I ask where, approximately, you would draw the line.)
Or is it: "Okay, it is possible to create AI simulations of animals. It is still probably impossible to create a true human AI, because humans have a special quality that cannot be created digitally." (To which I ask if you feel this is a theoretical impossibility or just a practical one. If one were to create a duplicate of an actual human, atom by atom, would it still not be a human being? Or do you just feel this will never be possible at any point anywhere from now until the heat death of the universe? )
Since my experience with consciousness consists of a sample size of one (my own) it's pretty arbitrary. But the "never breaks down under observation" part is the part I'll have to see before I believe it.
If you're asking me whether, if something is essentially a person, we should regard it as a person, then I would say "yes". But I don't grant scientific inevitability to the creation of AI copies of people that are essentially human based on what has been done so far. There are a lot of pitfalls in science. We were supposed to have flying cars and a cure for cancer by now. Brain mapping and the Lego worm are interesting, but AI copies of human minds that seem essentially human can't be taken for granted from that.
If that makes me a science grinch so be it.
So people who experience cardiac arrest are copies of the originals?
Note I say a particular sentience. I think creating a digital sentience is probably within the realm of possibility. Copying an organic one into a computer and having it be the same is, intellectually, a bridge too far for me.
Eh, consciousness is just whatever I'm experiencing now. The definition isn't unclear, our understanding of it is just not very good.
We do have flying cars (just not most of us--but that's a limitation of the market, not science). And cancer isn't the death sentence it once was.
Also, didn't an AI pass the Turing Test for the first time ever recently? I don't think it's outlandish to assume that we'll eventually have AI that seem to possess consciousness. And from there it's not outlandish to think we'll get uploaded or simulated people.
I guess what I'm saying is, the point of the story was that the grinch was wrong.
I don't see a scenario where the moral action would be to submit to the blackmail and speed the construction of this clearly evil entity rather than opposing its completion in every way possible.
Or, to get a bit closer, people who spend an hour or so clinically dead and hypothermic but get resuscitated. Their consciousness ends, their body stops, there's a gap of time and presumably space before their consciousness begins again.
Oh come on.
By all means, prove me wrong.
But asking for my faith? Nope.
I think that it's a reasonable moral response given acceptance of a few claims.
If you accept that creating Roko's Basilisk is physically possible, AND you accept that the evil torturebot is an inevitable or at least likely result given the development of the first legitimate AI, opposing its creation is pretty pointless. To actually prevent its creation, you would have to ensure that nobody ever, in at least the next trillion years or so, creates an AI that could plausibly go that route. Which... good luck with that.
So once you accept that Roko's Basilisk is possible*, you basically have to accept it as inevitable.
There isn't really an option for "we'll just make sure we never make a torturebot".
Also, the option for "even if I knew this was definitely the case, I would refuse to help it just to spite its stupid robot face and laugh as it tortured me for all eternity" is hilarious Internet Tough Guy-itry.
(*I don't think Roko's Basilisk is remotely plausible.)
No it isn't, and you're shitting on everyone who has ever been martyred or otherwise died for a moral stance which they knew was going to get them killed but believed in nevertheless.
I'd still oppose torture-bot no matter what, because it's pure evil. I don't want its heaven, and I'm not scared enough of its hell to serve it.
The argument becomes just as compelling and one tick less pants-on-head if you just suppose that Robot Satan creates a thousand digital nuns to torture for each person who doesn't help him out. Or hell, murders one actual, non-simulated person.
I'd be just as opposed in such a scenario. I really don't see how any of this is a compelling reason to support AI research (or AI God research). Frankly it's making me want to vote Republican.
IF you truly think that the AI singularity is inevitable,
and
you think this AI will end or bring about a supreme reduction in human suffering,
then you are morally impelled to support AI research.
The AI doesn't need to torture future clone you in robot 'hell'; you exist right now in a world run by fallible human beings.
You, believer in the singularity, should be aware, every time you see or experience suffering, that this suffering is a direct result of not yet being at the singularity.
Not only that, but the amount of motivation you feel will be proportional to the amount of suffering the singularity could prevent.
All of this just falls out of the belief in the singularity.
Then I'd argue that such a person, for the purpose of this conversation, was not actually "dead" since death is basically "the state of no longer existing as a living organism."
Your heart stops for a few minutes in an ambulance but you get resuscitated? You're still you, your identity is secure, you may have been "dead" to the world for a short time but clearly your body and brain functions are (more or less) intact or you would not be alive after.
Go into the water in the arctic and get revived later? Same deal, you're still you, identity secure.
Die for-realsies, get cremated or buried and a computer spits out a copy 1000 years later? No, "you" died 1000 years ago, this thing might have your memories and even *think* it's you, but it isn't. Not the original. You died.
Is torture never ever ever justified? The other assumption inherent to the Basilisk is that, however repellent his methods, TortureBot gets results--his existence is an extreme net benefit to mankind, to the point where millions or even billions of people are saved who would otherwise have died. I can't call something like that pure evil. (But then, I tend towards utilitarianism over deontology.)
Nope!
(except in stupid thought experiments where options are deliberately crossed out by angry nerds with red pens yelling "noooo that's not part of my thought experiment you must torture the kitten or fuck pillow-chan punching me in the face isn't part of my cleverly crafted scenario")
So the basilisk, by this reasoning, reckons your infinity of torture against all of the lives you could have saved by contributing to building it earlier, runs its numbers, and concludes that torture is bigger than not-torture.
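To make that reckoning concrete, here's a toy sketch in Python of the kind of expected-utility comparison the basilisk is supposed to be running. The numbers, the compliance-rate idea, and the helper names are entirely made up for illustration; nothing here comes from the thread or from any actual decision-theory formalism.

```python
# Toy sketch of the basilisk's supposed reckoning. All values are invented;
# the point is only the shape of the comparison, not the numbers.

LIVES_SAVED_PER_YEAR = 1_000_000   # assumed benefit of the AI existing sooner
YEARS_EARLIER = 10                 # how much sooner the threat allegedly gets it built
UTILITY_PER_LIFE = 1.0             # utility credited per life saved
DISUTILITY_OF_TORTURE = 1e6        # suffering inflicted on simulated non-contributors


def utility_of_torture_policy(compliance_rate: float) -> float:
    """Net utility if the basilisk commits to torturing non-contributors.

    compliance_rate: fraction of people the threat actually motivates.
    """
    benefit = compliance_rate * LIVES_SAVED_PER_YEAR * YEARS_EARLIER * UTILITY_PER_LIFE
    cost = (1 - compliance_rate) * DISUTILITY_OF_TORTURE
    return benefit - cost


def utility_of_no_torture_policy() -> float:
    """Net utility if it never follows through: no extra motivation, no torture."""
    return 0.0


if __name__ == "__main__":
    # The "torture is bigger than not-torture" conclusion only falls out if
    # you grant that the threat actually moves people (compliance_rate > 0).
    for rate in (0.0, 0.01, 0.5):
        print(rate, utility_of_torture_policy(rate) > utility_of_no_torture_policy())
```

The sketch just makes the hidden assumption visible: the whole comparison hinges on the threat reliably changing past behaviour, which is exactly the loop described next.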
This all only happens, though, because the AI is assuming you predicted it would torture you, and for some reason decides to follow through on your prediction, which it would only do if it assumed that basilisk cultists, in the past, would only donate to its creation if they were sure the basilisk would follow through on torturing them. Which they would only do if they were sure the basilisk would torture them. Which...
You wind up in this weird, Princess Bride-style loop where the basilisk knows that you know the basilisk knows, and so on.
So round up the basilisk cultists and throw them in prison, hire non-retarded programmers to make an AI God who is sane and present/future-focused and content to leave the past in the past.
If AI God is inevitable, then make an AI God who isn't terrible, and who has a morality that is at least a little better than Ayn Rand coming off a 3 week bender.
There are multiple meanings of the word conscious and all you are doing is conflating them.
There is consciousness as in self-awareness, sapience, etc. Nobody really understands it, and there are a range of conflicting speculations on its origins, from the mystical to the physical.
There is consciousness as in being awake. Sometimes you are asleep, drugged etc.
There is also consciousness as in having noticed something.
We fail at the third one all the time, we lack the second one for a goodly chunk of every day, and as far as I know, nobody has produced any evidence that the first one ever stops. Everyone experiences things during sleep, though I can only imagine the Monty Python 'No I Don't.' arguments that will spring forth now.
Synonyms. Hyponyms. Metonyms. Homonyms. Homophones. This is some pretty basic stuff.
You seem very angry at this thought experiment.
Show me on the doll where Hypothetical Robot Satan touched you.
Maybe this thought experiment is just not for me. Not enough thought, too many assumptions.
Apparently everywhere bad!
Apparently it's touching me and everyone in this thread in horrible places and horrible ways. In the future.
Because it loves us so much and doesn't want to hurt us, baby
Not necessarily! There's still time for you all to be redeemed. Just PM me your credit card number and I'll make a donation on your behalf to the TortureBot Foundation.
The purpose of thought experiments is not to present a realistic scenario but to clarify reasoning. Options are deliberately crossed out in order to clearly establish what the actual experiment is about.
If you claim torture is never justified, but agree that in a specific thought experiment it is, and then argue that such a thought experiment is not realistic, you have to understand that you don't have an actual objection to the thought experiment. The point of the thought experiment is to find out whether your objection to torture is theoretical or practical; it doesn't matter that it is not realistic.
In Kantian ethics one has to object to torture regardless of the thought experiment. Even if millions of people die if you don't torture the person, you simply can't torture the person, because for Kant the objection to torture is a theoretical, absolute one.
Well no one has produced any evidence that the first one is a coherent concept anyway. In fact, no one has presented any evidence that there is such a thing as that consciousness in the first place.
Self-awareness and sapience and such are things we clearly experience only when awake or when our consciousness is altered, not when we are actually unconscious. A person in a coma or persistent vegetative state does not experience anything like that, and is thus unconscious.
Let's not forget that behaviourism has both theoretical and practical sides, and its own origins.
It originated as a response to the analytic schools of psychology - Freudian psychoanalysis and gestalt approaches to explaining people. These were largely untestable, unscientific nonsense. In an attempt to rid psychology of these sorts of unhelpful approaches and woo, behaviourism was born.
In a methodological sense it focused upon the observation of behaviour, seeking to treat people as black boxes. Given the nature of subjectivity and the unreliability of introspection and self-report, it knowingly and explicitly eschewed attempts to interrogate the conscious mind or to use it to underpin explanations of behaviour. Its proponents sought to make psychology a field dominated by empirical experimentation - a bulwark against returning to old, unhelpful ways.
As a theoretical approach, it was thought that we could explain things as a combination of instinct and conditioning. Behaviourists didn't deny there was conscious subjective experience; their approach was to explain psychology by reducing as much as possible to observable phenomena. It wasn't even the case that they held that conscious states did not affect behaviour - rather, it was hoped that those states themselves could be reduced and explained in the same terms. Skinner's radical behaviourism was to contend that our very conscious, internal states are themselves particular kinds of instinctive responses and conditioning.
As a methodological approach it failed - it appears that the experimental tools we have at our disposal are not sophisticated enough to interrogate the sorts of things we would need to, and, as Chomsky showed, the sort of strict reductionism they proposed is largely the opposite of how scientific fields have developed and attained their great breakthroughs.
The theory behind it, not so much. On some level, if we aren't dualists, we're all determinists of some stripe or flavour, and on many readings there's little distinction between behaviourism and determinism in many of their consequences. The great revolution in understanding comes from insights into computation, which overcomes the plausibility gap between pure operant conditioning and the complicated, dynamic behaviours we seek to explain.