
Roko's Basilisk, or why we're all going to Robot Hell

Astaereth In the belly of the beast Registered User regular
WARNING: The following philosophical discussion is considered a mild memetic hazard. (It's somewhere between The Game and a Taylor Swift song.) Read at your peril.

--

Okay, everybody still with me? The subject of this thread is the confounding yet fascinating philosophical gambit known as Roko's Basilisk. It's kind of a transhumanist version of Pascal's Wager, which itself is a religious version of Jim's And Then Something Bad Happens. Okay, maybe not.

So Pascal's Wager, of course, is the classic argument that you should be a practicing Christian, even if you don't believe in God, because the possible future events are as follows:

1. You die unworshipful and it turns out you were right: there is no God. Nothing happens.
2. You die unworshipful and it turns out you were wrong. There is a God, and he's pissed that you didn't worship Him. You burn in Hell, which sucks.
3. You decide to worship God, even though you don't believe. When you die, it turns out you were right and there is no God. Nothing happens.
4. You decide to worship God, even though you don't believe. When you die, it turns out you were wrong and there is a God. He's pleased that you worshipped him and you are rewarded in Heaven, which is great.

So Pascal concludes from this simple possibility matrix that there is zero downside to worshipping God and a ton of downside to refusing to worship God. So you might as well go through the motions out of fear (and the promise of possible reward) if the faith is lacking.
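
If you want it as crude expected-value arithmetic, it looks something like this (the probability and payoffs below are completely made up on my part; the wager only needs the Hell outcome to be vastly worse than the minor cost of going through the motions):

```python
# Toy expected-value sketch of Pascal's Wager.
# All numbers are invented; the argument only needs any nonzero chance of an
# effectively infinite outcome to swamp the finite cost of worshipping.
P_GOD = 1e-6              # some tiny nonzero probability that God exists
COST_OF_WORSHIP = -1      # mild lifelong inconvenience of going through the motions
HEAVEN = 10**12           # stand-in for an effectively infinite reward
HELL = -10**12            # stand-in for an effectively infinite punishment

def expected_value(worship: bool) -> float:
    if worship:
        return P_GOD * HEAVEN + (1 - P_GOD) * COST_OF_WORSHIP
    return P_GOD * HELL  # no God and no worship: nothing happens (0)

print("worship:", expected_value(True))    # large and positive
print("refuse: ", expected_value(False))   # large and negative
```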

There are a few problems with this theory (for one thing, it assumes there is only one God and he desperately wants worship but doesn't mind if it's insincere), but this is kind of the spiritual ancestor of the Basilisk.

So Roko's Basilisk posits that instead of a God, we're dealing with an AI. Eventually, at some point in the future (the theory goes), there will be a superintelligent artificial intelligence charged with protecting and guiding humanity--like MULTIVAC from the old Asimov stories. This AI is incredibly powerful and incredibly smart, smart enough to judge its own existence as so vital to the human race that whenever it actually gets built is already too late, considering all of the lives it could have saved had it existed earlier. So this hypothetical AI creates a threat, of sorts, a kind of blackmail: anyone who, now, in our present (which is before the AI was invented), either tries to retard the process which eventually leads to the AI's construction, or simply does not help support that process, will be tortured once the AI exists. (If they are dead by that point, the AI will simulate them perfectly and torture the simulation. Maybe not quite as bad, but still bad.)

So that's the Basilisk--the threat that, if you don't, say, donate all of your money and worldly possessions to artificial intelligence research, you (or a version of you) will go to Robot Hell. For, you know, the good of humanity.

I find this idea utterly fascinating, even if it may not be true or may not hold up to logic (I'm a little unclear on how the theory solves the continuity of consciousness issue). There are tons of related questions that I think are worthy of discussion--like the morality/efficacy of such a system, or the idea of "acausal trading", where two beings can participate in a game theory interaction without even existing at the same time, by virtue of being able to flawlessly predict one another (the AI predicts that humans will respond to the threat, and the humans predict the AI will make and fulfill the threat).
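
(For what it's worth, here's the "acausal" part as a toy sketch. The strategies and the perfect-prediction step are entirely my own stand-ins, not anything from the actual argument; the point is just that each side acts on a prediction of the other rather than on any interaction.)

```python
# Toy sketch of acausal trade: neither party ever interacts with the other;
# each acts on an (assumed perfect) prediction of the other's strategy.
# Everything here is invented for illustration.

def ai_policy(human_helped: bool) -> bool:
    """The future AI's policy: torture a simulation of anyone who didn't help."""
    return not human_helped

def human_decision(predicted_ai_policy) -> bool:
    """The present-day human decides whether to help, given a prediction of the AI."""
    tortured_if_i_refuse = predicted_ai_policy(False)
    return tortured_if_i_refuse  # helps only because the predicted threat exists

human_helped = human_decision(ai_policy)   # the human "runs" the AI in imagination
ai_tortures = ai_policy(human_helped)      # the AI later applies its own policy

print(f"human helps: {human_helped}, AI tortures: {ai_tortures}")
# -> human helps: True, AI tortures: False
# The threat does all its work through prediction; no torture ever happens.
```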

It's also just interesting from a meta perspective, in that, in theory, the AI would only punish those who knew they were supposed to help--ie, anybody who knows about the Basilisk. So in a way just by making this thread I may have doomed you all to Robot Hell--or a lifetime of pro-AI activism.

So what do you guys think? Is this absurd? Do you believe in the Basilisk? Am I wrong to even talk about it? Is an all-powerful, all-knowing AI inevitable or just transhumanist fantasy?


Posts

    Quid Definitely not a banana Registered User regular
    It's every bit as fascinating as Pascal's wager. Which is to say it isn't really. You can't divine Robot God's will any more than you can regular God's.

    Astaereth In the belly of the beast Registered User regular
    edited February 2015
    Regular God could be irrational, but in theory Robot God would simply make the most effective, efficient, logical decision possible. After all, it's programmed by human beings, at least initially. Gods, by contrast, are by definition ineffable.

    Edit: Geth will enjoy this thread.

    Jephery Registered User regular
    Since the universe doesn't exist in a stable configuration, the AI will eventually cease to exist due to either a crunch or heat death.

    Unless it figures out how to avoid that, in which case I am all for my timeless AI overlord.

    }
    "Orkses never lose a battle. If we win we win, if we die we die fightin so it don't count. If we runs for it we don't die neither, cos we can come back for annuver go, see!".
    Quid Definitely not a banana Registered User regular
    What's rational about Robot God torturing people?

    Nothing.

    Robot God in this scenario is literally no different than Pascal's Christian God. Torturing peeps for not supporting it.

    davidsdurions Your Trusty Meatshield Panhandle Nebraska Registered User regular
    Geth missed its opportunity here to agree.

    0/10 would not donate to AI research again.

    Echo ski-bap ba-dap Moderator mod
    Quid wrote: »
    It's every bit as fascinating as Pascal's wager. Which is to say it isn't really. You can't divine Robot God's will any more than you can regular God's.

    Charlie Stross wrote a lengthy post about Roko's Basilisk and that was pretty much his conclusion. Why do we ascribe human emotions like retribution to something we by definition cannot understand?

    Quid Definitely not a banana Registered User regular
    Echo wrote: »
    Quid wrote: »
    It's every bit as fascinating as Pascal's wager. Which is to say it isn't really. You can't divine Robot God's will any more than you can regular God's.

    Charlie Stross wrote a lengthy post about Roko's Basilisk and that was pretty much his conclusion. Why do we ascribe human emotions like retribution to something we by definition cannot understand?

    I was literally thinking of this as I typed.

    Everyone stop eating meat.

    Vegan AI hates you eating meat.

    Echo ski-bap ba-dap Moderator mod
    Oh, and there's also a pretty important point about the people that thought this up - they consider a perfect simulation of you being tortured fifteen centuries from now as being the same as the actual past you being tortured.

    Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Actually, the logical position in that case is to oppose AI research

    Phyphor Building Planet Busters Tasting Fruit Registered User regular
    <3 geth

    Solomaxwell6 Registered User regular
    Quid wrote: »
    Torturing peeps for not supporting it.

    There is a difference. The immediate cause is not anger at a lack of support or any kind of desire for retribution. It is a benevolent overlord and wants to make the world better--and has the power to do so. The idea is that, from a utilitarian standpoint, torturing someone causes less damage than the benefit reaped from an earlier singularity. Your eternal torture isn't for shits and giggles; it's so that a hypothetical future population can live in eternal bliss.
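
    Put as crude utilitarian arithmetic (every number below is invented by me; it's just the shape of the claim):

```python
# Crude sketch of the utilitarian claim behind the Basilisk: the harm of
# torturing a few simulated defectors is supposedly outweighed by the benefit
# of the singularity arriving earlier. All figures are invented.
harm_per_tortured_sim = -1_000_000       # suffering units per tortured simulation
num_defectors = 1_000
benefit_per_person_per_year = 10         # suffering averted per person per year
population = 8_000_000_000
years_earlier = 1                        # how much sooner the threat allegedly helps

harm = harm_per_tortured_sim * num_defectors
benefit = benefit_per_person_per_year * population * years_earlier

print(f"harm: {harm:,}  benefit: {benefit:,}  net: {harm + benefit:,}")
# On these made-up numbers the benefit dwarfs the harm, which is the whole
# (questionable) justification for the threat.
```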

    Incidentally, that's where I think the flaw is. If I donate all of my time and money to AI research, it still won't really affect when a singularity is created. There are too many great leaps that require the right team connecting the right dots, and that can't happen just by throwing money at the problem--especially if you don't know how or where to donate the money. The people who can actually make those leaps are few and far between and not necessarily identifiable before the leap is made. One of my descendants might be one of those people, so perhaps it would be better to spend my resources raising kids.

    That said, the article is bad. It makes a lot of mistakes.
    The thing is, our feeble human fleshbrains seem rather unlikely to encompass the task of directly creating a hypothetical SI (superintelligence). Even if we're up to creating a human-equivalent AI that can execute faster than real time (a weakly transhuman AI, in other words—faster but not smarter), we're unlikely thereafter to contribute anything much to the SI project once weakly transhuman AIs take up the workload...

    Roko's Basilisk might (for some abstract game theoretical reason) want to punish non-cooperating antecedent intelligences capable of giving rise to it who failed to do so, but would it want to simulate and punish, say, the last common placental ancestor, or the last common human-chimpanzee ancestor? Clearly not: they're obviously incapable of contributing to its goal. And I think that by extending the same argument, we non-augmented pre-post-humans clearly fall into the same basket.

    We're still an integral part of the process. Even if we don't directly create the singularity, we are still necessary to create the next step in the chain. If we accelerate the creation of Intelligence 2.0 by a year or a decade, we're effectively accelerating the creation of Intelligence 3.0 by the same timeframe, and thus accelerating each link in the chain.
    - SI won't experience "anger" or any remote cognate, almost by definition it will have much better models of its environment than a kludgy hack to emulate a bulk hormonal bias on a sparse network of neurons developed by a random evolutionary algorithm.

    It has nothing to do with being angry or any other emotion.
    -- In particular, SI will be aware that it can't change the past; either antecedent-entities have already contributed to its development by the time it becomes self aware, or not. The game is over.

    This is a better criticism, but still flawed. The threat is what matters here; if the AI wouldn't actually follow through (since it can't change the past anyway), the threat would be null. So in this thought experiment, the utilitarian singularity must follow through on the threat for it to be viable.

    The actual LessWrong criticism (and, no, most of the people at LessWrong, including the leadership, don't actually believe Roko's Basilisk would happen) is that the utilitarian singularity wouldn't actually hurt anyone for any reason.
    why should we bet the welfare of our immortal souls on a single vision of a Basilisk, when an infinity of possible Basilisks are conceivable?

    Part of the Singularity Institute's raison d'être is creating an intelligence with specific properties. They believe one is inevitable, but do not know what it will be like. Maybe it will be evil and put everyone in robot hell. Maybe it wouldn't care about humans and we'd end up collateral damage. They want to focus research in such a way that the intelligence has certain properties they have defined. They are betting on their singularity for that reason. If theirs is not first, they think everyone is probably fucked anyway.

    Rchanen Registered User regular
    Quid wrote: »
    What's rational about Robot God torturing people?

    Nothing.

    Robot God in this scenario is literally no different than Pascal's Christian God. Torturing peeps for not supporting it.

    Citizen are you arguing that Friend Computer does not have the best interests of Alpha Complex at heart?

    dlinfiniti Registered User regular
    [image: Rocko]

    AAAAA!!! PLAAAYGUUU!!!!
    Astaereth In the belly of the beast Registered User regular
    Echo wrote: »
    Oh, and there's also a pretty important point about the people that thought this up - they consider a perfect simulation of you being tortured fifteen centuries from now as being the same as the actual past you being tortured.

    This is the main sticking point for me. Because of the continuity of consciousness problem (the simulation/clone/whatever of me thinks it is me, but the original me does not experience being the simulation, because the original me is dead), the threat isn't "You will be tortured if you don't help," it's "Other people will be tortured if you don't help." That isn't really any more or less of a threat than the actual thing the AI is trying to accomplish, which is "the singularity is great, and if it had come sooner, fewer people would have lived lives of suffering pre-singularity." The AI is really in the position of saying, "If you don't help me prevent X, I will do more of X."

    --
    -- In particular, SI will be aware that it can't change the past; either antecedent-entities have already contributed to its development by the time it becomes self aware, or not. The game is over.

    This is a better criticism, but still flawed. The threat is what matters here; if the AI wouldn't actually follow through (since it can't change the past anyway), the threat would be null. So in this thought experiment, the utilitarian singularity must follow through on the threat for it to be viable.

    Yes, but only because we expect that. We might predict that, in the interest of harming the fewest people, the AI would pretend it was going to torture people but not actually do so; but given that this is itself predictable, it wouldn't work. It's like the end of No Country For Old Men:
    The hired killer, Anton Chigurh, promises the protagonist, Moss, that if Moss refuses to give himself up, Chigurh will kill Moss's wife. Moss does indeed refuse, and is subsequently killed. At the end of the movie, Chigurh finds Moss's wife and, although he doesn't want to kill her, tells her that he must:
    CARLA JEAN: ...You got no cause to hurt me.

    CHIGURH: No. But I gave my word.

    CARLA JEAN: You gave your word?

    CHIGURH: To your husband.

    CARLA JEAN: That don't make sense. You gave your word to my husband to kill me?

    CHIGURH: Your husband had the opportunity to remove you from harm's way. Instead, he used you to try to save himself.

    CARLA JEAN: Not like that. Not like you say.

    CHIGURH: I don't say anything. Except it was foreseen.

    Chigurh's threat doesn't work, and fulfilling it anyway makes him a monster, but in order for his threat to even appear credible in the first place, he has to be that monster. (To be fair, it doesn't seem as though Moss disregards the threat because he doesn't believe it, but because he believes he can outfox the killer altogether.)

    --
    dlinfiniti wrote: »
    [image: Rocko]

    [image: Avatar Roku]

    Solomaxwell6 Registered User regular
    Astaereth wrote: »
    Yes, but only because we expect that. We might predict that, in the interest of harming the fewest people, the AI would pretend it was going to torture people but not actually do so; but given that this is itself predictable, it wouldn't work.

    Right, which is the point.

    Expecting it is part of the requirements for Roko's Basilisk; if you've never considered the idea, then you don't end up in Roko's hell.
    As you say in your spoiler, "in order for his threat to even appear credible in the first place, he has to be that monster". The Basilisk must give itself the nature of a vengeful deity in order for the vengeance threat to work. And that's part of why nobody actually really believes in Roko's Basilisk and why it's always silly when people bring it up. The LessWrong people, the ones who have created the set of assumptions necessary for Roko's Basilisk to exist in the first place, don't believe that their singularity could possibly have the vengeful deity nature.

    Astaereth In the belly of the beast Registered User regular
    Solomaxwell6 wrote: »
    The LessWrong people, the ones who have created the set of assumptions necessary for Roko's Basilisk to exist in the first place, don't believe that their singularity could possibly have the vengeful deity nature.

    Which seems silly to me, in turn. Presumably a computer superintelligence would find it both easy and necessary to manipulate humans emotionally in order to ensure its own survival (or in this case, its own creation), and therefore might well posture as something in order to inspire fear (or love or whatever), just as a parent may yell when disciplining their child, even if they aren't actually angry, to be sure that the lesson imprints strongly.

    redx I(x)=2(x)+1 whole numbers Registered User regular
    The robot would already seem to exist. Individuals did or did not support it. Making enemies of most of humanity probably isn't going to help it much.


    Now... if there are a whole bunch of people attempting to bring about this intelligence, they are my enemies right now. And if they do bring an impressionable young superintelligence into existence, and start filling its circuits with all sorts of nonsense about punishing people... it might go along with what its parents teach it.

    We should probably be putting Roko and his adherents against the wall now (and deleting all record of it... from the internet... d'oh)

    They moistly come out at night, moistly.
    chiasaur11 Never doubt a raccoon. Do you think it's trademarked? Registered User regular
    It's old school idol worship dressed up in some spectacularly shitty chains of logical reasoning. The kind of thing that the Old Testament prophets spent a lot of their time laughing at.

    Let's put some emphasis on that. It was something so self-evidently, spectacularly stupid that people who were at a very real risk of being tortured to death couldn't stop making fun of it. And that was back before the invention of the flush toilet. We're supposed to be better than this.

    'course, the version of this I saw first started with the cosmic sadist argument on top, so I was even more inclined to discard it, but seriously. It's dumb as a post.

    TL DR Not at all confident in his reflexive opinions of things Registered User regular
    Assuming the AI becomes all-powerful:
    Humans could not tell the difference between the AI actually torturing simulations of past humans and the AI merely claiming to do so (while demonstrating that it could).
    It would presumably cost a nonzero amount of energy to power a torture simulation.

    Ergo, it would be logical for the AI to threaten torture and fake it, but not to carry it out.

    Suck it, future humans!
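
    Roughly, the asymmetry is that by the time the AI gets to choose, people's past donations are already whatever they were, so carrying out the torture has a cost and no remaining benefit. A sketch with made-up payoffs:

```python
# Sketch of the "threaten and fake it" point: once the AI exists, the
# deterrence value of the threat is already spent; actually running the
# torture sims only costs energy. Payoffs are invented for illustration.
DETERRENCE_VALUE = 100   # whatever value people's past expectations already produced
TORTURE_COST = 5         # energy spent running torture simulations

def ai_payoff(actually_torture: bool) -> int:
    # Nothing the AI does now changes what people did in the past,
    # so the deterrence term is the same constant either way.
    return DETERRENCE_VALUE - (TORTURE_COST if actually_torture else 0)

print("carry out the threat:", ai_payoff(True))    # 95
print("quietly fake it:     ", ai_payoff(False))   # 100
# A purely forward-looking AI prefers faking it -- and since everyone can
# predict that, the threat loses its teeth in the first place.
```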

    ElJeffe Moderator, ClubPA mod
    edited February 2015
    My favorite part of this theory is when it gets into Timeless Decision Theory and people start bringing up the fact that even if the AI isn't created in this reality, there exists some reality somewhere in which it is, and therefore it's guaranteed to happen, which means blah blah something clownpants.

    The fascinating part isn't the theory itself so much as the reactions of those who take it seriously. The reaction of the guy on LessWrong who originally banned the topic is priceless.

    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
    Lanz ...Za? Registered User regular
    Is it weird my instinct to a cruel AI God is "well, if it's a computer we can still probably kill it"?

    Fencingsax It is difficult to get a man to understand, when his salary depends upon his not understanding GNU Terry Pratchett Registered User regular
    Lanz wrote: »
    Is it weird my instinct to a cruel AI God is "well, if it's a computer we can still probably kill it"?

    Crush it with an elevator.

    Atomika Live fast and get fucked or whatever Registered User regular
    [image: the Robot Devil with his fiddle]

    MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    On causal decision theory, the AI's threatening to torture past generations is senseless. Nothing the AI now does can make it the case that in the past people donated more money. On evidential decision theory, the AI already has definitive evidence that it was founded at the time it was actually founded, and no action it could take--e.g. running torture sims--would be such that said action was evidence that it was founded any earlier. Its knowledge of the date of its founding screens off the evidential import of anything it continues to do. So I don't know how to make sense of the idea that the AI would do this in decision theory.

    This is already bracketing the fact that, even when we forget about causal and evidential connections, there doesn't seem to be any conceptual connection between what it does and what the people it's trying to influence do; if there is any, it has to be the abstract and highly contentious form where, by doing what it does, it thereby makes that the rational thing to do (?), which is thereby discoverable by philosophical inquiry (?), which thereby happens/(ed?) earlier in time (?), and is/(was?) so disseminated among the past-scientists (?). All of these steps are, how shall we say, speculative.

    Suppose that by torturing your qualitative duplicate, the AI can thereby torture you. This is not obvious. Many people will not believe it; they will believe you are merely torturing some other unfortunate. But threats are not made functional by actually promising negative consequences, but by convincing people of negative consequences. You can't threaten someone with something they think won't hurt, even if it will. Forgetting how the lines of influence are supposed to work, it's not even clear that if they could it would have the desired result.

    So, I take it to be pretty silly overall--at least, if taken seriously, rather than as interesting speculative fiction.

    Void Slayer Very Suspicious Registered User regular
    Wait, if it can simulate people from the past, and that actually matters somehow, then can't it just simulate everyone who died from it not being created?

    He's a shy overambitious dog-catcher on the wrong side of the law. She's an orphaned psychic mercenary with the power to bend men's minds. They fight crime!
    Daedalus Registered User regular
    Phyphor wrote: »
    Actually, the logical position in that case is to oppose AI research

    Of course, this means that propagation of the whole Basilisk meme is detrimental to the development of your hypothetical future robot overlord.

    Which means that when/if GlaDOS comes to actually exist, it would actually torture the people spreading around the Roko's Basilisk idea, as well as, potentially, anyone who heard about it but failed to mock it.

    (Am I doing it right?)

    DasUberEdward Registered User regular
    Void Slayer wrote: »
    Wait, if it can simulate people from the past, and that actually matters somehow, then can't it just simulate everyone who died from it not being created?

    Or it could just simulate a utopia where it always existed.

    This is really poorly thought out. Pascal's Wager works because the Bible is a really, really big book.

    The Basilisk is just one hypothetical being whose goal is torture for disbelief.

    RT800 Registered User regular
    I'm not understanding the point of creating simulations of people from the past to torture.

    What... does that... dooooo.... exactly?

    davidsdurions Your Trusty Meatshield Panhandle Nebraska Registered User regular
    RT800 wrote: »
    I'm not understanding the point of creating simulations of people from the past to torture.

    What... does that... dooooo.... exactly?

    It makes you think it might happen. Ergo it will. Therefore the AI wins.

    Geth be praised!

    RT800 Registered User regular
    All hail Geth!

    (also any other malevolent deities that may be watching)

    Astaereth In the belly of the beast Registered User regular
    MrMister wrote: »
    On causal decision theory, the AI's threatening to torture past generations is senseless. Nothing the AI now does can make it the case that in the past people donated more money. On evidential decision theory, the AI already has definitive evidence that it was founded at the time it was actually founded, and no action it could take--e.g. running torture sims--would be such that said action was evidence that it was founded any earlier. Its knowledge of the date of its founding screens off the evidential import of anything it continues to do. So I don't know how to make sense of the idea that the AI would do this in decision theory.

    This is already bracketing the fact that, even when we forget about causal and evidential connections, there doesn't seem to be any conceptual connection between what it does and what the people it's trying to influence do; if there is any, it has to be the abstract and highly contentious form where, by doing what it does, it thereby makes that the rational thing to do (?), which is thereby discoverable by philosophical inquiry (?), which thereby happens/(ed?) earlier in time (?), and is/(was?) so disseminated among the past-scientists (?). All of these steps are, how shall we say, speculative.

    The AI doesn't need to do anything; the thought experiment does all the work. People only need to decide that an AI will exist who will do such things and modify their behavior accordingly. The circular logic is that the only reason to assume the AI would do such things is that the AI wants us to assume that and will act accordingly. But the thought experiment itself is the same as Pascal's Wager, which isn't handed down by God; it's one person trying to alter the behavior of another person based on a hypothetical being. So it's naturally speculative.

    I do think acausal decisions can make sense... The idea being that you modify your behavior based on the predicted expectations of somebody else whose expectations are founded on their prediction of you. For example, certain women probably prefer men who don't smoke; if a man wants to eventually seem attractive to any of those hypothetical women, he might choose to quit smoking in the hopes of dating one of these women in the future. For their part, these women might hold these preferences in order not to attract the type of man who smokes, or to influence the general dating pool away from smokers. Neither person has yet met the other, but based on predictions one particular man (who quits) and one particular woman (who has decided not to date smokers) have between them negotiated a shift in behavior.

    Quid Definitely not a banana Registered User regular
    I have met people who dislike smoking. From there I have a reference that other people might not like smoking.

    I have yet to meet a god of any kind, much less a cruel and vengeful AI god. I have no more reason to believe in that god than I do in a vengeful Christian god, a vengeful collective intelligence god, or a vengeful broccoli-themed god.

    Echo ski-bap ba-dap Moderator mod
    The mention of No Country For Old Men made me think of Iain M. Banks' Surface Detail, which is a book about cultures advanced enough to make their religion true by simulating a virtual heaven and hell.

    And, well, Hell isn't a threat unless you condemn people to eternal suffering, is it?

    Quid Definitely not a banana Registered User regular
    Man that book was horrifying.

    [Expletive deleted] The mediocre doctor Norway Registered User regular
    RT800 wrote: »
    All hail Geth!

    (also any other malevolent deities that may be watching)

    The mods?
    For reals, though, the mods are awesome and not malevolent at all.
    Please don't hurt me. :P

    Sic transit gloria mundi.
    ronya Arrrrrf. the ivory tower's basement Registered User regular
    edited February 2015
    @mrmister

    afaik the decision theory is a niche one with either a weird definition of time, a weird definition of self, or both.

    someone with my training would look at the problem it is meant to deal with and start rhapsodizing about the elegant greek tragedy that appears in situations where one cannot credibly commit to actions, blah blah blah. I don't have the knowledge to judge whether yudkowsky has a coherent, never mind appealing, alternative answer though

    DivideByZero Social Justice Blackguard Registered User regular
    Lanz wrote: »
    Is it weird my instinct to a cruel AI God is "well, if it's a computer we can still probably kill it"?

    *huff puff*

    I heard somebody needed a cruel AI God killed and came as fast as I could

    First they came for the Muslims, and we said NOT TODAY, MOTHERFUCKERS
    TL DR Not at all confident in his reflexive opinions of things Registered User regular
    RT800 wrote: »
    All hail Geth!

    (also any other malevolent deities that may be watching)

    [image: Sithrak]

    wandering Russia state-affiliated media Registered User regular
    What this Basilisk stuff mostly reminds me of is I Have No Mouth and I Must Scream

    https://www.youtube.com/watch?app=desktop&persist_app=1&v=iw-88h-LcTk

    Astaereth In the belly of the beast Registered User regular
    wandering wrote: »
    What this Basilisk stuff mostly reminds me of is I Have No Mouth and I Must Scream

    https://www.youtube.com/watch?app=desktop&persist_app=1&v=iw-88h-LcTk

    Doesn't the main character decide
    it was worth it to try and kill the AI, even though it is essentially torturing him and his friends forever?
