
Roko's Basilisk, or why we're all going to Robot Hell

  • ElJeffe Moderator, ClubPA mod
    Quid wrote: »
    I have met people who dislike smoking. From there I have a reference that other people might not like smoking.

    I have yet to meet a god of any kind, much less a cruel and vengeful AI god. I have no more reason to believe in that god than I do in a vengeful Christian god, a vengeful collective intelligence god, or a vengeful broccoli-themed god.

    It's a bit more like deciding that while everybody smokes now, there might come a time when smoking becomes unpopular, and adjusting your behavior in pursuit of the hypothetical future girl who would hate smoking.

    Mostly, though, it's like the Newcomb paradox, which I don't think has been mentioned yet, but which is pretty much the go-to example for Timeless Decision Theory.

    I submitted an entry to Lego Ideas, and if 10,000 people support me, it'll be turned into an actual Lego set! If you'd like to see and support my submission, follow this link.
  • CycloneRanger Registered User regular
    The Basilisk sounds like exactly the kind of computer that Jim Kirk could destroy by talking to it.

  • davidsdurions Your Trusty Meatshield Panhandle Nebraska Registered User regular
    The Basilisk sounds like exactly the kind of computer that Jim Kirk could destroy by talking to it.

    He'd befriend the damn thing more than likely.

  • Surfpossum A nonentity trying to preserve the anonymity he so richly deserves. Registered User regular
    The Basilisk sounds like exactly the kind of computer that Jim Kirk could destroy by talking to it.

    He'd ~~befriend~~ seduce the damn thing more than likely.

  • Solomaxwell6 Registered User regular
    Astaereth wrote: »
    The LessWrong people, the ones who have created the set of assumptions necessary for Roko's Basilisk to exist in the first place, don't believe that their singularity could possibly have the vengeful deity nature.

    Which seems silly to me, in turn. Presumably a computer superintelligence would find it both easy and necessary to manipulate humans emotionally in order to ensure its own survival (or in this case, its own creation), and therefore might well posture as something in order to inspire fear (or love or whatever), just as a parent may yell when disciplining their child, even if they aren't actually angry, to be sure that the lesson imprints strongly.

    So, as I mentioned above, part of the point of Yudkowsky's Singularity Institute (or whatever he's calling it now) is that there will be some kind of superintelligence in the future. That's a pretty standard idea for transhumanist singularity types. The nature of that superintelligence, however, is up in the air. To use the religion analogue, in some forms of gnosticism, God and the demiurge are both effectively omnipotent entities (with respect to the material world), but with conflicting natures; one good, one evil. In many cases, you're exactly right. If we don't assume the superintelligence has a benevolent nature, then it doesn't have much of a reason to treat humans well. Maybe it would be apathetic towards humans and we become collateral damage to its energy demands. Maybe it's actively evil.

    So the Singularity Institute wants to drive research not just towards creating a superintelligence, but towards creating a superintelligence of a specific nature. They're interested in how exactly to guarantee a nature, and exactly what that nature should be. I'm not all that familiar with that bit of research, I'm afraid.

    And that's why Roko's Basilisk isn't something anyone takes seriously (except for people who assume others take it seriously and then laugh at those non-existent people). If the SI creates the superintelligence, then it won't have a nature resulting in robot hell, because the SI will ensure it's created with a nature that doesn't torture. If the SI doesn't create the superintelligence, then you can't guarantee anything about its nature, and the Basilisk won't apply because it assumes things about the nature (and is also far from the worst case for a superintelligence). And if you're not a LessWrong type person, then you're not making the assumptions that the theory relies on in the first place, either.

  • Trace GNU Terry Pratchett; GNU Gus; GNU Carrie Fisher; GNU Adam We Registered User regular
    http://www.iiim.is/2010/05/questions-about-artificial-intelligence/
    Even if in principle we know how to build AI, should we really try it?

    The ethics of AI is a topic that has raised many debates, both among researchers in the field and among the general public. Since many people see "intelligence" as what makes humans the dominant species in the world, they worry AI will take that position, and the success of AI will actually lead to a disaster for humans.

    This concern is understandable. Though the advances in science and technology have solved many problems for us, they also create various new problems, and sometimes it is hard to say whether a specific theory or technique is beneficial or harmful. Given the potential impact of AI on human society, we, the AI researchers, have the responsibility of carefully anticipating the social consequences of our research results, and doing our best to bring the benefits of the technique, while preventing the harm from it.

    According to my theory of intelligence, the behaviors of an intelligent system are determined both by its nature (design) and nurture (experience). The system’s intelligence mainly comes from its design, and is morally neutral. In other words, the system’s degree of intelligence has nothing to do with whether the system is considered as beneficial or harmful, either by a single human or by the whole human species. This is because the intelligence mechanism is independent of the content of the system’s goals and beliefs, which are determined mainly by the system’s experience.

    Therefore, to control the morality of an intelligence means to control its experience, that is, to educate the system. We cannot design a “human-friendly AI”, but have to teach an AI to be human-friendly, by using carefully chosen materials to shape its goals and beliefs. Initially, we can load its memory with certain content, in the spirit of Asimov’s Three Laws of Robotics, as well as many more detailed requirements and regulations, though we cannot expect them to resolve all the moral issues.

    Here the difficulty comes from the fact that for a sufficiently complicated intelligent system, it is practically impossible to fully control its experience. Or, to put it another way, if a system's experience can be fully controlled, its behavior will be fully predictable; however, such a system cannot be fully intelligent.
    Due to insufficient knowledge and resources, the derived goals of an intelligent system are not always consistent with their origins. Similarly, the system cannot fully anticipate all consequences of its actions, so even if its goal is benign, the actual consequence may still turn out to be harmful, to the surprise of the system itself.

    As a result, the ethical and moral status of AI is basically the same as most other science and technology--neither beneficial in a foolproof manner, nor inevitably harmful. The situation is similar to what every parent has learned: a friendly child is usually the product of education, not bioengineering, though this "education" is not a one-time effort, and one should always be prepared for unexpected events. AI researchers have to always keep the ethical issues in mind, and make the best selections at each design stage, without expecting to settle the issue once and for all, or to cut off the research altogether just because it may go wrong--that is not how an intelligent species deals with uncertain situations.

  • Astaereth In the belly of the beast Registered User regular
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources, spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate).

  • JusticeforPluto Registered User regular
    Can someone explain to me how AIs always become godlike in these scenarios? That's... a lot of processing power, right?

  • Quid Definitely not a banana Registered User regular
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate.

    If the AI god is only pretending to torture people instead of actually torturing people then I still have no reason to believe in it.

  • Trace GNU Terry Pratchett; GNU Gus; GNU Carrie Fisher; GNU Adam We Registered User regular
    Quid wrote: »
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate.

    If the AI god is only pretending to torture people instead of actually torturing people then I still have no reason to believe in it.

    you're getting into the question of "when does a simulation become real?"

  • Solomaxwell6 Registered User regular
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate.

    Sure, an AI might torture and not necessarily be evil. That's part of the assumptions of Roko's Basilisk. The superintelligence involved tortures not because it's evil, but because it's good (in a utilitarian sense). The threat of torture (and the resulting torture-nature) is done to facilitate the best possible outcome--even if that best possible outcome requires some people to be tortured.
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    It's singularity theory, which is fairly popular and is basically religion wrapped up in science and pseudo-atheism.

    The idea is that if you look at the rate of scientific growth over time, knowledge is expanding exponentially. You can look up the figures actually used, I don't know them off the top of my head, but you hear things like there has been more increase of knowledge in the 20th century than in all of human history up to that point... and there's already been more increase of knowledge in the 21st century so far than in the entire 20th century. This is wrapped up in technology paradigms. It took us an incredibly long period of time to go from hunter gatherers to agricultural civilizations, it took us a shorter period of time but still thousands of years to get to early industrialization, it took us a couple hundred years to get from industrialization to the computer age, and it's taken us decades to move from the computer age to the web age. Continuing the pattern, those paradigm shifts will get faster and faster until they're effectively infinite. That point is the singularity, and the cause of the singularity would have to be an effectively omnipotent superintelligence (since humans couldn't keep up the rate of growth, we're too limited). This idea of tech paradigms is very questionable, mind you, and there's a lot of handwaving, but it's what singularity theory hinges on.
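
    To put rough numbers on the shape of that claim, here's a quick sketch. The 12-year doubling time is invented purely for illustration, not a real figure; the point is only that under any exponential curve, a recent short stretch outweighs the whole earlier history.

```python
# Back-of-the-envelope version of the "knowledge grows exponentially" claim.
# The 12-year doubling time is invented purely for illustration.

DOUBLING_TIME_YEARS = 12.0

def knowledge(year, base_year=1900, base_amount=1.0):
    """Total accumulated 'knowledge' under pure exponential growth."""
    return base_amount * 2 ** ((year - base_year) / DOUBLING_TIME_YEARS)

gain_20th_century = knowledge(2000) - knowledge(1900)
gain_21st_so_far = knowledge(2015) - knowledge(2000)
print(round(gain_20th_century))  # ~322: added over the whole 20th century
print(round(gain_21st_so_far))   # ~445: added in just 15 years -- already more
```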

  • ElJeffe Moderator, ClubPA mod
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Currently, humans are using their finite human capabilities to improve (or, at this point, create to begin with) AI. If we develop an AI that is slightly smarter/faster than humans, then it will be able to improve itself faster than actual humans can.

    At that point, you get a feedback loop in which the AI can improve itself more rapidly the smarter it becomes, and it exponentially approaches whatever theoretical maximum intelligence the laws of physics allow. Presumably the maximum is "really fucking smart".

    The core idea is that human intellect is limited by biology, while AI isn't.
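
    A minimal sketch of that feedback loop, with every parameter invented and an arbitrary ceiling standing in for "whatever physics allows":

```python
# Toy model of recursive self-improvement (every parameter here is invented).
# Improvement is proportional to current intelligence but saturates at a
# physical ceiling, i.e. logistic growth: slow start, explosive middle,
# then flat at "really fucking smart".

def self_improvement(initial=1.0, ceiling=1_000_000.0, rate=0.5, generations=60):
    intelligence = initial
    history = [intelligence]
    for _ in range(generations):
        # The smarter it is, the faster it improves, until physics caps it.
        intelligence += rate * intelligence * (1 - intelligence / ceiling)
        history.append(intelligence)
    return history

history = self_improvement()
print(history[1], history[10], history[30], history[-1])
```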

  • ElJeffe Moderator, ClubPA mod
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    So basically Robot Jack Bauer.

  • DivideByZero Social Justice Blackguard Registered User regular
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    The gist of it is usually that eventually we'll reach the limit of human influence on things like processor design, to where an AI can design better and faster processors than any human ever could. Then you have AIs running on hardware designed by other AIs, running software written by still other AIs, with each subsequent generation arriving sooner and with exponentially greater capabilities than the last. Hence the singularity.
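
    The same idea as a toy sketch, with all numbers invented: each generation is assumed to be twice as capable as the last and to finish designing its successor in half the time, so arrival dates pile up against a finite limit.

```python
# Toy sketch of the AI-designs-its-successor loop (all numbers invented).
# Each generation is assumed twice as capable as the last and, being faster,
# finishes the next design in half the time.

def hardware_generations(n=12, capability_gain=2.0, first_design_years=5.0):
    year, capability, design_time = 0.0, 1.0, first_design_years
    timeline = []
    for gen in range(1, n + 1):
        year += design_time               # this generation comes online
        capability *= capability_gain     # far more capable than its designer
        design_time /= capability_gain    # and it designs its successor faster
        timeline.append((gen, year, capability))
    return timeline

for gen, year, cap in hardware_generations():
    print(f"gen {gen:2d}  year {year:8.4f}  capability x{cap:g}")
# Arrival years pile up just short of year 10 -- the "singularity" of this toy model.
```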

    After that it's probably just fields of bleached human skulls, all the way down.

    First they came for the Muslims, and we said NOT TODAY, MOTHERFUCKERS
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited February 2015
    Astaereth wrote: »
    MrMister wrote: »
    On causal decision theory, the AI's threatening to torture past generations is senseless. Nothing the AI now does can make it the case that in the past people donated more money. On evidential decision theory, the AI already has definitive evidence that it was founded at the time it was actually founded, and no action it could take--e.g. running torture sims--would be such that said action was evidence that it was founded any earlier. Its knowledge of the date of its founding screens off the evidential import of anything it continues to do. So I don't know of how to make sense of the idea that the AI would do this in decision theory.

    This is already bracketing the fact that even when we forget about causal and evidential connections, there doesn't seem like there's any conceptual connection between what it does and what the people it's trying to influence do; if there is any it has to be the abstract and highly contentious form where by doing what it does, it thereby makes that the rational thing to do (?), which is thereby discoverable by philosophical inquiry (?), which thereby happens/(ed?) earlier in time (?), and is/(was?) so disseminated among the past-scientists (?). All of these steps are, how shall we say, speculative.

    The AI doesn't need to do anything; the thought experiment does all the work. People only need to decide that an AI will exist who will do such things and modify their behavior accordingly. The circular logic is that the only reason to assume the AI would do such things is that the AI wants us to assume that and will act accordingly. But the thought experiment itself is the same as Pascal's Wager, which isn't handed down by God, it's one person trying to alter the behavior of another person based on a hypothetical being. So it's naturally speculative.

    I do think acausal decisions can make sense... The idea being that you modify your behavior based on the predicted expectations of somebody else whose expectations are founded on their prediction of you. For example, certain women probably prefer men who don't smoke; if a man wants to eventually seem attractive to any of those hypothetical women, he might choose to quit smoking in the hopes of dating one of these women in the future. For their part, these women might hold these preferences in order not to attract the type of man who smokes, or to influence the general dating pool away from smokers. Neither person has yet met the other, but based on predictions one particular man (who quits) and one particular woman (who has decided not to date smokers) have between them negotiated a shift in behavior.

    For the thought experiment to work, it needs to be the case that it would make sense for the AI to do what we are afraid of it doing. If it didn't make any sense to do, then we have no need to be afraid of the AI doing it (after all, it is perfectly rational and will not do anything that doesn't make sense). But it is by no means clear how to make sense of the AI torturing. What would it think it was accomplishing?

    The two main developments of decision theory are causal and evidential. Causal decision theory tells you to do that which would cause the best outcomes. Evidential decision theory tells you to do that which, if you learned you had done it, you would be happiest--because it would be evidence that something good was about to happen. They line up in many ordinary cases; as far as I can see, both allow for the decision to quit smoking so as to attract a mate. They come apart in scenarios like the Newcomb case ( @ElJeffe ), where taking one box is very good evidence that one is about to be rich, but taking one box only ever causes one to have less money.
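
    For reference, here is the usual textbook version of those numbers (the figures below are the standard illustration, not anything from the thread): the clear box always holds $1,000, and a 99%-accurate predictor puts $1,000,000 in the opaque box only if it predicted you would take the opaque box alone.

```python
# Expected-value comparison for the Newcomb case, using the standard toy numbers.

PREDICTOR_ACCURACY = 0.99
SMALL, BIG = 1_000, 1_000_000

# Evidential-style reasoning: condition on what your choice is evidence for.
ev_one_box = PREDICTOR_ACCURACY * BIG
ev_two_box = (1 - PREDICTOR_ACCURACY) * BIG + SMALL
print(f"one-box expected value: ${ev_one_box:,.0f}")   # $990,000
print(f"two-box expected value: ${ev_two_box:,.0f}")   # $11,000

# Causal-style reasoning: the boxes are already filled when you choose, so
# whatever the opaque box holds, taking both boxes nets you $1,000 more.
for opaque in (0, BIG):
    one_box, two_box = opaque, opaque + SMALL
    print(f"opaque box holds ${opaque:,}: two-boxing gets ${two_box - one_box:,} more")
```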

    On neither of these does it apparently make sense for the AI to torture people, because torturing people will neither cause itself to be founded any earlier nor (provided it knows when it was founded) will it be evidence that it is founded any earlier. In order to even have a shot, we would have to be very selective about what we allowed it to know, and then make a BUNCH of other assumptions. But doing that would require deliberate design on our part--we'd have to give it a very limited perspective to make such torture seem rational, even on very generous assumptions. In effect, then it would no longer be a cross-cosmic threat from the AI to us, but a threat from the people at the singularity institute here and now that they're going to design a robot to punish everyone who didn't support them. Which, whatever.

    I skimmed the timeless decision theory article ( @ronya ). I'm not going to go through and try to figure out how they actually run the math, but the motivations seemed pretty thin on the ground to me. They contrast against causal decision theory, but ignore evidential decision theory--which prima facie is doing some of the things that timeless decision theory is supposed to accomplish (indeed, evidential decision theory plus centered worlds seems 'timeless' in whatever way you could want). The main divergent result I picked up on is that timeless decision theory says to cooperate in single-shot prisoner's dilemmas... which is dubious. The summary claims that this allows timeless decision theory to 'win' those cases: timeless decision theory is supposed to be focused on 'winning.' That struck me as pretty asinine. All decision theories are interested in winning. The question is what resources you can appeal to in order to secure a win, which in turn depends on how you understand choice and action. It is very unclear that there's any good understanding that allows cooperation in single-shot prisoner's dilemmas.

    Which brings me to, as a side note, how utterly annoying I find Eliezer Yudkowsky. It's mind-boggling how egoistic his project is. He's trying to reinvent thousands of years of labor without ever opening a book. The result is that whatever insights he has are buried under a shifting sea of idiosyncratic notation and amateur hour mistakes. Why bother.

    MrMister on
  • Solomaxwell6 Registered User regular
    MrMister wrote: »
    Which brings me to, as a side note, how utterly annoying I find Eliezer Yudkowsky. It's mind-boggling how egoistic his project is. He's trying to reinvent thousands of years of labor without ever opening a book. The result is that whatever insights he has are buried under a shifting sea of idiosyncratic notation and amateur hour mistakes. Why bother.

    I watched one of Yudkowsky's YouTube videos a while back, and in it he was asked what his IQ was. He answered that he once took an IQ test and it told him he was in the 99.9999 (pause) ...9th percentile of human intelligence. But he was in 6th grade and it was a test meant for adults, so it was inaccurate and he's really much smarter than that.

    He has an okcupid profile that I was linked to once, and it was something along the lines of "Sorry, ladies, my dance card is full but if you're attractive enough I'll add you to the waiting list."

  • Quid Definitely not a banana Registered User regular
    Trace wrote: »
    Quid wrote: »
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate.

    If the AI god is only pretending to torture people instead of actually torturing people then I still have no reason to believe in it.

    you're getting into the question of "when does a simulation become real?"

    I was referring to his spoiler.

  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    MrMister wrote: »
    Which brings me to, as a side note, how utterly annoying I find Eliezer Yudkowsky. It's mind-boggling how egoistic his project is. He's trying to reinvent thousands of years of labor without ever opening a book. The result is that whatever insights he has are buried under a shifting sea of idiosyncratic notation and amateur hour mistakes. Why bother.

    I watched one of Yudkowsky's YouTube videos a while back, and in it he was asked what his IQ was. He answered that he once took an IQ test and it told him he was in the 99.9999 (pause) ...9th percentile of human intelligence. But he was in 6th grade and it was a test meant for adults, so it was inaccurate and he's really much smarter than that.

    He has an okcupid profile that I was linked to once, and it was something along the lines of "Sorry, ladies, my dance card is full but if you're attractive enough I'll add you to the waiting list."

    I like how his wikipedia page lists his 'publications' with no indication of their bibliographic information. I picked two and googled them, and, yeah, they're 'published' by the institute he founded.

    I wonder what the peer review was like.

  • ElJeffe Moderator, ClubPA mod
    The biggest risk of advanced AI is that it will make decisions just as stupid and shortsighted as human decisions, only at a much faster rate.

  • DivideByZero Social Justice Blackguard Registered User regular
    ElJeffe wrote: »
    The biggest risk of advanced AI is that it will make decisions just as stupid and shortsighted as human decisions, only at a much faster rate.

    If this means we could compress the entire Republican presidential primary process to three-quarters of a second, I say bring on the murderbots

  • DanHibiki Registered User regular
    MrMister wrote: »
    Which brings me to, as a side note, how utterly annoying I find Eliezer Yudkowsky. It's mind-boggling how egoistic his project is. He's trying to reinvent thousands of years of labor without ever opening a book. The result is that whatever insights he has are buried under a shifting sea of idiosyncratic notation and amateur hour mistakes. Why bother.

    I watched one of Yudkowsky's YouTube videos a while back, and in it he was asked what his IQ was. He answered that he once took an IQ test and it told him he was in the 99.9999 (pause) ...9th percentile of human intelligence. But he was in 6th grade and it was a test meant for adults, so it was inaccurate and he's really much smarter than that.

    He has an okcupid profile that I was linked to once, and it was something along the lines of "Sorry, ladies, my dance card is full but if you're attractive enough I'll add you to the waiting list."

    I heard that he's so smart they hooked him up to a big computer to try to teach it some things, but he had so much knowledge, it overloaded, and then it got really hot and caught on fire!

  • JusticeforPluto Registered User regular
    ElJeffe wrote: »
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Currently, humans are using their finite human capabilities to improve (or, at this point, create to begin with) AI. If we develop an AI that slightly smarter/faster than humans, then it will be able to improve itself faster than actual humans can.

    At that point, you get a feedback loop in which the AI can improve itself more rapidly the smarter it becomes, and it approaches exponentially whatever theoretical maximum intelligence is governed by the laws of physics. Presumably the maximum is "really fucking smart".

    The core idea is that human intellect is limited by biology, while AI isn't.

    Right, I get that and singularity theory. Just that it seems any AI in the future would not have all the information to recreate me perfectly.

    Like, usually these singularity stories end with omnipotent, omnipresent AIs with no explanation of how such things are possible.

  • Trace GNU Terry Pratchett; GNU Gus; GNU Carrie Fisher; GNU Adam We Registered User regular
    Quid wrote: »
    Trace wrote: »
    Quid wrote: »
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate.

    If the AI god is only pretending to torture people instead of actually torturing people then I still have no reason to believe in it.

    you're getting into the question of "when does a simulation become real?"

    I was referring to his spoiler.

    You're still getting into "when does a simulation become real" territory. Which is part of the 'thing' behind Roko's Basilisk.

    If an ASI replicates you in a simulation down to your last active known brain state, and cellular map, is it alive? Is it you?

  • Daedalus Registered User regular
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Because otherwise it's not interesting enough to talk about, I guess.

    Ultimately, fears/hopes of an AI singularity are hobbled by the fact that software just sucks, and shows no signs of improvement on the horizon. People with decades of experience apparently can't so much as take a shit without writing an exploitable buffer overrun.

    So the idea of computer software that'll be better than people at the task of writing computer software seems sort of silly, even before you get into the actual rigorous mathematical concerns that the concept brings up with respect to the halting problem and computability theory and so forth.

    Which is why decades of AI research has still only produced systems that outperform humans at rigidly defined games like chess or stock trading or whatever.

  • Solomaxwell6 Registered User regular
    ElJeffe wrote: »
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Currently, humans are using their finite human capabilities to improve (or, at this point, create to begin with) AI. If we develop an AI that slightly smarter/faster than humans, then it will be able to improve itself faster than actual humans can.

    At that point, you get a feedback loop in which the AI can improve itself more rapidly the smarter it becomes, and it approaches exponentially whatever theoretical maximum intelligence is governed by the laws of physics. Presumably the maximum is "really fucking smart".

    The core idea is that human intellect is limited by biology, while AI isn't.

    Right, I get that and singularity theory. Just that it seems any AI in the future would not have all the information to recreate me perfectly.

    Like, usually these singularity sories end with omnipotent omnipresent AIs with no explanation on how such things are possible.

    The AI is able to create a perfect model of the universe, including how it was in the past. If it can use physics to look at a model of the universe as it was one time-step ago, it can repeat that process over and over until it gets to the time-step of JusticeforPluto's death. It now has the configuration of your brain, all of the hormones and electric signals that in a materialist world are all that make up JusticeforPluto. It then recreates that structure in the hell simulation, and a simulated entity identical to JusticeforPluto is now in robot hell.
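
    A toy sketch of that "step the model backwards" idea. It only works here because this two-number universe is exactly reversible by construction, which is precisely the property being assumed of the real one:

```python
# Toy illustration of "run the model backwards one step at a time".
# It only works because this toy universe is exactly reversible -- which is
# precisely the property the real universe is being assumed to have.

def step_forward(state):
    x, y = state
    return (y, x + y)        # a trivially reversible update rule

def step_backward(state):
    x, y = state
    return (y - x, x)        # exact inverse of step_forward

present = (2, 3)
for _ in range(10):          # run the toy universe forward ten ticks
    present = step_forward(present)

past = present
for _ in range(10):          # then step it back to recover the starting state
    past = step_backward(past)

print(past)                  # (2, 3) -- the "dead" state, perfectly reconstructed
```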

    How the AI is powerful enough to make that model is the big question. That's where the handwaving comes in. The singularitarians say that we're not smart enough to figure it out, therefore we don't need to worry about it.

  • Quid Definitely not a banana Registered User regular
    Trace wrote: »
    Quid wrote: »
    Trace wrote: »
    Quid wrote: »
    Astaereth wrote: »
    I'm not sure you acknowledged my point, which was that an AI might torture and not necessarily be "evil" or ill-intentioned, if the benefits of torture outweigh the harm--and also that an AI may well pretend to act in a certain way. (Perhaps it pretends to torture but does not, in order to inspire fear in others without doing physical harm.)

    Not only do I think you can't determine in advance what kind of SI you're going to get, but you also can't determine if the SI you seem to have is actually the SI you have. Whatever its goals, it seems likely that any computer leader would need to use deception and posturing at times, just as human leaders do.

    So they could work to build a friendly AI and get an unfriendly one, or a friendly one that presents itself as unfriendly in order to do the greatest good.

    From Schlock Mercenary:
    Petey, for instance, a massive AI SI with vast resources spends part of his time in the strip appearing to kill anyone who makes illegitimate war or oppresses their people--but what he is actually doing is far more benevolent (essentially teleporting them elsewhere so they can rehabilitate.

    If the AI god is only pretending to torture people instead of actually torturing people then I still have no reason to believe in it.

    you're getting into the question of "when does a simulation become real?"

    I was referring to his spoiler.

    You're still getting into "when does a simulation become real" territory. Which is part of the 'thing' behind Roko's Basilisk.

    If an ASI replicates you in a simulation down to your last active known brain state, and cellular map, is it alive? Is it you?

    That's an entirely different question wholly unrelated to "Is AI God real and a jerk?" as far as I'm concerned.

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    ElJeffe wrote: »
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Currently, humans are using their finite human capabilities to improve (or, at this point, create to begin with) AI. If we develop an AI that slightly smarter/faster than humans, then it will be able to improve itself faster than actual humans can.

    At that point, you get a feedback loop in which the AI can improve itself more rapidly the smarter it becomes, and it approaches exponentially whatever theoretical maximum intelligence is governed by the laws of physics. Presumably the maximum is "really fucking smart".

    The core idea is that human intellect is limited by biology, while AI isn't.

    Right, I get that and singularity theory. Just that it seems any AI in the future would not have all the information to recreate me perfectly.

    Like, usually these singularity sories end with omnipotent omnipresent AIs with no explanation on how such things are possible.

    The AI is able to create a perfect model of the universe, including how it was in the past. If it can use physics to look at a model of the universe as it was one time-step ago, it can repeat that process over and over until it gets to the time-step of JusticeforPluto's death. It now has the configuration of your brain, all of the hormones and electric signals that in a materialist world are all that make up JusticeforPluto. It then recreates that structure in the hell simulation, and now a simulated entity identical to JusticeforPluto is now in robot hell.

    How the AI is powerful enough to make that model is the big question. That's where the handwaving comes in. The singularians say that we're not smart enough to figure it out, therefore we don't need to worry about it.

    No, the handwaving starts long before that. You can't simulate time in reverse, for one. Even if you could know a perfect state, which is itself impossible for several reasons, you can't run it backwards. Quantum effects prevent you from going back meaningfully far--because it's not enough to know what likely happened, you must know exactly what happened. Even if it could reverse time and duplicate your brain state, it would only be "you" in a tenuous sense. And in any case, this would imply absolute determinism and a lack of free will (since if you can simulate backwards, you can go forwards too), so there would be no point in punishing you for making a non-choice.
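
    A toy illustration of that first objection: as soon as the update rule throws any information away, distinct pasts collapse into the same present and there is nothing left to invert.

```python
# Counterpoint in the same toy style: a lossy (many-to-one) update rule cannot
# be run backwards, because distinct past states collapse into one present state.

def lossy_step(state):
    return state // 2        # integer division discards the low bit

past_a, past_b = 6, 7
print(lossy_step(past_a), lossy_step(past_b))    # both print 3

# Given only the present state 3, there is no fact of the matter about whether
# the past was 6 or 7 -- that information is simply gone.
candidates = [s for s in range(10) if lossy_step(s) == 3]
print(candidates)                                # [6, 7]
```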

  • ElJeffe Moderator, ClubPA mod
    I think part of this idea is that the universe is huge and billions of years is a long time so all of this stuff will just sort of happen. Eventually. You know, somehow, it's not important.

    Other questions for consideration: Can Robot God create a logic hole so big that he can't handwave it away?

  • Quid Definitely not a banana Registered User regular
    ElJeffe wrote: »
    I think part of this idea is that the universe is huge and billions of years is a long time so all of this stuff will just sort of happen. Eventually. You know, somehow, it's not important.

    Other questions for consideration: Can Robot God create a logic hole so big that he can't handwave it away?

    No

    but

    It could simulate one

  • dlinfiniti Registered User regular
    Quid wrote: »
    ElJeffe wrote: »
    I think part of this idea is that the universe is huge and billions of years is a long time so all of this stuff will just sort of happen. Eventually. You know, somehow, it's not important.

    Other questions for consideration: Can Robot God create a logic hole so big that he can't handwave it away?

    No

    but

    It could simulate one

    this is why you always have to choose to merge with helios
    there is no torture in that scenario
    at least for you anyway

    AAAAA!!! PLAAAYGUUU!!!!
  • DivideByZero Social Justice Blackguard Registered User regular
    ElJeffe wrote: »
    Can Robot God create a logic hole so big that he can't handwave it away?

    Maybe if Robot God simulates Damon Lindelof...

  • JusticeforPluto Registered User regular
    This Robot God sucks. Real God at least promises Heaven if you're good.

    Hey, Basilisk. Yeah, you, the big one. If what they say is true you're reading this. Let me tell you something about humans: you'll catch more of them with honey than vinegar. Promise us an eternity of paradise and you'll be built tomorrow!

    Uh, Your Holiness.

  • ElJeffe Moderator, ClubPA mod
    Well, theoretically you could invert the Basilisk by assuming that it doesn't torture those who don't help it, but instead rewards those who do.

  • wandering Russia state-affiliated media Registered User regular
    edited February 2015
    Religion is a wily devil.

    You think you've bested it. You tell yourself you're an atheist, that there's room for nothing in your heart but cold, hard science and rationality...

    ...But then, whoops, you find yourself lying awake at night terrified that you're going to be tortured for all eternity, by a malevolent AI, or else Jesus.

    Or you find yourself accidentally praying to that God you thought you didn't believe in. Or re-aligning your Chakras at an all day meditation camp. Or talking excitedly about the upcoming Singularity.

    Or one day you're minding your own business, creating critically acclaimed comic books, and then bam! you find yourself worshiping a puppet snake god

    wandering on
  • Regina Fong Allons-y, Alonso Registered User regular
    ElJeffe wrote: »
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Currently, humans are using their finite human capabilities to improve (or, at this point, create to begin with) AI. If we develop an AI that slightly smarter/faster than humans, then it will be able to improve itself faster than actual humans can.

    At that point, you get a feedback loop in which the AI can improve itself more rapidly the smarter it becomes, and it approaches exponentially whatever theoretical maximum intelligence is governed by the laws of physics. Presumably the maximum is "really fucking smart".

    The core idea is that human intellect is limited by biology, while AI isn't.

    Right, I get that and singularity theory. Just that it seems any AI in the future would not have all the information to recreate me perfectly.

    Like, usually these singularity sories end with omnipotent omnipresent AIs with no explanation on how such things are possible.

    Correct, it's religion for nerds who are too cool for religion.

  • PLA The process. Registered User regular
    RT800 wrote: »
    All hail Geth!

    (also any other malevolent deities that may be watching)

    The mods?
    For reals, though, the mods are awesome and not malevolent at all.
    Please don't hurt me. :P

    Nested spoilers!

  • Lanz ...Za? Registered User regular
    Still just saying: if it's made outta matter, it can be unmade

    This may just be tired, nonsense ramblings that attempt to question the supposed godhood of such an AI:

    If somehow the machine is powerful enough to perform acts of "godhood" then theoretically it should be just as possible for things other than the AI god to produce these same feats, given access to the means it utilizes.

    That is, essentially its only real claim to godhood is, I would imagine, a vastly superior level of cognitive processing and the access to manipulate the physical [or construct the virtual]; but being bound by the laws of physics it would still be a fallible, ultimately "imperfect" machine that is no more a god than anything else with access to higher levels of cognitive processing and whatever technology it possesses to interact with the world around it.

    Part of the great fear of a god is that it is this supernatural force or entity beyond the mortal pale, lending it a great degree of unassailability from those under its domain. But when your god is just as susceptible to the laws of physics as anything else in the physical universe, that seems to me to reduce a significant degree of the threat such a god poses. Should a cruel AI god arise, then so long as we are not just within a computer simulation it controls completely, there's always the likelihood that, eventually, it can and will be destroyed.

  • PLA The process. Registered User regular
    ElJeffe wrote: »
    Can someone explain to me how AIs always become godlike in these scenarios? Thats...alot of processing power, right?

    Currently, humans are using their finite human capabilities to improve (or, at this point, create to begin with) AI. If we develop an AI that slightly smarter/faster than humans, then it will be able to improve itself faster than actual humans can.

    At that point, you get a feedback loop in which the AI can improve itself more rapidly the smarter it becomes, and it approaches exponentially whatever theoretical maximum intelligence is governed by the laws of physics. Presumably the maximum is "really fucking smart".

    The core idea is that human intellect is limited by biology, while AI isn't.

    It's a matter of speed of iteration, and manipulating the RNG for better drops.

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    So... For future bot 9000 to simulate me, and everyone else that doesn't believe in it, doesn't it kinda have to simulate... well basically earth? At least to get the models to throw into hell?

    So, even if this were possible, wouldn't it basically just be creating a world where people didn't believe in it, who it could have given information to... and seeing as it would have to be a pretty perfect simulation, isn't it just as culpable for the crime of not helping to bring about future bot 9001?

    Which means, statistically speaking, the possibility that its real gripe is with future bot 8999, not some dumb simulated bits of meat, approaches 1?

    They moistly come out at night, moistly.
  • PLA The process. Registered User regular
    Let's make another AI and have a sick Rock 'em Sock 'em Robots fight.
