Taken with the DoA comic from yesterday, this is better character development for a dude who seemed like a bro-ish horndog than I thought Willis could do.
Ehh, Joe did a shitty thing, and Joyce is understandably upset. But of all the people who can get high-minded at Joe, the person who beat the shit out of him for no reason isn't one of them.
You know what? NaNoWriMo's cancelled on account of the world is stupid.
Atomic Robo
At any point in time, Sir Richard Branson is trying to pull many things.
I think I'm out of the loop on Atomic Robo. Is this a guy that's shown up before?
Robo's talked to other celebrity tech / industry people a few times, and I think they mentioned that his new base was near one of Branson's airfields before.
He was previously seen being pissed that Robo 'borrowed' a fuel truck without permission.
For Action Science purposes, but still.
I think he's being set up as the annoying neighbor who starts a ten-year legal battle over a quarter-inch of hedge growing into their garden, but scaled up for big science industrialists.
I still have issues with May's portrayal in conjunction with the existence of AIs. She was introduced as a felon who funneled an exorbitant amount of money toward buying a jet so she could inhabit it. The question is: in terms of how AIs think, can this be considered a crime for a being that is decidedly not human, and that may in fact hold entirely different morals than a human would learn, by dint of being created as-is rather than growing up and learning over time? When May was created, or came about, or however new AIs appear, why did she turn out the way she did, personality-wise, and does she bear the blame for something she literally may not have been able to control?

Then there's the matter of AI Prison, which is apparently a horrid place, and how it's apparently perfectly okay to confine a sentient mind with no body in a way reminiscent of "And I Must Scream."
I don't know if they're actually created as-is? They're capable of learning, in any case, and their minds are black boxes.
It's all moot, since this is a TITAN simulspace sped up a million times as a planet's worth of egos are slowly merged into a single gestalt consciousness. The varying forms of transhumans, uplifts, and AGIs slowly become more like each other to make the ego-merge process easier.
Please, it's clearly the Prometheans doing stuff (one of which presumably showed up as that godbot earlier in the comic)
There's not nearly enough body horror for the Exsurgent virus to be involved
Any fictional robot that functions exactly as if it had a human mind can be assumed to have a brain that does, indeed, work like a human one.
This also explains why fictional robots lack traits a computer has, like the ability to do fast calculations: they have a human-like robobrain, not a CPU.
How does having different morals excuse an AI (or anyone, really) from repercussions for their actions? "Oh, I'm sorry Mr. Judge, my morals dictate that it is perfectly fine to kill my unwilling spouse, so I guess I'll just go home now?"
The law is a pragmatic technology meant for the protection of society. If an AI was behaving in a destructive manner, even with all the most positive intent in the world, it should be stopped by the authorities.
Children's rights are human rights.
Clint Eastwood (My baby's in there someplace / She crawled right in), Registered User, regular
There's something called the M'Naghten rule. Simply put, it tests for criminal insanity by determining whether the person who committed the crime recognizes the difference between right and wrong. Normally, the vast majority of human beings do distinguish between right and wrong.

But what about AIs? They have minds of their own, but it's been stated before that they dedicate a colossal amount of their own processing power toward their human-like characteristics. That's not something humans do, and it's a clear line between AI and human. A human tends to learn right from wrong at an early age, for the most part. So what happens when an AI shows up that chooses not to spend that power on being like a human? Is it capable of recognizing when it's doing something wrong? Is it insane? Maybe by our metrics, which are intended for humans, but it may also be following the most logical path, absent any notion of "good and evil," and be in fact quite sane as it pursues the singular goal it's chosen. Is it "wrong" for the AI to choose not to be like a human, or is making humanity mandatory "right"?

These aren't questions that will ever be addressed in QC. They're the stuff of deep science-fiction novels that plunge headfirst into transhumanism and existences unlike our own. Certainly, we can impose our own values on those existences, brand them with our laws, and enforce the consequences, but is that ethically "right" or "wrong" when, again, we're dealing with an existence that is unlike humanity but very much sentient?
We are MARTEN
We are many and we are one
Cambiata (Commander Shepard / The likes of which even GAWD has never seen), Registered User, regular
Any ethics is an imposition of value. (Yes, yes, "speak with the outward tongue of violence," KILL SIX BILLION DEMONS, etc., etc.) Ethics is a way of making decisions. Any consistent ethos will try to make decisions that maximize the chances of achieving one or more goals. The purpose of the concept of the Good is to "resolve" different, perhaps contradictory goals in a way that satisfies the most goals across the differing ethoses. Notice I haven't spoken about humanity yet. I don't need to. There is nothing intrinsically human about ethics. Rats have it, ants have it, amoebas have it, but humans are the only ones intelligent enough to comment on it... for now.
That the AI is so different from humanity matters not a whit to ethics. If the AI cannot understand that the goals of other beings must be taken into account, then it is either evil (and should be destroyed) or stupid (and not an AI).
Cambiata (Commander Shepard / The likes of which even GAWD has never seen), Registered User, regular
Because he already killed Faye and a second murder would be too obvious.
This is the webcomic-reference equivalent of Steph Curry putting up a three-pointer and walking away without bothering to see if it went in, because he knows it did.
I'll be honest, I did not know there was a Hannelore iPod character before four strips ago.
How do all these characters know about him, Bubbles included? I guess Momo makes sense, but someone from the relatively recent war bot arc?
That character has existed in the strip for a long time. There's no reason to assume that, just because he hasn't appeared in recent stories, none of the characters have interacted with him. Jeph often jumps ahead in time between story arcs.
EDIT: Also just in case
https://youtu.be/4tgW8x8HgRU
http://www.cc.com/video-clips/xxwsh0/the-colbert-report-richard-branson
http://www.fallout3nexus.com/downloads/file.php?id=16534
Monster Pulse
Penny Arcade
A Ghost Story
poot
EDIT: prout
uh oh it's the qc police cheese it!
On my sleeve, let the runway start
Demon Street
https://www.paypal.me/hobnailtaylor
It just dawned on me that this is still day 2 of Peri's Fairytale.
Holy shit that was five hundred comics ago?
Supernormal Step
Steam // Secret Satan
Steam: YOU FACE JARAXXUS| Twitch.tv: CainLoveless
Peritale