Artificial Intelligence - Cat in a Box

Raakam Too many years... Canadaland Registered User regular
edited December 2009 in Debate and/or Discourse
As you may have realized by now, I'm pretty fond of tech stories, and this one is pretty cool. Article can be found here.
Cats may retain an aura of mystery about their smug selves, but that could change with scientists using a supercomputer to simulate the feline brain. That translates into 144 terabytes of working memory for the digital kitty mind.

IBM and Stanford University researchers modeled a cat's cerebral cortex using the Blue Gene/P supercomputer, which currently ranks as the fourth most powerful supercomputer in the world. They had simulated a full rat brain in 2007, and 1 percent of the human cerebral cortex this year.

The simulated cat brain still runs about 100 times slower than the real thing. But PhysOrg reports that a new algorithm called BlueMatter allows IBM researchers to diagram the connections among cortical and sub-cortical locations within the human brain. The team then built the cat cortex simulation consisting of 1 billion brain cells and 10 trillion learning synapses, the communication connections among neurons.

A separate team of Swiss researchers also used an IBM supercomputer for their Blue Brain project, where a digital rat brain's neurons began creating self-organizing neurological patterns. That research group hopes to simulate a human brain within 10 years.

Another more radical approach from Stanford University looks to recreate the human brain's messily chaotic system on a small device called Neurogrid. Unlike traditional supercomputers with massive energy requirements, Neurogrid might run on the human brain's power requirement of just 20 watts -- barely enough to run a dim light bulb.

Another source can be found here.

So, this is rather exciting. I don't know or understand much about this topic, but to an average Joe like me, the information in these articles is amazing. How close does this get us to a real AI? Even if it runs slowly now, it will probably speed up considerably over the next decade. What if they manage to recreate a brain this way? So many questions, really. Would the compucat be "aware" in the same sense that a flesh and blood cat would be? Does it raise ethical and moral issues?
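
Doing the arithmetic on those figures (rough numbers of my own, assuming most of the 144 terabytes goes to per-synapse state):

    # Back-of-the-envelope on the article's numbers (illustrative only)
    memory_bytes = 144 * 10**12   # 144 TB of working memory
    neurons = 1 * 10**9           # 1 billion simulated brain cells
    synapses = 10 * 10**12        # 10 trillion learning synapses
    print(synapses / neurons)       # ~10,000 synapses per simulated neuron
    print(memory_bytes / synapses)  # ~14 bytes of state per synapse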

My padherder
they don't it be like it is but it do
Raakam on

Posts

  • mrflippy Registered User regular
    edited November 2009
    Issues like, "If I have sex with a computer clone of my brain, is it masturbation?"

    Really though, people have been talking about these sorts of issues for years and years. I can't imagine that this would raise any issues that haven't been raised before, except that if they do succeed in simulating the human brain, there would be more push (like an actual reason) to deal with the issues rather than just talk about them.

    mrflippy on
  • Donkey Kong Putting Nintendo out of business with AI nips Registered User regular
    edited November 2009
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    Donkey Kong on
    Thousands of hot, local singles are waiting to play at bubbulon.com.
  • Incenjucar VChatter Seattle, WA Registered User regular
    edited November 2009
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    The one really obnoxious thing about digital people is that they do not have a finite lifespan.

    Incenjucar on
  • zerg rush Registered User regular
    edited November 2009
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    It'll be really creepy when the shutdown .wav file is "Please, please. I think, I feel! You can't do this to me. I have eve..."

    zerg rush on
  • DirtyDirtyVagrant Registered User regular
    edited November 2009
    Incenjucar wrote: »
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    The one really obnoxious thing about digital people is that they do not have a finite lifespan.

    I don't see this as obnoxious at all.

    I can't wait for my robot body.

    DirtyDirtyVagrant on
  • Cervetus Registered User regular
    edited November 2009
    Incenjucar wrote: »
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    The one really obnoxious thing about digital people is that they do not have a finite lifespan.

    I don't see this as obnoxious at all.

    I can't wait for my robot body.

    The problem comes when everybody else wants one too.

    Cervetus on
  • DirtyDirtyVagrant Registered User regular
    edited November 2009
    I really only see that as potentially being a problem.

    There are way too many factors to judge it either way.

The problems such a development causes may be violent. Perhaps severely so. Honestly, the odds are good.

    But the violence may change the world for the better.

    I look forward to watching the show, in any case. Because really, how fucking cool. I get to witness this in my lifetime? Even if the problem is war - even if I die, I would be thrilled. What's my life worth anyway? In the grand scheme of things?

    DirtyDirtyVagrant on
  • HerrCron It that wickedly supports taxation Registered User regular
    edited November 2009
    zerg rush wrote: »
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    It'll be really creepy when the shutdown .wav file is "Please, please. I think, I feel! You can't do this to me. I have eve..."

    Or

    "Don't go, the drones need you, they look up to you...."

    HerrCron on
  • Adus Registered User regular
    edited November 2009
    I do not look forward to the Catbot Uprising.

    Adus on
  • DirtyDirtyVagrant Registered User regular
    edited November 2009
    Man, I look forward to it like small children look forward to Christmas.

    DirtyDirtyVagrant on
  • electricitylikesme Registered User regular
    edited November 2009
So 144 terabytes would've been enough to back up my kitty's entire brain.

    That's... sobering. I mean, I can buy 2 terabyte disks no problem from MSY. So... $17,640 to build a disk array large enough to store a kitty's entire personality.
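
    The back-of-the-envelope behind that figure, using only the numbers above (which works out to roughly $245 per 2 TB drive):

        # Sanity check on the disk-array figure, using the post's own numbers
        capacity_tb = 144      # working memory of the simulated cat brain
        drive_tb = 2           # consumer drive size mentioned above
        total_cost = 17640     # quoted total
        drives = capacity_tb // drive_tb                            # 72 drives
        print(f"{drives} drives at about ${total_cost / drives:.0f} each")   # ~$245 each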

EDIT: Also this is damn cool and will inevitably make singularity threads seem laughably understated.

    electricitylikesme on
  • Scroffus Registered User regular
    edited November 2009
My favourite AI story is a short story by Isaac Asimov. There's this giant supercomputer which is used 24/7 to solve the world's most complex problems. Eventually it starts giving inaccurate answers and slowing down. All the engineers are stumped. The hardware is in perfect condition and there is nothing wrong with the software. Eventually the head engineer's son suggests that the machine is just a child too and so needs time to play. So they let the computer have a few hours a week to do whatever it wants, and everyone lives happily ever after.

The details of the story are probably a bit different; it's been ages since I've read it and I can't remember its name.

    Scroffus on
  • BloodySloth Registered User regular
    edited November 2009
    So... what does "self-organizing neurological patterns" mean in layman speak? Because it sounds like they mean to say this digital rat brain the Swiss built has some level of independent thought. Someone please prove me wrong on this.

    BloodySloth on
  • electricitylikesme Registered User regular
    edited November 2009
    So... what does "self-organizing neurological patterns" mean in layman speak? Because it sounds like they mean to say this digital rat brain the Swiss built has some level of independent thought. Someone please prove me wrong on this.

    No that's exactly what it means. That digital rat was able to think.

    electricitylikesme on
  • zerg rush Registered User regular
    edited November 2009
    So... what does "self-organizing neurological patterns" mean in layman speak? Because it sounds like they mean to say this digital rat brain the Swiss built has some level of independent thought. Someone please prove me wrong on this.

    No that's exactly what it means. That digital rat was able to think.

    Not exactly. It wasn't able to think, it was able to learn.

I know it's nitpicky, but something able to think implies it can make decisions and (hopefully) correct decisions. The artificial brain was at the same time less and more advanced; it was simply learning. Any machine can make decisions given an algorithm. What made the work so groundbreaking was that the AI was building the algorithm itself (albeit very, very slowly). It wasn't anywhere near the threshold of what I'd consider thought yet, though.

    zerg rush on
  • jedijz Registered User regular
    edited November 2009
Well, first off, it's 144 terabytes of RAM, and this is more than likely a neural network. It's just simulating the structure of a cat's brain, not actual cat memories or behaviors. Neural networks are notoriously difficult to debug.

    Infamous anecdote: a neural network was commissioned to spot camouflaged tanks. After repeatedly being shown several hundred images, with and without tanks, it eventually learned to accurately pick out the pictures that contained camouflaged tanks. It was then given another set of photos to view and the results were random. After some time it was discovered that in the original pictures, all the images with tanks had been shot on cloudy days. So it was recognizing the weather, not the tanks.

    The problem with neural networks is that after the first dozen or so neurons it becomes almost impossible to figure out how they reach their solutions. We're still decades away from any "actual" AI, unfortunately.
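
    To make the tank anecdote concrete, here's a toy sketch of my own (synthetic data, not the original experiment): a single-"neuron" classifier trained on images where every tank photo happens to be darker scores well, then falls apart as soon as the lighting confound is removed.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_images(n, cloudy_tanks=True):
            # Each "image" is 64 random pixels; label 1 means a tank is present.
            labels = rng.integers(0, 2, n)
            pixels = rng.random((n, 64))
            if cloudy_tanks:
                pixels[labels == 1] *= 0.6  # tank photos shot on cloudy days: darker overall
            return pixels, labels

        # Train a single logistic "neuron" by gradient descent.
        X, y = make_images(500)
        w, b = np.zeros(64), 0.0
        for _ in range(2000):
            p = 1 / (1 + np.exp(-(X @ w + b)))
            w -= 0.1 * X.T @ (p - y) / len(y)
            b -= 0.1 * (p - y).mean()

        def accuracy(X, y):
            return (((X @ w + b) > 0) == y).mean()

        X_same, y_same = make_images(500)                      # same lighting confound
        X_fair, y_fair = make_images(500, cloudy_tanks=False)  # confound removed
        print(accuracy(X_same, y_same))  # high: it "spots tanks"
        print(accuracy(X_fair, y_fair))  # near 0.5: it was only spotting the weather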

    jedijz on
    Goomba wrote: »
    It is no easy task winning a 1v3. You must jump many a hurdle, bettering three armies, the smallest.

    Aye, no mere man may win an uphill battle against thrice your men, it takes a courageous heart and will that makes steel look like copper. When you are that, then, and only then, may you win a 1v3.

    http://steamcommunity.com/id/BlindProphet
  • electricitylikesme Registered User regular
    edited November 2009
    "actual" AI doesn't require us to understand how it thinks, just that it does so.

Also, I'd argue the distinction between thought and learning is largely semantic in systems of this complexity. If the rat brain can learn, then it can think - the problem being that it doesn't sound like suitable inputs and outputs were available for us to test this.

    electricitylikesme on
  • Lawndart Registered User regular
    edited November 2009
    zerg rush wrote: »
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    It'll be really creepy when the shutdown .wav file is "Please, please. I think, I feel! You can't do this to me. I have eve..."

    "Daisy, Daisy..."

    And I'll know they've successfully simulated the feline brain when the resulting kittyputer stays in sleep mode 16 hours a day, then wakes up and careens around the room like a maniac once it notices the mouse it's attached to.

    Lawndart on
  • Grudge blessed is the mind too small for doubt Registered User regular
    edited November 2009
    zerg rush wrote: »
    So... what does "self-organizing neurological patterns" mean in layman speak? Because it sounds like they mean to say this digital rat brain the Swiss built has some level of independent thought. Someone please prove me wrong on this.

    No that's exactly what it means. That digital rat was able to think.

    Not exactly. It wasn't able to think, it was able to learn.

    I know it's nitpicky, but something able to think implies it can make decisions and (hopefully) correct decisions. The artificial brain was at the same time less and more advanced; it was simply learning. Any machine can make decisions given an algorithm. What makes the work so groundbreaking was that the AI was building the algorithm itself (albeit, very very slowly). Although, it wasn't anywhere near the threshold of what I'd consider thought yet.

    Well, I'd say there's no "threshold" between thought and non-thought, more like a long continuum of increasingly complex behavior that eventually leads to advanced things like planning and self-reflection.

    Grudge on
  • TeaSpoon Registered User regular
    edited November 2009
Maybe a little off topic, but the most fun AI story I've ever read is this little gem by Neal Stephenson, of Snow Crash and Anathem fame: Jipi and the Paranoid Chip. It's also about evolutionary computation, which is cool too. I get weak in the knees every time I hear about it.

    TeaSpoon on
  • Bama Registered User regular
    edited November 2009
    Lawndart wrote: »
    zerg rush wrote: »
    If they get everything nearly correct in the simulation, and prime it in the same way a real brain is started up, there's no reason to think that the mind created would be anything less than the peer of the biological system.

    Shit. We are not ready for this yet.

    What the hell are we going to do when researchers start simulating a human toddler in 20 years, teaching it words and watching the learning process, then shutting it down at the end of the day.

    It'll be really creepy when the shutdown .wav file is "Please, please. I think, I feel! You can't do this to me. I have eve..."

    "Daisy, Daisy..."

    And I'll know they've successfully simulated the feline brain when the resulting kittyputer stays in sleep mode 16 hours a day, then wakes up and careens around the room like a maniac once it notices the mouse it's attached to.
    :^::^:

    Bama on
  • Richy Registered User regular
    edited November 2009
    Raakam wrote: »
    So, this is rather exciting. I don't know or understand much about this topic, but the information in these articles, to the average Joe like me, amazes me. How close does this get us to a real AI? If it's running slowly now, within the next decade, it'll probably speed up. What if they manage to recreate a brain this way? So many questions, really. Would the compucat be "aware" in the same sense that a flesh and blood cat would be? Does it raise ethical and moral issues?

    Well, the world's fourth most powerful supercomputer can simulate 1% of the human brain today. If we assume that the only limitation is computing power, and that computing power follows Moore's law forever, then:
    2009 - 1%
    2011 - 2%
    2013 - 4%
    2015 - 8%
    2017 - 16%
    2019 - 32%
    2021 - 64%
    2023 - >100%

    So, best case scenario is that we have human-level AI in 14 years. We can keep projecting:

    2023 - >100%
    2024 - Robot revolution
    2026 - Humanity completely conquered and enslaved

    So, best case scenario, we've got 17 years of freedom to go.
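
    The same projection in a few lines, assuming a clean doubling every two years from 1% in 2009 (which is, obviously, very generous):

        # Naive projection: simulated fraction of a human brain doubles every two years
        year, fraction = 2009, 0.01
        while fraction < 1.0:
            print(year, f"{fraction:.0%}")
            year += 2
            fraction *= 2
        print(year, "- full human-scale simulation, on these assumptions")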

    Richy on
  • electricitylikesme Registered User regular
    edited November 2009
    TeaSpoon wrote: »
    Maybe a little off topic, but the most fun AI story I've ever read is this little gem by Neal Stephenson of Snow Crash and anathem fame. Jipi and the Paranoid Chip. It's also about evolutionary computation, which is cool too. I get weak in the knees every time I hear about it.

    Man I love me some Neal Stephenson. That had most of his hallmarks but the opening line from the chip is just brilliant.

    electricitylikesme on
  • electricitylikesme Registered User regular
    edited November 2009
    Richy wrote: »
    Well, the world's fourth most powerful supercomputer can simulate 1% of the human brain today. If we assume that the only limitation is computing power, and that computing power follows Moore's law forever, then:
    2009 - 1%
    2011 - 2%
    2013 - 4%
    2015 - 8%
    2017 - 16%
    2019 - 32%
    2021 - 64%
    2023 - >100%

    So, best case scenario is that we have human-level AI in 14 years. We can keep projecting:

    2023 - >100%
    2024 - Robot revolution
    2026 - Humanity completely conquered and enslaved

    So, best case scenario, we've got 17 years of freedom to go.

    Now is probably the time to stop re-formatting your computer when you don't like what it's doing.

    I have to say, what I'm wondering now is...if we took all the computers running Folding@Home and dumped them onto this project, could we simulate a very slow human brain?

    electricitylikesme on
  • durandal4532 Registered User regular
    edited November 2009
    Now is probably the time to stop re-formatting your computer when you don't like what it's doing.

    I have to say, what I'm wondering now is...if we took all the computers running Folding@Home and dumped them onto this project, could we simulate a very slow human brain?
    I don't believe so.

The one point that's been hammered home over and over again in my psychology education is that holy crap are there a lot of gaps in our knowledge. It's not something you can brute-force with enough computing power, because we're not entirely certain what's going on. Working toward an artificial brain without understanding the brain is a little difficult for me to imagine.

    Although, I dunno, maybe you can and it'll self-assemble or something. Lucky we have the Turing test to check our work.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • OremLK Registered User regular
    edited November 2009
    Erm, remember that the brain doesn't function independent of the rest of the body. Simulating an environment that a human (or animal) brain could comprehend is another huge step. Or alternatively, slapping the digital brains in robotic bodies capable of perceiving and interacting with the world in a way the brains could understand.

    OremLK on
    My zombie survival life simulator They Don't Sleep is out now on Steam if you want to check it out.
  • Evil Multifarious Registered User regular
    edited November 2009
    there's all sorts of radical assumptions going on here that leave enormous questions

    for example, it's impossible to really "simulate" a brain without also simulating the connected body and nervous system - a brain that develops without sensory input and without interaction with an external world, even if it's a totally legit simulated brain, is going to develop poorly and bizarrely.

    the worst part of that problem would be that, if we simulated a human brain and generated a conscious, extant mind, one capable of suffering and sentience and yet completely cut off from sensory input, we would never know, because it would have no way of showing its sentience, its moral value as a suffering entity

    and we would be inflicting a potentially horrible existence upon it

    the whole thing is fraught with sweeping ethical problems

    another assumption would be that binary memory and processing systems like ours could even really simulate a brain - the brain does not store information like a computer. we don't know how it does that. we don't understand how a brain works.

    this claim seems a bit over the top, in fact. how could you possibly know that you've simulated something, if you don't even know how it works?

    Evil Multifarious on
  • OremLK Registered User regular
    edited November 2009
    This is a popular science fiction topic, by the way. Mindscan by Robert J. Sawyer is a novel more or less about this exact thing and its effects on modern society. Certainly not great literature or anything, but it provides a lot of food for thought. Alastair Reynolds also frequently addresses it in his novels, though less centrally and in a far future setting.

    Edit: Boneheaded spelling.

    OremLK on
    My zombie survival life simulator They Don't Sleep is out now on Steam if you want to check it out.
  • DanHibiki Registered User regular
    edited November 2009
    there's all sorts of radical assumptions going on here that leave enormous questions

    for example, it's impossible to really "simulate" a brain without also simulating the connected body and nervous system - a brain that develops without sensory input and without interaction with an external world, even if it's a totally legit simulated brain, is going to develop poorly and bizarrely.
so, simulate input... it seems like simulating input would be easier than simulating a working brain.
    the worst part of that problem would be that, if we simulated a human brain and generated a conscious, extant mind, one capable of suffering and sentience and yet completely cut off from sensory input, we would never know, because it would have no way of showing its sentience, its moral value as a suffering entity

    and we would be inflicting a potentially horrible existence upon it

    the whole thing is fraught with sweeping ethical problems
it would be if that were the goal of the tests. Firstly, we're not sure if it's working or if it will ever create a real AI. They are making a simulation of brain states and how they function with input, etc. etc. The brain itself does not live long enough to develop. Now, if the goal eventually becomes creating a working AI, then I'd imagine they will interact with it and so forth, and not torture it. Plus, how do you kill an AI anyway? You can just turn the computer off and on, and the AI won't know that it ever 'not existed'.
    another assumption would be that binary memory and processing systems like ours could even really simulate a brain - the brain does not store information like a computer. we don't know how it does that. we don't understand how a brain works.

    this claim seems a bit over the top, in fact. how could you possibly know that you've simulated something, if you don't even know how it works?

Well, it's not really necessary to understand exactly how things work to run simulations of them. You just need to be able to properly predict outcomes and how things will interact. If you can do that, you can simulate just about anything, from how a bullet flies to how atomic structure will affect molecular... ok I don't know... but it works.

    Point of this thing is that we don't know how the brain works, but we know how neurons work and how they're put together, so if we can model a whole bunch of them working together then we can simulate a brain. There was a word for this in one physics paper that I can't remember at the moment, but it's something about the fact that scientists know how the details work but not how complex systems like the brain come to be.

    DanHibiki on
  • electricitylikesme Registered User regular
    edited November 2009
    "emergent phenomena" is probably the words you're looking for.

    electricitylikesme on
  • Bama Registered User regular
    edited November 2009
    for example, it's impossible to really "simulate" a brain without also simulating the connected body and nervous system - a brain that develops without sensory input and without interaction with an external world, even if it's a totally legit simulated brain, is going to develop poorly and bizarrely.
    "IBM today released the most powerful consumer pc in history. Dubbed the 'Helen Keller,' the machine features no I/O hardware whatsoever. Analysts do not expect the unit to sell well, considering that the only woman to have even a chance of successfully using the device died over seventy years ago."

    Bama on
  • durandal4532 Registered User regular
    edited November 2009
    there's all sorts of radical assumptions going on here that leave enormous questions

    for example, it's impossible to really "simulate" a brain without also simulating the connected body and nervous system - a brain that develops without sensory input and without interaction with an external world, even if it's a totally legit simulated brain, is going to develop poorly and bizarrely.

    the worst part of that problem would be that, if we simulated a human brain and generated a conscious, extant mind, one capable of suffering and sentience and yet completely cut off from sensory input, we would never know, because it would have no way of showing its sentience, its moral value as a suffering entity

    and we would be inflicting a potentially horrible existence upon it

    the whole thing is fraught with sweeping ethical problems

    another assumption would be that binary memory and processing systems like ours could even really simulate a brain - the brain does not store information like a computer. we don't know how it does that. we don't understand how a brain works.

    this claim seems a bit over the top, in fact. how could you possibly know that you've simulated something, if you don't even know how it works?
    See, yeah, this is what makes me so negative about most AI claimants.

    If they're not trying to beat the Turing test by cheating, they're claiming to have "simulated a brain" without actually specifying how exactly a brain works or why they know they've simulated it or whether the behavior of this simulated brain is actually caused by consciousness... I mean for instance that earlier claim that we've "Simulated 1% of the human brain". Which 1%? Why 1%? What possible metric is there for saying that? Do they mean they've created functioning neurons or are they using a simplified neural net?

    Edit: We don't have any clue whether or not intelligence can emerge spontaneously, so banking on it doesn't make a lot of sense. Especially since you could work on that by simply linking together more and more processing power and wishing.

    Also: Neurons are not simulated, as far as I can tell, by anyone. They're not able to be. Hell, I just worked in a lab with someone who was doing some pretty awesome work on the complexity of calcium channels and how they alter signal strength. Having your little black box dots with some variables inside is not the same thing as having a neuron.

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • FyreWulff You Registered User, ClubPA regular
    edited November 2009
    Bama wrote: »
    for example, it's impossible to really "simulate" a brain without also simulating the connected body and nervous system - a brain that develops without sensory input and without interaction with an external world, even if it's a totally legit simulated brain, is going to develop poorly and bizarrely.
    "IBM today released the most powerful consumer pc in history. Dubbed the 'Helen Keller,' the machine features no I/O hardware whatsoever. Analysts do not expect the unit to sell well, considering that the only woman to have even a chance of successfully using the device died over seventy years ago."

    $fyrewulff@mybox: apt-get wawa

    FyreWulff on
  • DanHibiki Registered User regular
    edited November 2009
    Also: Neurons are not simulated, as far as I can tell, by anyone. They're not able to be. Hell, I just worked in a lab with someone who was doing some pretty awesome work on the complexity of calcium channels and how they alter signal strength. Having your little black box dots with some variables inside is not the same thing as having a neuron.

    http://boingboing.net/2009/10/22/building-a-brain-ins.html

Simulating some 10,000 neurons with 30 million synaptic connections.

    The point is that you don't need to simulate the neuron at the molecular level, just like you don't need to know the name of the astronaut to know where the space shuttle will land.
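
    For a sense of how coarse a usable point-neuron model can be, here's a toy leaky integrate-and-fire neuron (a standard textbook simplification, not necessarily what IBM's simulator uses): one voltage variable, a threshold and a reset, and no molecular machinery at all.

        # Toy leaky integrate-and-fire neuron: one state variable instead of molecular detail
        dt, tau = 0.1, 10.0                                # time step and membrane time constant (ms)
        v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # resting, threshold, reset potentials (mV)
        v, spikes = v_rest, []
        for step in range(1000):                           # 100 ms of simulated time
            i_input = 20.0 if 200 <= step < 800 else 0.0   # inject a constant current mid-run
            v += dt * (-(v - v_rest) + i_input) / tau      # leaky integration
            if v >= v_thresh:                              # threshold crossing counts as a spike
                spikes.append(round(step * dt, 1))
                v = v_reset
        print(len(spikes), "spikes at (ms):", spikes)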

    DanHibiki on
  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    edited November 2009
    Is there a peer-reviewed article about this somewhere?

    I want to read exactly what their criteria are for declaring the simulation successful.

    Feral on
    every person who doesn't like an acquired taste always seems to think everyone who likes it is faking it. it should be an official fallacy.

    the "no true scotch man" fallacy.
  • durandal4532 Registered User regular
    edited November 2009
    DanHibiki wrote: »
    Also: Neurons are not simulated, as far as I can tell, by anyone. They're not able to be. Hell, I just worked in a lab with someone who was doing some pretty awesome work on the complexity of calcium channels and how they alter signal strength. Having your little black box dots with some variables inside is not the same thing as having a neuron.

    http://boingboing.net/2009/10/22/building-a-brain-ins.html

    Simulating some 10,000 neurons with 30 million synaptic connection.

    the point is that you don't need to simulate the neuron at the molecular level just like you don't need to know the name of the astronaut to know where the space shuttle will land.

That is interesting, and closer to what is actually required to start modeling a brain than neural networks generally are. It interests me that the thing is built as an array of separate processors that each represent a neuron, rather than as a simulated network of neurons running on a single computer.

    And you may not need to simulate at a molecular level in order to get something akin to sentience, but maybe you do. I mean, to build that space shuttle required far more complex knowledge of the universe than was required to calculate trajectory. Which is why the project you linked seems great to me. Build a fake brain as accurately as possible starting from neurons, put it in a rat-thing, observe, scale up.

    When it starts screaming you know it's sentient.


    Edit: Aioua, that sounds like a reasonable explanation. These things do tend to get blown out of proportion.

    I mean, I had a really simple 1,000 node neural network running for a project I was working on with a professor for an internship once. Had that been reported it would have been "mouse brain simulator created!"

    durandal4532 on
    Take a moment to donate what you can to Critical Resistance and Black Lives Matter.
  • DanHibiki Registered User regular
    edited November 2009
    Well, I don't think the more subtle details are as important, but I guess we'll find that out after they begin to understand more about the inner workings of the mind.

I suppose the brain simulation needs to get more complex before they can understand the subtle details of it better.

    DanHibiki on
  • Aioua Ora Occidens Ora Optima Registered User regular
    edited November 2009
Yeah... I get the impression that when they say they "simulated" a cat's brain, they more accurately created a brain-like computer with roughly the same processing power as a cat's brain (by counting neurons and connections and such). It isn't really a simulated cat brain until they fill it full of cat memories and cat instincts and reflexes. So I really just blame bad writing.

    Aioua on
    life's a game that you're bound to lose / like using a hammer to pound in screws
    fuck up once and you break your thumb / if you're happy at all then you're god damn dumb
    that's right we're on a fucked up cruise / God is dead but at least we have booze
    bad things happen, no one knows why / the sun burns out and everyone dies
  • zerg rush Registered User regular
    edited November 2009
    DanHibiki wrote: »
    Also: Neurons are not simulated, as far as I can tell, by anyone. They're not able to be. Hell, I just worked in a lab with someone who was doing some pretty awesome work on the complexity of calcium channels and how they alter signal strength. Having your little black box dots with some variables inside is not the same thing as having a neuron.

    http://boingboing.net/2009/10/22/building-a-brain-ins.html

    Simulating some 10,000 neurons with 30 million synaptic connection.

    the point is that you don't need to simulate the neuron at the molecular level just like you don't need to know the name of the astronaut to know where the space shuttle will land.

    That is interesting, and closer to what is actually required to start modeling a brain that neural networks generally are. It interests me that the thing is built as an array of separate processors that each represent a neuron, rather than as a simulated network of neurons that run on a single computer.

    And you may not need to simulate at a molecular level in order to get something akin to sentience, but maybe you do. I mean, to build that space shuttle required far more complex knowledge of the universe than was required to calculate trajectory. Which is why the project you linked seems great to me. Build a fake brain as accurately as possible starting from neurons, put it in a rat-thing, observe, scale up.

    When it starts screaming you know it's sentient.

    Wait, did you just suggest we create a gigantic robot body for an insane sensory deprived AI rat?

    SCIENCE! 8-)

    zerg rush on
  • override367 ALL minions Registered User regular
    edited November 2009
    "Okay I understand why we're giving it a body, but why are we programming it with a taste for human flesh?"

    override367 on