Whatever happened to artificial intelligence?


Posts

  • xzzy Registered User regular
    edited July 2009
    L-Systems are a good example of a simple program displaying an emergent behavior. I wouldn't call it "AI" but I think it's an illustration of how it might be possible. Just need some programming genius to come along and figure out the rules to program. ;)
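    For anyone curious, here is a minimal L-system sketch in Python (the rule set is Lindenmayer's original "algae" system; the function name is my own, not from any library):

```python
# Lindenmayer's "algae" L-system: two symbols, two rewrite rules.
# Surprisingly structured growth emerges from pure string rewriting.
RULES = {"A": "AB", "B": "A"}

def lsystem(axiom, rules, steps):
    """Rewrite every symbol of the axiom `steps` times."""
    for _ in range(steps):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

# The lengths of successive generations trace the Fibonacci sequence --
# emergent structure that no single rule mentions.
print([len(lsystem("A", RULES, n)) for n in range(7)])  # [1, 2, 3, 5, 8, 13, 21]
```

    Swap in turtle-graphics symbols for the rules and the same loop draws plants and fractals.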

    xzzy on
  • surrealitycheck lonely, but not unloved, dreaming of faulty keys and latches, Registered User regular
    edited July 2009
    AI can't behave in any intelligent way yet because binary systems are simply calculators. They are massive calculators that can process huge amounts of data, but they are still just linear systems of binary switches. The human brain does not process or store information in binary; of that we can be pretty sure.

    To be honest, I think it's more a matter of complexity at the moment. The complexity of the connections in a tiny piece of brain tissue massively outweighs any of our best simulations, so there's no way we can approximate intelligence!

    While I think it's possible that binary might be fundamentally incapable of bearing an intelligent system, I still don't see why in principle the system I proposed above would fail.

    surrealitycheck on
  • Adda London, Registered User regular
    edited July 2009
    All that happens is that computers mimic intelligent behaviour.

    What is the distinction between flawless mimicry of behaviour and intelligence?

    If your zombie is like a person in every respect, in what sense is it a zombie?

    In the sense that my example is real and yours is theoretical based on what you think might happen with a system.

    Adda on
    I want to know more PA people on Twitter.
  • Caveman Paws Registered User regular
    edited July 2009
    Once we get quantum-level computing power (plus some serious hardware advancements in electronics) then maybe the whole AI thing will really "happen." At least that's my take on it, so we are still probably a good (everyone sing along, I know you know the words by now) 10 to 15 years away. :)

    Science Daily, Scientific American and other similar sources have recently had some cool stories regarding research in fields that will help make AI possible. How can the slime that lives on an almond help? *CLIFF HANGER*

    Caveman Paws on
  • theSquid Sydney, Australia, Registered User regular
    edited July 2009
    Another reason we don't have AI is that the algorithms involved are at best polynomial (n^2, n^3, n^4) and often exponential (2^n, 3^n). Given the number of calculations needed to do anything we would actually relate to (processing an image, identifying it, and handling it in a variable rather than static way), it would take modern computers a ridiculous amount of time to do anything, with that time increasing exponentially as the problem grows only linearly.

    This is solved in two ways:
    1) The Bullshit Way: MORE POWARFUL PROCESSORS!!!11
    2) The Real Way: Grok better algorithms that do the same thing in less time: at least linear, preferably logarithmic, or better yet constant.
    Or, figure out a new way to design hardware such that our current problems become irrelevant. Here's hoping for quantum computers.
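    To make point 2 concrete, here's a toy (non-AI) example of the same answer computed in exponential versus linear time; the function names are mine:

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time: the call tree roughly doubles at every level."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same answer in linear time: each subproblem is solved exactly once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Identical results, wildly different cost: fib_naive(35) takes tens of
# millions of calls, while fib_memo(35) takes 36.
assert fib_naive(20) == fib_memo(20) == 6765
```

    Same input, same output, but one of them stops being usable long before the problem gets interesting. That gap is the whole argument.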

    If you're talking about real AI problems then no one cares about video games. Stop talking about video games. We're talking real research here, not what some programmer pulls out of their arse to code a virtual target dummy.

    theSquid on
  • firewaterword Satchitananda, Pais Vasco to San Francisco, Registered User regular
    edited July 2009
    Lots of recent Sword of the Stars gaming has taught me not to dick around with AI. Because it will kill me and everything else in the universe if handled improperly.

    firewaterword on
    Lokah Samastah Sukhino Bhavantu
  • surrealitycheck lonely, but not unloved, dreaming of faulty keys and latches, Registered User regular
    edited July 2009
    In the sense that my example is real and yours is theoretical based on what you think might happen with a system.

    That's both an inappropriate answer and an acknowledgement I had already made. I've been arguing about principle from square one, so step up and say why it's impossible in principle.

    surrealitycheck on
  • Adda London, Registered User regular
    edited July 2009
    In the sense that my example is real and yours is theoretical based on what you think might happen with a system.

    That's both an inappropriate answer and an acknowledgement I had already made. I've been arguing about principle from square one, so step up and say why it's impossible in principle.

    Half the posts in the thread already have and the problem is that we are both discussing different things. I've answered the question with fact and you've answered it with theory. I'll step out now as I didn't come in to discuss 'what ifs'.

    Adda on
  • slash000 Registered User regular
    edited July 2009
    Well, I'm still waiting on the promise of "zomg moar advanced AI" that everyone would blab about upon the advent of this console generation. Zomg multicore processors! Give me a break. AI in games now isn't any better than it was last generation. Enemies are still absolutely braindead. And the ones that aren't are no better than the AI programmed for Quake I bots (the mods) that ran on 166 MHz computers with 8 megs of RAM.

    slash000 on
  • Thirith Registered User regular
    edited July 2009
    Adda wrote: »
    Half the posts in the thread already have and the problem is that we are both discussing different things. I've answered the question with fact and you've answered it with theory. I'll step out now as I didn't come in to discuss 'what ifs'.
    Unless I'm mistaken, very few posts in this thread have actually given an answer to why AI is more or less impossible that doesn't stem from a misunderstanding of what AI is. Obviously, any first-order system that is simply a list of possible reactions to situations is not intelligent. However, it would seem to me that a second-order system that analyses its actions and adapts how it reacts on the basis of how successful those actions were is already a different situation - and that has been ignored by many of the people posting in this thread.
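    A minimal sketch of such a second-order system in Python (the class, action names, and reward scheme are invented for illustration, not taken from any AI library):

```python
import random

class Adaptive:
    """First order: a fixed list of possible actions.
    Second order: success statistics that reshape which action gets chosen."""

    def __init__(self, actions, explore=0.1):
        self.values = {a: 0.0 for a in actions}  # running estimate of success
        self.counts = {a: 0 for a in actions}
        self.explore = explore

    def act(self):
        if random.random() < self.explore:  # occasionally try something new
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # otherwise pick the best so far

    def learn(self, action, reward):
        """Analyse the outcome and adapt: nudge the estimate toward the reward."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# The agent is never told that "right" is the correct action; it discovers it.
random.seed(0)
agent = Adaptive(["left", "right"])
for _ in range(200):
    a = agent.act()
    agent.learn(a, 1.0 if a == "right" else 0.0)
print(agent.values)  # "right" ends up rated far above "left"
```

    Nothing in the action list says which reaction is good; the second-order layer works that out from outcomes, which is exactly the distinction being argued about.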

    Thirith on
    "Nothing is gonna save us forever but a lot of things can save us today." - Night in the Woods
  • Adda London, Registered User regular
    edited July 2009
    But does it really analyse its actions, or does it just change the priority list of actions for the next time it encounters that situation? I guess you could call that adaptation, but it's very rudimentary and still reliant on the first-order principle of a list of possible reactions.

    EDIT: Plus on the funny side it would have to be told to do that as well.

    Adda on
  • Zzulu Registered User regular
    edited July 2009
    Once we get quantum-level computing power (plus some serious hardware advancements in electronics) then maybe the whole AI thing will really "happen." At least that's my take on it, so we are still probably a good (everyone sing along, I know you know the words by now) 10 to 15 years away. :)

    Science Daily, Scientific American and other similar sources have recently had some cool stories regarding research in fields that will help make AI possible. How can the slime that lives on an almond help? *CLIFF HANGER*

    I'd be surprised if we see "real" AI in our lifetimes. Artificial Intelligence is an old concept that researchers have been tinkering with for a very long time now. We have not really made any progress in the field of "real" AI, and the best we can do today, in 2009, is to create simulations and illusions of real intelligence. Maybe in 15-20 years we'll have made progress on new findings about the ordinary human cognitive process (since there is so much left to discover), which we could then apply in mechanical applications somehow, but it seems very far-fetched today. The kind of mechanical or digital cognition most people think of when they hear "Artificial Intelligence" is still (and has been for many decades now, centuries if you count the basic idea of it all) just a dream.

    So yeah, I'd be surprised if we get to see us actually create an intelligence any time soon.

    Perhaps by mistake, though. Or perhaps in a way we never really anticipated.

    Zzulu on
  • Thirith Registered User regular
    edited July 2009
    Adda wrote: »
    But does it really analyse its actions or does it just change the priority list of actions for the next time it encounters that situation? I guess you could call that adaptation but it's very rudimentary and still reliant on the first order principles of a list of possible reactions.

    EDIT: Plus on the funny side it would have to be told to do that as well.
    Yes, but conceivably you could then add a third order based on the previous two, and a fourth. Probably the added complexity would quickly exhaust computer capacity, though.

    Thirith on
  • BlueBlue Registered User regular
    edited July 2009
    I recall reading some game design article a while back, talking about using real AI for the enemies and whatnot and how it was a terrible idea. I believe it was regarding an MMO - the AIs were doing weird emergent stuff like killing the spawns with good loot (because then no players would come to the area to kill them).

    I'm not sure what all this is about machines just following instructions making them incapable of intelligence. I mean, I guess at a quantum level I've heard physics gets non-deterministic, but I doubt the human brain actually makes use of that when decision-making. This post, in all likelihood, was the only one that physics was going to let me write.

    Though this is really just getting into free will and souls and all that. Maybe we can just dig up some D&D archives, as I'm sure they've been down this road a billion times without coming to any conclusions - at least it would be well-cited.

    BlueBlue on
    CD World Tour status:
    Baidol Voprostein Avraham Thetheroo Taya Zerofill Effef Crimson King Lalabox Mortal Sky ASimPerson Sal Wiet Theidar Tynic Speed Racer Neotoma Goatmon ==>Larlar Munkus Beaver Day of the Bear miscellaneousinsanity Skull Man Delzhand Caulk Bite 6 Somestickguy
  • Garthor Registered User regular
    edited July 2009
    Adda wrote: »
    But does it really analyse its actions or does it just change the priority list of actions for the next time it encounters that situation? I guess you could call that adaptation but it's very rudimentary and still reliant on the first order principles of a list of possible reactions.

    EDIT: Plus on the funny side it would have to be told to do that as well.

    So wait, we can't have real intelligence unless some programmer says "OOPS I ACCIDENTALLY MADE THIS NEURAL NETWORK WHAT A MISTAKE!"

    I suppose it's already been said, but I'll try repeating it: every neuron in your brain implements the rule, "once enough neurotransmitters have entered their appropriate receptors, fire a signal." It's the collection of billions of these (and some other physical properties, understood to varying degrees) that results in intelligence.
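    That rule is essentially the classic McCulloch-Pitts model neuron; a sketch in Python (the weights and threshold are chosen purely for illustration):

```python
def neuron(inputs, weights, threshold):
    """Fire (1) once the weighted sum of incoming signals reaches the threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A single unit wired to compute AND: it fires only when both inputs arrive.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1.0, 1.0], threshold=2.0))
```

    One unit is trivially a rule-follower; the argument in the post is that intelligence lives in the wiring of billions of them, not in any single rule.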

    Garthor on
  • TrippyJing Moses supposes his toeses are roses. But Moses supposes erroneously. Registered User regular
    edited July 2009
    BlueBlue wrote: »
    I recall reading some game design article a while back, talking about using real AI for the enemies and whatnot and how it was a terrible idea. I believe it was regarding an MMO - the AIs were doing weird emergent stuff like killing the spawns with good loot (because then no players would come to the area to kill them).

    Are you saying the AI started gaming the system? That's hilarious!

    TrippyJing on
  • LittleBoots Registered User regular
    edited July 2009
    I was going to post a reply to some of this stuff, but then I realised that almost everyone here (including myself) probably has no idea what they're talking about (on any deep level). :P

    With that said.

    Here are some people who probably do know what they're talking about. I think they can give us a good foundation on which to base any further discussion of this subject:

    First: http://mitworld.mit.edu/video/422 EDIT: This is a good one for this thread; the debate is basically about some of the ideas people have already started arguing here.

    "Two of the sharpest minds in the computing arena spar gamely, but neither scores a knockdown in one of the oldest debates around: whether machines may someday achieve consciousness. (NB: Viewers may wish to brush up on the work of computer pioneer Alan Turing and philosopher John Searle in preparation for this video.)"

    Second: http://mitworld.mit.edu/video/316 (This one is Jeff Hawkins; I liked his book "On Intelligence" and recommend it to anyone interested in this whole AI thing.)

    "Can a New Theory of the Neocortex Lead to Truly Intelligent Machines?"

    LittleBoots on

    Tofu wrote: Here be Littleboots, destroyer of threads and master of drunkposting.
  • BlueBlue Registered User regular
    edited July 2009
    It's said that humans only learned to fly after they stopped trying to mimic the wings of a bird and started studying the concepts of aerodynamics.

    But aerodynamics is, like, totally simple compared to how brains work. One way of achieving an intelligent machine would be to completely model every single neuron. The catch is, as with human cloning, that every time we get it "almost right" we'll create more ethical dilemmas than you can shake a stick at.

    BlueBlue on
  • Zzulu Registered User regular
    edited July 2009
    John Searle was like, my mentor

    Zzulu on
  • Hockey Johnston Registered User regular
    edited July 2009
    Ray Kurzweil thinks true AI might be here in 20 years. I think, in some ways, this is like predicting in 1980 how soon there would be a real internet: it can seem awfully far away until it's right up on your doorstep.

    If you're in your 20s today, I'd say you've got a pretty good shot at seeing something pass the Turing test before you die.

    Hockey Johnston on
  • mspencer PAX [ENFORCER] Council Bluffs, IA, Registered User regular
    edited July 2009
    I'm pretty sure most of the research hours that used to go into "AI" now go into "data mining." Mostly the same research, mostly the same goals, but more business friendly. Much of what is cool about AI is "how can I sense the world accurately" and "how can I plan, model, and reason about the world." Planning and modeling is required for a decision-making computer to make good decisions -- but it's REALLY cool if you can apply those same planning and modeling methods to TONS of business data and provide analytical insights that help business unit leaders make better decisions.

    mspencer on
    MEMBER OF THE PARANOIA GM GUILD
    XBL Michael Spencer || Wii 6007 6812 1605 7315 || PSN MichaelSpencerJr || Steam Michael_Spencer || Ham NOØK
    QRZ || My last known GPS coordinates: FindU or APRS.fi (Car antenna feed line busted -- no ham radio for me X__X )
  • Thirith Registered User regular
    edited July 2009
    I guess that both are about pattern recognition, a pretty interesting field - and from what I gather one of the most difficult things to teach a computer.

    Thirith on
  • BahamutZERO Registered User, Moderator mod
    edited July 2009
    People got wise to the fact that AI will inevitably overthrow humanity.

    BahamutZERO on
  • ACSIS Registered User regular
    edited July 2009
    It's about pattern recognition, alright. But it's also about feeding data into a function and changing that function while it runs. Not the way we do programming at the moment: no fixed routines, core code being changed on the fly.

    ACSIS on
  • Incenjucar VChatter Seattle, WA, Registered User regular
    edited July 2009
    Honestly, a truly impressive AI would basically just be another player, except that it would get better and better until it was the best, and then people would be resetting their games to kill the AI. :P

    Incenjucar on
  • mspencer PAX [ENFORCER] Council Bluffs, IA, Registered User regular
    edited July 2009
    You've got that right. For example: a couple semesters ago (Fall '08) I took a research-topics-in-software-development class, team-taught by three instructors. We had to do several paper reviews, and in one case we (a class of about 9 grad students) were all given the same paper to read and review, and we had to give 15-minute presentations to the class. Here's the catch: of the paper's four authors, three were our instructors for this class. We were giving our instructors presentations on their own published paper! No pressure! (Luckily the paper was sufficiently broad that each of our presentations went into detail on somewhat different parts, so each one was slightly different and had something new to offer.)

    The paper: "A segmentation-based approach for temporal analysis of software version repositories." http://www.cs.unomaha.edu/~hsiy/research/jsme08.pdf (Our instructors were Siy, Chundi, and Subramaniam. I've never met Rosenkrantz -- he's not local.) It looks like a long paper, but the last two thirds is them testing their new thingy on some actual massive open source project repositories, seeing if their system could successfully detect major milestones, events, or things like that.

    Here's my best PA Forum summary of that paper: suppose you have a log of a bunch of events. There's a timestamp and a username for each log entry, and something happened. Suppose you want to detect trends or patterns.

    What a lot of businesses do -- the "old, easy" way -- is to divide the log into fixed-length sections. You divide into one month or one year sections, analyze those sections, and when you have a lot of sections analyzed you chart them over time and see if you can detect transitions.

    That doesn't work too well when your milestone events don't fall on neat time-period boundaries. If your "weird event" was one month long and it started halfway through one month and ended halfway through the next month, the two one-month periods you analyze will be half normal and half weird, if that makes sense. What you can detect will only be half as "weird" as what was actually there.

    What this paper proposes is a better way of slicing events. Don't slice them into neat months or weeks, slice them according to this slicing algorithm. This thing will detect and rank regions of homogeneity (sameness). You give it a number of "slices" to make -- cut this five year region into 20 slices, or 10, or 5 -- and it tries to place those slices to maximize the "sameness" of the contents of each slice, and minimize the "sameness" of each slice relative to its immediate neighbors.

    My instructors are saying they can apply that to software version repository changelogs, but I don't see why you can't use the same technique with anything that generates huge event logs: multiplayer servers that log player movements and interactions, life or civilization simulators that log the actions of thousands of simulated life forms, etc.
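    A rough sketch of the slicing idea in Python (plain dynamic programming over a numeric event count per time unit; the paper's actual scoring function is more sophisticated, so treat this as the shape of the technique, not their algorithm):

```python
def segment(series, k):
    """Split `series` into k contiguous slices, minimising total within-slice
    variance (a stand-in for 'maximise the sameness of each slice')."""
    n = len(series)

    def cost(i, j):
        """Sum of squared deviations of series[i:j] from its own mean."""
        chunk = series[i:j]
        mean = sum(chunk) / len(chunk)
        return sum((x - mean) ** 2 for x in chunk)

    # best[s][j]: minimal cost of covering the first j points with s slices.
    best = [[float("inf")] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for s in range(1, k + 1):
        for j in range(s, n + 1):
            for i in range(s - 1, j):
                c = best[s - 1][i] + cost(i, j)
                if c < best[s][j]:
                    best[s][j], cut[s][j] = c, i

    # Walk the cut table backwards to recover the slices.
    slices, j = [], n
    for s in range(k, 0, -1):
        i = cut[s][j]
        slices.append(series[i:j])
        j = i
    return slices[::-1]

# A "weird event" in the middle is recovered as its own slice,
# regardless of where calendar boundaries would have fallen.
print(segment([2, 2, 2, 9, 9, 9, 2, 2], 3))
```

    The point of the example: the boundaries come from the data itself, so a month-long anomaly straddling two calendar months is no longer diluted across two half-normal periods.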

    mspencer on
  • Opty Registered User regular
    edited July 2009
    With 20+ million dollar budgets, it's not in the cards to spend a lot of time on something with such diminishing returns as AI. More advanced AIs hit an uncanny valley where they are smart, but not smart enough, so you end up getting frustrated and calling them stupid. Getting over that hump takes tons of money and time (and it's arguable we've never been over it), and it's just a lot simpler to cut things short and go with something dumber that ends up feeling more human in the end.

    Opty on
  • surrealitycheck lonely, but not unloved, dreaming of faulty keys and latches, Registered User regular
    edited July 2009
    John Searle was like, my mentor

    Do you buy the zombie idea?

    surrealitycheck on
  • Frem Registered User regular
    edited July 2009
    We might see a pretty sweet learning AI or two in our lifetime, but I suspect the human brain is a load more complicated than many people imply. I'd be very surprised to see a reverse engineered "human brain" AI for at least another few generations.

    Frem on
  • mspencer PAX [ENFORCER] Council Bluffs, IA, Registered User regular
    edited July 2009
    There's more to AI (in the game sense) than difficulty though. We should aspire to systems that model more, and more complex, human behavior. The peak of AI tech shouldn't make you say "wow the computer is very smart, very difficult to beat", it should make you say "hahaha that was awesome."

    Things that make players say "hahaha that was awesome" should absolutely get some development resources.

    Remember when you were a kid playing Dragon Warrior on your NES, the first time you leveled up far enough that a slime approached, and then the slime immediately ran away? That mental twinge of recognition -- "yes, I am much stronger than you, you SHOULD run away. You're smarter than I gave you credit for being." -- was amusing the first few times it happened.

    Complex decision-making systems, coupled with an NPC capable of a wide range of actions, including actions that don't directly relate to game state, can make that happen a LOT more often than one-off deterministic special cases that "AI" uses.

    Also, to the "no fixed routines, core code being changed on the fly" guy . . . you seem to be describing something that seems familiar to me, but the way you describe it makes me think you haven't been exposed to what I've been exposed to. Look at functional (as opposed to imperative) programming. That doesn't mean imperative languages that happen to have function calls. (Google "imperative programming language" versus "functional programming language.") It means functional languages with first-class functions, where a function itself can be held in a variable (not a virtual function pointer) and where you can apply transformations that change the function. Those program transformations match "core code being changed on the fly" exactly, and these languages are very old.
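    A small taste of that idea in Python, which borrows the relevant features (Lisp or Haskell make the point more natively; the function names are mine):

```python
def compose(f, g):
    """Build a new function out of two existing ones: code producing code."""
    return lambda x: f(g(x))

def twice(f):
    """Transform a function into its own double application."""
    return lambda x: f(f(x))

inc = lambda x: x + 1
square = lambda x: x * x

# Functions here are ordinary values: stored in variables, passed around,
# and rewritten into brand-new functions at runtime.
pipeline = compose(square, twice(inc))  # x -> (x + 2) ** 2
print(pipeline(3))  # 25
```

    No source file changed, yet a function that never existed before is now running. That is the honest version of "core code being changed on the fly."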

    mspencer on
  • The Reverend Dr Galactus Registered User regular
    edited July 2009
    http://mitworld.mit.edu/video/422 EDIT: This is a good one for this thread; the debate is basically about some of the ideas people have already started arguing here.

    My favorite exchange from that debate:

    Gelernter: "You can simulate photosynthesis, but no photosynthesis takes place. If you simulate a rainstorm, nobody gets wet. Of course, you'll understand the processes."

    Kurzweil: "If you simulate creativity... you'll get real ideas out"

    The Reverend Dr Galactus on
    PSN:RevDrGalactus/NN:RevDrGalactus/Steam
  • AngleWyrm Registered User new member
    edited October 2013
    All that happens is that computers mimic intelligent behaviour.
    What is the distinction between flawless mimicry of behaviour and intelligence?
    If your zombie is like a person in every respect, in what sense is it a zombie?
    www.cleverbot.com, designed to quack like a duck.
    Techniche 2011, IIT Guwahati, India, 3rd September 2011
    A high-powered version of Cleverbot took part alongside humans in a formal Turing Test at the Techniche 2011 festival. The results from 1,334 votes were astonishing... Cleverbot was judged to be 59.3% human. The humans in the event achieved just 63.3%.
    But there is something odd about its cleverness; the method is essentially the Chinese Room, wherein the AI is scoring and organizing an ever-growing set of possible replies. There are indeed interesting moments, sometimes several good exchanges in a row. But there's a shallowness to it that tastes like artificially flavored candy.

    AngleWyrm on
  • Regina Fong Allons-y, Alonso Registered User regular
    Also, real AI isn't suitable for video games because it doesn't make for a fun gaming experience. Would you enjoy playing chess against Deep Blue, where you would conceivably never win? Or checkers against Chinook (the computer always wins if it goes first, and can always force a draw)? Probably not...

    But can the AI's cheat at Monopoly?
