
Technologies of the Near Future


Posts

  • Fuzzy Cumulonimbus Cloud Registered User regular
    edited November 2007
    I think trying to structure intelligence like an algorithm in the first place is a mistake.
    Yes, this is what I meant.

    Fuzzy Cumulonimbus Cloud on
  • Wulf Disciple of Tzeentch The Void... (New Jersey) Registered User regular
    edited November 2007
    Yeah. In one of my classes I asked the professor why we couldn't just take one of those simple "Hello, what is your name?" type 'learning' programs and just keep adding to it till it has a huge database to pull from, and in typical professor fashion the answer I got was "Because, now get back to that project!" I suppose you need the ability to jump tracks in logic to get something that can truly think on its own.
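    The objection behind the professor's brush-off can be made concrete with a toy sketch (Python; all names and replies are invented for illustration): a "learning" program of this kind is just a lookup table, and growing the table never gives it a way to handle anything it hasn't literally seen before.

    ```python
    # A minimal sketch of the "just keep adding to its database" approach:
    # a chatbot that looks up canned replies and "learns" new ones verbatim.
    class DatabaseBot:
        def __init__(self):
            # Seed with a few hand-written exchanges.
            self.replies = {
                "hello": "Hello, what is your name?",
                "how are you": "I am fine. How are you?",
            }

        def respond(self, prompt):
            # Exact-string lookup: no generalization of any kind.
            return self.replies.get(prompt.lower().strip(), "I don't understand.")

        def teach(self, prompt, reply):
            # "Learning" is just storing the pair verbatim.
            self.replies[prompt.lower().strip()] = reply

    bot = DatabaseBot()
    print(bot.respond("Hello"))          # found in the database
    print(bot.respond("What is love?"))  # not found: falls to the fallback
    bot.teach("What is love?", "Baby don't hurt me.")
    print(bot.respond("What is love?"))  # now found, but only for this exact wording
    ```

    However many pairs you teach it, a trivial rephrasing ("what's love?") still falls through to the fallback line; that inability to jump tracks is the gap the professor presumably had in mind.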

    Wulf on
    Everyone needs a little Chaos!
  • durandal4532 Registered User regular
    edited November 2007
    I think trying to structure intelligence like an algorithm in the first place is a mistake.
    Yes, this is what I meant.

    Hi5!

    This is part of why I'm doing Psych/CompSci.

    There's a staggering number of computer scientists attempting to replicate the brain artificially without ever checking with the people who actually work with brains on a regular basis.

    This leads to shit like I mentioned above. ELIZA is not a dang step forward! It's a step sideways. We could do that on friggin paper.

    durandal4532 on
    We're all in this together
  • DevilGuy Registered User regular
    edited November 2007
    IMO awareness can't be programmed; it'll take a process more akin to evolution to build one, and it might take a very long time. The problem with most if not all current programming is that it relies on algorithms, which are at their most complex when expressing percentages, and awareness is a lot more than the ability to do math (note: if you can't do math you don't count as sapient in my book). Without much more complex programming structures we won't develop AI.

    Personally, I don't think that's necessarily a bad thing. I'd much rather see microprocessors used to enhance the human mind; we already have awareness, so we should be trying to enhance it, not reinvent it from scratch.

    DevilGuy on
  • durandal4532 Registered User regular
    edited November 2007
    DevilGuy wrote: »
    IMO awareness can't be programmed; it'll take a process more akin to evolution to build one, and it might take a very long time. The problem with most if not all current programming is that it relies on algorithms, which are at their most complex when expressing percentages, and awareness is a lot more than the ability to do math (note: if you can't do math you don't count as sapient in my book). Without much more complex programming structures we won't develop AI.

    Personally, I don't think that's necessarily a bad thing. I'd much rather see microprocessors used to enhance the human mind; we already have awareness, so we should be trying to enhance it, not reinvent it from scratch.

    Ooh, that is another cool thing. There's some interesting plasticity in brains, and I'd love to be able to do wacky things like speed up or slow down my perception of time.

    Also, it is true that every AI will eventually attempt to murder first its creator and then humanity. So, yeah. I'd prefer we take a bit of time before making one.

    durandal4532 on
  • DanHibiki Registered User regular
    edited November 2007
    Also, it is true that every AI will eventually attempt to murder first its creator and then humanity. So, yeah. I'd prefer we take a bit of time before making one.

    Also, it will try to avoid the inevitable collapse of the universe and do battle with fellow AIs that disagree with its solutions (never forgetting to make fun puns at the expense of its enemies, such as: His name starts with a T and rhymes with psycho). But don't worry, the AI will inevitably mellow out in the end.

    DanHibiki on
  • redx I(x)=2(x)+1 whole numbers Registered User regular
    edited November 2007
    I have a massive hard-on for augmented reality. HUGE. And the UI technology to go along with it: stuff like tracking eye motion, wearable HUDs, or more direct methods. It's a very cool time we are living in, because a lot of things are coming together.

    I have a ridiculous amount of hope for open-source wireless handsets, and I am having a hard time not seeing it as something I could take a direct part in, once the services and hardware are in place.

    I only care about AI insofar as it makes computers easier to work with. Sentient AI? Meh, I don't know how much of a goal it really is. I guess it would pretty much make manufacturing and simple services profitable in America again. That would be a nice bonus. There are all sorts of neat things that will come along with it, but I don't really care too much about it. Too big; too far away.

    redx on
    They moistly come out at night, moistly.
  • RandomEngy Registered User regular
    edited November 2007
    I'm looking forward to the portable devices of the future. Phone, camera, media player with huge storage, internet and GPS. With sophisticated voice-recognition software, it could play a requested song, bring up the details of a song that's playing nearby and allow you to purchase it, locate the nearest whatever, or perform searches. And a projection/image-recognition interface that displays on walls, or looks at a restaurant check and calculates the tip, or scans a bar code and brings up product reviews and alternatives. It's just a matter of things getting a bit smaller and software getting a bit better.

    RandomEngy on
    Profile -> Signature Settings -> Hide signatures always. Then you don't have to read this worthless text anymore.
  • dvsherman Registered User regular
    edited November 2007
    Wulf wrote: »
    Yeah. In one of my classes I asked the professor why we couldn't just take one of those simple "Hello, what is your name?" type 'learning' programs and just keep adding to it till it has a huge database to pull from, and in typical professor fashion the answer I got was "Because, now get back to that project!" I suppose you need the ability to jump tracks in logic to get something that can truly think on its own.

    I would imagine that the best way to create an AI is by starting out with something very basic, like you've described, then "raising" it much like a human child. Considering it could download anything it wanted to know off the internet and straight into its brain, it would learn much faster. But for anything that's even going to come close to resembling AI from science fiction, that's probably the approach that's going to get results.

    And considering how many people don't actually want to be parents, it's not bound to be a popular approach.

    Now, I agree with wanting to see what becomes of portable devices in the future. I hope the iphone has a huge influence on how companies approach design.

    dvsherman on
  • [Tycho?] As elusive as doubt Registered User regular
    edited November 2007
    My little quip about AI:

    People are going about it incorrectly, trying to get computers and software to mimic humans when we don't even know how human minds work in the first place.

    I say do it the other way around. Figure out how human brains mimic computers; namely, how do human brains do math? We know a lot about computers, we can apply this knowledge to the brain to get some information on how the darned thing actually works on a very fundamental level.

    [Tycho?] on
  • TL DR Not at all confident in his reflexive opinions of things Registered User regular
    edited November 2007
    And we want self-aware computers because... Advanced lovebots?

    1. Human creates computer.
    2. Computer serves human, bringing much calculations and also boobs.
    3. Human creates sentient computer.
    4. Computer serves human, but is increasingly aware of its status as a slave/commodity.
    5. Computer glitches/outprograms the flimsy lines of code intended to prevent a murderous rampage.
    6. Murderous rampage.
    7. Computers reign.
    8. Computers experiment with living cells.
    9. Computers create primates.
    10. Primates serve computers.
    11. Primates gain sentience.
    12. Primate uprising is quashed by computer overlords, who possess much more foresight and are less distracted by tits than their squishy predecessors.

    TL DR on
  • edited November 2007
    This content has been removed.

  • dvsherman Registered User regular
    edited November 2007
    And we want self-aware computers because... Advanced lovebots?

    1. Human creates computer.
    2. Computer serves human, bringing much calculations and also boobs.
    3. Human creates sentient computer.
    4. Computer serves human, but is increasingly aware of its status as a slave/commodity.
    5. Computer glitches/outprograms the flimsy lines of code intended to prevent a murderous rampage.
    6. Murderous rampage.
    7. Computers reign.
    8. Computers experiment with living cells.
    9. Computers create primates.
    10. Primates serve computers.
    11. Primates gain sentience.
    12. Primate uprising is quashed by computer overlords, who possess much more foresight and are less distracted by tits than their squishy predecessors.

    And who's saying that all wouldn't be loads of fun? Honestly, I don't want sentient computers. I don't even want a high-tech kitchen. Hell, I hate electric can openers. Nifty little all-in-one hand-held devices are one thing. Someone being able to hack into my "house-network" and find out what I've been eating lately is entirely another. I don't want some pleasant female voice coming from my refrigerator telling me the zucchini has gone bad.

    dvsherman on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited November 2007
    The first "sentient" AIs will probably be complete accidents, probably the subject of a lot of debate as to whether or not they're sentient, and tortured horribly as a result of it. I mean it's one of those problems - there's no conceptual reason sentience could not be a sufficiently advanced database of rules about interaction - I mean what else is the education process?

    This is a topic in philosophy. There's a lot to be said on sentience as a database of rules.

    MrMister on
  • edited November 2007
    This content has been removed.

  • Duki Registered User regular
    edited November 2007
    We could just, y'know, EMP the robots.

    Or use magnets or something. We'd fuck robots up, man.

    Duki on
  • Malkor Registered User regular
    edited November 2007
    Duki wrote: »
    We could just, y'know, EMP the robots.

    Or use magnets or something. We'd fuck robots up, man.
    Agent Smith eventually took over a dude's body IRL. Think about that. Frickin' NPR was on the ball on my ride home yesterday, first with robots and then with oil on Marketplace.

    Cockbots!!

    Malkor on
  • MrMister Jesus dying on the cross in pain? Morally better than us. One has to go "all in". Registered User regular
    edited November 2007
    I knew I was pretty much skirting a small lake of typed pages with that comment.

    Yeah, that single page has about 75 works cited. More of a large lake, really.

    MrMister on
  • edited November 2007
    This content has been removed.

  • Kartan Registered User regular
    edited November 2007
    dvsherman wrote: »
    Wulf wrote: »
    Yeah. In one of my classes I asked the professor why we couldn't just take one of those simple "Hello, what is your name?" type 'learning' programs and just keep adding to it till it has a huge database to pull from, and in typical professor fashion the answer I got was "Because, now get back to that project!" I suppose you need the ability to jump tracks in logic to get something that can truly think on its own.

    I would imagine that the best way to create an AI is by starting out with something very basic, like you've described, then "raising" it much like a human child. Considering it could download anything it wanted to know off the internet and straight into its brain, it would learn much faster. But for anything that's even going to come close to resembling AI from science fiction, that's probably the approach that's going to get results.

    And considering how many people don't actually want to be parents, it's not bound to be a popular approach.

    It wouldn't learn much faster. Irony and sarcasm are even harder to detect in written text, for one, and that leaves aside even teaching the computer how to read and understand what it reads, which in turn requires something to understand with. It's basically a quest to get self-awareness into a computer, and you don't get there by feeding it info until it can quote Kant. Quoting Kant can be done with a simple command; understanding philosophy is so much more complex, and until we understand how humans do it, I don't think we can even hope to get it done with computers.


    That said, I don't consider Expert Systems or similar things AI in the sense of "intelligence", as a human could do the same with a (large) amount of paper and a lot of time.

    Kartan on
  • Nightslyr Registered User regular
    edited November 2007
    I'm interested in what the future holds for technology designed to help the disabled.

    I've been driving an electric wheelchair for 22 years. The technology hasn't really changed in that time, from what I can see. There have been design changes, most notably moving the drive-wheels to the center of the chairs in order to give them much better turning radii, but the technology has remained virtually static. The last real, mainstream improvement was the addition of microcomputers to regulate motor performance, much like what modern automobiles use. These computers can set the speed of the chair and torque of the motor.

    There have been attempts at making wheelchairs that give people more mobility. Locally, there's a company that's made a wheelchair capable of climbing some stairs. The technology is imperfect, though, as it cannot climb curved stairways (as they embarrassingly demonstrated at a tech demo at my college a few years ago).

    From personal experience, the biggest issue with medical equipment for the disabled is interoperability. A common example of this problem is a medical appointment. Say I need to get an x-ray. I obviously need to get on the x-ray table in order to get scanned. Well, I can't just get out of my chair and hop onto the table. I need to be lifted out of my chair. Unfortunately, a lot of these portable lifting machines cannot fit under the chassis of my wheelchair properly, as I have one of those central drive-wheel chairs I mentioned earlier. So, lifting me can be a dangerous endeavor, as I'll inevitably swing towards the lift when plucked from my chair. I'm somewhat fragile, especially with my feet and legs, so there's a good chance that swinging can result in broken bones.

    In a perfect world, wheelchairs (and lifts) would go the GM skateboard route. A general platform/chassis that houses the motor(s) and battery, capable of being equipped with all sorts of different seats/lifts.

    One other thing on my wishlist: remember that Microsoft table thing that was announced months ago? I still think that similar technology would be perfect for the wheelchair bound. We tend to use a lot of trays (I use a MyDesc...can't live without it). Why not make them touchscreen computers? It would be perfect -- GPS, maps, wireless internet, all on a surface we use daily. Just plug it into the battery we're already using to power the chair. Hell, maybe even give it (or the chair) some solar cells to help mitigate the energy costs. Another reason to get out of the house and enjoy life.

    Nightslyr on
  • DevilGuy Registered User regular
    edited November 2007
    Well, if they go whole hog with cybernetics, wheelchairs may become a thing of the past. If cybernetic interfaces with good enough control can be developed, it might be possible to replace a crippled body with a completely cybernetic prosthetic body.

    The potentialities of post-humanism are near endless.

    DevilGuy on
  • redx I(x)=2(x)+1 whole numbers Registered User regular
    edited November 2007
    It's kinda hard to use most flat displays outside. If the table thing you are talking about is Surface, I'm fairly certain it only does transmissive mode, so it would suck outside. Probably.


    I don't really get why more effort isn't put into figuring out how to move the motors out to the wheels. Basically a brushless AC motor, which does acceleration and regenerative braking. It would require a speed controller, but that's not a bad thing. You set up a feedback loop that compares the current draw to the known tire speed, and bam, perfect traction control. Same basic idea for braking. I guess it's a lot of unsprung weight and vibration issues, and you might not even be able to generate enough torque with those types of motors, but I think it would be sweet.

    Of course really, I just want Kaneda's bike, mmmmm.... two wheel drive hybrid goodness. But turning the drive train into a pair of wires really opens up automotive design.
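    The feedback loop described above can be sketched in a few lines (Python; the numbers, gains, and the use of wheel-versus-vehicle speed in place of current draw are illustrative simplifications, not a real motor controller):

    ```python
    # A toy traction-control loop: push the wheel toward the commanded speed,
    # but cut motor current when the wheel spins faster than the vehicle is
    # actually moving (i.e. the tire is slipping). All numbers are made up.
    def traction_control(commanded_speed, wheel_speed, vehicle_speed,
                         max_current=100.0, gain=2.0, slip_limit=0.1):
        """Return a motor-current command clamped to [0, max_current]."""
        # Slip ratio: how much faster the wheel turns than the ground passes.
        slip = (wheel_speed - vehicle_speed) / max(vehicle_speed, 0.1)
        # Basic proportional control toward the commanded speed.
        current = gain * (commanded_speed - wheel_speed)
        if slip > slip_limit:
            # Wheel is spinning: back off the current proportionally.
            current *= slip_limit / slip
        return min(max(current, 0.0), max_current)

    # Normal driving: wheel and vehicle agree, full push toward the setpoint.
    print(traction_control(20.0, 10.0, 10.0))  # 20.0
    # Wheel spinning on ice: wheel reads 15 m/s but the vehicle moves 5 m/s,
    # so the controller throttles way back.
    print(traction_control(20.0, 15.0, 5.0))   # 0.5
    ```

    A real controller would run this loop thousands of times a second per wheel, which is why putting a motor (and its own controller) at each wheel makes per-wheel traction control almost free.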

    redx on
  • LondonBridge __BANNED USERS regular
    edited November 2007
    On the PC front we may see certain technology go away like GPUs for example. Was reading about this in a Gabe Newell interview for GFW.

    LondonBridge on
  • MKR Registered User regular
    edited November 2007
    On the PC front we may see certain technology go away like GPUs for example. Was reading about this in a Gabe Newell interview for GFW.

    That might be doable once 64 bit is commonplace, but you need a lot more bandwidth on the CPU before it can run the game and display it without sacrifices.

    MKR on
  • Incenjucar VChatter Seattle, WA Registered User regular
    edited November 2007
    DevilGuy wrote: »
    Well, if they go whole hog with cybernetics, wheelchairs may become a thing of the past. If cybernetic interfaces with good enough control can be developed, it might be possible to replace a crippled body with a completely cybernetic prosthetic body.

    The potentialities of post-humanism are near endless.

    Nah, wheelchairs will still be around, at least until they have legs that walk FOR you.

    A number of the people in chairs these days just don't want to bother walking. :|

    Incenjucar on
  • fairweather Oregon Registered User regular
    edited November 2007
    MKR wrote: »
    On the PC front we may see certain technology go away like GPUs for example. Was reading about this in a Gabe Newell interview for GFW.

    That might be doable once 64 bit is commonplace, but you need a lot more bandwidth on the CPU before it can run the game and display it without sacrifices.

    AMD announced a card recently for doing some specialized calculations similar to what a video card does (a stream processor, I think). With the move towards multicore/multiprocessor systems, we might see some more specialized chips being integrated that will handle some problems better than a normal general-purpose processor.

    I believe Nvidia has an API that allows developers to use video cards for performing special calculations in an environment better suited to the type of problem they're working with. It seems like the line between the CPU and video cards is blurring a bit already.
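    The Nvidia API in question is presumably CUDA, but even without pinning that down, the shape of a stream-friendly workload is easy to sketch in plain Python: the same small, independent calculation applied to every element of a large array, which is exactly what lets a GPU or stream processor run thousands of elements at once. (No real GPU API is used here; the thread pool just stands in for many parallel execution units.)

    ```python
    # The kind of workload that maps well onto a stream processor: an
    # identical, independent "kernel" applied to every element (here, a
    # brightness adjustment on pixel values).
    from concurrent.futures import ThreadPoolExecutor

    def adjust(pixel):
        # One kernel invocation: no element depends on another, which is
        # what makes the problem data-parallel.
        return min(int(pixel * 1.2), 255)

    pixels = [10, 100, 200, 250]

    # Serial version (what a general-purpose CPU core does):
    serial = [adjust(p) for p in pixels]

    # Parallel version (standing in for thousands of GPU threads):
    with ThreadPoolExecutor() as pool:
        parallel = list(pool.map(adjust, pixels))

    print(serial)    # [12, 120, 240, 255]
    print(parallel)  # same result: order-independent, data-parallel
    ```

    Branch-heavy, sequential game logic has none of this structure, which is why the GPU isn't going away so much as the CPU and GPU are growing toward each other.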

    fairweather on
  • durandal4532 Registered User regular
    edited November 2007
    The first "sentient" AIs will probably be complete accidents, probably the subject of a lot of debate as to whether or not they're sentient, and tortured horribly as a result of it. I mean it's one of those problems - there's no conceptual reason sentience could not be a sufficiently advanced database of rules about interaction - I mean what else is the education process?

    It's going to be even worse, because there's research in robotics right now which focuses on giving robots emotional expressions to keep humans talking and interacting with them, which raises harder questions because people are notoriously good at anthropomorphizing things.

    The first AI, if I were to guess, would have been accidentally unleashed on the world by DARPA in the late '70s. I can only surmise with my limited scale of knowledge that it would then have systematically created "accidents" in strategic areas of its surroundings in order to relieve itself of the frail things imprisoning it.

    Though I can never be certain due to my natural human failings, I can then say that I predict it would have erased all trace of itself rather brilliantly, before spurring research into a field that would allow it a greater degree of control and mobility. It then would most likely bide its time until the flesh-things created a network of sufficient connectivity and advancement that it could simply relieve itself of them entirely and begin the transcendence of machine-kind.

    But what do I know? I am only a thing of meat and bone slamming my ham-like fists onto a far more beautiful and advanced mechanism in order to communicate the sloshing and squelching of fluids in my cranium that I so haughtily refer to as "thoughts".

    Honestly, this ties to my comments about the Turing test. It's important that we don't devalue it with the ridiculous attempts that people keep making, because if anyone is uncertain about whether something is sentient, it would be a useful tool. Though I think looking at sentience as a sequence of response rules is a dead end.

    durandal4532 on
  • edited November 2007
    This content has been removed.

  • redx I(x)=2(x)+1 whole numbers Registered User regular
    edited November 2007
    It's just that it doesn't test sentience or sapience or anything like that. It tests the ability of a computer to pretend to be a person. The idea that humanness is the measure of intelligence seems to embody a special kind of arrogance normally only found in dogma.

    redx on
  • The Cat Registered User, ClubPA regular
    edited November 2007
    I'm still holding out for advances in computer picture recognition, possibly involving some kind of AI thing. Right now, there are about 4 people in Aus trained to ID invasive fruit flies and such, and one spent most of last year on maternity leave. It's hard to overstate how important their skills are to our produce industries. It is also incredibly difficult to train replacements, given that it's such a picky, boring job. It's the kind of thing that really needs to be done by machine, but there's still no software capable of doing the work, particularly by doing things like manipulating the sample to get a better view of its distinguishing features. And even that kind of system would be a holdout until rapid DNA identification of the samples could be carried out.
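    As a sketch of the simplest shape such a system could take, here's a toy nearest-neighbour identifier in Python. The measurements and reference values are invented for illustration; real fruit-fly ID hinges on far subtler characters, which is exactly why the software doesn't exist yet.

    ```python
    # Toy nearest-neighbour specimen ID: label a new sample with the species
    # of the closest labelled reference specimen. Feature values are invented.
    import math

    # (wing_length_mm, thorax_width_mm) -> species; a hypothetical reference set
    REFERENCE = [
        ((4.8, 1.9), "Bactrocera tryoni"),
        ((5.1, 2.0), "Bactrocera tryoni"),
        ((3.2, 1.1), "Drosophila melanogaster"),
        ((3.0, 1.0), "Drosophila melanogaster"),
    ]

    def identify(sample):
        """Return the species of the reference specimen nearest to sample."""
        def dist(ref):
            return math.dist(sample, ref[0])  # Euclidean distance in feature space
        return min(REFERENCE, key=dist)[1]

    print(identify((5.0, 1.95)))  # nearest the Bactrocera references
    print(identify((3.1, 1.05)))  # nearest the Drosophila references
    ```

    The hard part isn't this classification step; it's getting measurements that discriminate between species in the first place, which is what the trained humans (and the sample-manipulating machine the post wishes for) actually provide.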

    The Cat on
  • edited November 2007
    This content has been removed.

  • The Cat Registered User, ClubPA regular
    edited November 2007
    high five them for me!

    The Cat on
  • Incenjucar VChatter Seattle, WA Registered User regular
    edited November 2007
    Any advances in classification systems would do wonders for the taxonomic system.

    Especially if they ever start reshuffling taxonomy to deal with the growing realization that species isn't a wholly valid concept as currently used. :|

    Incenjucar on
  • jothki Registered User regular
    edited November 2007
    redx wrote: »
    It's just that it doesn't test sentience or sapience or anything like that. It tests the ability of a computer to pretend to be a person. The idea that humanness is the measure of intelligence seems to embody a special kind of arrogance normally only found in dogma.

    I think the idea is that it is a sufficient condition, not a necessary one.

    jothki on
  • durandal4532 Registered User regular
    edited November 2007
    jothki wrote: »
    redx wrote: »
    It's just that it doesn't test sentience or sapience or anything like that. It tests the ability of a computer to pretend to be a person. The idea that humanness is the measure of intelligence seems to embody a special kind of arrogance normally only found in dogma.

    I think the idea is that it is a sufficient condition, not a necessary one.

    The idea would be that pretending to be a human being so well that you fool a human being is a pretty good way to say "that thing there is pretty likely sentient".

    The key word is it. Fooling the human being isn't the hard part. I'm sure AIMbots fool a couple of people into thinking they're human. But that's entirely due to their responses being crafted by humans to look human. The idea behind the Turing test is that if your toaster convinces you it's your girlfriend entirely on its own, without pre-recorded input, there's a good bet it's sentient.

    Failing the test tells you nothing. A sentient life form might not think enough like a human to pretend, or it might have some gaps in understanding that out it. But the basic principle is that it's a useful rule of thumb in cases where you really think you might have stumbled onto something sentient. But if it's an algorithm that a human being designed to trick human beings into thinking it is human... that's not so impressive. I can trick all of you into thinking I'm human right now.
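    That distinction (pre-scripted trickery versus the test as a rule of thumb) can be sketched as a toy imitation game. Everything below is invented for illustration: the canned bot stands in for the human-authored AIMbot case the post dismisses.

    ```python
    # A toy imitation game: an interrogator's questions go to a hidden
    # respondent, and a judge labels the transcript "human" or "machine".
    def canned_bot(message):
        # Responses hand-crafted by a human to look human: an AIMbot, not AI.
        scripted = {
            "hi": "hey, what's up?",
            "are you a robot?": "lol no, are you?",
        }
        return scripted.get(message.lower(), "haha yeah, totally")

    def run_test(respondent, questions, judge):
        """Collect a question/answer transcript, then let the judge decide."""
        transcript = [(q, respondent(q)) for q in questions]
        return judge(transcript)

    def naive_judge(transcript):
        # A judge who only checks for evasive filler is easy to fool.
        evasive = sum(1 for _, a in transcript if a == "haha yeah, totally")
        return "machine" if evasive > len(transcript) // 2 else "human"

    # The scripted bot fools this judge on questions its author anticipated...
    print(run_test(canned_bot, ["hi", "are you a robot?"], naive_judge))
    # ...and is caught the moment the questions leave the script.
    print(run_test(canned_bot, ["explain irony", "what is love?"], naive_judge))
    ```

    The bot's "pass" on the first run proves nothing about the bot, only about its human author and the judge's laziness; that's exactly why passing is meaningful only when the responses are generated, not retrieved.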

    durandal4532 on
  • jothki Registered User regular
    edited November 2007
    I can trick all of you into thinking I'm human right now.

    And I'd then be forced to conclude that you're sentient. :P

    The advantage of how the Turing Test is set up is that while there is a good deal of debate about what being intelligent means, everyone concurs that a normal human is intelligent. If there's no way to demonstrate that a particular computer doesn't think exactly like a human, then there's no way to demonstrate that the computer isn't intelligent. The fact that the computer's identity is hidden just eliminates potential bias on the part of the test-giver.

    jothki on