Holy Wow! - The thrilling potential of Sixth Sense Technology...
At TEDIndia, Pranav Mistry demos several tools that help the physical world interact with the world of data -- including a deep look at his SixthSense device and a new, paradigm-shifting paper "laptop." In an onstage Q&A, Mistry says he'll open-source the software behind SixthSense, to open its possibilities to all.
Monger:
edited November 2009
That... well, that is a damn good sales pitch. Ironically enough, I'm not sure that I see it going anywhere real fast without a brand name push from a major corporation. The type of data infrastructure that would be required to make it practical for the kind of use he's presenting is... pretty staggering.
I see some aspects being grabbed up for implementation by whoever, but not everything. Obviously there's a long way to go still, but the wait is a hell of a lot shorter than it used to be.
I like the way he thinks though.
I mean, it's hard to ignore the fact that it's pretty damn incredible.
How many decades until public schools start using this?
Well, Gutenberg had that crazy printing press idea somewhere in the range of 1436 and your average public school is still working on getting a decent textbook.
I say 2763.
Assuming the R'largh allow us technology more advanced than a toothpick. All hail Xarg'pfh!
so it's all theoretical, right? none of that is actually implemented in the way that was demonstrated
it's a good idea, but until they fit a huge wallop of processing power into a small clip-on camera it's not possible. remember, there's a massive entertainment unit behind even the playstation eye or microsoft's natal, and they aren't nearly as refined or precise as what this guy's imagining
Dyvion:
Open source software next month... you can build your own setup for $300... So for Christmas I get to start walking around with a camera and a projector on my chest? Quick, someone save a cart on Newegg with all the pieces I need!
Steam: No Safety In Life
PSN: Dyvion -- Eternal: Dyvion+9393 -- Genshin Impact: Dyvion
i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
but he named a price ($300), so I assume that it will be released soon?
or am I assuming wrong. I'm generally not confident... with anything.
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
There were a couple of bits that require a bunch of custom code, like the browser window drag, but stuff like recognizing gestures, products, etc. is perfectly possible and already a solved problem
- FyreWulff
Dyvion:
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> but he named a price ($300), so I assume that it will be released soon?
> or am I assuming wrong. I'm generally not confident... with anything.
The price he named was when he was talking about building it yourself. The $300 is the cost to assemble what he's wearing from existing items.
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> but he named a price ($300), so I assume that it will be released soon?
> or am I assuming wrong. I'm generally not confident... with anything.
> The price he named was when he was talking about building it yourself. The $300 is the cost to assemble what he's wearing from existing items.
Oh that makes much more sense. Thanks.
- nealcm
Henroid:
This is the coolest thing I've seen. I just can't believe what I'm seeing.
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> but he named a price ($300), so I assume that it will be released soon?
> or am I assuming wrong. I'm generally not confident... with anything.
he vaguely mentioned how much it would cost to put the hardware together, and said that he would release some of the software as open-source
i guess he could be a programming genius and have worked out how to simply process the million variables present in such a gestural system on relatively cheap hardware, but even so there's no way the thing is anywhere near the consumer level of functionality that was presented in the videos
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> There was a couple of bits that require a bunch of custom code like the browser window drag, but stuff like recognizing gestures, products, etc is perfectly possible and already a solved problem
gestural interfaces generally work using direct, touch-based input, which is practically as direct a mode of input as a mouse and requires little processing. accurate gestural systems based on a camera or visual interface are another kettle of fish entirely
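To put rough numbers on that gap (a back-of-envelope sketch; the 640x480 at 30 fps figures are an assumption about what a webcam-class clip-on camera would deliver, not anything from the talk):

```python
# Back-of-envelope cost of camera-based gesture input vs. touch input.
# Assumed webcam-class stream; the actual SixthSense camera specs aren't given.
width, height, fps = 640, 480, 30

pixels_per_second = width * height * fps
print(pixels_per_second)  # 9,216,000 pixels arriving every second
```

Even a single pass over each incoming pixel is on the order of nine million operations per second before any actual gesture classification happens, while a touchscreen or mouse hands the system a few ready-made coordinate events per second. That's the sense in which camera-based input is a different kettle of fish.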
- bsjezz
Henroid:
He said it's about $300 for the hardware itself. I'm sure the database access required for this sort of thing will be figured out, especially with the open-source distribution. I can't wrap my head around how dynamic that has to be. I had enough problems programming a calculator back in school.
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> There was a couple of bits that require a bunch of custom code like the browser window drag, but stuff like recognizing gestures, products, etc is perfectly possible and already a solved problem
> gestural interfaces generally work using direct, touch-based input which is practically as direct a mode of input as a mouse and requires little processing. accurate gestural systems based on a camera or visual interface is another kettle of fish entirely
You did notice the easy-to-pick-up colored finger caps, right?
> i'm not so sure. as monger mentioned in the second post, what was demonstrated requires a kind of data infrastructure that simply doesn't exist yet, so i'm more inclined to think the presentation was mostly speculative
> There was a couple of bits that require a bunch of custom code like the browser window drag, but stuff like recognizing gestures, products, etc is perfectly possible and already a solved problem
> gestural interfaces generally work using direct, touch-based input which is practically as direct a mode of input as a mouse and requires little processing. accurate gestural systems based on a camera or visual interface is another kettle of fish entirely
> You did notice the easy to pick up colored finger caps right
yes, i did. it still takes large amounts of processing power to interpret that into anything accurate, especially in the kind of widely varying environmental conditions the presenter demonstrates it as working in
edit: what i think is he's got a device set up that does some pretty cool stuff in MIT lab conditions, but for the purposes of the presentation he's expanded into what it could do if implemented on a commercial level
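For what it's worth, the marker-tracking half of the argument really is the cheap part. A minimal sketch of color-marker tracking (pure NumPy standing in for a real vision library, a synthetic frame standing in for a camera feed, and made-up RGB thresholds for a "red finger cap"):

```python
import numpy as np

def track_marker(frame, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB values all fall
    inside [lower, upper], or None if no marker pixels are found."""
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return ys.mean(), xs.mean()

# Synthetic 100x100 frame: black background with a red "finger cap" blob.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (220, 30, 30)

pos = track_marker(frame, lower=(150, 0, 0), upper=(255, 80, 80))
print(pos)  # centroid of the blob, (44.5, 64.5)
```

The catch bsjezz raises is exactly where this sketch falls down, though: fixed thresholds like these break under changing lighting, and making them robust in "widely varying environmental conditions" is where the processing cost creeps back in.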
- bsjezz
Monger:
> edit: what i think is he's got a device set up that does some pretty cool stuff in MIT lab conditions, but for the purposes of the presentation he's expanded into what it could do if implemented on a commercial level
Pretty much this. This is cool and all, but it is still a tech demo.
I'm also not convinced that open source is necessarily the correct move for this particular kind of platform. Something like the iPhone can exist as a platform because Apple has a fair amount of marketshare and enough scratch to buy the rest of the necessary marketshare. And that's what matters when it comes to backend support from content providers. Without that... well, you can play pong on the ground on the subway. That's fun, right?
Underwhelming. Manifesting all these tasks into physical actions is cool and all, but things aren't really that bad now and have the potential to get a lot better as currently set up. A paradigm shift that's completely unnecessary, at least for now.
But I'm not sure we have powerful enough computers that are small enough to make it feasible as he showed it. I'm pretty sure in all the examples he showed that the backpack contained a laptop, plus then you have the projector and camera hanging by your neck along with enough batteries to have this all running more than a few hours.
Even if they do miniaturize it enough, I'd much rather have this sort of thing displayed on a set of glasses than projected onto the surfaces themselves
the difficulty of focusing your eyes on glasses as well as on the far-off object you're looking at makes the projector setup a much more feasible solution
if I wasn't lazy I'd rig up my netbook with this stuff and try it out
> the difficulty in how to focus your eyes onto glasses as well as the far off object you're looking at makes the projector set up a much more feasible solution
Bullshit. The difficulty in how to focus your eyes onto glasses as well as the far off object you're looking at makes awesome stereoscopic shit.
A few things:
-The projector seemed to always project flawlessly onto whatever surface he was interacting with, regardless of the angle or distance. I would think there would be more issues with this. For example, what if I do the gesture to take a photo, but the camera isn't at the right height or angle? Maybe this is just a user training issue or something.
-He showed himself pasting things from other sheets of paper. Does it actually pull in the text from the book, or does it just pull it in as an image? Are cameras actually good enough to be able to pull text out of images they take in this manner?
-What happens if you use your sheet computer on the subway or somewhere noisy? Is the mic close enough to the action that the computer can filter out the background noise?
> edit: what i think is he's got a device set up that does some pretty cool stuff in MIT lab conditions, but for the purposes of the presentation he's expanded into what it could do if implemented on a commercial level
> Pretty much this. This is cool and all, but it is still a tech demo.
> I'm also not convinced that open source is necessarily the correct move for this particular kind of platform. Something like the iPhone can exist as a platform because Apple has a fair amount of marketshare and enough scratch to buy the rest of the necessary marketshare. And that's what matters when it comes to backend support from content providers. Without that... well, you can play pong on the ground on the subway. That's fun, right?
Your reasoning is flawed here. There are plenty of things we can do with this, even assuming no direct commercial support for it. And, in any case, we'll see something similar in a closed-source version eventually too.
You would be staggered by it.
All right, people. It is not a gerbil. It is not a hamster. It is not a guinea pig. It is a death rabbit. Death. Rabbit. Say it with me, now.
Holy shit indeed. Excuse me, my mind is currently blown.
I'll have to go pick up my jaw now.
And the most impressive thing to me remains the hand turned into a cell phone.
And I think I like it.
I thought this was a cult spambot but wow.
Gosh.
I have to agree with FyreWulff. He had me when he dragged a picture and text from a piece of paper and copied it into the computer/camera thing.
All I can say is.... I want one.
And after that they'll move to displaying the HUD onto contact lenses and have you interact with it using mental impulses.
Although it seems like this is also inevitable.
Stabbing yourself in the eye does not sound like a particularly user-friendly interface design choice.
> Bullshit. The difficulty in how to focus your eyes onto glasses as well as the far off object you're looking at makes awesome stereoscopic shit.
And you don't even need to be good at Magic Eyes!
This will help us stay human.