
Unlimited Detail

Nerdtendo Registered User regular
edited March 2010 in Games and Technology
Doesn't appear to be a thread on this... quoting the article directly:

http://www.rockpapershotgun.com/2010/03/10/unlimited-detail-wants-to-kill-3d-cards/

Jim pointed me at this video earlier, presumably believing that my knowledge of, for instance, how to overclock a processor without setting my house on fire means I can say whether there’s any water to its boasts. Um. Maybe? Unlimited Detail Technology reckon their clever tech is the biggest leap in realtime 3D graphics in decades. If what they say is true, we’re going to wave goodbye to games rendered in polygons, and hello to games free from cubist edges and limited model counts. Take a look below, and see what you think.

It’s eight minutes long, but they’re worthwhile minutes. Apart from the one powerpoint slide about why polygons are bad that it shows about 15 times, anyway.

http://www.youtube.com/watch?v=Q-ATtrImCx4&feature=player_embedded

(Before the ‘but that looks rubbish’ comments arrive – the reason Unlimited Detail give for the stuff they’re showing looking a little rustic is that the artwork is created by programmers, not artists.)

Let’s leave aside the fact that the voiceover alternates between chirpy educational TV and a strange creeping menace, and instead concentrate on what it’s promising. To wit, games that can show as much or as little as the creators wish, with no apparent concern as to the hardware they’re running on. It’s done by using points instead of polygons, it runs purely in software, and it can even do its thing on a mobile phone. It sounds amazing. It sounds crazy. Maybe it is – grud only knows we’ve seen plenty of wondrous-sounding technology promises fail to arrive over the years, but let’s hope this one can pull it off. The rough concept behind it is that the game is only ever rendering the pixels you can see at any one time, rather than trying to muster the whole shebang on a constant basis. Or, to use their words:
“Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen. It doesn’t touch any unneeded points; all it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.”

There’s another video on the site – I can’t find an embeddable version yet, but it’s got much more footage of the tech itself in action. It’s the Comparison one at the bottom of the page that you’re after. Oh, be warned, Voiceover Guy gets even stranger in it, possibly due to the insane bell-based soundtrack.

With the released footage only demonstrating static environments rather than an interactive landscape, and anything playable apparently being some 16 months off, it’d be reckless to start shouting “THE FUTURE! THIS IS THE FUTURE!” just yet. The theory seems sound, but the practice can only be complicated. Creating content, for one thing – it’s expensive enough for developers to create a current-gen AAA game, so how do they muster the resources to fill a photo-real world with, ah, unlimited detail? It’d be lovely to see it, of course, but it may take some doing.

Secondly, the tech’s strength seems to be in geometry – real-time animation, lighting and shadowing seems a little skipped over thus far, and may be an area in which UDT lags behind the otherwise more old-fashioned polygon-based rendering system. Again, though, the stuff on show is terribly early and wasn’t created by pro artists. Moreover, even if it can’t ultimately compete with high-end traditionally-rendered games, it could well be a fantastic way to make low-end machines create far more complicated scenes than they’d otherwise be capable of.

Keeping an eye on this one. The potential is incredible, whether or not the industry picks it up.

Appears to basically be a new method of rendering voxels. Instead of rendering every voxel in the scene, it just renders the ones that actually take up pixel space on your screen. Pretty amazing looking. I've always been a fan of voxel graphics, and I've been disappointed that more games don't use them.
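To make the "one point per pixel" idea concrete, here is a toy sketch of how that kind of search could work against an octree of colored points. This is hypothetical Python written purely for illustration; it is a guess at the general approach described in the quoted blurb, not Unlimited Detail's actual algorithm.

# Toy "one point per pixel" search over an octree of colored points.
# Hypothetical illustration only, not Unlimited Detail's algorithm.
from dataclasses import dataclass, field
from typing import List, Tuple
import math

Vec3 = Tuple[float, float, float]
Color = Tuple[int, int, int]

@dataclass
class Node:
    lo: Vec3                    # axis-aligned bounds of this octree cell
    hi: Vec3
    color: Color                # representative color of everything inside the cell
    children: List["Node"] = field(default_factory=list)  # empty list means leaf

def ray_enters_box(origin, direction, lo, hi):
    """Slab test: distance along the ray to the box, or None on a miss."""
    tmin, tmax = 0.0, math.inf
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            if not (l <= o <= h):
                return None
        else:
            t1, t2 = (l - o) / d, (h - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin if tmin <= tmax else None

def pixel_color(node, origin, direction, pixel_angle):
    """Descend front-to-back and stop once a cell is roughly one pixel big on screen."""
    t = ray_enters_box(origin, direction, node.lo, node.hi)
    if t is None:
        return None
    cell_size = max(h - l for l, h in zip(node.lo, node.hi))
    if not node.children or cell_size <= pixel_angle * max(t, 1e-6):
        return node.color       # close enough: one "point" covers this whole pixel
    def entry(child):
        d = ray_enters_box(origin, direction, child.lo, child.hi)
        return math.inf if d is None else d
    for child in sorted(node.children, key=entry):  # nearest child first
        color = pixel_color(child, origin, direction, pixel_angle)
        if color is not None:
            return color        # first hit is the visible one; deeper data is never touched
    return None

def render(root, width, height, fov_deg=60.0):
    """One octree query per screen pixel; cost scales with pixels, not scene size."""
    half = math.tan(math.radians(fov_deg) / 2)
    pixel_angle = 2 * half / width
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Camera at the origin looking down -z; build the ray through this pixel.
            px = (2 * (x + 0.5) / width - 1) * half * (width / height)
            py = (1 - 2 * (y + 0.5) / height) * half
            norm = math.sqrt(px * px + py * py + 1)
            ray = (px / norm, py / norm, -1 / norm)
            row.append(pixel_color(root, (0.0, 0.0, 0.0), ray, pixel_angle) or (0, 0, 0))
        image.append(row)
    return image

# Usage: a single leaf cell in front of the camera renders as a flat-colored square.
root = Node(lo=(-1.0, -1.0, -5.0), hi=(1.0, 1.0, -3.0), color=(200, 120, 40))
frame = render(root, 32, 24)

The appealing property is that the per-frame cost scales with the number of screen pixels and the depth of the tree, not with the total number of points in the world, which is presumably what the "unlimited" claim is getting at.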

Nerdtendo on

Posts

  • Delta Assault Registered User regular
    edited March 2010
    Well... I guess I had been wondering when Novalogic was coming out with their next Delta Force game...

    Delta Assault on
  • Darmak RAGE vympyvvhyc vyctyvy Registered User regular
    edited March 2010
    I can't view YouTube videos at work, but I'll definitely check it out when I get home. Even if they're just blowing smoke up our asses, it sounds interesting.

    Darmak on
  • TychoCelchuuu PIGEON Registered User regular
    edited March 2010
    Darmak wrote: »
    I can't view YouTube videos at work, but I'll definitely check it out when I get home. Even if they're just blowing smoke up our asses, it sounds interesting.

    I don't see how it can't be both. In fact I think the first entails the second.

    TychoCelchuuu on
  • Cymoro Registered User regular
    edited March 2010
    Voxels 2: Back with a Vengeance.

    Cymoro on
    i am perpetual, i make the country clean
  • Darmak RAGE vympyvvhyc vyctyvy Registered User regular
    edited March 2010
    TychoCelchuuu wrote: »
    Darmak wrote: »
    I can't view YouTube videos at work, but I'll definitely check it out when I get home. Even if they're just blowing smoke up our asses, it sounds interesting.

    I don't see how it can't be both. In fact I think the first entails the second.

    Can't be both what? Smoke up our asses and interesting? You're right, it could be both, hence why I'm going to watch that when I get home. :P

    Darmak on
  • Elvenshae Registered User regular
    edited March 2010
    Did you guys read the word document they show when (I guess?) they're discussing searching?

    It's ... interesting.

    Elvenshae on
  • Darian Yellow Wizard The Pit Registered User regular
    edited March 2010
    This youtube comment seems on point to me:
    From what I understand John Carmack is implementing something like this in the next id Tech engine. (Along with tons of other stuff.) But the main limitation of this from what I understand is that you can only use this for static objects. Anything requiring animation wouldn't work very well, as you'd have to do old-school frame-by-frame animation which would be pretty damn time consuming. Modern skeletal animation wouldn't be feasible.

    I can understand how their algorithm can quickly search out the points for background/static objects, but as soon as you start to introduce movement to the system then it needs to be able to generate new images, not just search on an old one. So for games, this probably will have to work in conjunction with other methods to generate the active elements.

    But outside of game applications, this could be fantastic for things like virtual tours, interactive real estate listings, etc., if there's an easy way to generate the point data in the first place.

    Darian on
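A toy way to see the static-versus-moving asymmetry raised above (hypothetical Python, not anyone's actual engine): a static point set can be bucketed into a spatial index once and then queried cheaply every frame, while anything that moves forces you to pay the indexing cost again on every frame.

# Why static scenery is the easy case: sort the points into a spatial index once,
# then every frame is a cheap lookup. Move the points and you pay the indexing
# cost again each frame. Toy illustration with made-up numbers.
from collections import defaultdict

CELL = 4.0  # bucket size in world units

def build_index(points):
    """One O(N) pass that groups points by coarse grid cell; done once for static stuff."""
    index = defaultdict(list)
    for p in points:
        index[tuple(int(c // CELL) for c in p)].append(p)
    return index

def points_near(index, camera_cell, radius_cells):
    """Per-frame query: only touch buckets near the camera, ignore everything else."""
    cx, cy, cz = camera_cell
    r = radius_cells
    found = []
    for x in range(cx - r, cx + r + 1):
        for y in range(cy - r, cy + r + 1):
            for z in range(cz - r, cz + r + 1):
                found.extend(index.get((x, y, z), ()))
    return found

static_world = [(i % 97 * 1.0, i % 89 * 1.0, i % 83 * 1.0) for i in range(100_000)]
world_index = build_index(static_world)                 # paid once, up front
visible = points_near(world_index, (0, 0, 0), 2)        # cheap, every frame

animated = [(i * 0.1, 0.0, 0.0) for i in range(100_000)]
for frame in range(3):
    moved = [(x + frame, y, z) for (x, y, z) in animated]
    moving_index = build_index(moved)                   # full O(N) cost again, per frame

Real engines use octrees or k-d trees rather than a flat grid of buckets, but the same asymmetry applies.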
  • UncleSporky Registered User regular
    edited March 2010
    Actually, their system sounds really close to raytracing. One calculation per point on the screen. Raytracing voxels, I suppose?

    UncleSporky on
    Switch Friend Code: SW - 5443 - 2358 - 9118 || 3DS Friend Code: 0989 - 1731 - 9504 || NNID: unclesporky
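For what it's worth, "one calculation per point on the screen" into a voxel grid looks something like the classic cell-by-cell grid march below (in the spirit of the Amanatides and Woo traversal). Toy Python over a hypothetical dense boolean grid, purely to illustrate the idea being discussed; it has nothing to do with whatever Unlimited Detail actually runs.

# Minimal voxel raycast: march a ray cell by cell through a dense 3D grid and
# return the first solid voxel it hits. Toy illustration of "one calculation
# per screen point"; one such march per pixel gives you a picture.
import math

def raycast_voxels(grid, origin, direction, max_steps=512):
    """grid[x][y][z] is truthy for solid voxels; origin and direction are 3-tuples."""
    x, y, z = (int(math.floor(c)) for c in origin)
    step, t_max, t_delta = [], [], []
    for o, d, v in zip(origin, direction, (x, y, z)):
        step.append(1 if d > 0 else -1)
        if abs(d) < 1e-12:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            boundary = v + 1 if d > 0 else v      # next grid line on this axis
            t_max.append((boundary - o) / d)      # ray distance to that grid line
            t_delta.append(abs(1.0 / d))          # ray distance between grid lines
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    for _ in range(max_steps):
        if 0 <= x < nx and 0 <= y < ny and 0 <= z < nz and grid[x][y][z]:
            return (x, y, z)                      # first solid voxel along the ray
        axis = t_max.index(min(t_max))            # step across the nearest boundary
        if axis == 0:
            x += step[0]
        elif axis == 1:
            y += step[1]
        else:
            z += step[2]
        t_max[axis] += t_delta[axis]
    return None

# Usage: a 16^3 grid with a sparse pattern of solid voxels.
grid = [[[(x + y + z) % 7 == 0 for z in range(16)] for y in range(16)] for x in range(16)]
print(raycast_voxels(grid, origin=(0.5, 0.5, 0.5), direction=(1.0, 0.3, 0.2)))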
  • randombattle Registered User regular
    edited March 2010
    Darian wrote: »
    This youtube comment seems on point to me:
    From what I understand John Carmack is implementing something like this in the next id Tech engine. (Along with tons of other stuff.) But the main limitation of this from what I understand is that you can only use this for static objects. Anything requiring animation wouldn't work very well, as you'd have to do old-school frame-by-frame animation which would be pretty damn time consuming. Modern skeletal animation wouldn't be feasible.

    I can understand how their algorithm can quickly search out the points for background/static objects, but as soon as you start to introduce movement to the system then it needs to be able to generate new images, not just search on an old one. So for games, this probably will have to work in conjunction with other methods to generate the active elements.

    But outside of game applications, this could be fantastic for things like virtual tours, interactive real estate listings, etc., if there's an easy way to generate the point data in the first place.

    Then again, who says you can't get the computer to generate your animations for you?


    If it works as well as they claim it does, then they should whore it out to, like, Microsoft and make it exclusive to Windows/Xbox or something like that.

    randombattle on
    I never asked for this!
  • Metallikat Registered User regular
    edited March 2010
    Yeah... without further information, I'm gonna have to put this up on the shelf, right beside the perpetual motion machines and cold fusion. It simply sounds too good to be true, without enough hard facts backing it up.

    Metallikat on
  • Darmak RAGE vympyvvhyc vyctyvy Registered User regular
    edited March 2010
    Well, I watched the video and it seems plausible; I just don't know if it will catch on. If it works as well as they say it does, then I'd like to see some companies adopt it and make games with that tech.

    Darmak on
  • amnesiasoft Thick Creamy Furry Registered User regular
    edited March 2010
    I'd be significantly more willing to believe this under 2 conditions:
    1) Their website had a software demo you could download to see it in action on your computer.
    2) I hadn't heard of this over a year ago with no progress having apparently been made in that time.

    amnesiasoft on
  • BloodySloth Registered User regular
    edited March 2010
    Yeah, one of the comments on that Rock Paper Shotgun post in the OP links to a couple of articles from 2008 about exactly the same thing, with what look like screens from the video.

    BloodySloth on
  • cyphr Registered User regular
    edited March 2010
    Reddit's already all over this.

    cyphr on
  • Waka Laka Riding the stuffed Unicorn If ya know what I mean. Registered User regular
    edited March 2010
    Outcast was ahead of its time; now we can definitely confirm it.

    Waka Laka on
  • TehSloth Hit Or Miss I Guess They Never Miss, Huh Registered User regular
    edited March 2010
    Yeah, amazingly enough, geometry is why complex animations don't take a year to develop. We owe a lot to having the ability to rig a skeleton, wrap it in a flesh-like casing and have it move in a vaguely human-like manner. I'm guessing there's some sort of solution to this too, but it seems to me that if it had to manage (as in change/update) the geometry of anything with volume, all of a sudden we'd be trying to move an effectively unlimited number of points every frame. (A rough sketch of why the skeleton approach stays cheap is below.)

    Granted, it could be a really cool technology for CAD programs, where precision and detail are highly desirable, although it'd really need to be supported well to even think of competing in that kind of market.

    TehSloth on
    FC: 1993-7778-8872 PSN: TehSloth Xbox: SlothTeh
    twitch.tv/tehsloth
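For reference, the reason skeletal rigging stays cheap is that you only transform a few thousand mesh vertices by a handful of bone matrices each frame, and the triangles stretched between those vertices come along for free. A bare-bones linear blend skinning sketch (toy Python/NumPy, not any particular engine's implementation):

# Linear blend skinning: each vertex is a weighted blend of a few bone transforms.
# With a mesh you move on the order of 10^4 vertices per character; a dense point
# cloud of the same character could mean orders of magnitude more points getting
# the exact same math every frame.
import numpy as np

def skin_vertices(rest_positions, bone_matrices, bone_ids, weights):
    """
    rest_positions: (V, 3) vertex positions in the bind pose
    bone_matrices:  (B, 4, 4) current bone transforms, relative to the bind pose
    bone_ids:       (V, 4) indices of the bones influencing each vertex
    weights:        (V, 4) blend weights per vertex, each row summing to 1
    returns:        (V, 3) deformed vertex positions
    """
    homo = np.concatenate([rest_positions, np.ones((len(rest_positions), 1))], axis=1)
    # Blend the four influencing bone matrices per vertex, then apply the result once.
    blended = np.einsum("vb,vbij->vij", weights, bone_matrices[bone_ids])
    skinned = np.einsum("vij,vj->vi", blended, homo)
    return skinned[:, :3]

# Tiny example: two bones, two vertices, each vertex fully bound to one bone.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
bones = np.stack([np.eye(4), np.eye(4)])
bones[1][:3, 3] = [0.0, 0.5, 0.0]                      # second bone moves up by 0.5
ids = np.array([[0, 0, 0, 0], [1, 1, 1, 1]])
weights = np.array([[1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]])
print(skin_vertices(rest, bones, ids, weights))        # second vertex follows its bone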
  • Kazhiim __BANNED USERS regular
    edited March 2010
    This looks like the ass-end of ugly. Maybe it's just me, though?

    Kazhiim on
  • BloodySloth Registered User regular
    edited March 2010
    Kazhiim wrote: »
    This looks like the ass-end of ugly. Maybe it's just me, though?

    As mentioned, that's just because they decided to "show off" their tech by producing something extremely garish and overly detailed. Once you get someone with actual artistic talent behind the wheel, presumably it would start looking like something... well, something you would want to look at.

    BloodySloth on
  • TetraNitroCubane Not Angry... Just VERY Disappointed... Registered User regular
    edited March 2010
    Kazhiim wrote: »
    This looks like the ass-end of ugly. Maybe it's just me, though?

    I agree. I wasn't terribly impressed with what I was seeing, but I think that's just because the demos they rendered were just really bad from a design standpoint. I would hope the tech could be used for non-ugly, if it works.

    I will say that I find their 'polygon' examples hilarious, though. A mattress from Half-Life 2? The trunk of a tree from Kameo? The ugliest render of a tree in close up from Oblivion? Where are the character models from more recent games? I wouldn't call the players or zombies from L4D2 'flat' or 'angular', and that's not even the most graphically intense game ever. If they're trying to talk about why polygons are inadequate for the task, they need to put this new tech up against the best examples of polygon rendering.

    If they could render something like Crysis in software, they'd convince me a lot more.

    TetraNitroCubane on
  • Rook Registered User regular
    edited March 2010
    UncleSporky wrote: »
    Actually, their system sounds really close to raytracing. One calculation per point on the screen. Raytracing voxels, I suppose?

    Well, more raycasting, really. I don't really get it: you have ray tracing, which produces really good results, and then this, which looks like a dog threw up on something.

    Rook on
  • Patrick Ripoll Registered User regular
    edited March 2010
    Rook wrote: »
    UncleSporky wrote: »
    Actually, their system sounds really close to raytracing. One calculation per point on the screen. Raytracing voxels, I suppose?

    Well, more raycasting, really. I don't really get it: you have ray tracing, which produces really good results, and then this, which looks like a dog threw up on something.
    (Before the ‘but that looks rubbish’ comments arrive – the reason Unlimited Detail give for the stuff they’re showing looking a little rustic is that the artwork is created by programmers, not artists.)

    Patrick Ripoll on
  • Atheraal Registered User regular
    edited March 2010
    I think this kind of technique, assuming it works, would pair really well with using actual recorded footage to build levels. Most image/video processing software I've seen generates clouds of points anyway, and extrapolates polygons from those.

    I could see this, ironically enough, taking some of the load off of developers when creating the hyperrealistic content to fill their game. Or at least changing the workflow in an interesting (more movie-like?) way.

    Atheraal on
  • Honk Honk is this poster. Registered User, __BANNED USERS regular
    edited March 2010
    Rook wrote: »
    UncleSporky wrote: »
    Actually, their system sounds really close to raytracing. One calculation per point on the screen. Raytracing voxels, I suppose?

    Well, more raycasting, really. I don't really get it: you have ray tracing, which produces really good results, and then this, which looks like a dog threw up on something.

    Raytracing is a method and like any method it can produce totally different results depending on how it's used. Just because something is raytracing doesn't automatically mean that it's good.

    Also this may or may not be rendered with a raytracing renderer. That they use some kind of raytracing method for the construction of the scene objects has nothing to do with how it's rendered.

    Honk on
    PSN: Honkalot
  • Echo ski-bap ba-dap Moderator, Administrator admin
    edited March 2010
    UncleSporky wrote: »
    Actually, their system sounds really close to raytracing. One calculation per point on the screen. Raytracing voxels, I suppose?

    That's what I thought too from the little technical detail they do mention in the video.

    Also, I became highly suspicious of the constant "UNLIMITED DETAIL!", since the video never showed any closeups or stayed still long enough to actually look at anything.

    Echo on
  • Stabbity Style He/Him | Warning: Mothership Reporting Kennewick, WA Registered User regular
    edited March 2010
    Give a live interactive demo of your technology and I'll believe you.

    Stabbity Style on
  • SirUltimos Don't talk, Rusty. Just paint. Registered User regular
    edited March 2010
    It does seem like a lot of smoke and mirrors; however, I'm willing to give them the benefit of the doubt as long as they can produce something a little more concrete (think playable demo or whatnot).

    As for whether or not it catches on, I think that depends entirely on the tools. Right now the entire industry works on polygons, and a switch to their point cloud data won't be an easy or quick one.

    SirUltimos on
  • amnesiasoft Thick Creamy Furry Registered User regular
    edited March 2010
    Rook wrote: »
    UncleSporky wrote: »
    Actually, their system sounds really close to raytracing. One calculation per point on the screen. Raytracing voxels, I suppose?

    Well, more raycasting, really. I don't really get it: you have ray tracing, which produces really good results, and then this, which looks like a dog threw up on something.
    (Before the ‘but that looks rubbish’ comments arrive – the reason Unlimited Detail give for the stuff they’re showing looking a little rustic is that the artwork is created by programmers, not artists.)
    Actually, that was another thing that bugged me. If your system is supposed to be so awesome, you'd think they'd be willing to invest a little bit of money into hiring an artist to make an appropriate showing of their technology. This just has so many flaws around it that it can't possibly be real.

    amnesiasoft on
  • chiablo Registered User regular
    edited March 2010
    I'm fairly certain that as soon as you try to animate anything, this technology falls apart. It's easy enough to make polygons move since they are bound together, but to get a billion little dots to follow a set pattern would be a computational nightmare. I can definitely see this technology used for landscapes and environments, but the characters and physics based items would all need to use the tried-and-true polygon system.

    chiablo on
  • UncleSporky Registered User regular
    edited March 2010
    That's why you have to figure out a way to treat this like FF7, only instead of a 2D backdrop with polygon characters, you have an UNLIMITED DETAIL backdrop with polygon characters.

    UNLIMITED

    UncleSporky on
    Switch Friend Code: SW - 5443 - 2358 - 9118 || 3DS Friend Code: 0989 - 1731 - 9504 || NNID: unclesporky
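The FF7-style hybrid is basically depth compositing: store the backdrop's depth along with its color, then draw the polygon characters only where they land closer to the camera. A tiny hypothetical Python/NumPy sketch of that idea:

# Depth compositing: pre-render the backdrop's color AND depth once, then each
# frame draw the real-time characters only where they are closer to the camera.
# Toy sketch of the hybrid idea, not any shipped game's code.
import numpy as np

def composite(bg_color, bg_depth, ch_color, ch_depth):
    """Color arrays are (H, W, 3), depth arrays are (H, W); smaller depth = closer."""
    in_front = ch_depth < bg_depth
    out = bg_color.copy()
    out[in_front] = ch_color[in_front]     # character wins only where it passes the depth test
    return out

h, w = 4, 4
bg_color = np.zeros((h, w, 3))
bg_depth = np.full((h, w), 5.0)            # the pre-rendered backdrop sits at depth 5
ch_color = np.ones((h, w, 3))
ch_depth = np.full((h, w), 9.0)            # character is behind the backdrop by default...
ch_depth[1:3, 1:3] = 2.0                   # ...except the few pixels it actually covers
print(composite(bg_color, bg_depth, ch_color, ch_depth)[:, :, 0])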
  • AlexPHott Registered User regular
    edited March 2010
    chiablo wrote: »
    I'm fairly certain that as soon as you try to animate anything, this technology falls apart. It's easy enough to make polygons move since they are bound together, but to get a billion little dots to follow a set pattern would be a computational nightmare. I can definitely see this technology used for landscapes and environments, but the characters and physics based items would all need to use the tried-and-true polygon system.
    True, but it could still be useful for drawing static environments that will not move; it could still save some computing power.

    AlexPHott on
  • akeso Registered User regular
    edited March 2010
    TetraNitroCubane wrote: »
    Kazhiim wrote: »
    This looks like the ass-end of ugly. Maybe it's just me, though?

    I agree. I wasn't terribly impressed with what I was seeing, but I think that's just because the demos they rendered were just really bad from a design standpoint. I would hope the tech could be used for non-ugly, if it works.

    I will say that I find their 'polygon' examples hilarious, though. A mattress from Half-Life 2? The trunk of a tree from Kameo? The ugliest render of a tree in close up from Oblivion? Where are the character models from more recent games? I wouldn't call the players or zombies from L4D2 'flat' or 'angular', and that's not even the most graphically intense game ever. If they're trying to talk about why polygons are inadequate for the task, they need to put this new tech up against the best examples of polygon rendering.

    If they could render something like Crysis in software, they'd convince me a lot more.

    You have to remember this is from a graphics engineer's perspective, and they are in fact looking at some of the hardest objects in the game to engineer, short of faces.
    They at no time imply they are artists; in fact, in one of the demos they say "imagine what someone with artistic ability could do with this" numerous times.

    akeso on
  • Glal Airedale Registered User regular
    edited March 2010
    UncleSporky wrote: »
    That's why you have to figure out a way to treat this like FF7, only instead of a 2D backdrop with polygon characters, you have an UNLIMITED DETAIL backdrop with polygon characters.

    UNLIMITED
    [Palpatine image] Unlimited, you say...

    Glal on
  • Snarkman3 Registered User regular
    edited March 2010
    Glal wrote: »
    UncleSporky wrote: »
    That's why you have to figure out a way to treat this like FF7, only instead of a 2D backdrop with polygon characters, you have an UNLIMITED DETAIL backdrop with polygon characters.

    UNLIMITED
    [Palpatine image] Unlimited, you say...



    :^:

    Why didn't I think of that? It seems so obvious now.

    Snarkman3 on
  • corin7 San Diego, CA Registered User regular
    edited March 2010
    That is a lot of data to sift through in fractions of a second. I would fear some pretty serious flickering with any kind of real game. Interesting nonetheless.

    corin7 on
  • mspencer PAX [ENFORCER] Council Bluffs, IA Registered User regular
    edited March 2010
    I think they're missing the point. Current rendering techniques already use a huge cloud of points. These points are really flexible too: you can create them dynamically as particles, you can store them in rectangular arrays called textures, and you can organize them in a scene so they're easy to find and render.

    Of course we're going to need some super-advanced search algorithm to help us find which textures to draw on the screen where. Something that lets us figure out which of these textures are facing the camera and which aren't. Wait . . . it's coming to me . . . bsp trees, oct trees, all organizing a set of poly -- wait that's right polygons are bad, they're so last generation. We'll call them . . . point cloud structures. A point cloud structure takes texture, bump map, and lighting information. It has three sides, but it's totally not a polygon because polygons are so last generation. You can even organize them as a . . . point cloud structure strip, put a bunch of these strips together to make a mesh, tie the meshes together with bones . . .

    I hope someone is writing this down, because I am a *genius*! This stuff is going to revolutionize computer graphics well through the end of the 20th century!

    . . . wait, we're already . . . wha? Doing this for a decade already? Don't tell the venture capitalists that! I don't have my seed capital yet!

    mspencer on
    MEMBER OF THE PARANOIA GM GUILD
    XBL Michael Spencer || Wii 6007 6812 1605 7315 || PSN MichaelSpencerJr || Steam Michael_Spencer || Ham NOØK
    QRZ || My last known GPS coordinates: FindU or APRS.fi (Car antenna feed line busted -- no ham radio for me X__X )
  • Tig Registered User regular
    edited March 2010
    Impressive video, but I can't resist mentioning that was the weirdest VO I've ever heard on a tech video.


    It's somewhere between Eddie Izzard... and a camp Oliver Postgate.
    Odd.

    Tig on
  • Turkey So, Usoop. Tampa Registered User regular
    edited March 2010
    That was pretty interesting. Hopefully they'll keep posting updates as they come.

    Turkey on
  • GnomeTank What the what? Portland, Oregon Registered User regular
    edited March 2010
    What's most interesting is that many cutting-edge engines are already doing image space rendering. Go look at papers on deferred shading and Screen Space Ambient Occlusion (SSAO). Those are both image-based techniques, specifically created to do what these guys are doing, except much more specialized. SSAO is simply an occlusion algorithm meant to work only on the N pixels you can actually see. It's basically done by rendering the scene's depth and normal information into what's called a G-buffer, then applying that G-buffer to a full-screen quad at render composition time. (There's a crude sketch of the SSAO idea just below this post.)

    What's the point? You get immense lighting detail that is normally only possible with a stupid number of lighting passes with standard additive pixel-space lighting.

    I guess my point is that these guys really aren't doing anything that's not already being utilized in modern graphics programming; they're just upsizing the concept to handle everything in the scene, not just a lighting pass. The issue there, as others have pointed out, is that polygon-based rendering offers a lot of really, really compelling features that are easier because of the rules of geometry, trigonometry and linear algebra. Physics, collision detection, animation... all are possible because of the mathematical laws of geometry. If you remove the geometry aspect, you completely nullify 25 years of rendering techniques... the industry would never accept this.

    I think these guys have some interesting ideas, but their concept is, in my mind, destined to fail. The next leap in graphics is tessellation, not pixel clouds. Tessellation is a much more specialized solution that solves the problem of detail extremely well, within the framework of the math and physics we already know. There is a reason a lot of really smart people at ATI, Nvidia, Microsoft, Khronos Group and countless other places have decided that tessellation is the way to go. There is a place for image-based techniques in modern rendering, and that won't change... but the likelihood of it "taking over the art" is very slim.

    GnomeTank on
    Sagroth wrote: »
    Oh c'mon FyreWulff, no one's gonna pay to visit Uranus.
    Steam: Brainling, XBL / PSN: GnomeTank, NintendoID: Brainling, FF14: Zillius Rosh SFV: Brainling
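For anyone who hasn't run into SSAO, here is a deliberately crude screen-space sketch of the idea described in the post above, in Python/NumPy over a depth buffer. Real implementations run in a shader and use view-space positions, normals and a randomized sample hemisphere; this keeps only the core "count how many nearby samples sit in front of this pixel and darken it accordingly" step.

# Crude screen-space ambient occlusion over a depth buffer: for each pixel, sample
# nearby depths and darken the pixel when many neighbours sit closer to the camera
# (i.e. the pixel is in a crease or corner). Illustrative only.
import numpy as np

def crude_ssao(depth, radius=4, samples=16, bias=0.02, strength=1.0, seed=0):
    """depth: (H, W) array of view-space depths, smaller meaning closer to the camera."""
    rng = np.random.default_rng(seed)
    occlusion = np.zeros_like(depth)
    offsets = rng.integers(-radius, radius + 1, size=(samples, 2))
    for dy, dx in offsets:
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        # A sample occludes the pixel if it is noticeably closer to the camera.
        occlusion += (depth - shifted > bias).astype(depth.dtype)
    ao = 1.0 - strength * occlusion / samples          # 1 = fully lit, 0 = fully occluded
    return np.clip(ao, 0.0, 1.0)

# Tiny example: a flat floor with a raised box on it; pixels near the box edge darken.
depth = np.full((64, 64), 10.0)
depth[20:44, 20:44] = 9.0                              # the box face is closer to the camera
ambient = crude_ssao(depth)
print(ambient[22, 18], ambient[5, 5])                  # next to the box edge vs. open floor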