Jim pointed me at this video earlier, presumably believing that my knowledge of, for instance, how to overclock a processor without setting my house on fire means I can say whether there’s any water to its boasts. Um. Maybe? Unlimited Detail Technology reckon their clever tech is the biggest leap in realtime 3D graphics in decades. If what they say is true, we’re going to wave goodbye to games rendered in polygons, and hello to games free from cubist edges and limited model counts. Take a look below, and see what you think.
It’s eight minutes long, but they’re worthwhile minutes. Apart from the one PowerPoint slide about why polygons are bad that it shows about 15 times, anyway.
http://www.youtube.com/watch?v=Q-ATtrImCx4&feature=player_embedded
(Before the ‘but that looks rubbish’ comments arrive – the reason Unlimited Detail give for the stuff they’re showing looking a little rustic is that the artwork is created by programmers, not artists.)
Let’s leave aside the fact that the voiceover alternates between chirpy educational TV and a strange creeping menace, and instead concentrate on what it’s promising. To wit, games that can show as much or as little as the creators wish, with no apparent concern as to the hardware they’re running on. It’s done by using points instead of polygons, it runs purely in software, and it can even do its thing on a mobile phone. It sounds amazing. It sounds crazy. Maybe it is – grud only knows we’ve seen plenty of wondrous-sounding technology promises fail to arrive over the years, but let’s hope this one can pull it off. The rough concept behind it is that the game is only ever rendering the pixels you can see at any one time, rather than trying to muster the whole shebang on a constant basis. Or, to use their words:
“Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen it doesn’t touch any unneeded points, all it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.”
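To make the quoted idea a little more concrete: “one point per pixel” amounts to a depth-buffered search of the point data. The toy below is a brute-force stand-in, not Unlimited Detail’s actual algorithm (they describe an indexed search over compressed data); the function name, data layout, and the square-image-plane projection are all my own assumptions.

```python
import math

def render_point_cloud(points, cam_pos, width, height, fov=90.0):
    """For every screen pixel, keep only the closest point whose
    projection lands on that pixel -- one point per pixel, as the
    quote describes.  Brute force stands in for their indexed search."""
    half = math.tan(math.radians(fov / 2))
    screen = [[None] * width for _ in range(height)]
    depth = [[math.inf] * width for _ in range(height)]
    for x, y, z, colour in points:            # points look down +z here
        dx, dy, dz = x - cam_pos[0], y - cam_pos[1], z - cam_pos[2]
        if dz <= 0:                           # behind the camera
            continue
        # perspective projection onto the image plane
        px = int((dx / (dz * half) + 1) * 0.5 * width)
        py = int((1 - dy / (dz * half)) * 0.5 * height)
        if 0 <= px < width and 0 <= py < height and dz < depth[py][px]:
            depth[py][px] = dz                # nearer point wins the pixel
            screen[py][px] = colour
    return screen
```

The hard part, as their own quote admits, is doing that search without touching every point – which is where the spatial index they’re being cagey about would come in.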
There’s another video on the site – I can’t find an embeddable version yet, but it’s got much more footage of the tech itself in action. It’s the Comparison one at the bottom of the page that you’re after. Oh, be warned: Voiceover Guy gets even stranger in it, possibly due to the insane bell-based soundtrack.
With the released footage only demonstrating static environments rather than an interactive landscape, and anything playable apparently being some 16 months off, it’d be reckless to start shouting “THE FUTURE! THIS IS THE FUTURE!” just yet. The theory seems sound, but the practice can only be complicated. Creating content, for one thing – it’s expensive enough for developers to create a current-gen AAA game, so how do they muster the resources to fill a photo-real world with, ah, unlimited detail? It’d be lovely to see it, of course, but it may take some doing.
Secondly, the tech’s strength seems to be in geometry – real-time animation, lighting and shadowing seem a little skipped over thus far, and may be an area in which UDT lags behind the otherwise more old-fashioned polygon-based rendering system. Again, though, the stuff on show is terribly early and wasn’t created by pro artists. Moreover, even if it can’t ultimately compete with high-end traditionally-rendered games, it could well be a fantastic way to make low-end machines create far more complicated scenes than they’d otherwise be capable of.
Keeping an eye on this one. The potential is incredible, whether or not the industry picks it up.
Posts
I don't see how it can't be both. In fact I think the first entails the second.
Can't be both what? Smoke up our asses and interesting? You're right, it could be both, hence why I'm going to watch that when I get home. :P
It's ... interesting.
Steam: Elvenshae // PSN: Elvenshae // WotC: Elvenshae
Wilds of Aladrion: [url=https://forums.penny-arcade.com/discussion/comment/43159014/#Comment_43159014]Ellandryn[/url]
I can understand how their algorithm can quickly search out the points for background/static objects, but as soon as you start to introduce movement to the system then it needs to be able to generate new images, not just search on an old one. So for games, this probably will have to work in conjunction with other methods to generate the active elements.
But outside of game applications, this could be fantastic for things like virtual tours, interactive real estate listings, etc., if there's an easy way to generate the point data in the first place.
Then again, who says you can't get the computer to generate your animations for you?
If it works as well as they claim it does, then they should whore it out to someone like Microsoft and make it exclusive to Windows/Xbox or something like that.
I never asked for this!
I'd be more convinced if: 1) their website had a software demo you could download to see it in action on your computer, and 2) I hadn't heard of this over a year ago, with no progress apparently made in that time.
Tumblr
Granted, it could be a really cool technology for CAD programs, where precision and detail are highly desirable, although it'd really need to be supported well to even think of competing in that kind of market.
twitch.tv/tehsloth
As mentioned, that's just because they decided to "show off" their tech by producing something extremely garish and overly detailed. Once you got someone with actual artistic talent behind the wheel presumably it would start looking like something... well, something you would want to look at.
I agree. I wasn't terribly impressed with what I was seeing, but I think that's just because the demos they rendered were just really bad from a design standpoint. I would hope the tech could be used for non-ugly things, if it works.
I will say that I find their 'polygon' examples hilarious, though. A mattress from Half-Life 2? The trunk of a tree from Kameo? The ugliest render of a tree in close up from Oblivion? Where are the character models from more recent games? I wouldn't call the players or zombies from L4D2 'flat' or 'angular', and that's not even the most graphically intense game ever. If they're trying to talk about why polygons are inadequate for the task, they need to pit this new tech against the best examples of polygon rendering.
If they could render something like Crysis in software, they'd convince me a lot more.
Well, more raycasting really. I don't really get it, as you have ray tracing, which produces really good results. And this, which looks like a dog threw up on something.
I could see this, ironically enough, taking some of the load off developers when creating the hyperrealistic content to fill their game. Or at least change the workflow in an interesting (more movie-like?) way.
Raytracing is a method and like any method it can produce totally different results depending on how it's used. Just because something is raytracing doesn't automatically mean that it's good.
Also this may or may not be rendered with a raytracing renderer. That they use some kind of raytracing method for the construction of the scene objects has nothing to do with how it's rendered.
That's what I thought too from the little technical detail they do mention in the video.
Also, I became highly suspicious at the constant "UNLIMITED DETAIL!", yet the video never showed any close-ups or stayed still long enough to actually look at anything.
As for whether or not it catches on, I think that depends entirely on the tools. Right now the entire industry works on polygons, and making a switch to their cloud point data won't be an easy or quick one.
UNLIMITED
Let's Play S.T.A.L.K.E.R: Shadow of Chernobyl - Vanilla
You have to remember this is from a graphics engineer's perspective, and they are in fact looking at some of the hardest objects in the game to engineer, short of faces.
They at no time imply they are artists; in fact, in one of the demos they say "imagine what someone with artistic ability could do with this" numerous times.
:^:
Why didn't I think of that? It seems so obvious now.
Of course we're going to need some super-advanced search algorithm to help us find which textures to draw on the screen where. Something that lets us figure out which of these textures are facing the camera and which aren't. Wait . . . it's coming to me . . . BSP trees, octrees, all organizing a set of poly -- wait that's right, polygons are bad, they're so last generation. We'll call them . . . point cloud structures. A point cloud structure takes texture, bump map, and lighting information. It has three sides, but it's totally not a polygon because polygons are so last generation. You can even organize them as a . . . point cloud structure strip, put a bunch of these strips together to make a mesh, tie the meshes together with bones . . .
I hope someone is writing this down, because I am a *genius*! This stuff is going to revolutionize computer graphics well through the end of the 20th century!
. . . wait, we're already . . . wha? Doing this for a decade already? Don't tell the venture capitalists that! I don't have my seed capital yet!
XBL Michael Spencer || Wii 6007 6812 1605 7315 || PSN MichaelSpencerJr || Steam Michael_Spencer || Ham NOØK
QRZ || My last known GPS coordinates: FindU or APRS.fi (Car antenna feed line busted -- no ham radio for me X__X )
It's somewhere between Eddie Izzard... and a camp Oliver Postgate.
odd
Twitter
What's the point? You get rid of rendering passes to create immense lighting detail that is normally only possible with a stupid number of passes under standard additive pixel-space lighting.
I guess my point is that these guys really aren't doing anything that's not already being utilized in modern graphics programming; they're just upsizing the concept to handle everything in the scene, not just a lighting pass. The issue there, as others have pointed out, is that polygon based rendering offers a lot of really, really compelling features that are easier because of the rules of geometry, trigonometry and linear algebra. Physics, collision detection, animation...all are possible because of the mathematical laws of geometry. If you remove the geometry aspect, you completely nullify 25 years of rendering techniques...the industry would never accept this.
I think these guys have some interesting ideas, but their concept is, in my mind, destined to fail. The next leap in graphics is tessellation, not pixel clouds. Tessellation is a much more specialized solution that solves the problem of detail extremely well, within the framework of the math and physics we already know. There is a reason a lot of really smart people at ATI, Nvidia, Microsoft, Khronos Group and countless other places have decided that tessellation is the way to go. There is a place for image-based techniques in modern rendering, and that won't change...but the likelihood of it "taking over the art" is very slim.
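As a concrete illustration of that "rules of geometry" argument: ray/triangle intersection has a standard closed-form solution (Möller–Trumbore), which is part of what makes collision tests against polygon meshes so cheap; a raw point cloud offers no equivalent surface equation to solve against. This sketch is the textbook algorithm, not anything from UDT; the variable names are mine.

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.  Returns the distance t
    along the ray to the hit point, or None on a miss."""
    def sub(a, b):   return tuple(a[i] - b[i] for i in range(3))
    def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)                # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)               # distance along the ray
    return t if t > eps else None
```

Physics, picking, bullet traces and shadow rays all reduce to tests like this one, which is a big part of why throwing out polygons means rebuilding a lot more than just the renderer.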