
# 3D Rendering: Perspective Projection and Z-Buffering

Registered User
edited July 2010
With the advice given here, I've taken care of the 3D rendering arm of this project. Thanks a lot!!!

Hey helpful PAers, know anything about perspective projection and Z-buffering? ... By hand?

The background
Spoiler:

Filling a polygon
Spoiler:

Moving to 3D
Spoiler:

The Z-Buffer
Spoiler:

The questions / tl;dr
For clarification, at this point all of my math does work, and with the Z-buffer off it shows the classic problems of simply setting a paint order (two objects intersecting each other, etc.). This means that my math for determining pixelX and pixelY, and the fillPolygon call itself, are behaving correctly.

The Z-buffer does function, but it has difficulty with a certain subset of camera theta and phi angles (azimuth and elevation). I have deduced that this has to do with my per-pixel assessment of the pZ coordinate. With all of that in mind...
• What needs to be done to a perspective-projected Z coordinate once the projection is complete? At the moment, I just leave it as it is, alongside its projected x and y coordinates (in world coordinates), in order to get the equation of a plane from several of the projected points.
• Is it even mathematically correct to assume that if several (10 to 12) points are co-planar before perspective projection, they will still be co-planar after?
• Should I be using some other point as a reference for the final distance calculation? At the moment, I'm using the camera position (cX, cY, cZ) in world coordinates.
• Anything else I should be looking out for?

ArdentMarauder on
·

## Posts

• Registered User
edited July 2010
You could simplify things a bit by transforming your vertices by the inverse of your camera transform, which puts your camera at the origin looking down the Z axis (positive or negative depending on whether you're using a left- or right-handed orientation). It looks like you're doing all of your math in world coordinates, which can get pretty hairy.
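A minimal sketch of that inverse transform (the class and method names here are mine, not from the project, and it assumes the camera is oriented by a yaw about Y followed by a pitch about X): the inverse of a pure rotation is its transpose, so camera space is reached by translating by minus the camera position and then rotating by -theta and -phi.

```java
// Hypothetical sketch: move a world-space point into camera space by undoing
// the camera's transform. For rotation R and camera position c, the inverse
// is p_cam = R^T * (p_world - c).
public class CameraTransform {
    // Camera at (cx, cy, cz), yaw theta about Y, pitch phi about X.
    public static double[] worldToCamera(double x, double y, double z,
                                         double cx, double cy, double cz,
                                         double theta, double phi) {
        // Translate so the camera sits at the origin.
        double tx = x - cx, ty = y - cy, tz = z - cz;
        // Undo the yaw (rotate about Y by -theta).
        double rx = Math.cos(-theta) * tx + Math.sin(-theta) * tz;
        double rz = -Math.sin(-theta) * tx + Math.cos(-theta) * tz;
        // Undo the pitch (rotate about X by -phi).
        double ry = Math.cos(-phi) * ty - Math.sin(-phi) * rz;
        double fz = Math.sin(-phi) * ty + Math.cos(-phi) * rz;
        return new double[] { rx, ry, fz };
    }
}
```

As you say, this only has to be computed once per frame and then applied to every vertex.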

·
• Registered User
edited July 2010
That's a great idea! Thanks! The inverse of the camera would only have to be done once per frame, so that's not bad at all. Let me see if I can whip that up quick...

·
• Registered User
edited July 2010
That's how both OpenGL and DirectX work, though one looks down +Z and one looks down -Z (I forget which is which). It forces you to generate a true transformation matrix for your camera too, instead of using yaw and pitch angles, which can be handy in other ways.

·
• Registered User
edited July 2010
zilo wrote: »
That's how both OpenGL and DirectX work, though one looks down +Z and one looks down -Z (I forget which is which). It forces you to generate a true transformation matrix for your camera too, instead of using yaw and pitch angles, which can be handy in other ways.

How is this different from a perspective transformation? The perspective handles the big/small issue... and that's about all I need. Granted, my case is insanely simple compared to a real graphics engine (no shading, shadows, texture mapping, HLSL, etc.).

·
• Registered User
edited July 2010
Perspective transformations deal with things like field of view and the near and far planes, converting things from their positions inside the view frustum to 2D screen space. A camera transformation is simply where your camera is and which direction it's pointed.

edit: That Wikipedia article is kinda weird. As a first attempt I'd ditch that chunk of math and make your "perspective transformation" just chop off the Z coordinate.

One other benefit of using a proper camera transform is that finding the depth-buffer values becomes trivial: it's just the z coordinates of your pixel locations in camera space.

·
• Registered User regular
edited July 2010
ArdentMarauder wrote: »
• What needs to be done to a perspective-projected Z coordinate once the projection is complete?
• Is it even mathematically correct to assume that if several (10 to 12) points are co-planar before perspective projection, they will still be co-planar after?
• Should I be using some other point as a reference for the final distance calculation? At the moment, I'm using the camera position (cX, cY, cZ) in world coordinates.
• Anything else I should be looking out for?

Disclaimer: I haven't worked on graphics at this low a level for 7 or 8 years, so treat most of this as needing a quick-and-dirty implementation and sanity check.
• The perspective-projected z coordinate should, if I recall correctly, be used as the entry in the Z-buffer. If it's negative, the point is behind the image plane and should not be displayed. Naturally, depending on how the Z-buffer is handled, this may need to be converted to an integer, etc. This is much quicker than computing the distance directly, and it allows you to linearly interpolate the Z-buffer value across triangles.
• I think that in general an affine transformation does not necessarily preserve coplanarity of points. However, you shouldn't really be supporting non-triangle polygons at a low level anyway. Instead, if you want to support them at a high level, implement them with constructs like triangle strips. Done right, it won't need any more data, and it will actually be quicker to render because you don't have to deal with all the awful stuff involved in filling an arbitrary polygon.
• See the first bullet point.
• Implement simple linear interpolation across triangles, and a lot of stuff like basic shading and texture mapping becomes very easy to add on.
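The interpolation in the last bullet can be sketched with barycentric weights (all names here are hypothetical; note that plain linear interpolation of depth in screen space is an approximation, and the perspective-correct version interpolates 1/z instead):

```java
// Hypothetical sketch: linearly interpolate depth across a triangle using
// barycentric weights, the per-pixel value you would test against the Z-buffer.
public class TriangleDepth {
    // Barycentric weights of (px, py) in triangle (x0,y0)-(x1,y1)-(x2,y2).
    public static double[] barycentric(double px, double py,
                                       double x0, double y0,
                                       double x1, double y1,
                                       double x2, double y2) {
        double area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
        double w1 = ((px - x0) * (y2 - y0) - (x2 - x0) * (py - y0)) / area;
        double w2 = ((x1 - x0) * (py - y0) - (px - x0) * (y1 - y0)) / area;
        return new double[] { 1 - w1 - w2, w1, w2 };
    }

    // Interpolated depth at (px, py) given the three vertex depths.
    public static double depthAt(double px, double py,
                                 double x0, double y0, double z0,
                                 double x1, double y1, double z1,
                                 double x2, double y2, double z2) {
        double[] w = barycentric(px, py, x0, y0, x1, y1, x2, y2);
        return w[0] * z0 + w[1] * z1 + w[2] * z2;
    }
}
```

The same weights also interpolate colors or texture coordinates, which is why the shading and texture-mapping extras come almost for free once this is in place.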

·
• Registered User
edited July 2010
Clipse wrote: »
• I think in general an affine transformation does not necessarily preserve coplanarity of points. However, you shouldn't really be supporting non-triangle polygons at a low level anyways.

After checking all of the math in my previous implementation, I can confirm you're absolutely right: co-planarity before does not necessarily imply co-planarity after. So it's on to triangles!
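A quick numeric demonstration of this (helper names are mine, for illustration): four coplanar points run through a bare perspective divide (x/z, y/z, keep z) come out non-coplanar, which a scalar triple product of difference vectors detects.

```java
// Hypothetical sketch: test coplanarity of four points before and after a
// bare perspective divide.
public class CoplanarCheck {
    // (b-a) x (c-a) . (d-a): zero exactly when a, b, c, d are coplanar.
    public static double triple(double[] a, double[] b, double[] c, double[] d) {
        double[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        double[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        double[] w = { d[0] - a[0], d[1] - a[1], d[2] - a[2] };
        return (u[1] * v[2] - u[2] * v[1]) * w[0]
             + (u[2] * v[0] - u[0] * v[2]) * w[1]
             + (u[0] * v[1] - u[1] * v[0]) * w[2];
    }

    // The bare perspective divide: project x and y, keep z untouched.
    public static double[] divide(double[] p) {
        return new double[] { p[0] / p[2], p[1] / p[2], p[2] };
    }
}
```

For example, the points (0,0,4), (1,0,3), (0,1,4), (2,1,2) all lie on the plane x + z = 4, but their images under the divide no longer share a plane.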

·
• Registered User regular
edited July 2010
Man, this thread makes me wish I hadn't dropped my last math class in college because "I didn't need it."

'Cause... yeah... yeah, I did fucking need it.

·
• Registered User
edited July 2010
Due to your advice, I've gotten this arm of the project all wrapped up. Thanks!!

·