Special Nvidia 3D Vision Rendering Technique

mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Greetings all. Wow, been a long time since my last post, I think it was 2 or 3 years ago, back when I was playing NAW a lot and stereoscopic Doom3 and Quake4.

So, in the meantime, I've actually not been playing games a whole lot; instead I've been busy trying to write one. And it's still ongoing, not finished yet. It's a first person shooter, written in C++ and aimed at Windows PC. I'm using Ogre (http://www.ogre3d.org) for rendering, Bullet for physics, and various other libraries.

The engine uses deferred shading and various post-processing FX. For the last 6 years my graphics card has been an nv7950GT or the like, with a CRT monitor. Since I was using old drivers, I hadn't been able to test my game in stereo. However, I've just updated to SLI gt560 cards and an Asus monitor with Nvidia 3D Vision, so I've got a pretty decent, modern stereoscopic rig now.

The bad news was that my game was a dog in 3D Vision. Like so many deferred shading games, it was a disaster. Lights were all wrong, and my in-game glass mesh was messed up. Pretty disappointing since I love stereo gaming, and this was my own game! After 3 years of meticulously crafting the shaders and rendering pipeline, the idea of having to go back and do it all again just wasn't on the cards. Plus, since I'm using a library for rendering (Ogre), integrating the Nvidia-specific header that allows you to access their stereo matrices and the like isn't really a practical option.

However, I have come up with a solution. I don't know if it's been done before, but it works great. It allows me to take over the stereo render myself but still have 3D Vision display the result. Using this technique, everything works, and integrating the method only required a few minor changes to the code. I think it could be done without much difficulty on most rendering pipelines.

The catch is, it requires the user to perform a couple of calibration steps first, but these are fairly simple. It also requires a small registry edit of the 3D Vision profile created for the game. I'm of the opinion that stereoscopic gamers are a sturdier bunch than the average gamer, so these steps should be fairly trivial.

Anyhow, here are some cross-eyed pics of the results. If people are interested, I'll outline the method.

I'm also interested in feedback about how people feel about having to do a manual calibration step. Obviously it's preferable not to, but is it a deal breaker? Would you rather have a game that worked in stereo even though you had to do a little setup yourself, or would you just avoid the game?

(Note: My game isn't anywhere near finished, so these images are not of the final models, geometry and textures. But it gives a good idea of how it's going to look.)


[Cross-eyed stereo screenshots of the game]
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Special Nvidia 3D Vision Rendering Technique

Post by cybereality »

Wow! That looks amazing man! I too am a fan of Ogre, but I haven't got nearly as far as you have. Nice job.

I'd definitely be interested in knowing the solution you came up with. I was even planning on ditching Ogre and going straight DirectX (arrgg!) for this side project I've been wanting to work on, but I've been sort of on the fence. So I'd love to hear if there was a way to use Ogre and also use the Nvidia API. I'm sure some other people would be interested as well.

Also, congratulations on the new rig. I have a similar setup, lots of fun!

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Hey cybereality. Glad you like the pics.

I don't use the Nvidia API at all, for two reasons. Firstly, it would require that I modify the actual Ogre library source code, and that just sounds too difficult and complex. Secondly, I'd have to rewrite large chunks of my shader and rendering pipeline. After 3 years of setting it up in the first place, this is just too much of a task for an indie developer with limited time.

The following is a semi-technical explanation of how I do it. Note that end users don't need to worry about this; their experience is vastly simpler. This is how to program the effect, not how the end user calibrates it! The end user only needs to do the registry edit and a couple of calibration steps. Even these may turn out to be optional.

1. The game needs to be played normally and a 3D Vision profile saved (Ctrl+F7). It is important that the user set the stereo separation to maximum and the convergence to minimum before saving.


2. The game is quit, and the user finds the saved profile in the registry via RegEdit, the standard Windows registry tool. The keys are located here:
For 32-bit OS:
HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\Stereo3D\

For 64-bit OS:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\NVIDIA Corporation\Global\Stereo3D\

Once they find the game's profile, they add the following key

StereoTextureEnable

and set its value to 0. This stops the 3D drivers from automatically creating stereo textures for all the behind-the-scenes textures, but the final screen is still rendered stereoscopically. A side advantage of this is that it saves a fair bit of GPU memory: deferred shading uses large G-buffers for storing rendering data, and this method halves the amount of memory used compared to normal 3D Vision stereo rendering.

It's possible this step can be skipped if either Nvidia adds an official profile for the game which is already set correctly, or I add an app that does the setting for the user automatically.
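For anyone wary of hand-editing in RegEdit, the same change can be made by importing a .reg file. This is only a sketch: the profile subkey name ("MyGame") is made up, the real one is whatever 3D Vision created for the game, and I'm assuming the value is a DWORD like the other Stereo3D settings. The path shown is the 64-bit OS one.

```reg
Windows Registry Editor Version 5.00

; Hypothetical profile subkey "MyGame"; use the subkey 3D Vision
; actually created for the game. On a 32-bit OS, drop "Wow6432Node\".
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\NVIDIA Corporation\Global\Stereo3D\MyGame]
"StereoTextureEnable"=dword:00000000
```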


3. Now for the actual technique. In game, the scene is rendered twice, once each for the left and right eyes, but those renders aren't displayed directly; instead they are stored in two internal textures. These two little "screens" are then placed in the world right up close to the camera, one in front of each of the "virtual" 3D Vision eyes, each completely covering the field of view of its own eye while staying totally out of the field of view of the other. It's as if the 3D camera were wearing a pair of glasses.

So now the right "virtual eye" only sees the right little scene image, and the left "virtual eye" only sees the left image. 3D Vision now renders those images to the main view, and hey presto, the user gets a proper stereoscopic 3D Vision image.


4. The user can now adjust separation, convergence and other 3D settings to their heart's content; however, they don't use the standard 3D Vision controls but instead ones provided by the game. 3D Vision has, in effect, surrendered all 3D control over the application.

You might be thinking that this method would have problems establishing a proper mapping between the pixels on the little scene textures and the main view screen. That's where the calibration step comes in; it works well and easily gets the pixel-to-pixel mapping perfect.

The user goes through three screens and completes a step on each. These steps are done while the game is running in stereo mode, but the user need not be wearing their glasses.

First screen: the user presses the up and down cursor keys to zoom in and out, and the left and right keys to scale bigger or smaller, until the following image fills the screen with no blur, touching the edges of the screen. (In stereo there's a green arrow as well.)

[Calibration image: focus/zoom test pattern]

Second screen: the user presses the left and right keys to eliminate banding on the image. Here are two shots, one with a little banding and one correctly set.
[Two calibration shots: one with slight banding, one correctly set]

Final screen: the user presses the up and down keys to eliminate banding on the image. Here's one correctly set.
[Calibration shot: correctly set]

And that's it. The settings will be saved (for this screen resolution) and the user won't need to do it again. Now they can set the in-game convergence, separation, etc. as they like.

And even this may not be necessary. I'll have to do some tests, but it may be that if I calibrate these settings myself, it'll work on any monitor already. So it may be that even the calibration step is optional.

Re: Special Nvidia 3D Vision Rendering Technique

Post by cybereality »

Hmm... That's one serious hack right there (in the good way, and in the bad way).

I, myself, would be willing to go through those steps (as I'm sure many users here would). But for your average gamer, even PC gamer, that may be a lot to ask. Thankfully, stereo 3D gamers are probably in the top 1% in terms of technical know-how and/or patience. So you do have that going for you. I mean, it seems straight-forward enough, but it still could be an annoyance. I wonder if there is any way to automate the process.

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

It may be possible to have the steps automated... or it might not. It really just boils down to two steps: Add the registry key, and do the calibration. I think it looks scarier than it actually is.

But you're right, your typical PC gamer doesn't want to do things like registry edits and calibration steps. I'm working on the assumption that s3d users are generally not typical PC gamers but are more technically savvy. Not that it takes a huge amount of skill to add a registry key, but yes, they would need to be a notch above the typical.

I'm hoping that the user would rather do a few steps and get a fully stereoscopic game with all the fx working, rather than just have another game on the heap that has too many 3ds issues to be enjoyed. In marketing the game I'd point out that 3D Vision support is available but requires some manual configuration to work.

Edit: On thinking about this, it probably will be possible to automate it. I found the manual method so simple I didn't really worry about it, but you raise a good point, it probably does look daunting to some people. So I guess I better do some more work.
Fredz
Petrif-Eyed
Posts: 2255
Joined: Sat Jan 09, 2010 2:06 pm
Location: Perpignan, France
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

mkultra333 wrote:I don't use the Nvidia API at all, for two reasons. Firstly, it would require I modify the actual Ogre library source code, and that just sounds too difficult and complex. Secondly, I'd have to re-write large chunks of my shader and rendering pipeline. After 3 years of setting it up in the first place, this is just too much of a task for an indie developer with limited time.
I don't understand why you'd need to modify Ogre or your shader and rendering pipeline in order to use the 3D Vision API. Can you elaborate a bit more on this?

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Just to be clear, the Nvidia API is for programming, not for the end user. You can make an Ogre 3D game and use it with 3D Vision and it'll often work just fine, even if the developer knew nothing about 3D Vision. However, some things like deferred shading and some post-processing typically break under those circumstances. If you look at the Ogre demos, most of them work fine, but the deferred shading demo does not. While my deferred shading engine isn't based on the Ogre demo, it nonetheless also breaks if you just directly try to use it with 3D Vision. It's not just Ogre deferred shading that breaks; lots of games out there have these same issues. It often comes down to a mismatch between where the shaders think things are in the world compared to where 3D Vision says they are, since 3D Vision uses different stereo matrices compared to the normal mono matrices.

I'm not an expert on the Nvidia API, but here's my understanding:

I'd need to include nvapi.h in my project, which gives access to many of the lower-level controls for the stereoscopic system. However, since I'm using Ogre for rendering, and Ogre is built to be agnostic between OpenGL and DirectX, Ogre hides almost all of the information I need to interact with the API properly. Just looking at it now, and picking something at random, there's a function such as

NVAPI_INTERFACE NvAPI_Stereo_CreateHandleFromIUnknown(IUnknown *pDevice, StereoHandle *pStereoHandle);

To use it I need information about the Direct3D devices and their states, but I don't have that information because it's hidden inside the Ogre lib. So I think that unless I go into the Ogre source code itself and try to integrate the Nvidia API, the API mostly won't work. But I want to avoid going into the Ogre lib, since it's large and fairly complex, and I've just as much chance of breaking everything as I have of getting it working correctly.

A second issue is that if I try to use the API, I'd have to rewrite many of my shaders and much of my rendering pipeline, since the way the API processes data is not simply to render two images from two different viewpoints. Since it's taken 3 years of programming and debugging to get my renderer to its current state, I don't like the idea of spending an unknown number of months or years altering it for the API.

I am thinking I can probably automate my own system though, so hopefully end users will be almost none the wiser that I'm using an alternative method of rendering.

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

mkultra333 wrote:Just to be clear, the nvidia api is for programming, not for the end user. You can make an ogre 3d game and use it with 3D Vision and it'll often work just fine, even if the developer knew nothing about 3D Vision.
Yep I know about that, I'm a programmer myself and also used Ogre in the past.
mkultra333 wrote:To use it I need information about the Direct3D devices and their states, but I don't have that information because it's hidden inside the Ogre lib. So I think that unless I go into the Ogre source code itself and try and integrate the nvidia API, the api mostly won't work.
This thread should help in this case; apparently there is no need to modify Ogre:
http://www.ogre3d.org/forums/viewtopic. ... 45&start=0
mkultra333 wrote:A second issue is that if I try to use the API, it would mean I had to re-write many of my shaders and rendering pipeline, since the way the api processes data is not simply to render two images from two different viewpoints.
I think that's precisely how the NVIDIA API does work, i.e. by using two previously rendered images for stereo output. That's explained in this paper: GDC09-3DVision-The_In_and_Out.pdf. Where did you read that it was not the case?

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Thanks for that thread link. That might solve the Ogre.lib issue, although it still leaves me with the need to rewrite my renderer and shaders.
I think that's precisely how the NVIDIA API does work, ie. by using two previously rendered images for stereo output. That's explained in this paper : GDC09-3DVision-The_In_and_Out.pdf . Where did you read that it was not the case ?
Yes, it renders the image twice, but it doesn't do it in a simple way. It modifies the translation matrices in a manner that makes deferred shading and post processing difficult to fix. Not impossible, but difficult, especially at a late stage in the project.

From http://developer.nvidia.com/sites/defau ... omatic.pdf
Specifying a Depth for Stereoization
As explained in The Existing Conceptual Pipeline on page 10, controlling the W coordinate output from the vertex shader is the key to controlling the apparent depth of rendered objects. There are several methods to modify this value; the appropriate method for your application depends on your current pipeline and requirements. None of the methods described here should affect rendering when running in non-stereoscopic modes, and can be left enabled at all times for free.
My shaders and gbuffer do not work on the w component, they mostly use viewspace.

Another advantage of my method is that both deferred shading and post-processing work perfectly. I have HDR blooming, motion blur and deferred lights all working flawlessly. Whereas in the normal 3D Vision method:
DEFERRED RENDERERS
Deferred Renderers suffer the unprojection problem described in Post Processing, but to a more extreme degree because unprojection is even more common in deferred renderers.
There are two main approaches to fixing this problem. The first solution is exactly the same as described in Post Processing on page 21.
The second solution is to simply skip unprojection altogether by storing the world-space position directly in the G-buffer. This solution, while trivial, is not usually practical due to space and bandwidth constraints.
http://3dvision-blog.com/things-that-hu ... to-nvidia/
Post-Processing and Screen-Space Effects
2D screen-space effects can greatly hurt the stereo effect. Things like blurry glow, bloom filters, image-based motion blur fall into this category. These effects are usually created by rendering the 3D geometry to a texture, then rendering a 2D screen aligned quad to the screen. The geometry in the texture is no longer at the correct depth in the world as the effect should be, thus working poorly in 3D Stereo. You should provide the option to disable these effects for people playing in stereo and render the geometry to the back buffer.
I'd like to point out that I haven't been saying that it's impossible to get it all running the standard way, just that it's difficult and time consuming, while the method I've outlined is something that can simply be added easily to existing pipelines.

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

mkultra333 wrote:Thanks for that thread link. That might solve the Ogre.lib issue, although it still leaves me with the need to rewrite my renderer and shaders.
Why so? If you use the NVIDIA API as explained in the paper you shouldn't have anything to modify in your renderer or shaders. Basically you only pass two images to the NVIDIA API and that's all; the way you create your images is not relevant.
mkultra333 wrote:Yes, it renders the image twice, but it doesn't do it in a simple way. It modifies the translation matrices in a manner that makes deferred shading and post processing difficult to fix. Not impossible, but difficult, especially at a late stage in the project.
I may be wrong but I think you are mixing two things here. If you rely on the 3D Vision driver you must modify your shaders because the driver is in charge of the stereo rendering and expects correct W coordinates. But if you use the NVIDIA API you are in charge of the rendering and you only need to send the two final images to the API, hence no modifications are needed for your shaders.
mkultra333 wrote:Another advantage of my method is that both deferred shading and post processing work perfectly. I have HDR blooming, motion blur and deferred lights all working flawlessly.
I can understand that you don't want to spend time updating your shaders according to the NVIDIA recommendations. But that would still be the best option, since you wouldn't have to link to the NVIDIA API, rely on a hack that is not guaranteed to work in the future, or place additional constraints on the end-user.

This way your game could also work with other stereo drivers (TriDef, iZ3D) and other GPUs (AMD/ATI) without modification. For the 2D post-processing effects for which you can't use a W coordinate, you could also add an option to disable them as specified in the NVIDIA paper.

But you can also add support for the AMD Quad-Buffer SDK and still use the first solution with minimal changes to your code. I don't know how it works, but you can have a look here:
http://developer.amd.com/sdks/QuadBuffe ... fault.aspx
Last edited by Fredz on Wed Nov 30, 2011 4:24 am, edited 1 time in total.

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Ah, that is interesting. So if I use the API, I can simply pass two images, and the driver won't do any weird tricks I don't want? I didn't realize, that would be ideal. I would rather do it in a normal way, it was just a question of the difficulty involved. If it isn't too difficult I'll give it a shot.

Worst comes to worst, I'll use the above as a fallback position if I can't get the NvAPI working. At least I know I can get something out there that works stereoscopically.

Have you programmed Ogre with the nvapi?

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

Nope I've not used the NVIDIA API with Ogre, I was programming under Linux at that time.

Good luck with your project.

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Could you give an example of how I would achieve the same effect as the registry setting

StereoTextureEnable 0

using the nvapi? Don't worry about describing sessions etc.; their PDF explains that. But what kind of call would I issue to turn off stereo textures? If you don't know, that's fine; I'm just curious how complicated it might be. Their documentation looks a bit sparse in the PDF, and dense and complex in the header file.

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

You can't use the NVIDIA API to do that since this registry entry is used by the 3D Vision driver. As I said, these are two completely different things and are mutually exclusive.

You could still try to access the registry directly from your application by using the appropriate Win32 system calls. I've done that in the past for an inventory application, but it was for XP and I'm not sure it works the same way (or at all) on Vista/7. Anyway, that would still be a hack, and it can be somewhat dangerous to fiddle with the registry from an application in case of bugs.

The only way to avoid stereoization of your offscreen textures would be to do as they say in their papers, by working around their heuristics based on the size of render targets:
"Automatic duplication is based on the surface size :
- Surfaces equal or larger than back buffer size are duplicated
- Square surfaces are NOT duplicated
- Small surfaces are NOT duplicated
- Heuristic defined by driver profile setting (Consult documentation for fine tuning)"

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

You can't use the NVIDIA API to do that since this registry entry is used by the 3D Vision driver. As I said, these are two completely different things and are mutually exclusive.
I'm not saying I want to use the registry to do it; I'm saying I want the same outcome as if I'd used the registry, i.e. I want to create mono textures instead of stereo ones.

I must have misunderstood you. I thought you meant that I could feed the driver two mono images and have it render them as stereo, which is what I'm currently doing and what I'd require if I use nvapi.h. But if I can't create mono textures in the first place that becomes impossible.

I'm not sure that what you are suggesting, using nvapi.h with Ogre, will work. Looking through the header, it's all low-level stuff. This was probably fine if you were programming on Linux, since I expect you were directly using OpenGL calls for your drawing, so nvapi would naturally fit in with that. But Ogre does not expose low-level draw calls, at least not to any great degree, and usually not in a DirectX- or OpenGL-specific way.

I'm curious anyway: if you don't use 3D Vision's automated rendering and use the nvapi instead, how much work do you have to do? A glance at the header, plus some other code snippets, seems to indicate I suddenly become responsible for everything, at the lowest level. Declaring surfaces, tagging them... for instance, here's some code I read from http://developer.download.nvidia.com/pr ... nd_Out.pdf

Code: Select all

// Tagging the side-by-side stereo image with the stereoscopic signature
// Lock the stereo image
D3DLOCKED_RECT lr;
gImageSrc->LockRect(&lr, NULL, 0);
// Write the stereo signature into the last row of the stereo image
LPNVSTEREOIMAGEHEADER pSIH =
    (LPNVSTEREOIMAGEHEADER)(((unsigned char *)lr.pBits) + (lr.Pitch * gImageHeight));
// Update the signature header values
pSIH->dwSignature = NVSTEREO_IMAGE_SIGNATURE;
pSIH->dwBPP       = 32;
pSIH->dwFlags     = SIH_SWAP_EYES;   // source image has left on left and right on right
pSIH->dwWidth     = gImageWidth * 2; // side-by-side image is twice the display width
pSIH->dwHeight    = gImageHeight;
// Unlock the surface
gImageSrc->UnlockRect();
This is way too low level to be accessed through Ogre, and appears to be only one tiny part of everything needed to get stereo running.

I'm not trying to be difficult, and I readily admit to having no experience with these low-level operations, so maybe I'm misunderstanding. But it seems that a great deal of work and low-level programming is required to make the nvapi and manual rendering work, and Ogre does not seem to give access to these kinds of operations in a way that would fit in with nvapi. Maybe it does, I don't know, but it doesn't appear to.
The only way to avoid stereoization of your offscreen textures would be to do as they said in their papers, by avoiding their heuristics based on the size of render targets :
Yes, I thought about those heuristics. They're no good here, since the textures are always large and never square. They have the same resolution as the player's screen, say 1280x720 or 1920x1080. So that road is out. I can't make them small, obviously, and I can't make them square, again obviously.

BTW, thanks for your responses. I appreciate the info.

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

mkultra333 wrote:But Ogre does not expose low level draw calls, at least not to any great degree, and usually not in a DirectX or OpenGL specific way.
The link to the Ogre forum I gave explains this precisely, i.e. how to cast the Ogre render system to a Direct3D one. Did you read it?
mkultra333 wrote:I'm curious anyway, if you don't use 3D visions automated rendering and use the nvapi instead, how much work do you have to do? A glance at the header, plus some other code snippets, seems to indicate I suddenly become responsible for everything, at the lowest level. Declaring surfaces, tagging them...
The "everything" you talk about amounts to the 15-20 lines given in the PDF I linked to. It's just a copy-paste operation, and then you only need to add the Ogre to Direct3D cast of the render system.
mkultra333 wrote:I'm not trying to be difficult, and I readily admit to having no experience with these low level operations so maybe I'm mis-understanding.
The only low-level operations that are needed are the 15-20 lines in the PDF, you don't even have to understand them.
mkultra333 wrote:But it seems that a great deal of work and low level programming is required to make the nvapi and manual rendering work, and Ogre does not seem to give access to these kinds of operations in a way that would fit in with nvapi.
Ogre like any other 3D engine is perfectly suited for stereoscopic 3D, and the amount of work needed is negligible in this case. I can understand that you don't feel like doing it, but saying that Ogre doesn't allow it is not a good excuse.
mkultra333 wrote:Yes, I thought about those heuristics. They aren't any good, since the textures are always large and always not square. They have the same resolution as the players screen, say 1280x720 or 1920x1080. So that road is out. I can't make them small, obviously, and I can't make them square, again obviously.
If these render targets are always the same size as the screen, it means that they probably need to be stereoized anyway. If that creates an incorrect rendering in stereo then the problem must be elsewhere, most probably in a shader. You need to read the "NVIDIA 3D Vision Automatic" PDF to find out how to modify your shader in this case.

Anyway, it seems that you are more interested in finding counter-arguments for not doing it instead of trying. I've explained all I know about this subject and you can find the rest in the NVIDIA papers; it's up to you.

EDIT: you can also have a look at this paper for a better understanding of the heuristics used by the 3D Vision driver: Beyond 2D Monitor NVIDIA 3D Stereo

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

If the amount of work required to get modern rendering systems using 3D Vision was "negligible" then we'd see 9 out of 10 games fully supporting s3d. We don't.

I'm playing Bulletstorm at the moment. It's awesome, the graphics are excellent and the s3d is the best I've seen so far. But in spite of the fact that it was written specifically with a mind to s3d (for ATI), it still didn't work with 3D Vision at release, the first few patches had serious issues, and even now there are occasional small issues (halos around some lights).

And this is with teams of very experienced programmers working hand in hand with Nvidia.

So while I appreciate the help, your tone of implying that this is just some trivial task, and that I'm just too slack to do it, doesn't really ring true. And a multitude of links to PDFs consisting of slideshows of eyes with Vs coming out of them doesn't really change that point.

The fact is that only a handful of games manage perfect s3d rendering, and deferred shading and post processing are two of the biggest problems. This is all I've been saying. I've made the point that it isn't impossible, but if you insist it's easy and that an indie programmer should be able to knock his Ogre deferred shader into an s3d system with ease, while professional studios with direct Nvidia staff help have problems... well, I guess I'll have to disagree.

If you do write an Ogre deferred renderer that works with Nvidia 3D vision, I'd be only too happy to use it. But try it first, it may not be as trivial as you think.

I'll leave it at that, I don't particularly want a flame war. Just a dose of reality.

----------------------------------------------------------------------------------------------------------------------


As I've said from the start, the shaders often don't work with s3d in deferred systems because of a clash between shaders working in mono space while the textures are in stereo space. I'd certainly prefer to just use the automated Nvidia system, especially if I can avoid having to grapple with nvapi, so I had a closer look at what is causing the issue.

I'm still a bit confused on the issue, but as best I can tell, the problem arises when certain light boxes and glass effects in the game attempt to access info in the G-buffer but use the wrong UVs.

In mono rendering, the objects use their worldViewProjection matrices to get their position, then they get a UV by going x/w and y/w. While this works in mono, in stereo the UVs end up shifted by the amount of eye separation, ending up on the wrong part of the gbuffer. If I can just work out how to get around this, the issues should be fixed. Unfortunately, while the pdfs mention this kind of thing, it's hard to tell if the fix they offer applies to my case, or how I would apply it if it did.

Have an idea that might work, will give it a shot tonight after work. At worst, I'll use my hack as a plan B.
User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Special Nvidia 3D Vision Rendering Technique

Post by cybereality »

I wish I could help more, but this is really beyond my knowledge at this point. From what I understand, it does appear like the Ogre source would need to be patched in order to do this properly. But there could be a clever work-around, sure. I haven't tried to do this myself, though, so maybe it's easier than it looks.
User avatar
Fredz
Petrif-Eyed
Posts: 2255
Joined: Sat Jan 09, 2010 2:06 pm
Location: Perpignan, France
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

mkultra333 wrote:If the amount of work required to get modern rendering systems using 3D Vision was "negligible" then we'd see 9 out of 10 games fully supporting s3d. We don't.
Most game editors and developers simply don't care or are even openly against the idea, it's as simple as that. Some examples:

Article : The Elder Scrolls V: Skyrim Director Not a Fan of 3D Gaming

"Bethesda's Todd Howard, the director of the upcoming The Elder Scrolls V: Skyrim, isn't a fan of the recent 3D gaming fashion, as the developer doesn't like the idea of wearing clunky glasses just to make players feel a bit more immersed into the action displayed on the screen."

Article : Another Game Developer Condemns Stereo-3D Technology

"Chief executive officer of id Software, a subsidiary of Bethesda and the developer of legendary Doom and Quake franchises, said that stereo-3D technology must become more affordable so that to impact video game industry. Before that happens, few game developers will adopt stereo-3D (S3D) for their titles."
mkultra333 wrote:I'm playing Bulletstorm at the moment. It's awesome, the graphics are excellent and the s3d is the best I've seen so far. But in spite of the fact that it was written specifically with a mind to s3d (for ATI), it still didn't work with 3D Vision at release, the first few patches had serious issues, and even now there are occasional small issues (halos around some lights).
Developing AAA games is a very intense activity with very tight schedules, so if S3D is not part of the process right from the start it generally won't be ready at game launch. And they have a lot of more important things to do than adding S3D support when approaching the release date, like fixing last-minute bugs, which has been quite the norm recently, even after launch with the numerous patch releases. Also, S3D at this time is probably considered to have a very low ROI by game editors, so developers most probably don't have much pressure to make their games look good in S3D.
mkultra333 wrote:And this is with teams of very experienced programmers working hand in hand with Nvidia.
Experienced in 3D programming but not in S3D; the whole Crysis 2 reprojection debacle is quite a good illustration of this. And most game developers don't work hand in hand with NVIDIA to add S3D to their titles, that requires a real incentive from their editors.
mkultra333 wrote:So while I appreciate the help, your tone of implying that this is just some trivial task, and I'm just too slack to do it, doesn't really ring true. And a multitude of links to PDFs consisting of slideshows of eyes with Vs coming out of them doesn't really change that point.
Sorry about this, I know I can sound a little bit harsh sometimes. But you were so fast at dismissing everything I said that I felt you didn't really want to add S3D to your game and were only looking for an excuse not to do it.
mkultra333 wrote:The fact is that only a handful of games manage perfect s3d rendering, and that deferred shading and post processing are two of the biggest problems.
Yep these are indeed problems, but the NVIDIA papers explain how to make deferred shading work, and several titles with deferred shading have indeed been released in S3D. For the post-processing effects I also said that if they are inherently in 2D you don't have to deal with them and should only offer a way to deactivate them in S3D via game options.
mkultra333 wrote:I've made the point that it isn't impossible, but if you insist its easy and that an indie programmer should be able to knock his Ogre deferred shader into a s3d system with ease, while professional studios using direct Nvidia staff help have problems... well, guess I'll have to disagree.
Re-read what I said, I didn't say that it was easy for everything, I said it was easy to add support for the NVIDIA API. And if you go this route you don't have to modify any of your shaders.

I also said that if you take the second route (relying on the 3D Vision driver) then you would need to modify your shaders, but I didn't say that it would be easy. Without seeing how they are written there is no way I can know that anyway. I then pointed out that the NVIDIA paper explains how to adapt deferred shading to S3D.
mkultra333 wrote:I had a closer look at what is causing the issue. [...] If I can just work out how to get around this, the issues should be fixed. Unfortunately, while the pdfs mention this kind of thing, it's hard to tell if the fix they offer applies to my case, or how I would apply it if it did.
Sorry to sound like "that's what I've been saying from the start", but that's precisely what I've been saying all along. Either go the simple route by adding support for the NVIDIA API (15-20 lines of code) or go the second route by adapting your shaders so they are supported by the 3D Vision driver, with the aid of the NVIDIA papers.

Good luck with all this, and don't hesitate to keep us informed if you succeed in tackling this. It could make a good reference for indie developers in the future.
Last edited by Fredz on Thu Dec 01, 2011 2:36 am, edited 2 times in total.
User avatar
Fredz
Petrif-Eyed
Posts: 2255
Joined: Sat Jan 09, 2010 2:06 pm
Location: Perpignan, France
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

cybereality wrote:I wish I could help more, but this is really beyond my knowledge at this point. From what I understand, it does appear like the Ogre source would need to be patched in order to do this properly. But there could be a clever work-around, sure. But I haven't tried to do this myself, so maybe its easier than it looks.
There are indeed work-arounds and no patches to the Ogre sources are needed, that's precisely what we've been talking about.
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Sorry to sound like "that's what I've been saying from the start", but that's precisely what I've been saying all along. Either go the simple route by adding support for the NVIDIA API (15-20 lines of code) or go the second route by adapting your shaders so they are supported by the 3D Vision driver, with the aid of the NVIDIA papers.
The reason I keep raising counter-arguments is essentially because the fixes you've suggested just don't fit my problem. I have a deferred shader, and the pdfs and nvapi give a solution to a problem that arises with deferred shaders. Thing is, the problem they describe isn't mine.

Here's the nvidia problem description and solution, taken from the pdfs. I'll specifically quote http://developer.nvidia.com/sites/defau ... omatic.pdf, though several other pdfs refer to the same issue.
DEFERRED RENDERERS
Deferred Renderers suffer the unprojection problem described in Post Processing, but to a more extreme degree because unprojection is even more common in deferred renderers.
There are two main approaches to fixing this problem. The first solution is exactly the same as described in Post Processing on page 21.
The second solution is to simply skip unprojection altogether by storing the world-space position directly in the G-buffer. This solution, while trivial, is not usually practical due to space and bandwidth constraints.


The problem they describe is "unprojection". This is when you get a fragment world space position by using a depth value and a screen position value such as UV. This fails in stereo because the screen position has been shifted.

The simple solution is to just store the world pos in the GBuffer. If your GBuffer is already full though, you can't do that.

The more complex solution is to install nvapi, get the stereo separation and convergence from nvapi and feed them to the shader, and then use that additional information to correct the unprojection step. The linked pdf gives the shader code.

As far as I can see, these are the only two solutions to deferred shading problems given in any of the pdfs. And the only use for nvapi, as far as stereo deferred shading problems go, is that you can use it to tell your shaders the separation and convergence. If there is some other solution in those pdfs, then my apologies for missing it.

So, here's the thing... I don't do stereo unprojection, and I already store full world pos in my gbuffer. (Actually viewspace position, but since it's pre-projection it doesn't get touched by the nvidia automated stereo processes, and I can convert it to worldspace easily if I want.) In other words, I can see nothing here that helps my situation. You keep telling me the answer's simple, the answer's right there, but I cannot see it. Perhaps there's some little snippet I've missed, and if you point that out I'll be thrilled. But as it stands, neither the shader fixes offered in the pdfs nor adding nvapi will furnish any solution to my problems.

So if it looks like I'm trying to shoot down all your solutions, it's because they do not appear to be solutions. Not to my shader problems.

However, this is not to say that I'm not highly appreciative of your attempts to help. I'd certainly rather get a handful of suggestions that haven't worked than be ignored completely! And the information might help me recognize the solution when I finally do stumble over it.

-----------------------------------------------------------------------------------------------------

Speaking of solutions, I tried the ideas I had yesterday as soon as I got home, but they didn't work. However, a closer look at just exactly what was going wrong has given me another possible line of attack. I was too tired last night, but I'll have another shot tonight.

I noticed that the misalignment seems to be exactly double the separation. This leads me to believe the malfunction is occurring because the stereo effect is being applied twice to some parts of the rendering. And I think I know how this is occurring.

First the main scene gets rendered to the gbuffer. Then light boxes and some glass effects get added, and finally spotlights get added. Oddly the spotlights, while still deferred, are drawing just fine, but the light boxes and glass effects are misaligned. It occurred to me that the spotlights are drawn using fullscreen quads, while the light boxes and glass are real geometry, but all of them read from the gbuffer.

My guess is that the spotlight reads from the right part of the gbuffer because it has a w of 1. It's at screen depth, and the UV it comes up with for accessing the stereo gbuffer is basically a mono UV. But the light boxes and glass are rendered in the stereo 3D world, and the UVs they come up with to access the gbuffer are also stereo. That's a stereo UV accessing a stereo gbuffer... stereo plus stereo = stereo x 2!

So what might work is to feed in a second "mono" worldviewprojection matrix to the light boxes and glass, separate from the stereo worldviewprojection matrix, just for the purpose of working out where in the gbuffer to read from. That just might work, and isn't too tricky to implement.
User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Special Nvidia 3D Vision Rendering Technique

Post by cybereality »

But if you already have 2 fully rendered stereo views, can't you just pass these to the Nvidia API and be done with it?
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

(By two stereo views, I assume you mean a rendered left view on one texture and a rendered right view on another.)

AFAIK, no. All the textures needed get created as stereo textures by the nvidia stereo drivers automatically, and there's very little control over this. From earlier in the thread:
mkultra333:

Could you give an example of how I would achieve the same effect as the registry setting

StereoTextureEnable 0

using the nvapi? Don't worry about describing sessions etc, their pdf explains that, but what kind of call would I issue to turn off stereo textures? If you don't know, that's fine, I'm just curious how complicated it might be, and their documentation looks a bit sparse in the pdf, and dense and complex in the header file.

Fredz

You can't use the NVIDIA API to do that since this registry entry is used by the 3D Vision driver. As I said, these are two completely different things and are mutually exclusive.

You could still try to access the registry directly from your application by using appropriate Win32 system calls. I've done that in the past for an inventory application, but it was for XP and I'm not sure it works the same way (or at all) on Vista/7. Anyway, that would still be a hack, and it can be somewhat dangerous to fiddle with the registry from an application in case of bugs.

The only way to avoid stereoization of your offscreen textures would be to do as they said in their papers, by avoiding their heuristics based on the size of render targets:
"Automatic duplication is based on the surface size :
- Surfaces equal or larger than back buffer size are duplicated
- Square surfaces are NOT duplicated
- Small surfaces are NOT duplicated
- Heuristic defined by driver profile setting (Consult documentation for fine tuning)"
The hack I mentioned in the opening post is the only way I know of getting around this.

Fredz may know better though, he's actually used nvapi while I'm just going by looking at it. I have noticed stuff about tagging textures as left or right, so I could be wrong.
User avatar
Fredz
Petrif-Eyed
Posts: 2255
Joined: Sat Jan 09, 2010 2:06 pm
Location: Perpignan, France
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by Fredz »

mkultra333 wrote:(By two stereo views, I assume you mean a rendered left view on one texture and a rendered right view on another.) AFAIK, no. All the textures needed get created as stereo textures by the nvidia stereo drivers automatically, and there's very little control over this.
I incorrectly thought that when using the NVAPI the 3D Vision driver was automatically aware of this and didn't do the stereoization, but it seems I was wrong, sorry about that.

There are some functions in the NVAPI that deal with the registry, you can at least create a default profile for your game, but I didn't see anything to directly modify registry values like StereoTextureEnable. The options seem to be limited to separation, convergence and frustum adjust mode; I guess other options are only available to registered developers under an NDA.

You may have a look at the NVAPI_Reference_Developer.chm file in the NVAPI zip file for more information about the different available functions: http://developer.nvidia.com/nvapi

To sum it up, I think you've got these options:

1) rely on the 3D Vision driver and correct your shaders so they are correctly rendered in S3D.
That's probably the most sensible route since it'll normally make your game compatible with any stereo driver without any dependency on them. The exception is if you need to do some unprojections when there is no other solution, but that may not be needed from what you said.

2) create the two images yourself and use the NVAPI to show them in stereo with 3D Vision
This solution doesn't need any modification to your shaders, but you need to generate the two views, use the 15-20 lines of code so they are rendered in S3D and cast the Ogre render system to a Direct3D one. But you need to find a way to modify the StereoTextureEnable registry entry.

3) rely on the 3D Vision driver without modification of your shader code
That's probably the easiest solution and the one that you use already. But you need to find a solution for the registry modification, preferably one that doesn't need user intervention.

To modify the registry you have these options :
- let the users do it manually, which is how you do it now but is quite a burden;
- use a .reg file with the correct parameters that you execute from your game, but user modifications will be lost;
- do it programmatically using functions of the Windows API, but you must be careful with this. You need to find your registry entry or create it if it doesn't exist and put sensible values for StereoTextureEnable and other parameters (convergence, separation, frustum adjust mode, etc.).
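For the .reg route, something like the following; the key path and profile name here are assumptions based on where 3D Vision era drivers commonly kept per-game settings, so verify them against your own driver install before shipping anything:

```reg
Windows Registry Editor Version 5.00

; Hedged example only: confirm the exact key path and value names against
; your installed driver before using this.
[HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\Stereo3D\GameConfigs\mygame]
"StereoTextureEnable"=dword:00000000
```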
mkultra333 wrote:Fredz may know better though, he's actually used nvapi while I'm just going by looking at it. I have noticed stuff about tagging textures as left or right, so I could be wrong.
I've never used this API, only Ogre as I said. All I know about 3D Vision is what I've been reading in the docs and on several forums. You can find some interesting threads about this here for example :
- [DIY] homemade NVIDIA 3dVision interface code;
- Nvidia StereoAPI problem;
- fantastic! I've gotten my first 3d vision demo running!!.

To be exhaustive, here is a list of all papers I found about stereo 3D programming (from AMD, NVIDIA and Sony):
- http://askyl.free.fr/index.php?option=c ... ut=default
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Tried my "pipe in another wvp matrix for the uv" idea last night, alas, didn't work. Since the special Nvidia transforms happen after the vertex shader, there's not really anything I can do to get around it within the vertex shader.

However, I had another idea that worked. I fortunately had a free channel in my gbuffer, so I stored a screen UV position there during the main Gbuffer render. Then when later renders read back from the Gbuffer, they also read that original UV. They compare it to their own calculated UV, and the difference gives the offset needed to fix the unwanted misalignment.

This should fix up the problem with the glass. Unfortunately it just revealed a deeper, more serious stereoscopic flaw in the small, deferred point lights.

There are two ways you can render a deferred light. You can either render it using a quad at screen depth, or you can render it using some kind of geometry out in the 3D world. I use both: a fullscreen quad for the deferred spotlights and small cubes for the small point lights. The spotlights render fine, and always have. But now the cube lights, though getting the right UV info for accessing the gbuffer, are rendering in stereo as strange cubes of lit area. Their "cube-ness" distorts surrounding geometry, giving the impression of eating into the background like a sort of faded-in, cut-out cube.

This is a different ballpark of stereoscopic error; I doubt I have any chance of getting those kinds of lights to work with the standard nv 3D Vision system. This is a pity, because this kind of geometry deferred light has the option of not just doing point lights, but also line lights (like a fluoro lamp) or plane lights (like a disco floor), while the quad method is really only efficient for point lights or spotlights.

(I have heard of methods of doing quad line lights, I think they do it in MW3 or one of the recent titles, but they use a complex tiling method that really is beyond me.)

So now I think I have basically 2 options, more or less the same as the options Fredz outlined above.

1. Stick with my experimental rendering method from the opening post. This removes all barriers and is already set to go. Anything I can render in mono will come out in stereo as well. But it's a bit of a hack, non-standard. It may or may not be adaptable to other s3d systems like TriDef, I don't know.

2. Redo my point light method so that it uses quads instead of geometry. This requires a moderately serious rewrite of part of the engine, and means a lot of lighting methods I was going to use will no longer be an option. On the plus side, it'll be compatible with 3D Vision automatic mode, and maybe has a better chance of working with things like tridef, although again I don't really know.

Hmm. Damn. I did really want to have it run in s3d automatic mode. But I really wanted line and plane lights as well. Decisions, decisions...
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Got it!

I was thinking that if only there was a way to get 3D geometry to act like 2D sprites, I could solve my problems. I'd spotted something in the pdfs about controlling 2D sprite depth using the w component. Turns out I could use the same method to crunch my small deferred light boxes down from 3D stereo objects to 2D mono objects. This eliminates all stereo separation, which is what has been causing the problems.

Normally the vertex shader would create the 3d version of the vertex like this:

Code: Select all

OUT.p = mul(wvpMat, IN.p);
This vertex ends up with a w component, and between the vertex shader and the fragment shader the Nvidia 3D vision system applies the stereo effect. But it uses the w to work out the stereo depth, so if we mess with OUT.p in the right way we can make it act like a 2D sprite and render at screen depth. The new code is like this:

Code: Select all

OUT.p = mul(wvpMat, IN.p);
if (OUT.p.w > 0.0)
{
    OUT.p.x /= OUT.p.w;
    OUT.p.y /= OUT.p.w;
    OUT.p.z = 0;
    OUT.p.w = 1;
}
Now all stereo separation has been killed, which is just what I need for my deferred light boxes.

This isn't all that's needed though. A new issue now is that the box won't cover the right amount of screen. The left eye will be missing the left edges and the right eye will be missing the right edges, because the box is now drawing right in the middle of where the old left and right boxes used to be drawn. To solve this, I added code that automatically stretches the box to the left and right, no matter what angle you view it from. I also stretched it a little up and down, just to make sure we don't miss any areas, otherwise the light will be clipped. Finally, I also added the correct UVs. This means I no longer need to store additional UV data in the gbuffer like I did in the last post.

Here's a comparison of the old vertex program versus the new one.

Old Vertex Program.

Code: Select all

VOut lamp_default_d3d_vs(
    VIn IN,
    uniform float4x4 wvpMat,
    uniform float4x4 WorldViewXf
    )
{
    VOut OUT;

    OUT.p = mul(wvpMat, IN.p);
    OUT.position = OUT.p;
    OUT.colour = float4(IN.colour, IN.data.w);

    // transform the light position into world view space
    float4 LightPos = float4(IN.p.xyz + IN.data.xyz, 1); // light position relative to vertex position
    OUT.ltpos = mul(WorldViewXf, LightPos);

    OUT.WorldViewPos = mul(WorldViewXf, IN.p);

    return OUT;
}
New Vertex Program

Code: Select all

VOut lampDebug_vs(
    VIn IN,
    uniform float4x4 wvpMat,
    uniform float4x4 WorldViewXf
    )
{
    VOut OUT;

    OUT.p = mul(wvpMat, IN.p);

    // crush the image flat to avoid stereo projection problems
    if (OUT.p.w > 0.0)
    {
        OUT.UV.x = OUT.p.x / OUT.p.w;
        OUT.UV.y = OUT.p.y / OUT.p.w;

        OUT.p.x /= OUT.p.w;
        OUT.p.y /= OUT.p.w;
        OUT.p.z = 0;
        OUT.p.w = 1;
    }

    OUT.colour = float4(IN.colour, IN.data.w);

    // transform the light position into world view space
    float4 LightPos = float4(IN.p.xyz + IN.data.xyz, 1); // light position relative to vertex position
    OUT.ltpos = mul(WorldViewXf, LightPos);

    OUT.WorldViewPos = mul(WorldViewXf, IN.p);

    // stretch out the light box so that we cover a wider area in stereoscopic mode

    // get vector from centre to vertex
    float4 vec;
    vec.xyz = normalize(-IN.data.xyz);

    // rotate the vector: isolate the rotation-only part of WorldViewXf
    float3x3 modelViewRotXf;
    modelViewRotXf[0] = WorldViewXf[0].xyz;
    modelViewRotXf[1] = WorldViewXf[1].xyz;
    modelViewRotXf[2] = WorldViewXf[2].xyz;

    vec.xyz = mul(modelViewRotXf, vec.xyz);
    vec.w = 1;

    if (vec.x > 0.0f)
    {
        OUT.p.x += 0.3;
        OUT.UV.x += 0.3;
    }
    else
    {
        OUT.p.x -= 0.3;
        OUT.UV.x -= 0.3;
    }

    if (vec.y > 0.0f)
    {
        OUT.p.y += 0.1;
        OUT.UV.y += 0.1;
    }
    else
    {
        OUT.p.y -= 0.1;
        OUT.UV.y -= 0.1;
    }

    return OUT;
}
The numbers "0.3" and "0.1" aren't necessarily precisely correct, they're just what I've used for experimenting.

So far testing indicates this does the job. Which is great, all I have to do is change a couple of vertex programs. And it means I get the best of both worlds, it works in nvidia automatic mode AND I can still use line lights and plane lights. Fixing things up now, I'll post some pics once it's all sorted.
User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Special Nvidia 3D Vision Rendering Technique

Post by cybereality »

Great news. Can you take some screenshots?
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

No pics yet, ran into a new problem.

I mistakenly thought I could get correct screenUV values for the glass (and some effects like explosions and fire) but I was wrong. These objects are rendered out in the 3D world, and cannot be converted to 2D sprites like the deferred lights can. And they read the gbuffer to get depth information, to work out if they are in front or behind walls and other solid objects.

Unfortunately, I'm now realizing that only 2D, screen-depth items can read correctly from the gbuffer. 3D objects out in the 3D world will often read from the wrong location. I didn't notice at first, because flat walls and floors don't show obvious faults, but if you look at them from the wrong angle, the UVs needed to access the gbuffer become more and more distorted by perspective. This comes about because of a mismatch between the w component I use in the pixel program to create the UV, and the real w that the nvidia s3d driver has modified between the vertex program and pixel program.

I'm realizing now that there's just no way to get that "real" w value, except perhaps via nvapi, which I really don't want to use. (One, because of the extra hassle, and two, because it's an Nvidia-specific fix rather than a generic fix.)

However, I think I can get around this another way. The reason to use the gbuffer depth value was to cut down on batches: I don't need all the world geometry when rendering glass and explosions, I just need to read the gbuffer to see if there's anything obscuring them. Everything drawn is another batch, and graphics cards like as few batches as possible, so getting rid of the entire world except glass and explosions was a plus.

But in stereo 3D I may have no option but to add back the world, and use the real depth buffer to obscure stuff instead of the gbuffer. That means more batches, which is undesirable, but it seems the most straightforward way to solve the problem. I'll therefore have two rendering modes: a mono one that uses my current renderer setup, and a stereo one that uses the new, slightly less efficient stereoscopic-friendly one.

Edit: Had another idea. Maybe I can render the glass and explosions without any kind of depth test. Then I render a full screen quad (2D) that compares the glass/explo render with the gbuffer depth, and writes out only the locations that it finds the glass not obscured by other objects. Will take an extra render pass, but might be better than doing a full world render.
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Unfortunately getting 3D automatic mode to work with the glass and smoke effects mostly boils down to undoing lots of the batching optimization I've spent ages working on. It takes a non-trivial rewrite of both renderer and shaders. The resulting renderer will be a lot less efficient, but the fact that shadow maps still only get rendered once might offset this, compared to my custom mode.

Might give the user three options.

1. Mono renderer. Most efficient.

2. 3D Automatic renderer. Works as 3D vision compatible games normally do. Possibly less efficient.

3. 3D Custom renderer. Uses my hack from the opening post, possibly more efficient, some fx look better. Requires a reg edit.
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Ok, I think I've got it now. I've been re-ker-jiggering the broken shaders and altering the renderer, so instead of doing depth tests against a value stored in the gbuffer, I just use the real z-testing depth buffer. I lose some efficiency because of this, not sure how much. And there's a couple of effects I had to modify; for instance, fire or smoke behind glass isn't as strongly coloured as it used to be (I had to use a blend add instead of a blend modulate due to not being able to use gbuffer depth tests).

But overall, it isn't too bad. The mirroring effect on the metal (which is really just a cube map) looks different to my custom s3d render, it sits on the surface instead of looking like it has some 3D depth, but this is relatively minor. Interesting experience, doing the conversion. I almost quit several times, but having things run in automatic mode was pretty appealing. I still have a couple of things to tweak and shaders to add, but the core seems to be running fine.

My tips to other people, if they want their deferred shading program to run in automatic mode, would be
1. Don't use unprojection, store full worldpos or viewpos in your gbuffer. I was lucky that I was already doing this. If you don't, you'll have no choice but to use nvapi.
2. Only 2D screen depth renders can accurately read the gbuffer. Any effects that use 3D geometry out in the world that try to read the gbuffer will have to be modified.
3. As with 2 above, one of the main things you might be tempted to read from the gbuffer is depth. This won't work anymore, you'll have to use the proper depth buffer, so you'll probably have to render more geometry than you'd normally want.

Since this is my first shot at any of this, I could be totally wrong on any of the above, so look into it yourself. And finally, here's some new pics of the renderer now.


[Six screenshots of the renderer]
User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Special Nvidia 3D Vision Rendering Technique

Post by cybereality »

Wow! Looks great!
mkultra333
Cross Eyed!
Posts: 159
Joined: Sun Nov 02, 2008 10:18 am
Contact:

Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Noticed some visual anomalies yesterday. Sometimes triangles at the very edge of the screen would not be rendered, and it varied from eye to eye. For instance, if a triangle wasn't visible in the right eye frustum, then it would mistakenly not be rendered in the left eye either, even though it was still in the left eye frustum. It was intermittent and unpredictable, and would flicker even if the view didn't move. Frustum adjustments using Ctrl+F11 didn't help. Updating to the very latest beta drivers, 290.36, didn't help either.

Only seems to happen in SLI mode, and seems to be a driver glitch rather than any faulty culling in Ogre or my game, since it happens at the level of individual triangles rather than complete meshes. Plus it's inconsistent even when the view is kept stable. Often takes 20-30 seconds before the flickering starts, and even then is only occasional. A bit annoying. However, after a little experimentation I found three possible fixes.

1. Turn on VSync at the game startup screen. This seems to cure it without hurting framerate.

2. Set r_maxgpuquery to 2 in bzn.cfg. This is a control specific to my game that forces gpu buffer flushes to prevent more than 2 frames being queued up. It was intended for another purpose, but oddly seems to fix this. By contrast, setting "Maximum pre-rendered frames" to 1 or 2 in the Nvidia Global Controls didn't help, even though it's supposedly doing something very similar. This option hurts the framerate a little.

3. Turn off SLI. This works, but hurts the framerate a lot.

Obviously option 1 is the simplest and best; I mention the other two only for some technical background. I don't really see why any of them should make a difference, but they seem to fix the bug. My guess is that in SLI mode one card isn't getting its frustum triangle culling reset properly, or the conversion to screen clip space sometimes gets incorrectly set. But those are just wild guesses based on it looking as if one eye is culling with a frustum intended for the other. Fortunately the fix is super-simple, so all is well.
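For anyone wanting something like fix 2 in their own engine: the idea is to block the CPU until the GPU is within N frames, typically by issuing an event query per frame (under D3D9 that would be IDirect3DQuery9 with D3DQUERYTYPE_EVENT) and waiting on the oldest one. This sketch, with a hypothetical class name, models just the queueing logic, not the actual engine code:

```cpp
#include <cassert>
#include <deque>

// CPU-side model of a max-frames-in-flight throttle. The real thing would
// issue a GPU event query after each frame's command buffer and spin on the
// oldest query before starting a new frame; here the wait is abstracted
// into a caller-supplied callable so the logic is testable on its own.
class FrameLimiter {
public:
    explicit FrameLimiter(int maxInFlight) : maxInFlight_(maxInFlight) {}

    // Call after submitting a frame's commands to the GPU.
    void frameSubmitted(int frameId) { inFlight_.push_back(frameId); }

    // Call before starting the next frame. waitForQuery stands in for
    // blocking on the oldest frame's event query until the GPU signals it.
    template <class WaitFn>
    void throttle(WaitFn waitForQuery) {
        while (static_cast<int>(inFlight_.size()) > maxInFlight_) {
            waitForQuery(inFlight_.front());
            inFlight_.pop_front();
        }
    }

    int framesInFlight() const { return static_cast<int>(inFlight_.size()); }

private:
    int maxInFlight_;
    std::deque<int> inFlight_;
};
```

With a limit of 2, submitting four frames and then throttling waits on the two oldest queries and leaves two frames queued, which is the behaviour r_maxgpuquery 2 enforces.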
Re: Special Nvidia 3D Vision Rendering Technique

Post by mkultra333 »

Just a quick update, regarding the SLI culling issue mentioned above.

Seems things have changed in the drivers. I noticed the other day that the culling error has re-emerged, and vsync no longer helps. It seems the stereo drivers have now taken over vsync completely, so any options set in the program or the nvidia control panel make no difference.

Fortunately solution 2, setting the maximum number of queued gpu command buffers to 2, fixed the issue. (I wonder, perhaps 3-card SLI systems might need to set it to 3. The number of command buffers might need to match the number of cards when doing SLI stereo rendering.)

Interestingly, I didn't seem to get the issue on all rendered surfaces, only the MRT (Multiple Render Target) surface.

On another topic, it looks like nvidia has shifted around and obfuscated the game specific registry settings, so the hack I started this topic with would now be difficult or impossible to implement. And even if it could be re-implemented, there'd be no guarantee that Nvidia wouldn't scramble the reg settings around again at a later stage. So just as well I decided to do things the "proper" way.