Can You Explain How S-3D Gaming Works?

Don't know the first thing about S-3D? Just bought your first pair of 3D glasses? Trying to figure out what to buy? Post HERE!
Post Reply
User avatar
Neil
3D Angel Eyes (Moderator)
Posts: 6861
Joined: Wed Dec 31, 1969 6:00 pm
Which stereoscopic 3D solution do you primarily use?: S-3D desktop monitor
Contact:

Can You Explain How S-3D Gaming Works?

Post by Neil »

Hello everyone!

I've been thinking about writing a piece about how stereoscopic 3D drivers work, but I am stumped on how to explain it. It would be cool if we could trace the concept of the image from the game, and how it gets to the output of the 3D display.

Could some of our members post here with their explanation of S-3D. Let's see if we can formulate a clear answer so we can put it in a document people will understand.

I'm looking for an explanation right down to the DirectX level (at a layman's level) up to the 3D result. Interaction with the CPU, the GPU, everything. NVIDIA used to have a brief explanation, but they don't do that anymore. It doesn't have to be complicated - just interesting.

Regards,
Neil

Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Which stereoscopic 3D solution do you primarily use?: Head Mounted Display (HMD)
Location: Canada

Post by Tril »

I'll give a short explanation about how it might work on DirectX 9. Forgive the bad writing style. I just don't have Neil's incredible talent at properly formulating what I want to say.

How stereoscopic 3D drivers work

Nowadays, games use vertex shaders and pixel shaders to do all sorts of processing during rendering. One of the tasks of the vertex shaders is to transform all of the vertices to position them where they should be in the 3D world.

To generate a stereoscopic view of the world, you need to generate two views. To do that, you need to change the position of every rendered vertex. The stereoscopic 3D drivers can do this by modifying the game's vertex shaders so that they perform the extra calculations needed to offset the vertices and produce the two views.
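
A common formulation of that per-vertex shift (NVIDIA has published essentially this formula for its automatic stereo drivers) offsets each clip-space vertex horizontally in proportion to its distance from the convergence plane. A minimal sketch in Python with illustrative names (a real driver injects this math into the game's vertex shaders rather than running it on the CPU):

```python
def stereo_clip_position(x, y, z, w, separation, convergence, eye):
    """Shift a clip-space vertex horizontally for one eye.

    eye = -1.0 for the left view, +1.0 for the right view.
    Vertices at the convergence depth (w == convergence) don't move,
    so they appear at screen depth; nearer and farther vertices shift
    in opposite directions for the two eyes.
    """
    return (x + eye * separation * (w - convergence), y, z, w)
```

Running the same formula once per eye over every vertex is what yields the two views.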

The stereoscopic 3D drivers need to treat the HUD and the post-processing differently from the rest of the scene. You don't want their vertices to move from their originally intended positions. There are many ways to achieve that goal; see the second post of this thread for some of them.

Problems can arise in some cases, such as with the medium- and high-quality shadows in Crysis. Instead of using the usual shadow rendering techniques, this game renders its shadows with polygons. That breaks the generic way stereoscopic 3D drivers modify shaders: parts of the shadows that are normally off-screen get rendered on screen, and those parts stretch. Fixing this in the stereoscopic 3D drivers is difficult, because the programmers have to write a special fix for that specific game, and they have to understand every detail of the method used to draw the shadows before they can modify the shaders the game uses for them.

There are many versions of vertex shaders (1.0, 2.0, 2.x, 3.0) and they all need to be supported equally well by the stereoscopic 3D drivers for any game to work properly in S-3D.

Once the two views are generated, the stereoscopic 3D drivers do some processing on the left and right rendered frames to output them in the format supported by the display device.
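
As an example of that last step, one simple transmission format is half-width side-by-side, where each view is squeezed into one half of a normal-sized frame. A rough NumPy sketch (the function name is mine, and dropping every other column is a crude downsample; real drivers filter properly):

```python
import numpy as np

def pack_side_by_side(left, right):
    """Pack two full-size views into one half-width side-by-side frame.

    Keeps every other column of each view so the packed frame has the
    same width as a single original frame; the display hardware (or TV)
    then unpacks and stretches each half back out for its eye.
    """
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)
```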

_______

Pageflipping for shutter glasses with CRT monitor support

This is something that is hard to add to stereoscopic 3D drivers (but not impossible). You need to display the left and right views alternately at a high refresh rate such as 120 Hz. The problem is that in DirectX, there's no way to display faster than you render using the available functions. This feature was probably never requested of Microsoft for inclusion in DirectX, because it isn't needed when you don't use pageflipping (unconfirmed; this is my hypothesis). There are surely ways to force the video card to output at a rate faster than the rendering rate. It's up to the stereoscopic 3D driver programmers to figure out how.
Last edited by Tril on Mon May 19, 2008 11:02 am, edited 1 time in total.
CPU : Intel i7-7700K
RAM : 32 GB ram
Video card : GeForce GTX 980 Ti
OS : Windows 10
Display : Samsung UN40JU7500 Curved 40-Inch UHD TV with shutter glasses
HMD : Oculus Rift


User avatar
Neil
3D Angel Eyes (Moderator)
Posts: 6861
Joined: Wed Dec 31, 1969 6:00 pm
Which stereoscopic 3D solution do you primarily use?: S-3D desktop monitor
Contact:

Post by Neil »

This is a good start. Already better than I could have explained it with my magical tongue. :P

What about the relationship between the CPU and the GPU? Are the stereo calculations done before the GPU processes the result? Is it all done in the GPU?

NVIDIA used to have a diagram of the relationship between the hardware components and the stereo driver explaining the frame buffer, etc. - but it is no more. Anyone have it? Maybe an explanation of how the components relate to each other?

Regards,
Neil

dreamingawake
Two Eyed Hopeful
Posts: 61
Joined: Wed Feb 04, 2009 9:58 am

Re: Can You Explain How S-3D Gaming Works?

Post by dreamingawake »

Hm, while I cannot contribute much on the factual end of the process I just wanted to say
that it's very interesting, and this is a great idea.

It would be neat to gather as much data as you can now about how S3D is done, and then
compare that to, say, 5 years in the future, and see if things are done any differently.

Do you imagine there will be differences 5 years from now?

Will we still even have the x86 architecture?

I know this isn't part of the computational mechanics behind S3D, but a MAJOR component
of it is simply... our brains!

It still amazes me that when I'm viewing S3D content, the image I'm seeing is actually being produced by
my two eyes and my brain! The S3D image doesn't exist on the monitor/screen itself; it's merely an illusion,
a hallucination almost. The fact of the matter is, we are half of the equation when it comes
to how S3D is possible!

Once, I tried to get my friend who has a lazy eye to use 3D Vision. She couldn't see the 3D.

Funny how we are just imitating with technology what our bodies already naturally do... lol

User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11394
Joined: Sat Apr 12, 2008 8:18 pm
Which stereoscopic 3D solution do you primarily use?: S-3D desktop monitor

Re: Can You Explain How S-3D Gaming Works?

Post by cybereality »

Well I don't know exactly how each driver developer has implemented their software, but I will try to give an explanation in layman's terms.

Basically, all modern games since the original PlayStation are rendered using 3D polygons, as opposed to 2D sprites like in the Nintendo era. So by their very nature, the depth information is calculated within the 3D engine. However, because games are programmed to work with flat 2D monitors, the full 3D scene must be converted into a 2D image. This process is known as rasterization, and it has been used in every modern game since DOOM was released on the PC (or Spear of Destiny if you want to get technical). After a 3D scene has been rasterized, it is simply a 2D snapshot, similar to a photograph of the real world. The 3D information is lost forever. So in order to preserve the full depth of the scene, a 3D driver must do its magic right before the scene is rasterized.

The basic concept is simple. In a standard game the scene is rendered from one viewpoint, a single camera. For true stereoscopic 3D you need two viewpoints, one for each eye, so the scene must be rendered from two virtual cameras. What the driver does is take the existing camera and offset its position slightly to the left (for the left-eye view) and then slightly to the right (for the right-eye view). These two renders can be stored in memory temporarily before they are converted to a hardware-specific transmission format. Once this conversion is complete, a user with the proper hardware sees two independent views, one for each eye, and thus gets a stereoscopic experience. Although pixels are not truly coming out of the screen, when your mind melds the two viewpoints together you get an experience of true depth, just as you would when naturally looking at a real-life location. This explanation may gloss over some of the finer details of the stereoscopic driver implementations, but I think it paints a picture anyone can understand.
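
That camera-offset idea can be sketched in a few lines of Python (illustrative names; in practice the driver patches the game's view and projection matrices rather than recomputing positions like this):

```python
def stereo_eye_positions(cam_pos, right_vec, eye_separation):
    """Offset a single camera position into left/right eye positions.

    cam_pos and right_vec are 3-tuples; right_vec points along the
    camera's local "right" axis and is assumed to be unit length.
    Each eye sits half the separation away from the original camera.
    """
    half = eye_separation / 2.0
    left = tuple(p - half * r for p, r in zip(cam_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(cam_pos, right_vec))
    return left, right
```

The scene is then rendered once from each returned position to produce the two views.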

User avatar
xhonzi
Binocular Vision CONFIRMED!
Posts: 273
Joined: Wed Mar 31, 2010 3:35 pm
Which stereoscopic 3D solution do you primarily use?: S-3D Projector Setup
Location: Thornton, CO USA

Re: Can You Explain How S-3D Gaming Works?

Post by xhonzi »

I didn't think most games were "truly" rendering two separate frames, but that Nvidia/ATI was keying in on the "z-buffer" data to understand how far to "shift" parts of the rendered frame for a 3D effect.

Is that not true?

User avatar
Fredz
Petrif-Eyed
Posts: 2255
Joined: Sat Jan 09, 2010 2:06 pm
Which stereoscopic 3D solution do you primarily use?: LCD shutter glasses
Location: Perpignan, France
Contact:

Re: Can You Explain How S-3D Gaming Works?

Post by Fredz »

No, that's not how it works. If you used only the Z-buffer information, you would know the depth of the pixels visible from that single viewpoint, but you'd be unable to determine the color of pixels that are visible only to the left or the right eye.

I'm currently writing a stereoscopic driver for Linux/OpenGL. Here is a mix of what I'm trying to do and what I think I've understood from reading OpenGL forums and NVIDIA papers:

At the start of the game:

Code: Select all

- use DLL injection to intercept OpenGL/Direct3D calls
- identify the game from its executable filename
- read the game profile settings (separation, convergence, etc.)
- intercept window creation:
  - change the graphics mode's refresh rate to the one specified in the driver settings
  - use the same resolution and color depth as the ones specified in the call
During execution of the game:

Code: Select all

- intercept all camera-related calls:
  - read the camera matrix
  - compute two camera matrices:
     - use an off-axis projection
     - use the specified separation and convergence
- duplicate the back buffer for each eye
- intercept all render target creation calls:
  - identify view-dependent render targets using a heuristic:
     - exclude square surfaces and small surfaces, use game profile settings, etc.
     - keep surfaces the same size as or larger than the back buffer
  - duplicate all view-dependent render targets for each eye
- intercept every draw call:
  - render each call twice:
    - use the camera matrices previously computed
    - once for each duplicated back buffer and render target
    - calculate and apply stereo separation in each vertex shader
- intercept every call related to buffer swap:
    - don't render the scene
In a separate thread, continuously:

Code: Select all

- send a signal to the glasses for the left eye
- flip the back buffer for the left eye to the front buffer
- wait for vertical retrace if in page flipping mode
- send a signal to the glasses for the right eye
- flip the back buffer for right eye to the front buffer
- wait for vertical retrace
Signals for the glasses, depending on the selected eye:

Code: Select all

- page flipping mode:
  - set the DDC SDA line to 0 or 1
  - display a 1/4 or 3/4 white/blue line at the bottom of the screen
  - send a command to the USB port (GeForce 3D Vision)
- dual screen:
  - select the first or second screen
- anaglyph:
  - mask the red channel or the blue and green channels for red/cyan glasses
  - similar method for yellow/blue and green/magenta glasses

User avatar
iondrive
Sharp Eyed Eagle!
Posts: 367
Joined: Tue Feb 10, 2009 8:13 pm
Which stereoscopic 3D solution do you primarily use?: LCD shutter glasses

Re: Can You Explain How S-3D Gaming Works?

Post by iondrive »

hi all,

z-buffer-based shifting:
regarding horizontal shifting based on z-buffer data, I think that's how "virtual 3D" works in the TriDef driver. You can tell whether a driver does this: stand close to the outside corner of a building with your nose to the wall, so that one eye sees around the corner and the other eye only sees the wall. If only z-buffer-based shifting were used, you would have some missing image data in one eye. The effect with this technique can be very good or very bad depending on the game and settings; you just have to try it with a game to find out. It's a good backup in case normal methods don't work. I would also like to see it implemented for screenshots, so your 3D screenshot viewer could adjust separation.
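
That missing-data problem is easy to show with a toy one-scanline version of z-buffer-based shifting (pure Python sketch, names are mine; `None` marks pixels the synthesized view has no source data for):

```python
def zbuffer_shift(colors, depths, max_shift):
    """Synthesize one eye's scanline by shifting pixels based on depth.

    colors is a list of pixel values; depths is a matching list in
    [0.0, 1.0] where 0.0 is nearest, and nearer pixels shift further.
    Entries left as None are disocclusion holes: spots this eye can
    see that the single source view has no data for.
    """
    width = len(colors)
    out = [None] * width
    out_depth = [float("inf")] * width
    for x in range(width):
        shift = int(round(max_shift * (1.0 - depths[x])))
        nx = x + shift
        # keep the nearest pixel when several land on the same spot
        if 0 <= nx < width and depths[x] < out_depth[nx]:
            out[nx] = colors[x]
            out_depth[nx] = depths[x]
    return out
```

A near object in the middle of the scanline shifts away and leaves holes behind it, which is exactly the missing image data you'd notice in the corner-of-a-building test.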

Regarding explaining 3d settings/gaming:

The shoebox analogy:
No analogy is perfect, but I think it's pretty easy for a beginner to get the basic idea if you say that separation controls the depth of the box and convergence controls how close it is to your face. Graphics would help for those who don't know the word "diorama". The analogy isn't perfect because your eyes get a different perspective as you bring something closer to your face (increasing apparent depth), but for the most part it's still a good analogy for beginners. Perhaps it's worthwhile to note that the separation and convergence controls work a little differently between iZ3D and nVidia, but I think the analogy is still pretty good for either, although I guess nVidia now uses the word "depth" instead of "separation" like the old-school drivers did.

The dual photograph analogy:
If you take a photograph with two cameras, one camera for each eye, that's analogous to the rendering process of a 3D scene. If you could increase the distance between your eyes, that would be like increasing separation, and it results in greater perceived depth. Increasing separation too much forces you too far cross-eyed, so this is a case where more is not always better; it's good to point out to newbies that you don't want too much separation or depth. If you show each photograph to the corresponding eye, you should see a stereo-3D image, but you can shift the photographs left and right in opposite directions to change the overall depth, like moving the shoebox closer or further away. This is what the convergence controls do, although nVidia does things a little differently. In the normal case, the content of the photos doesn't change when you adjust convergence. This is not true with nVidia.

Concluding:
So you see, this approach covers rendering and display theory with the photograph analogy, and the effect on the viewer's perception with the shoebox analogy. I really think the best short explanation is a combination of the shoebox diorama analogy and the dual photograph analogy. Beyond that, a newbie needs advice about how to adjust things, although some of that could go in a separate thread so as not to overwhelm people. I would recommend different strategies for different kinds of games, but it often helps to find a line on the ground going off into the distance: it spotlights the crossover point with your glasses off so you can set things more easily. OK, you know all that already. Otherwise, I'd like to compliment you on a nice and thorough explanation with good screenshots to help. Great job.

--- iondrive ---

Marcelino777
One Eyed Hopeful
Posts: 1
Joined: Sun Mar 01, 2020 10:58 am
Contact:

Re: Can You Explain How S-3D Gaming Works?

Post by Marcelino777 »

Hi all, this is very interesting and a great idea. Thank you all for the information.

Post Reply

Return to “I'm New To Stereoscopic 3D!”