While working on iOS, I never got around to looking into frame buffer objects in any depth. If you’re like me, you just copy & pasted the multi-sampling resolve code samples from Apple’s documentation and never worried about it again. But to implement the post-processing framework for the 3D modeler project, this wasn’t enough.
The “don’t-you-worry-about-it” Way
On OSX, things are even easier – at least in the beginning. You just specify multi-sampling as part of the NSOpenGLPixelFormat when creating an NSOpenGLView and then happily render away, never having to worry about it again. Everything is done for you…
- (id)initWithFrame:(NSRect)frameRect
{
    NSOpenGLPixelFormatAttribute attr[] = {
        NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, // Only needed for OpenGL 3.2 Core; comment this line out to use the legacy profile.
        NSOpenGLPFANoRecovery,
        NSOpenGLPFAAccelerated,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFAColorSize, 24,
        NSOpenGLPFAAlphaSize, 8,
        NSOpenGLPFADepthSize, 24,
        NSOpenGLPFASupersample,
        NSOpenGLPFASampleBuffers, 1,
        NSOpenGLPFASamples, 4,
        NSOpenGLPFAMultisample,
        0
    };
    NSOpenGLPixelFormat *pix = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
    self = [super initWithFrame:frameRect pixelFormat:pix];
    if ( self != nil )
    {
        // [..]
    }
    return self;
}
However, once you start wondering about rendering to an image (e.g. for saving snapshots) or post-processing effects like depth of field (for which you need the render result as a texture), things get a bit more complicated.
I’ll spare you the mechanics of OpenGL FBOs here and simply refer you to the excellent opengl-tutorial.org. When I started, I already knew this part but there were a couple of consequences that I wasn’t really aware of at the time.
Creating FBOs
There are basically three criteria that matter when creating an FBO: Do you need it just for rendering to, or should it also be available as a texture? Should it be multi-sampled or not? And do you need a depth or stencil buffer?
Let’s start with the code to create a non-multisampled, non-texture-backed FBO:
GLuint createRenderTarget(uint16_t const width, uint16_t const height)
{
    GLuint frameBuffer = 0;
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

    // Color buffer
    GLuint colorBuffer = 0;
    glGenRenderbuffers(1, &colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);

    // Depth buffer
    GLuint depthrenderbuffer;
    glGenRenderbuffers(1, &depthrenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

    GLenum test = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if ( test != GL_FRAMEBUFFER_COMPLETE )
    {
        Log::error("OpenGL", "Failed to create render target");
        return 0;
    }
    return frameBuffer;
}
This creates a new FBO, then creates and attaches a color renderbuffer as well as a depth renderbuffer. Note the GL_DEPTH_COMPONENT24, which requests a 24-bit depth buffer. Also note that I use an RGB color format here. Make sure to check out my post about pre-multiplied alpha if you create a texture with alpha and end up with weird transparency values in your result!
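For completeness, here is a minimal usage sketch (the surrounding code and the 1024×768 size are made-up example values, not taken from the project): rendering into the FBO is simply a matter of binding it, setting the viewport and drawing as usual.
// Minimal usage sketch: render one frame into the FBO created above.
GLuint target = createRenderTarget(1024, 768);

glBindFramebuffer(GL_FRAMEBUFFER, target);
glViewport(0, 0, 1024, 768);              // match the FBO size
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// ... issue the usual draw calls here ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the window's framebuffer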
Texture-Backed
To create a texture-backed FBO instead, we do the following:
GLuint createRenderTargetTexture(uint16_t const width, uint16_t const height)
{
    GLuint frameBuffer = 0;
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

    // The texture we're going to render to
    GLuint renderedTexture;
    glGenTextures(1, &renderedTexture);

    // "Bind" the newly created texture: all future texture functions will modify this texture
    glBindTexture(GL_TEXTURE_2D, renderedTexture);

    // Give an empty image to OpenGL (the last "0")
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);

    // Poor filtering. Needed!
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // The depth buffer
    GLuint depthrenderbuffer;
    glGenRenderbuffers(1, &depthrenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

    // Set "renderedTexture" as our color attachment #0
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);

    // Set the list of draw buffers.
    GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
    glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers

    GLenum test = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if ( test != GL_FRAMEBUFFER_COMPLETE )
    {
        Log::error("OpenGL", "Failed to create render target");
        return 0;
    }
    return frameBuffer;
}
What I haven’t done in this method yet is return the renderedTexture object to the caller as well, which you would of course need to do if you want to use the texture later on.
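One way to do that would be an extra out parameter; alternatively, you can always query the attached texture name back from the FBO itself. This is a small sketch of my own, not code from the project:
// Sketch: read back the name of the texture attached as color attachment 0.
GLuint queryColorTexture(GLuint frameBuffer)
{
    GLint name = 0;
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                          GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME, &name);
    return (GLuint)name;
}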
Multi-Sampling
Creating a multi-sampled FBO works pretty much like the normal one, except for the glRenderbufferStorageMultisample calls:
GLuint createMultiSampledRenderTarget(uint16_t const width, uint16_t const height, uint8_t const samples)
{
    GLuint frameBuffer = 0;
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

    // Color buffer
    GLuint colorBuffer = 0;
    glGenRenderbuffers(1, &colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGB8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);

    // Depth buffer
    GLuint depthrenderbuffer;
    glGenRenderbuffers(1, &depthrenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

    GLenum test = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if ( test != GL_FRAMEBUFFER_COMPLETE )
    {
        Log::error("OpenGL", "Failed to create multi-sampled render target");
        return 0;
    }
    return frameBuffer;
}
Using FBOs
When using an FBO, it matters what you want to use it for: reading or writing.
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferObject);
glBindFramebuffer(GL_READ_FRAMEBUFFER, frameBufferObject);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, frameBufferObject);
I was only aware of the first one, which binds the FBO for both reading and writing. However, sooner or later you will want to read from one FBO and render to another, and the latter two targets let you set the read FBO independently of the draw FBO.
Also of interest is figuring out which FBO is currently bound before changing it:
GLuint frameBufferObject = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, (GLint*)&frameBufferObject);
glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, (GLint*)&frameBufferObject);
glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, (GLint*)&frameBufferObject);
Note: The main window framebuffer (i.e. what your window shows) can be bound by passing 0 as the framebuffer name.
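This is handy for a save-and-restore pattern, e.g. when a helper function needs to render somewhere else temporarily. A sketch of my own (temporaryFbo is just a placeholder):
// Sketch: remember the currently bound draw framebuffer, render into a
// temporary target, then restore the previous binding afterwards.
void withTemporaryDrawFbo(GLuint temporaryFbo)
{
    GLint previousFbo = 0;
    glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &previousFbo);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, temporaryFbo);
    // ... issue draw calls here ...

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, (GLuint)previousFbo);
}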
Blitting FBOs
This is where things get interesting! There are three ways to actually use the content that you rendered to an FBO:
- Use it as an ordinary texture and apply it to some geometry
- Draw it with a rectangle that exactly covers the whole screen (this is how post-processing shaders are applied; see the sketch after this list)
- Blit it to another FBO
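For the second option, a minimal sketch could look like the following; postProcessProgram and fullScreenQuadVAO are placeholders for an already compiled post-processing shader and a screen-covering quad, not something from the code above.
// Sketch: draw an FBO's color texture over the whole screen with a
// post-processing shader. The program and VAO are assumed to exist already.
void drawFullScreenPass(GLuint sourceTexture, GLuint targetFbo,
                        GLuint postProcessProgram, GLuint fullScreenQuadVAO)
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, targetFbo);  // 0 = the window's framebuffer
    glDisable(GL_DEPTH_TEST);                           // we are only shading pixels
    glUseProgram(postProcessProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sourceTexture);        // the texture-backed FBO's color texture
    glBindVertexArray(fullScreenQuadVAO);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);              // 4 vertices spanning the screen
    glBindVertexArray(0);
}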
A blit copies an area of one FBO to another FBO. There is little information to be found about this, but some posts on the net suggest that on some graphics cards it’s the same as drawing a screen-filling overlay rectangle, some say it’s faster, some say it’s slower. I guess you have to measure it yourself for your situation…
Blitting is done with the following call:
glBlitFramebuffer(src_x1, src_y1, src_x2, src_y2,
                  dst_x1, dst_y1, dst_x2, dst_y2,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
What I found a bit weird (and what cost me an hour or two of debugging until I noticed it) is that the call does not take a rectangle (as in x/y/width/height) but two corner points that span the area, once for the source and once for the destination framebuffer. In my case I had put a width value into dst_x2 which happened to be exactly the same as dst_x1, resulting in the typical 1282 GL_INVALID_OPERATION error with no apparent reason for what was going wrong. I was under the assumption that something related to multi-sampling caused the blit to fail when in fact I simply hadn’t read the function signature carefully…
GL_COLOR_BUFFER_BIT can be combined with further bits to also copy the depth or stencil buffer, and GL_NEAREST is the filtering to be used if the size or multi-sampling of the FBOs does not match. Check out the OpenGL man page on glBlitFramebuffer for details of the restrictions on which filtering type is allowed in which situation.
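As a concrete sketch of my own (not from the project), copying the full contents of one FBO into another of the same size could look like this; note that the four values per rectangle are corner coordinates, not width/height, and blitting the depth buffer assumes both FBOs have matching depth formats.
// Sketch: copy the full color and depth contents of one FBO into another
// FBO of the same size.
void blitFboContents(GLuint sourceFbo, GLuint destinationFbo, GLsizei width, GLsizei height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sourceFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, destinationFbo);
    glBlitFramebuffer(0, 0, width, height,    // source corners
                      0, 0, width, height,    // destination corners
                      GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
}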
Resolving Multi-Sampling
So finally we get to where it all started for me: assume we have a multi-sampled FBO with our rendered scene in it. How do we show it on screen? The multi-sampled FBO has to be blitted to a normal FBO (i.e. the window’s FBO) so that we end up with one color value per pixel in the window. The NSOpenGLView does this automatically for you if multi-sampling is specified as part of the pixel format attributes. However, you can also do it yourself by NOT specifying multi-sampling in the pixel format and instead creating a multi-sampled FBO yourself. All you have to do is blit it to the window’s FBO when drawing is complete.
Why should you want to do that yourself? The reason is that you only need multi-sampling for the very first FBO you render your scene geometry to. When using post-processing shaders, you use one FBO’s texture to render into another FBO, at least once per post-processing effect. However, for all the intermediate FBOs multi-sampling doesn’t really make sense, as there is no geometry involved, just shaders running over pixels.
So as a rule of thumb, do the multi-sample resolve yourself and have post-processing work on non-multi-sampled FBOs. Only use multi-sampling for the very first FBO the scene geometry is rendered to!
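Putting it together, the resolve step at the end of a frame could look roughly like this; multiSampledFbo, width and height are placeholders, 0 is the window’s framebuffer, and the source and destination rectangles must have identical dimensions for a multi-sample resolve (so the FBO is assumed to match the window size).
// Sketch: resolve the multi-sampled scene FBO into the window's framebuffer.
void resolveToWindow(GLuint multiSampledFbo, GLsizei width, GLsizei height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, multiSampledFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);            // the window
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
In a real post-processing chain you would resolve into a non-multi-sampled, texture-backed FBO first, run the effects on that, and only then bring the final result to the window.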
P.S.: Since I keep forgetting myself, this is the code snippet formatter I used for this post.