My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do that?
If possible, I want to be able to choose the render area to be of any size, for example 10000x10000.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it will likely contain errors, but it should make the general flow clear:
//Before swapping buffers: copy the pixels into main memory (needs <vector> and <cstdint>)
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);   //select the back buffer as the source for glReadPixels
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly well read the back buffer with the above method, then clear it and draw something totally different before swapping. Technically you can also read the front buffer, but this is often discouraged, as implementations are theoretically allowed to make optimizations that might leave your front buffer containing rubbish.
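Since the end goal here is a file, here is a minimal sketch (not part of the original answer; the helper and file names are illustrative) of getting those pixels onto disk. It assumes the data vector filled by glReadPixels above and writes an uncompressed 32-bit TGA, which conveniently stores BGRA pixels bottom-up, just like glReadPixels returns them:

//Hypothetical helper: dump the BGRA pixels read back above into an uncompressed 32-bit TGA file.
#include <cstdint>
#include <fstream>
#include <vector>

void write_tga(const char* path, int width, int height, const std::vector<std::uint8_t>& bgra)
{
    std::uint8_t header[18] = {};
    header[2]  = 2;                                    //uncompressed true-color image
    header[12] = width  & 0xFF; header[13] = (width  >> 8) & 0xFF;
    header[14] = height & 0xFF; header[15] = (height >> 8) & 0xFF;
    header[16] = 32;                                   //bits per pixel (BGRA)
    header[17] = 8;                                    //8 alpha bits, bottom-left origin
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(header), sizeof(header));
    out.write(reinterpret_cast<const char*>(bgra.data()), static_cast<std::streamsize>(bgra.size()));
}
//Usage, right after the glReadPixels call above: write_tga("frame.tga", width, height, data);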
There are a few drawbacks to this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer in, but it doesn't feel right. Besides that, the front and back buffers are optimized for displaying pixels, not for reading them back. That's where Framebuffer Objects (FBOs) come into play.
Essentially, an FBO lets you create a framebuffer other than the default one (the FRONT and BACK buffers), allowing you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels within OpenGL itself as a texture (e.g. a naive "security camera" in a game); the latter is optimal if you just want to render and read back. With this, the code above would become something like the following. Again, this is pseudo-code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); //GL_RGBA8: BGRA is only a pixel-transfer format, not a renderbuffer format
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
//after drawing: bind the FBO for reading too, since glReadBuffer/glReadPixels use the READ binding
glBindFramebuffer(GL_READ_FRAMEBUFFER,fbo);
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise (a rough sketch follows below). In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
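For completeness, a sketch of that exercise (not part of the original answer; names are illustrative) could attach a texture as the color target and a renderbuffer as depth storage:

//Sketch: render-to-texture with a depth renderbuffer, assuming the same width/height as above.
GLuint fbo, color_tex, depth_buf;

glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenRenderbuffers(1, &depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buf);

if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{ /* handle incomplete framebuffer */ }
//color_tex can now be bound with glBindTexture and sampled like any other texture.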
Finally, you can use pixel buffer objects to make reading pixels asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs, the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//...use pixel_data, then release the mapping:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
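One common way to actually overlap the transfer with useful work, sketched below under the assumption that two PBOs are created and sized exactly like the single pbo above, is to ping-pong between them: issue glReadPixels into one PBO while mapping the one that was filled during the previous frame.

//Sketch: alternate between two PBOs so glMapBuffer touches data requested a frame earlier.
GLuint pbo[2];           //assumed created and sized like 'pbo' in the init code above
int write_idx = 0;

//Each frame, after drawing:
int read_idx = 1 - write_idx;

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[write_idx]);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);    //async copy into this PBO

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[read_idx]);
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); //last frame's pixels, usually ready
if (pixel_data)
{
    //consume pixel_data here (e.g. hand it to the file writer);
    //note the very first frame maps a PBO that holds no pixels yet
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

write_idx = read_idx;    //swap roles for the next frame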
Please also note that I use GL_BGRA everywhere; this is because many graphics cards use it internally as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; in that case you should just use GL_FRAMEBUFFER.
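As a quick sketch of what that means for the FBO example above (same fbo handle assumed):

//OpenGL ES 2.0: a single binding point serves for both drawing and reading.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);   //render into the FBO; glReadPixels also reads from it
//... draw, then glReadPixels ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);     //back to the default framebuffer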