Saving an OpenGL context as video output
I am currently trying to save an animation made in OpenGL to a video file. I have tried using OpenCV's VideoWriter, but to no avail. I have successfully generated a snapshot and saved it as a BMP using the SDL library. If I save all snapshots and then generate the video with ffmpeg, that amounts to collecting some 4 GB worth of images, which is not practical. How can I write video frames directly during rendering? Here is the code I use to take a snapshot when I need one:
void snapshot() {
    SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24,
                                             0x000000FF, 0x0000FF00, 0x00FF0000, 0);
    char* pixels = new char[3 * WIDTH * HEIGHT];

    // Rows are tightly packed (3 bytes per pixel), so disable the default
    // 4-byte row alignment before reading back the framebuffer.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // glReadPixels returns rows bottom-up; copy them flipped into the surface.
    for (int i = 0; i < HEIGHT; i++)
        std::memcpy((char*)snap->pixels + snap->pitch * i,
                    pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);

    delete[] pixels;
    SDL_SaveBMP(snap, "snapshot.bmp");
    SDL_FreeSurface(snap);
}
I need the video output. I have discovered that ffmpeg can be used to create videos from C++ code, but I have not been able to figure out the process. Please help!
EDIT: I have tried using the OpenCV CvVideoWriter class, but the program crashes ("segmentation fault") the moment it is declared. Compilation shows no errors, of course. Any suggestions?
SOLUTION FOR PYTHON USERS (requires Python 2.7, python-imaging, python-opengl, python-opencv, and the codecs for the format you want to write to; I am on Ubuntu 14.04 64-bit):
def snap():
    # Read the current framebuffer (a W x H window) as raw RGBA bytes.
    screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
    snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 0)
    # Round-trip through a temporary JPEG so OpenCV can load the frame.
    snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
    load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
    cv2.cv.WriteFrame(videoWriter, load)
Here W and H are the window dimensions (width, height). What is happening is that I am using PIL to convert the raw pixels read by the glReadPixels call into a JPEG image, loading that JPEG into an OpenCV image, and writing it to the video writer. I was having certain issues passing the PIL image directly to the video writer (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is a PIL module; cv2 is a python-opencv module.
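The temporary-JPEG round trip can be avoided with the modern cv2 API by converting the raw glReadPixels buffer straight into the BGR numpy array that cv2.VideoWriter.write() expects. This is a hedged sketch, not the asker's original code; the W/H values and the commented-out writer setup are illustrative.

```python
import numpy as np

# Hypothetical window size for the sketch; in the real program W and H
# come from the OpenGL window.
W, H = 4, 3

def frame_from_raw(raw_rgba, w, h):
    """Convert the raw RGBA bytes returned by glReadPixels into the
    BGR, top-down numpy array that cv2.VideoWriter.write() expects."""
    img = np.frombuffer(raw_rgba, dtype=np.uint8).reshape(h, w, 4)
    img = np.flipud(img)          # glReadPixels rows are bottom-up
    return img[:, :, [2, 1, 0]]   # RGBA -> BGR (alpha dropped)

# In the real render loop (names assumed):
#   writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"XVID"), 30, (W, H))
#   writer.write(frame_from_raw(glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE), W, H))

# Tiny self-check with synthetic data: a pure-red bottom screen row (the
# first row of the raw buffer) must end up as the last array row with
# channel order B, G, R.
raw = bytearray(W * H * 4)
for x in range(W):
    raw[x * 4 + 0] = 255          # R channel of the buffer's first row
frame = frame_from_raw(bytes(raw), W, H)
print(frame[-1, 0].tolist())      # → [0, 0, 255]
```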
Answer
It sounds as though you are using the command-line utility ffmpeg. Rather than using the command line to encode video from a collection of still images, you should use libavcodec and libavformat. These are the libraries upon which ffmpeg is actually built, and they allow you to encode video and store it in a standard stream/interchange format (e.g. RIFF/AVI) without using a separate program.
You probably will not find a lot of tutorials on implementing this, because it has traditionally been the case that people wanted to use ffmpeg to go the other way; that is, to decode various video formats for display in OpenGL. I think this is going to change very soon with the introduction of gameplay video encoding on the PS4 and Xbox One consoles; demand for this functionality will suddenly skyrocket.
The general process is this, however:
- Pick a container format and codec
  - Often one dictates the other (e.g. MPEG-2 + MPEG Program Stream)
- Fill a buffer with your still frames
- Periodically encode the buffer of still frames and write the packets to your output
  - You will do this either when the buffer becomes full or every n-many ms; you might prefer one over the other depending on whether you want to stream your video live or not.
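The buffer-then-encode step above can be sketched independently of any codec library; here the "encoder" is a stand-in callback, and the capacity and all names are illustrative:

```python
from collections import deque

class FrameBuffer:
    """Hypothetical sketch of the buffer-then-encode pattern: frames
    accumulate and are handed to an encoder callback in batches, either
    when the buffer fills or on an explicit flush."""
    def __init__(self, capacity, encode_batch):
        self.capacity = capacity
        self.encode_batch = encode_batch   # would wrap libavcodec calls
        self.frames = deque()

    def push(self, frame):
        self.frames.append(frame)
        if len(self.frames) >= self.capacity:
            self.flush()

    def flush(self):
        if self.frames:
            self.encode_batch(list(self.frames))
            self.frames.clear()

# Demo with a stand-in "encoder" that just records batch sizes.
batches = []
buf = FrameBuffer(capacity=4, encode_batch=lambda fs: batches.append(len(fs)))
for i in range(10):          # ten rendered frames
    buf.push(("frame", i))
buf.flush()                  # end of recording: flush the remainder
print(batches)               # → [4, 4, 2]
```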
One nice thing about this is you do not actually need to write to a file. Since you are periodically encoding packets of data from your buffer of still frames, you can stream your encoded video over a network if you want - this is why codec and container (interchange) format are separate.
Another nice thing is that you do not have to synchronize the CPU and GPU: you can set up a pixel buffer object and have OpenGL copy data into CPU memory a couple of frames behind the GPU. This makes real-time encoding of the video much less demanding; you only have to encode and flush the video to disk or over the network periodically, provided the video latency requirements are not unreasonable. This works very well in real-time rendering, since you have a large enough pool of data to keep a CPU thread busy encoding at all times.
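The couple-of-frames-behind readback amounts to round-robin indexing over a ring of buffers. This sketch simulates only the scheduling, with plain Python values standing in for GL pixel buffer objects; in the real program glReadPixels targets the write slot and glMapBuffer drains the read slot, and the depth of 3 and all names here are illustrative:

```python
NUM_PBOS = 3   # readback depth: the CPU consumes data NUM_PBOS-1 frames late

slots = [None] * NUM_PBOS   # stand-ins for GL pixel buffer objects
encoded = []

def render_frame(i):
    return f"pixels-{i}"    # stand-in for the GPU framebuffer contents

for frame in range(10):
    write = frame % NUM_PBOS            # slot receiving this frame's async readback
    read  = (frame + 1) % NUM_PBOS      # oldest slot: its copy has long finished
    if slots[read] is not None:
        encoded.append(slots[read])     # map the old buffer, hand it to the encoder
    slots[write] = render_frame(frame)  # start this frame's async readback

# The encoder always trails the renderer by NUM_PBOS - 1 frames.
print(encoded[:3])   # → ['pixels-0', 'pixels-1', 'pixels-2']
```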
Encoding frames can even be done in real-time on the GPU, provided there is enough storage for a large buffer of frames (since ultimately the encoded data has to be copied from the GPU to the CPU, and you want to do this as infrequently as possible). Obviously this is not done using ffmpeg; there are specialized libraries that use CUDA / OpenCL / compute shaders for this purpose. I have never used them, but they do exist.
For portability's sake, you should stick with libavcodec and Pixel Buffer Objects for asynchronous GPU->CPU copies. CPUs these days have enough cores that you can probably get away without GPU-assisted encoding if you buffer enough frames and encode in multiple simultaneous threads (this creates added synchronization overhead and increases latency when outputting encoded video), or simply drop frames / lower the resolution (the poor man's solution).
There are a lot of concepts covered here that go well beyond the scope of SDL, but you did ask how to do this with better performance than your current solution. In short, use OpenGL Pixel Buffer Objects to transfer data, and libavcodec for encoding. An example application that encodes video can be found on the ffmpeg libavcodec examples page.