I'm using a Nitrogen6x board with an ov5640 camera (MIPI).
The camera does not use the standard v4l/v4l2 interface, but we can stream video through its driver (mfw_v4l) using GStreamer:
gst-launch mfw_v4lsrc ! autovideosink
I want to use the camera in OpenCV by calling it via GStreamer (GStreamer inside OpenCV). I asked a question about calling GStreamer inside OpenCV here, and this is the follow-up.
If I enable GStreamer support, the check in the source code passes, but OpenCV still tries to use the standard V4L/V4L2 path for the camera instead of GStreamer, which is what I want to change. The section that calls GStreamer is in cap_gstreamer.cpp:
CvCapture* cvCreateCapture_GStreamer( int type, const char* filename )
{
    CvCapture_GStreamer* capture = new CvCapture_GStreamer;

    if( capture->open( type, filename ))
        return capture;

    delete capture;
    return 0;
}
I guess this is the section I should work on to somehow point to the camera's driver. ("type" here is probably a number related to the driver (as defined in precomp.hpp), but what is the "filename"?)
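To make the two arguments concrete, here is a hedged sketch of how this internal entry point could be fed a pipeline description. The CV_CAP_GSTREAMER_FILE constant is my assumption about the type values in precomp.hpp, and cvCreateCapture_GStreamer is internal to highgui rather than something application code would normally call:

// Hypothetical call inside highgui, only to illustrate the two arguments:
// "type" picks the source flavour (V4L, V4L2, or a user-supplied pipeline),
// while "filename" carries a device/file path or, in the pipeline case, a
// full gst-launch style description that ends in an appsink element.
CvCapture* capture = cvCreateCapture_GStreamer(
    CV_CAP_GSTREAMER_FILE,   // assumed constant from precomp.hpp
    "mfw_v4lsrc ! ffmpegcolorspace ! video/x-raw-rgb ! appsink" );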
Any suggestions about how to access the camera via GStreamer would be helpful and appreciated. Thanks!
Looks like we can call the camera using a proper GStreamer pipeline like below:
VideoCapture cap("mfw_v4lsrc ! ffmpegcolorspace ! video/x-raw-rgb ! appsink");
As the camera output is in YUV, we need to convert it to RGB to pass the frames to OpenCV. This (the ffmpegcolorspace element plus the video/x-raw-rgb caps) is how OpenCV makes sure it gets an RGB colorspace.
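For completeness, a minimal capture loop built around that pipeline might look like the sketch below. It assumes OpenCV was built with the GStreamer backend enabled and that the GStreamer 0.10 element names used above (mfw_v4lsrc, ffmpegcolorspace) exist on the Nitrogen6x BSP:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // The whole string is handed to OpenCV's GStreamer backend; the pipeline
    // must end in appsink so OpenCV can pull the converted RGB buffers.
    cv::VideoCapture cap("mfw_v4lsrc ! ffmpegcolorspace ! video/x-raw-rgb ! appsink");
    if (!cap.isOpened())
    {
        std::cerr << "Failed to open the camera through GStreamer" << std::endl;
        return 1;
    }

    cv::Mat frame;
    for (;;)
    {
        if (!cap.read(frame) || frame.empty())   // grab the next frame as an ordinary cv::Mat
            break;
        cv::imshow("ov5640", frame);
        if (cv::waitKey(30) >= 0)                // stop on any key press
            break;
    }
    return 0;
}

Whether the string is treated as a pipeline or as a plain filename depends on which backend OpenCV picks at runtime, so checking cap.isOpened() as above is a quick way to confirm the GStreamer path is actually being used.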