How to save the data from two cameras without affecting their image acquisition speed?

Date: 2023-02-28

Problem Description

I am using a multispectral camera to collect data. One stream is near-infrared and the other is color. It is not two cameras; a single camera obtains the two different kinds of images at the same time. There are API functions I can use, such as J_Image_OpenStream. The two core pieces of code are shown below. The first opens the two streams (they actually come from one sample, which I have to use, although I am not entirely clear on their meaning), sets the saving paths of the two AVI files, and starts the acquisition.

 // Open stream
 retval0 = J_Image_OpenStream(m_hCam[0], 0, reinterpret_cast<J_IMG_CALLBACK_OBJECT>(this), reinterpret_cast<J_IMG_CALLBACK_FUNCTION>(&COpenCVSample1Dlg::StreamCBFunc0), &m_hThread[0], (ViewSize0.cx*ViewSize0.cy*bpp0)/8);
if (retval0 != J_ST_SUCCESS) {
    AfxMessageBox(CString("Could not open stream0!"), MB_OK | MB_ICONEXCLAMATION);
    return;
}
TRACE("Opening stream0 succeeded
");
retval1 = J_Image_OpenStream(m_hCam[1], 0, reinterpret_cast<J_IMG_CALLBACK_OBJECT>(this), reinterpret_cast<J_IMG_CALLBACK_FUNCTION>(&COpenCVSample1Dlg::StreamCBFunc1), &m_hThread[1], (ViewSize1.cx*ViewSize1.cy*bpp1)/8);
if (retval1 != J_ST_SUCCESS) {
    AfxMessageBox(CString("Could not open stream1!"), MB_OK | MB_ICONEXCLAMATION);
    return;
}
TRACE("Opening stream1 succeeded
");

const char *filename0 = "C:\\Users\\shenyang\\Desktop\\test0.avi";
const char *filename1 = "C:\\Users\\shenyang\\Desktop\\test1.avi";
int fps = 10;   // frames per second
int codec = -1; // -1 pops up a codec selection dialog (Windows)

writer0 = cvCreateVideoWriter(filename0, codec, fps, cvSize(1296,966), 1);
writer1 = cvCreateVideoWriter(filename1, codec, fps, cvSize(1296,964), 1);

// Start Acquisition
retval0 = J_Camera_ExecuteCommand(m_hCam[0], NODE_NAME_ACQSTART);
retval1 = J_Camera_ExecuteCommand(m_hCam[1], NODE_NAME_ACQSTART);


// Create two OpenCV named Windows used for displaying "BGR" and "INFRARED" images
cvNamedWindow("BGR");
cvNamedWindow("INFRARED");

The other piece is the two stream callback functions; they look very similar.

void COpenCVSample1Dlg::StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo)
{
if (m_pImg0 == NULL)
{
    // Create the Image:
    // We assume this is a 8-bit monochrome image in this sample
    m_pImg0 = cvCreateImage(cvSize(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY), IPL_DEPTH_8U, 1);
}

// Copy the data from the Acquisition engine image buffer into the OpenCV Image object
memcpy(m_pImg0->imageData, pAqImageInfo->pImageBuffer, m_pImg0->imageSize);

// Display in the "INFRARED" window
cvShowImage("INFRARED", m_pImg0);

frame0 = m_pImg0;
cvWriteFrame(writer0, frame0);

}

void COpenCVSample1Dlg::StreamCBFunc1(J_tIMAGE_INFO * pAqImageInfo)
{
if (m_pImg1 == NULL)
{
    // Create the Image:
    // We assume this is a 8-bit monochrome image in this sample
    m_pImg1 = cvCreateImage(cvSize(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY), IPL_DEPTH_8U, 1);
}

// Copy the data from the Acquisition engine image buffer into the OpenCV Image object
memcpy(m_pImg1->imageData, pAqImageInfo->pImageBuffer, m_pImg1->imageSize);

// Display in the "BGR" window
cvShowImage("BGR", m_pImg1);

frame1 = m_pImg1;
cvWriteFrame(writer1, frame1);
}

The question is: if I do not save the AVI files, as in

/*writer0 = cvCreateVideoWriter(filename0, codec, fps, CvSize(1296,966), 1);
writer1 = cvCreateVideoWriter(filename1, codec, fps, CvSize(1296,964), 1);*/
//cvWriteFrame(writer0, frame0);
//cvWriteFrame(writer1, frame1);

then in the two display windows the captured pictures look alike, which means they are synchronous. But if I write the data to the AVI files, the two kinds of pictures differ in size and both are rather large, and this slows down the two streams' acquisition speed, so the captured pictures become non-synchronous. I cannot create a buffer huge enough to store all the data in memory, and the I/O device is rather slow. What should I do? Thank you very much.

Some class variables are:

 public:
FACTORY_HANDLE  m_hFactory;             // Factory Handle
CAM_HANDLE      m_hCam[MAX_CAMERAS];    // Camera Handles
THRD_HANDLE     m_hThread[MAX_CAMERAS]; // Stream handles
char            m_sCameraId[MAX_CAMERAS][J_CAMERA_ID_SIZE]; // Camera IDs

IplImage        *m_pImg0 = NULL;        // OpenCV Images
IplImage        *m_pImg1 = NULL;        // OpenCV Images

CvVideoWriter* writer0;
IplImage *frame0;
CvVideoWriter* writer1;
IplImage *frame1;

BOOL OpenFactoryAndCamera();
void CloseFactoryAndCamera();
void StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo);
void StreamCBFunc1(J_tIMAGE_INFO * pAqImageInfo);
void InitializeControls();
void EnableControls(BOOL bIsCameraReady, BOOL bIsImageAcquiring);

Recommended Answer

The correct approach to recording video without frame drops is to isolate the two tasks (frame acquisition and frame serialization) so that they don't influence each other (specifically, so that fluctuations in serialization don't eat away time from capturing the frames, which has to happen without delay to prevent frame loss).

This can be achieved by delegating the serialization (encoding the frames and writing them into a video file) to separate threads, and using some kind of synchronized queue to feed the data to the worker threads.

The following is a simple example showing how this could be done. Since I have only one camera, and not the kind you have, I will simply use a webcam and duplicate the frames, but the general principle applies to your scenario as well.

In the beginning we have some includes:

#include <opencv2/opencv.hpp>

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
// ============================================================================
using std::chrono::high_resolution_clock;
using std::chrono::duration_cast;
using std::chrono::microseconds;
// ============================================================================

<小时>

同步队列

第一步是定义我们的同步队列,我们​​将使用它与编写视频的工作线程进行通信.


Synchronized Queue

The first step is to define our synchronized queue, which we will use to communicate with the worker threads that write the video.

The primary functions we need are the ability to:

  • push a new image into the queue
  • pop an image from the queue, waiting while the queue is empty
  • cancel all pending pops once we're done

We use a std::queue to hold the cv::Mat instances, and a std::mutex to provide synchronization. A std::condition_variable is used to notify the consumer when an image has been inserted into the queue (or the cancellation flag has been set), and a simple boolean flag is used to signal cancellation.

Finally, we use the empty struct cancelled as an exception thrown from pop(), so we can cleanly terminate the worker by cancelling the queue.

// ============================================================================
class frame_queue
{
public:
    struct cancelled {};

public:
    frame_queue();

    void push(cv::Mat const& image);
    cv::Mat pop();

    void cancel();

private:
    std::queue<cv::Mat> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    bool cancelled_;
};
// ----------------------------------------------------------------------------
frame_queue::frame_queue()
    : cancelled_(false)
{
}
// ----------------------------------------------------------------------------
void frame_queue::cancel()
{
    std::unique_lock<std::mutex> mlock(mutex_);
    cancelled_ = true;
    cond_.notify_all();
}
// ----------------------------------------------------------------------------
void frame_queue::push(cv::Mat const& image)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    queue_.push(image);
    cond_.notify_one();
}
// ----------------------------------------------------------------------------
cv::Mat frame_queue::pop()
{
    std::unique_lock<std::mutex> mlock(mutex_);

    while (queue_.empty()) {
        if (cancelled_) {
            throw cancelled();
        }
        cond_.wait(mlock);
        if (cancelled_) {
            throw cancelled();
        }
    }

    cv::Mat image(queue_.front());
    queue_.pop();
    return image;
}
// ============================================================================
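As a quick illustration of how this queue is meant to be used (a minimal sketch of my own, not part of the original answer), here is a single producer handing one frame to a consumer thread and then cancelling:

// Minimal usage sketch (hypothetical) -- relies on the includes and the
// frame_queue definition above; place inside e.g. main().
frame_queue q;

std::thread consumer([&q] {
    try {
        for (;;) {
            cv::Mat image(q.pop()); // blocks until a frame arrives or cancel()
            // ... encode/process the image here ...
        }
    } catch (frame_queue::cancelled&) {
        // The queue was cancelled; exit cleanly.
    }
});

q.push(cv::Mat::zeros(480, 640, CV_8UC3)); // producer hands over one frame
q.cancel();      // wake the consumer and tell it we're done
consumer.join();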

<小时>

存储工作者

下一步是定义一个简单的storage_worker,它将负责从同步队列中取出帧,并将它们编码成一个视频文件,直到队列被取消.


Storage Worker

The next step is to define a simple storage_worker, which will be responsible for taking the frames from the synchronized queue, and encode them into a video file until the queue has been cancelled.

我添加了简单的计时,因此我们对帧编码所花费的时间有了一些了解,以及对控制台的简单记录,因此我们对程序中发生的事情有了一些了解.

I've added simple timing, so we have some idea about how much time is spent encoding the frames, as well as simple logging to console, so we have some idea about what is happening in the program.

// ============================================================================
class storage_worker
{
public:
    storage_worker(frame_queue& queue
        , int32_t id
        , std::string const& file_name
        , int32_t fourcc
        , double fps
        , cv::Size frame_size
        , bool is_color = true);

    void run();

    double total_time_ms() const { return total_time_ / 1000.0; }

private:
    frame_queue& queue_;

    int32_t id_;

    std::string file_name_;
    int32_t fourcc_;
    double fps_;
    cv::Size frame_size_;
    bool is_color_;

    double total_time_;
};
// ----------------------------------------------------------------------------
storage_worker::storage_worker(frame_queue& queue
    , int32_t id
    , std::string const& file_name
    , int32_t fourcc
    , double fps
    , cv::Size frame_size
    , bool is_color)
    : queue_(queue)
    , id_(id)
    , file_name_(file_name)
    , fourcc_(fourcc)
    , fps_(fps)
    , frame_size_(frame_size)
    , is_color_(is_color)
    , total_time_(0.0)
{
}
// ----------------------------------------------------------------------------
void storage_worker::run()
{
    cv::VideoWriter writer(file_name_, fourcc_, fps_, frame_size_, is_color_);

    try {
        int32_t frame_count(0);
        for (;;) {
            cv::Mat image(queue_.pop());
            if (!image.empty()) {
                high_resolution_clock::time_point t1(high_resolution_clock::now());

                ++frame_count;
                writer.write(image);

                high_resolution_clock::time_point t2(high_resolution_clock::now());
                double dt_us(static_cast<double>(duration_cast<microseconds>(t2 - t1).count()));
                total_time_ += dt_us;

                std::cout << "Worker " << id_ << " stored image #" << frame_count
                    << " in " << (dt_us / 1000.0) << " ms" << std::endl;
            }
        }
    } catch (frame_queue::cancelled& /*e*/) {
        // Nothing more to process, we're done
        std::cout << "Queue " << id_ << " cancelled, worker finished." << std::endl;
    }
}
// ============================================================================

<小时>

处理

最后,我们可以把这一切放在一起.


Processing

Finally, we can put this all together.

我们首先初始化和配置我们的视频源.然后我们创建两个 frame_queue 实例,每个图像流一个.我们通过创建 storage_worker 的两个实例来遵循这一点,每个队列一个.为了让事情变得有趣,我为每个设置了不同的编解码器.

We begin by initializing and configuring our video source. Then we create two frame_queue instances, one for each stream of images. We follow this by creating two instances of storage_worker, one for each queue. To keep things interesting, I've set a different codec for each.

下一步是创建和启动工作线程,它将执行每个storage_workerrun()方法.准备好消费者后,我们可以开始从相机捕获帧,并将它们提供给 frame_queue 实例.如上所述,我只有一个源,所以我将同一帧的副本插入到两个队列中.

Next step is to create and start worker threads, which will execute the run() method of each storage_worker. Having our consumers ready, we can start capturing frames from the camera, and feed them to the frame_queue instances. As mentioned above, I have only single source, so I insert copies of the same frame into both queues.

NB: I need to use the clone() method of cv::Mat to do a deep copy; otherwise I would be inserting references to the single buffer that OpenCV's VideoCapture uses for performance reasons. That would mean the worker threads would be getting references to that single image, with no synchronization for access to this shared image buffer. You need to make sure this does not happen in your scenario as well.
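To make the pitfall concrete, here is a minimal sketch of my own showing the wrong and the right way to feed the queues (same names as in the code below):

// Inside the capture loop:
cv::Mat image;
capture.read(image);

// WRONG: both queues would receive cv::Mat headers sharing one pixel buffer,
// which VideoCapture may overwrite on the next read() -- a data race.
//     queue[0].push(image);
//     queue[1].push(image);

// RIGHT: clone() gives each consumer its own copy of the pixels, so the
// workers never touch the capture buffer.
queue[0].push(image.clone());
queue[1].push(image.clone());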

Once we have read the appropriate number of frames (you can implement any other kind of stop condition you desire), we cancel the work queues and wait for the worker threads to complete.

Finally, we write some statistics about the time required for the different tasks.

// ============================================================================
int main()
{
    // The video source -- for me this is a webcam, you use your specific camera API instead
    // I only have one camera, so I will just duplicate the frames to simulate your scenario
    cv::VideoCapture capture(0);

    // Let's make it decent sized, since my camera defaults to 640x480
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 1920);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
    capture.set(CV_CAP_PROP_FPS, 20.0);

    // And fetch the actual values, so we can create our video correctly
    int32_t frame_width(static_cast<int32_t>(capture.get(CV_CAP_PROP_FRAME_WIDTH)));
    int32_t frame_height(static_cast<int32_t>(capture.get(CV_CAP_PROP_FRAME_HEIGHT)));
    double video_fps(std::max(10.0, capture.get(CV_CAP_PROP_FPS))); // Some default in case it's 0

    std::cout << "Capturing images (" << frame_width << "x" << frame_height
        << ") at " << video_fps << " FPS." << std::endl;

    // The synchronized queues, one per video source/storage worker pair
    std::vector<frame_queue> queue(2);

    // Let's create our storage workers -- let's have two, to simulate your scenario
    // and to keep it interesting, have each one write a different format
    std::vector <storage_worker> storage;
    storage.emplace_back(std::ref(queue[0]), 0
        , std::string("foo_0.avi")
        , CV_FOURCC('I', 'Y', 'U', 'V')
        , video_fps
        , cv::Size(frame_width, frame_height)
        , true);

    storage.emplace_back(std::ref(queue[1]), 1
        , std::string("foo_1.avi")
        , CV_FOURCC('D', 'I', 'V', 'X')
        , video_fps
        , cv::Size(frame_width, frame_height)
        , true);

    // And start the worker threads for each storage worker
    std::vector<std::thread> storage_thread;
    for (auto& s : storage) {
        storage_thread.emplace_back(&storage_worker::run, &s);
    }

    // Now the main capture loop
    int32_t const MAX_FRAME_COUNT(10);
    double total_read_time(0.0);
    int32_t frame_count(0);
    for (; frame_count < MAX_FRAME_COUNT; ++frame_count) {
        high_resolution_clock::time_point t1(high_resolution_clock::now());

        // Try to read a frame
        cv::Mat image;
        if (!capture.read(image)) {
            std::cerr << "Failed to capture image.
";
            break;
        }

        // Insert a copy into all queues
        for (auto& q : queue) {
            q.push(image.clone());
        }        

        high_resolution_clock::time_point t2(high_resolution_clock::now());
        double dt_us(static_cast<double>(duration_cast<microseconds>(t2 - t1).count()));
        total_read_time += dt_us;

        std::cout << "Captured image #" << frame_count << " in "
            << (dt_us / 1000.0) << " ms" << std::endl;
    }

    // We're done reading, cancel all the queues
    for (auto& q : queue) {
        q.cancel();
    }

    // And join all the worker threads, waiting for them to finish
    for (auto& st : storage_thread) {
        st.join();
    }

    if (frame_count == 0) {
        std::cerr << "No frames captured.
";
        return -1;
    }

    // Report the timings
    total_read_time /= 1000.0;
    double total_write_time_a(storage[0].total_time_ms());
    double total_write_time_b(storage[1].total_time_ms());

    std::cout << "Completed processing " << frame_count << " images:
"
        << "  average capture time = " << (total_read_time / frame_count) << " ms
"
        << "  average write time A = " << (total_write_time_a / frame_count) << " ms
"
        << "  average write time B = " << (total_write_time_b / frame_count) << " ms
";

    return 0;
}
// ============================================================================

<小时>

控制台输出

运行这个小示例,我们在控制台中得到以下日志输出,以及磁盘上的两个视频文件.


Console Output

Running this little sample, we get the following log output in the console, as well as the two video files on the disk.

注意:由于这实际上编码比捕获快得多,我在 storage_worker 中添加了一些等待以更好地显示分离.

NB: Since this was actually encoding a lot faster than capturing, I've added some wait into the storage_worker to show the separation better.
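The exact wait isn't shown in the answer; one plausible way to simulate a slower encoder (my assumption, not the original code) is a short sleep in storage_worker::run(), right after writer.write(image):

// Hypothetical slowdown inside storage_worker::run(), after writer.write(image);
// worker 1 sleeps longer than worker 0 so the two timings diverge visibly.
std::this_thread::sleep_for(std::chrono::milliseconds(50 * (id_ + 1)));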

Capturing images (1920x1080) at 20 FPS.
Captured image #0 in 111.009 ms
Captured image #1 in 67.066 ms
Worker 0 stored image #1 in 94.087 ms
Captured image #2 in 62.059 ms
Worker 1 stored image #1 in 193.186 ms
Captured image #3 in 60.059 ms
Worker 0 stored image #2 in 100.097 ms
Captured image #4 in 78.075 ms
Worker 0 stored image #3 in 87.085 ms
Captured image #5 in 62.061 ms
Worker 0 stored image #4 in 95.092 ms
Worker 1 stored image #2 in 193.187 ms
Captured image #6 in 75.074 ms
Worker 0 stored image #5 in 95.093 ms
Captured image #7 in 63.061 ms
Captured image #8 in 64.061 ms
Worker 0 stored image #6 in 102.098 ms
Worker 1 stored image #3 in 201.195 ms
Captured image #9 in 76.074 ms
Worker 0 stored image #7 in 90.089 ms
Worker 0 stored image #8 in 91.087 ms
Worker 1 stored image #4 in 185.18 ms
Worker 0 stored image #9 in 82.08 ms
Worker 0 stored image #10 in 94.092 ms
Queue 0 cancelled, worker finished.
Worker 1 stored image #5 in 179.174 ms
Worker 1 stored image #6 in 106.102 ms
Worker 1 stored image #7 in 105.104 ms
Worker 1 stored image #8 in 103.101 ms
Worker 1 stored image #9 in 104.102 ms
Worker 1 stored image #10 in 104.1 ms
Queue 1 cancelled, worker finished.
Completed processing 10 images:
  average capture time = 71.8599 ms
  average write time A = 93.09 ms
  average write time B = 147.443 ms

<小时>

可能的改进

目前,在序列化根本无法跟上相机生成新图像的速度的情况下,无法防止队列太满.为队列大小设置一些上限,并在推送帧之前检查生产者.您需要决定如何处理这种情况.


Possible Improvements

Currently there is no protection against the queue getting too full in the situation when the serialization simply can't keep up with the rate the camera generates new images. Set some upper limit for the queue size, and check in the producer before you push the frame. You will need to decide how exactly you want to handle this situation.
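For instance, a size-capped push that drops the oldest frame could look like the following (a sketch under the assumption that frame_queue gains a max_size_ member; dropping frames is only one of several possible policies):

// Hypothetical bounded push -- assumes frame_queue is extended with a
// `std::size_t max_size_;` member initialized in its constructor.
void frame_queue::push(cv::Mat const& image)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    if (max_size_ > 0 && queue_.size() >= max_size_) {
        queue_.pop(); // drop the oldest frame; alternatively block or report an error
    }
    queue_.push(image);
    cond_.notify_one();
}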
