I'm trying to detect whether a photo shows a predefined form template filled in with data.
I'm new to image processing and OpenCV, but my first attempt is to use FlannBasedMatcher and compare the count of keypoints detected.
Is there a better way to do this?
[images: filled-form.jpg and form-template.jpg]
import numpy as np
import cv2
from matplotlib import pyplot as plt

MIN_MATCH_COUNT = 10

img1 = cv2.imread('filled-form.jpg', 0)    # queryImage
img2 = cv2.imread('template-form.jpg', 0)  # trainImage

# Initiate SIFT detector (note: in OpenCV >= 4.4 this is cv2.SIFT_create())
sift = cv2.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# store all the good matches as per Lowe's ratio test.
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

if len(good) > MIN_MATCH_COUNT:
    print("ALL GOOD!")
else:
    print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
    matchesMask = None
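One possible refinement of the count-based check above, sketched after OpenCV's feature-matching tutorial (kp1, kp2 and good are the variables from the code above): estimate a homography with RANSAC and count the geometrically consistent inliers instead of the raw matches.

src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
# findHomography needs at least 4 point pairs; RANSAC flags inliers in the mask
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
inliers = int(mask.sum()) if mask is not None else 0
print("%d/%d matches are geometrically consistent" % (inliers, len(good)))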
I think that using SIFT and a keypoints matcher is the most robust approach to this problem, and it should work fine with many different form templates. However, since the SIFT algorithm is patented, here is another approach that should work well too:
1. Binarize both images using threshold with the THRESH_OTSU flag, then invert the resulting binary Mats with the bitwise_not function.
2. For the two binary Mats from Step 1: find the largest contour and use approxPolyDP to approximate it to a quadrilateral. In my code, this is done inside getQuadrilateral().
3. Compute the homography between the two quadrilaterals with findHomography.
4. Warp the photo's binary Mat using warpPerspective (and the homography Mat computed previously).
5. Dilate the template form's binary Mat.
6. Subtract the warped Mat and the dilated template form's binary Mat. This lets you extract the filled-in information. But you can also do it the other way around: Template form - Dilated warped Mat. In this case, the result of the subtraction should be totally black.
7. I would then use mean to get the average pixel value. Finally, if that value is smaller than (let's say) 2, I would assume the form on the photo matches the template form.
Here is the C++ code; it shouldn't be too hard to translate to Python (a rough sketch of such a translation follows the P.S. below) :)
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <numeric>
#include <vector>

using namespace cv;
using namespace std;

// Approximate the largest external contour of a binary image to a quadrilateral.
vector<Point> getQuadrilateral(Mat & grayscale)
{
    vector<vector<Point>> contours;
    findContours(grayscale, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    // sort contour indices by number of points, largest contour first
    vector<int> indices(contours.size());
    iota(indices.begin(), indices.end(), 0);
    sort(indices.begin(), indices.end(), [&contours](int lhs, int rhs) {
        return contours[lhs].size() > contours[rhs].size();
    });

    vector<vector<Point>> polygon(1);
    approxPolyDP(contours[indices[0]], polygon[0], 5, true);
    if (polygon[0].size() == 4) // we have found a quadrilateral
    {
        return(polygon[0]);
    }
    return(vector<Point>());
}

int main(int argc, char** argv)
{
    Mat templateImg, sampleImg;
    templateImg = imread("template-form.jpg", 0);
    sampleImg = imread("sample-form.jpg", 0);

    // Step 1: Otsu binarization, then inversion
    Mat templateThresh, sampleTresh;
    threshold(templateImg, templateThresh, 0, 255, THRESH_OTSU);
    threshold(sampleImg, sampleTresh, 0, 255, THRESH_OTSU);
    bitwise_not(templateThresh, templateThresh);
    bitwise_not(sampleTresh, sampleTresh);

    // Steps 2-3: the forms' quadrilaterals and the homography between them
    vector<Point> corners_template = getQuadrilateral(templateThresh);
    vector<Point> corners_sample = getQuadrilateral(sampleTresh);
    Mat homography = findHomography(corners_sample, corners_template);

    // Step 4: warp the photo's binary Mat onto the template
    Mat warpSample;
    warpPerspective(sampleTresh, warpSample, homography, Size(templateThresh.cols, templateThresh.rows));

    // Steps 5-6: dilate the template and subtract to keep only the filled-in parts
    Mat element_dilate = getStructuringElement(MORPH_ELLIPSE, Size(8, 8));
    dilate(templateThresh, templateThresh, element_dilate);
    Mat diff = warpSample - templateThresh;

    imshow("diff", diff);
    waitKey(0);
    return 0;
}
I hope it is clear enough! ;)
P.S. This great answer helped me to retrieve the largest contour.
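Since the question is in Python, here is a rough, untested translation sketch of the pipeline above (function and variable names are mine), including the mean-based check from the last step, which the C++ snippet stops just short of:

import numpy as np
import cv2

def get_quadrilateral(binary):
    # largest external contour, approximated to 4 points; [-2] keeps this
    # working across the OpenCV 3 and 4 findContours return signatures
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
    largest = max(contours, key=len)
    polygon = cv2.approxPolyDP(largest, 5, True)
    return polygon.reshape(-1, 2) if len(polygon) == 4 else None

template_img = cv2.imread('template-form.jpg', 0)
sample_img = cv2.imread('sample-form.jpg', 0)

# Step 1: Otsu binarization, then inversion
_, template_bin = cv2.threshold(template_img, 0, 255, cv2.THRESH_OTSU)
_, sample_bin = cv2.threshold(sample_img, 0, 255, cv2.THRESH_OTSU)
template_bin = cv2.bitwise_not(template_bin)
sample_bin = cv2.bitwise_not(sample_bin)

# Steps 2-3: quadrilaterals and the homography between them
corners_template = get_quadrilateral(template_bin)
corners_sample = get_quadrilateral(sample_bin)
assert corners_template is not None and corners_sample is not None
homography, _ = cv2.findHomography(np.float32(corners_sample),
                                   np.float32(corners_template))

# Step 4: warp the photo onto the template
h, w = template_bin.shape
warp_sample = cv2.warpPerspective(sample_bin, homography, (w, h))

# Steps 5-7: the "Template form - Dilated warped Mat" variant, then the mean check
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (8, 8))
warp_dilated = cv2.dilate(warp_sample, kernel)
diff = cv2.subtract(template_bin, warp_dilated)
if cv2.mean(diff)[0] < 2:
    print('The photo matches the template form')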