DLCO Descriptor Matching with OpenCV
In 2014 the VGG group at Oxford published a paper on learning local feature descriptors with convex optimisation (DLCO). Since OpenCV 3.2 the method has been implemented in the extra (contrib) modules and exposed through an API: it can generate DLCO descriptors, and with those descriptors you can perform feature matching for object recognition. Details of the descriptor learning method are available here:
http://www.robots.ox.ac.uk/~vgg/software/learn_desc/
The page provides the trained descriptor models, the training data, and the C++ reference implementation for download.
Part 2: OpenCV Program Demo
The VGG (DLCO) descriptor in OpenCV supports the following variants:
VGG_120 = 100,
VGG_80 = 101,
VGG_64 = 102,
VGG_48 = 103
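These are the enum values of the cv::xfeatures2d::VGG class; the number in each name is the descriptor length. Both SURF and VGG live in the xfeatures2d contrib module, so the snippets below assume a setup roughly like the following (my sketch, assuming an OpenCV 3.x build with opencv_contrib and the non-free option enabled for SURF; the original article does not spell this out):
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF and VGG (DLCO descriptors)
using namespace cv;
using namespace cv::xfeatures2d;
using namespace std;
// A variant other than the default is selected by passing the enum value, e.g.
Ptr<VGG> vgg_descriptor = VGG::create(VGG::VGG_64); // 64-float descriptors instead of VGG_120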
The default output is the 120-element descriptor, i.e. VGG_120. Implementing object detection and matching with DLCO in OpenCV roughly breaks down into the following steps:
1. Load the images
Mat box = imread("D:/vcprojects/images/box.png");
Mat box_scene = imread("D:/vcprojects/images/box_in_scene.png");
imshow("box image", box);
imshow("scene image", box_scene);
2. Keypoint detection (SURF)
Ptr<SURF> detector = SURF::create();
int minHessian = 400;
vector<KeyPoint> keypoints_1, keypoints_2;
detector->setHessianThreshold(minHessian);
detector->detect(box, keypoints_1);
detector->detect(box_scene, keypoints_2);
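SURF is used here only as a keypoint detector; the same keypoints are handed to the VGG descriptor in the next step. Printing the counts is a quick sanity check (illustrative, not in the original listing):
printf("keypoints: box=%d, scene=%d\n",
       (int)keypoints_1.size(), (int)keypoints_2.size());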
3. Descriptor generation (DLCO)
Ptr<VGG> vgg_descriptor = VGG::create();
Mat descriptors_1, descriptors_2;
vgg_descriptor->compute(box, keypoints_1, descriptors_1);
vgg_descriptor->compute(box_scene, keypoints_2, descriptors_2);
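With the default VGG_120 settings each keypoint yields a row of 120 floats (type CV_32F), which is what FlannBasedMatcher expects in the next step. A quick way to confirm the output shape (an illustrative check, not in the original code):
// rows = number of keypoints, cols = descriptor length (120 for VGG_120)
printf("descriptors_1: %d x %d, float type: %d\n",
       descriptors_1.rows, descriptors_1.cols,
       descriptors_1.type() == CV_32F);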
4. Feature matching for object recognition
// compute matches between the two descriptor sets
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match(descriptors_1, descriptors_2, matches);
double max_dist = 0; double min_dist = 100;
// find the largest and smallest match distances
for (int i = 0; i < descriptors_1.rows; i++)
{
double dist = matches[i].distance;
if (dist < min_dist) min_dist = dist;
if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
// keep only the best matches; smaller distance means a better match
std::vector< DMatch > good_matches;
for (int i = 0; i < descriptors_1.rows; i++)
{
if (matches[i].distance <= min(2 * min_dist, 1.5))
{
good_matches.push_back(matches[i]);
}
}
// draw the final matches
Mat img_matches;
drawMatches(box, keypoints_1, box_scene, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (size_t i = 0; i < good_matches.size(); i++)
{
//-- Get the keypoints from the good matches
obj.push_back(keypoints_1[good_matches[i].queryIdx].pt);
scene.push_back(keypoints_2[good_matches[i].trainIdx].pt);
}
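// Added guard (not in the original listing): findHomography needs at least
// 4 point correspondences and fails with fewer, so bail out early when the
// distance filter above left too few good matches.
if (good_matches.size() < 4)
{
    printf("not enough good matches (%d) to estimate a homography\n",
           (int)good_matches.size());
    return -1;
}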
Mat H = findHomography(obj, scene, RANSAC);
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f(0, 0); obj_corners[1] = Point2f((float)box.cols, 0);
obj_corners[2] = Point2f((float)box.cols, (float)box.rows); obj_corners[3] = Point2f(0, (float)box.rows);
std::vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line(img_matches, scene_corners[0] + Point2f(box.cols, 0), scene_corners[1] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[1] + Point2f(box.cols, 0), scene_corners[2] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[2] + Point2f(box.cols, 0), scene_corners[3] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[3] + Point2f(box.cols, 0), scene_corners[0] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
//-- Show detected matches
imshow("Good Matches & Object detection", img_matches);
Original images: the box image and the scene image.
Feature matching result: the good matches drawn between the two images, with the detected object outlined in green in the scene.