Image Enhancement with OpenCV

This post shows how to use image processing to extract useful information from low-resolution, blurry, or low-contrast images.
Let's walk through the process together:
We start with an image of an LPG cylinder, captured on a conveyor belt running through a warehouse. Our goal is to read the cylinder's batch number so that the count of inspected cylinders can be updated.
Step 1: Import the required libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt
Step 2: Load and display the sample images.
img = cv2.imread('cylinder1.png')
img1 = cv2.imread('cylinder.png')
images = np.concatenate((img, img1), axis=1)
cv2.imshow("Images", images)
cv2.waitKey(0)
cv2.destroyAllWindows()
LPG cylinder images: (a) batch D26, (b) batch C27
The contrast of these images is very poor; the batch number is barely visible. This is a common problem in warehouses with inadequate lighting. Below we will work up to Contrast Limited Adaptive Histogram Equalization (CLAHE) and experiment with different algorithms on the dataset.
Step 3: Convert the images to grayscale
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
Step 4: Compute the histogram of each grayscale image to examine the distribution of intensities.
hist = cv2.calcHist([gray_img], [0], None, [256], [0, 256])
hist1 = cv2.calcHist([gray_img1], [0], None, [256], [0, 256])
plt.subplot(121)
plt.title("Image1")
plt.xlabel('bins')
plt.ylabel("No of pixels")
plt.plot(hist)
plt.subplot(122)
plt.title("Image2")
plt.xlabel('bins')
plt.ylabel("No of pixels")
plt.plot(hist1)
plt.show()

Step 5: Now use cv2.equalizeHist() to equalize the contrast of the grayscale images. cv2.equalizeHist() normalizes the brightness and increases the contrast; note that it operates on single-channel 8-bit images, which is why we converted to grayscale first.
gray_img_eqhist = cv2.equalizeHist(gray_img)
gray_img1_eqhist = cv2.equalizeHist(gray_img1)
hist = cv2.calcHist([gray_img_eqhist], [0], None, [256], [0, 256])
hist1 = cv2.calcHist([gray_img1_eqhist], [0], None, [256], [0, 256])
plt.subplot(121)
plt.plot(hist)
plt.subplot(122)
plt.plot(hist1)
plt.show()
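For intuition about what cv2.equalizeHist() is doing, here is a rough NumPy sketch of the textbook histogram-equalization mapping (an optional illustration, not part of the original article; OpenCV's own rounding may produce slightly different values):

# Build a lookup table from the cumulative distribution of intensities
# and remap the image with it; this is the classic equalization formula.
hist_counts, _ = np.histogram(gray_img.ravel(), bins=256, range=(0, 256))
cdf = hist_counts.cumsum()
cdf_min = cdf[cdf > 0].min()  # CDF value at the first occupied intensity bin
lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
manual_eq = lut[gray_img]     # should look very close to gray_img_eqhist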

Step 6: Display the histogram-equalized grayscale images
eqhist_images = np.concatenate((gray_img_eqhist, gray_img1_eqhist), axis=1)
cv2.imshow("Images", eqhist_images)
cv2.waitKey(0)
cv2.destroyAllWindows()

Histogram-equalized grayscale images
Let's now take a closer look at CLAHE.
Step 7: Contrast Limited Adaptive Histogram Equalization (CLAHE)
This algorithm can be used to improve the contrast of an image. It works by creating several histograms over different regions of the image and using all of them to redistribute the image's lightness values. CLAHE can be applied to grayscale as well as color images. There are two parameters to tune:
1. clipLimit sets the threshold for contrast limiting. The default value is 40.
2. tileGridSize sets the number of tiles in rows and columns. When CLAHE is applied, the image is divided into small blocks called tiles (8 x 8 by default) in order to perform the computation; a sketch showing both parameters set explicitly follows the code below.
clahe = cv2.createCLAHE(clipLimit=40)
gray_img_clahe = clahe.apply(gray_img_eqhist)
gray_img1_clahe = clahe.apply(gray_img1_eqhist)
images = np.concatenate((gray_img_clahe, gray_img1_clahe), axis=1)
cv2.imshow("Images", images)
cv2.waitKey(0)
cv2.destroyAllWindows()
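The snippet above only sets clipLimit. A minimal sketch of how both parameters from the list above can be set explicitly (the values here are illustrative, not taken from the original article):

# Illustrative settings: a lower clip limit and an explicit 8 x 8 tile grid.
clahe_tiled = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
# Applied here to the plain grayscale images; applying it to the already
# equalized images, as the article does above, works the same way.
clahe_images = np.concatenate((clahe_tiled.apply(gray_img), clahe_tiled.apply(gray_img1)), axis=1)
cv2.imshow("CLAHE with explicit tileGridSize", clahe_images)
cv2.waitKey(0)
cv2.destroyAllWindows()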

Step 8: Thresholding techniques
Thresholding is a simple but effective way of separating an image into foreground and background. In the simplest form of thresholding, each pixel in the source image is replaced with a black pixel if its intensity is below a predefined constant (the threshold), and with a white pixel if its intensity is above it. The available threshold types are:
cv2.THRESH_BINARY
cv2.THRESH_BINARY_INV
cv2.THRESH_TRUNC
cv2.THRESH_TOZERO
cv2.THRESH_TOZERO_INV
cv2.THRESH_OTSU
cv2.THRESH_TRIANGLE
Note that cv2.THRESH_OTSU and cv2.THRESH_TRIANGLE are flags rather than standalone types: they compute the threshold automatically and are combined with one of the basic types above. Try changing the threshold (th) and max_val to get different results; a tiny numeric example of the binary rule follows the result captions below.
th = 80
max_val = 255
ret, o1 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_BINARY)
cv2.putText(o1, "Thresh_Binary", (40, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3, cv2.LINE_AA)
ret, o2 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_BINARY_INV)
cv2.putText(o2, "Thresh_Binary_inv", (40, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3, cv2.LINE_AA)
ret, o3 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_TOZERO)
cv2.putText(o3, "Thresh_Tozero", (40, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3, cv2.LINE_AA)
ret, o4 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_TOZERO_INV)
cv2.putText(o4, "Thresh_Tozero_inv", (40, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3, cv2.LINE_AA)
ret, o5 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_TRUNC)
cv2.putText(o5, "Thresh_trunc", (40, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3, cv2.LINE_AA)
ret, o6 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_OTSU)
cv2.putText(o6, "Thresh_Otsu", (40, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3, cv2.LINE_AA)
final = np.concatenate((o1, o2, o3), axis=1)
final1 = np.concatenate((o4, o5, o6), axis=1)
cv2.imwrite("Image1.jpg", final)
cv2.imwrite("Image2.jpg", final1)

Thresh_Binary, Thresh_Binary_inv, Thresh_Tozero

Thresh_Tozero_inv, Thresh_trunc, Thresh_Otsu
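To make the pixel rule described above concrete, here is a tiny optional check on a toy array (not part of the original pipeline), using the same th = 80 and max_val = 255 as the code above:

# A toy 8-bit "image": with THRESH_BINARY, values above th become max_val, the rest become 0.
toy = np.array([[10, 79, 80],
                [81, 150, 255]], dtype=np.uint8)
_, toy_bin = cv2.threshold(toy, 80, 255, cv2.THRESH_BINARY)
print(toy_bin)
# Expected output (80 itself is not strictly greater than the threshold):
# [[  0   0   0]
#  [255 255 255]]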
Step 9: Adaptive thresholding
In the previous step we applied a global threshold with cv2.threshold(). As we can see, the results are not great because the lighting conditions vary across different regions of the image. In such cases you can try adaptive thresholding. In OpenCV, adaptive thresholding is performed by the cv2.adaptiveThreshold() function.
This function applies an adaptive threshold to the src array (an 8-bit single-channel image). The maxValue parameter sets the value assigned to pixels of the dst image that satisfy the condition. The adaptiveMethod parameter selects the adaptive thresholding algorithm to use:
cv2.ADAPTIVE_THRESH_MEAN_C: the threshold T(x, y) is computed as the mean of the blockSize x blockSize neighborhood of (x, y), minus the C parameter.
cv2.ADAPTIVE_THRESH_GAUSSIAN_C: the threshold T(x, y) is computed as a Gaussian-weighted sum of the blockSize x blockSize neighborhood of (x, y), minus the C parameter.
The blockSize parameter sets the size of the neighborhood used to compute the threshold for each pixel; it takes odd values such as 3, 5, 7, and so on.
The C parameter is simply a constant subtracted from the mean or weighted mean (depending on the method chosen with adaptiveMethod). It is usually positive, but it may be zero or negative. A small sanity check of the mean-minus-C rule is sketched after the results below.
gray_image = cv2.imread('cylinder1.png', 0)
gray_image1 = cv2.imread('cylinder.png', 0)
thresh1 = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
thresh2 = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 3)
thresh3 = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 13, 5)
thresh4 = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 4)
thresh11 = cv2.adaptiveThreshold(gray_image1, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
thresh21 = cv2.adaptiveThreshold(gray_image1, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 5)
thresh31 = cv2.adaptiveThreshold(gray_image1, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 21, 5)
thresh41 = cv2.adaptiveThreshold(gray_image1, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 5)
final = np.concatenate((thresh1, thresh2, thresh3, thresh4), axis=1)
final1 = np.concatenate((thresh11, thresh21, thresh31, thresh41), axis=1)
cv2.imwrite('rect.jpg', final)
cv2.imwrite('rect1.jpg', final1)

Adaptive thresholding results for the first image

Adaptive thresholding results for the second image
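As promised, a small optional check of the mean-minus-C rule (a sketch, not part of the original article): it recomputes the mean-based threshold with a box filter and compares the result against cv2.adaptiveThreshold. Because OpenCV rounds the local mean internally, a handful of boundary cases may disagree, so we print the fraction of matching pixels rather than asserting exact equality.

# Recompute ADAPTIVE_THRESH_MEAN_C "by hand" for one parameter setting.
block_size, C = 11, 2
# Local mean over a block_size x block_size neighborhood, with replicated borders.
local_mean = cv2.boxFilter(gray_image, cv2.CV_32F, (block_size, block_size),
                           borderType=cv2.BORDER_REPLICATE)
manual = np.where(gray_image.astype(np.float32) > local_mean - C, 255, 0).astype(np.uint8)
builtin = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY, block_size, C)
print("fraction of matching pixels:", np.mean(manual == builtin))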
Step 10: Otsu's binarization. Otsu's method picks the global threshold automatically from the image histogram, so we pass 0 as the threshold value and add the cv2.THRESH_OTSU flag.
gray_image = cv2.imread('cylinder1.png', 0)
gray_image1 = cv2.imread('cylinder.png', 0)
ret, thresh1 = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
ret1, thresh2 = cv2.threshold(gray_image1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('rect.jpeg', np.concatenate((thresh1, thresh2), axis=1))
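The first return value of cv2.threshold() is the threshold that Otsu's method actually selected for each image; printing it is a quick optional sanity check (not in the original article):

# ret and ret1 hold the automatically computed Otsu thresholds.
print("Otsu threshold for image 1:", ret)
print("Otsu threshold for image 2:", ret1)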

Otsu's binarization
We can now clearly read the batch numbers from the low-contrast images.
