
[Hands-On Project] Vehicle Distance + Vehicle + Lane Line + Pedestrian Detection


          2021-04-12 10:20


Reposted from | AI算法与图像处理

1. Overview of the Project Pipeline

The project is implemented with Keras + OpenCV. The detector is a YOLO V2 model built on a DarkNet19 backbone, with weights trained on the COCO 2014 dataset, while the lane-line detection is implemented with traditional OpenCV techniques.
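Before getting into the individual components, here is a minimal per-frame pipeline sketch (my own outline rather than the author's code; detector.detect and estimate_distance are hypothetical stand-ins for the pieces described in the sections below):

import cv2

def process_frame(frame, detector, lane_finder):
    # 1. Detect vehicles / pedestrians with the YOLO V2 model (section 2.1).
    boxes = detector.detect(frame)                         # hypothetical wrapper around the Keras model
    # 2. Estimate the distance to each detection via a perspective transform (section 2.2).
    distances = [estimate_distance(box) for box in boxes]  # hypothetical helper
    # 3. Detect the lane and draw the drivable area (section 2.3).
    annotated = lane_finder.process_image(frame)
    # 4. Overlay the boxes and distances.
    for (x1, y1, x2, y2), dist in zip(boxes, distances):
        cv2.rectangle(annotated, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(annotated, "{:.1f} m".format(dist), (x1, y1 - 5),
                    cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
    return annotated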


2. Main Components of the Project

2.1. The YOLO V2 Model


The structure of YOLO V2 is fairly simple; two points are worth noting:

1. The output is a feature map of shape batchsize x (5+20)*5 x W x H, i.e. 5 anchor boxes per cell, each predicting 5 box parameters plus 20 class scores in the 20-class (VOC-style) setup (a quick check of this channel count follows below);

2. To preserve detail, a fine-grained connection layer is added, which feeds the fine detail from an earlier layer into the later layers.
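As a quick check of that channel count (my own illustration, assuming the 5-anchor, 20-class configuration mentioned above):

num_anchors, num_classes = 5, 20
# Each anchor predicts 4 box offsets + 1 objectness score + num_classes class scores.
channels = num_anchors * (5 + num_classes)
print(channels)  # 125, so the head outputs a batchsize x 125 x H x W tensor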

Schematic of the YOLOv2 architecture


2.1.1. The DarkNet19 Model



YOLOv2 uses a new backbone (feature extractor) called Darknet-19, which consists of 19 convolutional layers and 5 maxpooling layers. Darknet-19 follows the same design principles as VGG16: it relies mainly on 3*3 convolutions, and after every 2*2 maxpooling layer the spatial size of the feature map is halved while the number of channels is doubled.

Like NIN (Network in Network), Darknet-19 uses global average pooling for the final prediction and places 1*1 convolutions between the 3*3 convolutions to compress the number of channels, reducing computation and parameter count.

Each convolutional layer in Darknet-19 is also followed by a batch-normalization layer, which speeds up convergence and reduces overfitting. On the ImageNet classification dataset, Darknet-19 reaches 72.9% top-1 and 91.2% top-5 accuracy with a comparatively small number of parameters. Switching to Darknet-19 does not significantly improve YOLOv2's mAP, but it cuts the computation by roughly 33%.

          """Darknet19 Model Defined in Keras."""import functoolsfrom functools import partial
          from keras.layers import Conv2D, MaxPooling2Dfrom keras.layers.advanced_activations import LeakyReLUfrom keras.layers.normalization import BatchNormalizationfrom keras.models import Modelfrom keras.regularizers import l2
          from ..utils import compose
          # Partial wrapper for Convolution2D with static default argument._DarknetConv2D = partial(Conv2D, padding='same')

          @functools.wraps(Conv2D)def DarknetConv2D(*args, **kwargs): """Wrapper to set Darknet weight regularizer for Convolution2D.""" darknet_conv_kwargs = {'kernel_regularizer': l2(5e-4)} darknet_conv_kwargs.update(kwargs) return _DarknetConv2D(*args, **darknet_conv_kwargs)

          def DarknetConv2D_BN_Leaky(*args, **kwargs): """Darknet Convolution2D followed by BatchNormalization and LeakyReLU.""" no_bias_kwargs = {'use_bias': False} no_bias_kwargs.update(kwargs) return compose( DarknetConv2D(*args, **no_bias_kwargs), BatchNormalization(), LeakyReLU(alpha=0.1))

          def bottleneck_block(outer_filters, bottleneck_filters): """Bottleneck block of 3x3, 1x1, 3x3 convolutions.""" return compose( DarknetConv2D_BN_Leaky(outer_filters, (3, 3)), DarknetConv2D_BN_Leaky(bottleneck_filters, (1, 1)), DarknetConv2D_BN_Leaky(outer_filters, (3, 3)))

          def bottleneck_x2_block(outer_filters, bottleneck_filters): """Bottleneck block of 3x3, 1x1, 3x3, 1x1, 3x3 convolutions.""" return compose( bottleneck_block(outer_filters, bottleneck_filters), DarknetConv2D_BN_Leaky(bottleneck_filters, (1, 1)), DarknetConv2D_BN_Leaky(outer_filters, (3, 3)))

          def darknet_body(): """Generate first 18 conv layers of Darknet-19.""" return compose( DarknetConv2D_BN_Leaky(32, (3, 3)), MaxPooling2D(), DarknetConv2D_BN_Leaky(64, (3, 3)), MaxPooling2D(), bottleneck_block(128, 64), MaxPooling2D(), bottleneck_block(256, 128), MaxPooling2D(), bottleneck_x2_block(512, 256), MaxPooling2D(), bottleneck_x2_block(1024, 512))

          def darknet19(inputs): """Generate Darknet-19 model for Imagenet classification.""" body = darknet_body()(inputs) logits = DarknetConv2D(1000, (1, 1), activation='softmax')(body) return Model(inputs, logits)
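For reference, the classifier can be instantiated like this (a minimal usage sketch, not part of the original listing):

from keras.layers import Input

# Build the Darknet-19 ImageNet classifier on a 224*224 RGB input.
inputs = Input(shape=(224, 224, 3))
model = darknet19(inputs)
model.summary()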


2.1.2. Fine-Grained Features



YOLOv2 takes 416*416 input images; after 5 maxpooling stages this gives a 13*13 feature map, on which the convolutional layers make their predictions. A 13*13 map is enough for detecting large objects, but small objects call for finer feature maps (fine-grained features). SSD, for instance, uses multi-scale feature maps to detect objects of different sizes, with the earlier, finer maps predicting the small objects.


YOLOv2 instead introduces a passthrough layer to exploit a finer feature map. The fine-grained features it uses come from the 26*26 feature map (the input of the last maxpooling layer), which for Darknet-19 has shape 26*26*512. The passthrough layer is similar to a ResNet shortcut: it takes the earlier, higher-resolution feature map as input and connects it to the later, lower-resolution one. Since the earlier map has twice the spatial resolution of the later one, the passthrough layer takes every 2*2 local patch and moves it into the channel dimension, so the 26*26*512 map becomes a new 13*13*2048 map (spatial size reduced 4x, channels increased 4x; see the example below). This map can then be concatenated with the later 13*13*1024 map to form a 13*13*3072 feature map, on which the convolutional predictions are made.

Example of the passthrough layer

In addition, in later implementations the author borrowed from ResNet and did not apply the passthrough directly to the high-resolution feature map; an intermediate convolution with 64 1*1 kernels is applied first, so the 26*26*512 map is reduced to 26*26*64 and the passthrough then turns it into a 13*13*256 map.


This is just a small implementation detail. Using fine-grained features gives YOLOv2 about a 1% performance improvement.
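A minimal sketch of the passthrough (reorg) operation, assuming TensorFlow's space_to_depth op (tf.nn.space_to_depth in TF2); this is an illustration, not the project's exact code:

import numpy as np
import tensorflow as tf

# A 26*26*512 feature map (batch of 1) as described above.
x = tf.constant(np.random.rand(1, 26, 26, 512), dtype=tf.float32)

# Move each 2*2 spatial block into the channel dimension:
# (1, 26, 26, 512) -> (1, 13, 13, 2048).
y = tf.space_to_depth(x, block_size=2)
print(y.shape)  # (1, 13, 13, 2048)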


2.1.3. Dimension Clusters



In Faster R-CNN and SSD, the dimensions (width and height) of the prior boxes are set by hand, which is somewhat subjective. If the chosen priors fit the data well, the model is easier to train and makes better predictions. YOLOv2 therefore applies k-means clustering to the bounding boxes of the training set.

Because the whole point of the priors is to give the predicted boxes a better IOU with the ground truth, the clustering uses the IOU between a box and the cluster-center box as the distance metric, i.e. d(box, centroid) = 1 - IOU(box, centroid), rather than Euclidean distance.
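A minimal sketch of this anchor clustering (my own illustration, assuming boxes are given as (width, height) pairs; not the project's code):

import numpy as np

def iou_wh(boxes, centroids):
    """IOU between (w, h) boxes and (w, h) centroids, both anchored at the origin."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    """k-means with d = 1 - IOU as the distance, as in the YOLOv2 paper."""
    rng = np.random.RandomState(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)  # nearest centroid = highest IOU
        new_centroids = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                                  else centroids[i] for i in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids

# Example: cluster random (w, h) pairs into 5 anchors.
boxes = np.random.rand(1000, 2) * 10 + 0.5
print(kmeans_anchors(boxes, k=5))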

Bounding-box clustering results on the VOC and COCO datasets


2.1.4. Training YOLOv2



Training YOLOv2 involves three stages. In the first stage, Darknet-19 is pretrained as a classifier on the ImageNet classification dataset with 224*224 inputs for 160 epochs. In the second stage, the network input is enlarged to 448*448 and the classifier is finetuned on ImageNet for another 10 epochs, after which it reaches 76.5% top-1 and 93.3% top-5 accuracy. In the third stage, the Darknet-19 classifier is converted into a detection model and finetuned on the detection dataset.

The three stages of YOLOv2 training
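A minimal sketch of this schedule in Keras (my own outline, assuming the darknet19 builder from section 2.1.1; the actual fit calls are only indicated as comments):

from keras.layers import Input

# Stage 1: classification pretraining at 224*224 (ImageNet, 160 epochs).
classifier_224 = darknet19(Input(shape=(224, 224, 3)))
classifier_224.compile(optimizer='sgd', loss='categorical_crossentropy')
# classifier_224.fit(...)

# Stage 2: enlarge the input to 448*448 and finetune for 10 epochs.
# All layers are convolutional, so the weights transfer directly.
classifier_448 = darknet19(Input(shape=(448, 448, 3)))
classifier_448.set_weights(classifier_224.get_weights())
# classifier_448.fit(...)

# Stage 3: drop the 1*1 classification head, attach the YOLO detection head
# (anchors, passthrough, yolo_loss below) and finetune on the detection dataset.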

The loss is computed as follows:

import tensorflow as tf
from keras import backend as K
# yolo_head (which decodes the raw network output into xy, wh, confidence and
# class probabilities) is defined elsewhere in the same model file.


def yolo_loss(args,
              anchors,
              num_classes,
              rescore_confidence=False,
              print_loss=False):
    """YOLO localization loss function.

    Parameters
    ----------
    yolo_output : tensor
        Final convolutional layer features.
    true_boxes : tensor
        Ground truth boxes tensor with shape [batch, num_true_boxes, 5]
        containing box x_center, y_center, width, height, and class.
    detectors_mask : array
        0/1 mask for detector positions where there is a matching ground truth.
    matching_true_boxes : array
        Corresponding ground truth boxes for positive detector positions.
        Already adjusted for conv height and width.
    anchors : tensor
        Anchor boxes for model.
    num_classes : int
        Number of object classes.
    rescore_confidence : bool, default=False
        If true then set confidence target to IOU of best predicted box with
        the closest matching ground truth box.
    print_loss : bool, default=False
        If True then use a tf.Print() to print the loss components.

    Returns
    -------
    mean_loss : float
        mean localization loss across minibatch
    """
    (yolo_output, true_boxes, detectors_mask, matching_true_boxes) = args
    num_anchors = len(anchors)
    object_scale = 5
    no_object_scale = 1
    class_scale = 1
    coordinates_scale = 1
    pred_xy, pred_wh, pred_confidence, pred_class_prob = yolo_head(
        yolo_output, anchors, num_classes)

    # Unadjusted box predictions for loss.
    # TODO: Remove extra computation shared with yolo_head.
    yolo_output_shape = K.shape(yolo_output)
    feats = K.reshape(yolo_output, [
        -1, yolo_output_shape[1], yolo_output_shape[2], num_anchors,
        num_classes + 5
    ])
    pred_boxes = K.concatenate(
        (K.sigmoid(feats[..., 0:2]), feats[..., 2:4]), axis=-1)

    # TODO: Adjust predictions by image width/height for non-square images?
    # IOUs may be off due to different aspect ratio.

    # Expand pred x,y,w,h to allow comparison with ground truth.
    # batch, conv_height, conv_width, num_anchors, num_true_boxes, box_params
    pred_xy = K.expand_dims(pred_xy, 4)
    pred_wh = K.expand_dims(pred_wh, 4)

    pred_wh_half = pred_wh / 2.
    pred_mins = pred_xy - pred_wh_half
    pred_maxes = pred_xy + pred_wh_half

    true_boxes_shape = K.shape(true_boxes)

    # batch, conv_height, conv_width, num_anchors, num_true_boxes, box_params
    true_boxes = K.reshape(true_boxes, [
        true_boxes_shape[0], 1, 1, 1, true_boxes_shape[1], true_boxes_shape[2]
    ])
    true_xy = true_boxes[..., 0:2]
    true_wh = true_boxes[..., 2:4]

    # Find IOU of each predicted box with each ground truth box.
    true_wh_half = true_wh / 2.
    true_mins = true_xy - true_wh_half
    true_maxes = true_xy + true_wh_half

    intersect_mins = K.maximum(pred_mins, true_mins)
    intersect_maxes = K.minimum(pred_maxes, true_maxes)
    intersect_wh = K.maximum(intersect_maxes - intersect_mins, 0.)
    intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]

    pred_areas = pred_wh[..., 0] * pred_wh[..., 1]
    true_areas = true_wh[..., 0] * true_wh[..., 1]

    union_areas = pred_areas + true_areas - intersect_areas
    iou_scores = intersect_areas / union_areas

    # Best IOUs for each location.
    best_ious = K.max(iou_scores, axis=4)  # Best IOU scores.
    best_ious = K.expand_dims(best_ious)

    # A detector has found an object if IOU > thresh for some true box.
    object_detections = K.cast(best_ious > 0.6, K.dtype(best_ious))

    # TODO: Darknet region training includes extra coordinate loss for early
    # training steps to encourage predictions to match anchor priors.

    # Determine confidence weights from object and no_object weights.
    # NOTE: YOLO does not use binary cross-entropy here.
    no_object_weights = (no_object_scale * (1 - object_detections) *
                         (1 - detectors_mask))
    no_objects_loss = no_object_weights * K.square(-pred_confidence)

    if rescore_confidence:
        objects_loss = (object_scale * detectors_mask *
                        K.square(best_ious - pred_confidence))
    else:
        objects_loss = (object_scale * detectors_mask *
                        K.square(1 - pred_confidence))
    confidence_loss = objects_loss + no_objects_loss

    # Classification loss for matching detections.
    # NOTE: YOLO does not use categorical cross-entropy loss here.
    matching_classes = K.cast(matching_true_boxes[..., 4], 'int32')
    matching_classes = K.one_hot(matching_classes, num_classes)
    classification_loss = (class_scale * detectors_mask *
                           K.square(matching_classes - pred_class_prob))

    # Coordinate loss for matching detection boxes.
    matching_boxes = matching_true_boxes[..., 0:4]
    coordinates_loss = (coordinates_scale * detectors_mask *
                        K.square(matching_boxes - pred_boxes))

    confidence_loss_sum = K.sum(confidence_loss)
    classification_loss_sum = K.sum(classification_loss)
    coordinates_loss_sum = K.sum(coordinates_loss)
    total_loss = 0.5 * (
        confidence_loss_sum + classification_loss_sum + coordinates_loss_sum)
    if print_loss:
        total_loss = tf.Print(
            total_loss, [
                total_loss, confidence_loss_sum, classification_loss_sum,
                coordinates_loss_sum
            ],
            message='yolo_loss, conf_loss, class_loss, box_coord_loss:')

    return total_loss


2.2. Computing the Inter-Vehicle Distance


Vehicles are detected with YOLO, and the coordinates of each returned detection box are perspective-transformed together with the current (ego) position to obtain an approximate distance, which is used as the distance between vehicles.

The OpenCV API used is:

          cv2.perspectiveTransform(src, m[, dst]) → dst

Parameter description:

- src: the input 2-channel or 3-channel floating-point array of points (here, the box coordinates to be transformed)

- m: the 3x3 (or 4x4) transformation matrix

The returned dst holds the transformed coordinates, from which the approximate distance is computed.

Code:
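Since the original listing is not reproduced here, below is a minimal sketch of the idea, assuming a precomputed bird's-eye homography M and a purely illustrative metric scale (all values are placeholders, not the project's calibration):

import cv2
import numpy as np

# Illustrative homography mapping image pixels to a bird's-eye view,
# and an illustrative meters-per-warped-unit scale.
M = np.array([[1.0, 0.0,    0.0],
              [0.0, 2.0, -200.0],
              [0.0, 0.002,  1.0]], dtype=np.float32)
METERS_PER_UNIT = 0.05

def vehicle_distance(box, ego_point, M, scale=METERS_PER_UNIT):
    """Approximate distance from the ego point to the bottom-center of a detection box."""
    x1, y1, x2, y2 = box
    pts = np.array([[[(x1 + x2) / 2.0, y2], ego_point]], dtype=np.float32)  # shape (1, 2, 2)
    warped = cv2.perspectiveTransform(pts, M)[0]
    return np.linalg.norm(warped[0] - warped[1]) * scale

print(vehicle_distance((300, 220, 380, 300), (360, 700), M))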


2.3. Lane Line Segmentation


The lane-detection pipeline consists of the following steps:

1. Image correction (for cameras with significant distortion, first compute the camera matrix and distortion coefficients and undistort the image);

2. Crop a region of interest and process only the part of the image that contains the lane lines;

3. Apply a perspective transform to turn the region of interest into a bird's-eye view;

4. To handle lane lines of different colors, under different lighting conditions, and of different sharpness, apply different gradient and color thresholds in different color spaces, then fuse the individual results into a binary lane-line image;

5. Extract the pixels belonging to the lane lines from the binary image;

6. Build a histogram of the binary image's pixels and take the left and right peaks as the starting coordinates of the left and right lane lines for curve fitting (a minimal sketch of steps 6-7 follows the list);

7. Fit a second-order polynomial to the pixels of each lane line (noisy pixels can be filtered beforehand, or a RANSAC-style fit can be used);

8. Compute the lane curvature and the vehicle's offset from the lane center;

9. Display the result (drivable area, curvature, and position).
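Steps 6-7 are handled inside a LaneLineFinder helper that the article does not reproduce; the following is a minimal sketch of the idea, assuming a warped binary lane mask named binary (not the project's exact code):

import numpy as np

def fit_lane_lines(binary):
    """Histogram-based starting points plus 2nd-order polynomial fits (x = f(y))."""
    h, w = binary.shape
    # 6. Histogram of the lower half of the warped binary image; the left and
    #    right peaks give the starting x positions of the two lane lines.
    histogram = binary[h // 2:, :].sum(axis=0)
    left_x0 = np.argmax(histogram[:w // 2])
    right_x0 = np.argmax(histogram[w // 2:]) + w // 2

    # 7. Collect the lane pixels near each starting column and fit x = a*y^2 + b*y + c.
    ys, xs = binary.nonzero()
    margin = 100
    left = np.abs(xs - left_x0) < margin
    right = np.abs(xs - right_x0) < margin
    left_fit = np.polyfit(ys[left], xs[left], 2)
    right_fit = np.polyfit(ys[right], xs[right], 2)
    return left_fit, right_fit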

import cv2
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# LaneLineFinder, get_curvature and get_center_shift are defined elsewhere in the same project.


# class that finds the whole lane
class LaneFinder:
    def __init__(self, img_size, warped_size, cam_matrix, dist_coeffs,
                 transform_matrix, pixels_per_meter, warning_icon):
        self.found = False
        self.cam_matrix = cam_matrix
        self.dist_coeffs = dist_coeffs
        self.img_size = img_size
        self.warped_size = warped_size
        self.mask = np.zeros((warped_size[1], warped_size[0], 3), dtype=np.uint8)
        self.roi_mask = np.ones((warped_size[1], warped_size[0], 3), dtype=np.uint8)
        self.total_mask = np.zeros_like(self.roi_mask)
        self.warped_mask = np.zeros((self.warped_size[1], self.warped_size[0]), dtype=np.uint8)
        self.M = transform_matrix
        self.count = 0
        self.left_line = LaneLineFinder(warped_size, pixels_per_meter, -1.8288)  # 6 feet in meters
        self.right_line = LaneLineFinder(warped_size, pixels_per_meter, 1.8288)
        if (warning_icon is not None):
            self.warning_icon = np.array(mpimg.imread(warning_icon) * 255, dtype=np.uint8)
        else:
            self.warning_icon = None

    def undistort(self, img):
        return cv2.undistort(img, self.cam_matrix, self.dist_coeffs)

    def warp(self, img):
        return cv2.warpPerspective(img, self.M, self.warped_size,
                                   flags=cv2.WARP_FILL_OUTLIERS + cv2.INTER_CUBIC)

    def unwarp(self, img):
        return cv2.warpPerspective(img, self.M, self.img_size,
                                   flags=cv2.WARP_FILL_OUTLIERS + cv2.INTER_CUBIC + cv2.WARP_INVERSE_MAP)

    def equalize_lines(self, alpha=0.9):
        mean = 0.5 * (self.left_line.coeff_history[:, 0] + self.right_line.coeff_history[:, 0])
        self.left_line.coeff_history[:, 0] = alpha * self.left_line.coeff_history[:, 0] + \
            (1 - alpha) * (mean - np.array([0, 0, 1.8288], dtype=np.uint8))
        self.right_line.coeff_history[:, 0] = alpha * self.right_line.coeff_history[:, 0] + \
            (1 - alpha) * (mean + np.array([0, 0, 1.8288], dtype=np.uint8))

    def find_lane(self, img, distorted=True, reset=False):
        # undistort, warp, change space, filter
        if distorted:
            img = self.undistort(img)
        if reset:
            self.left_line.reset_lane_line()
            self.right_line.reset_lane_line()
        img = self.warp(img)
        img_hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
        img_hls = cv2.medianBlur(img_hls, 5)
        img_lab = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)
        img_lab = cv2.medianBlur(img_lab, 5)
        big_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
        small_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
        greenery = (img_lab[:, :, 2].astype(np.uint8) > 130) & cv2.inRange(img_hls, (0, 0, 50), (138, 43, 226))
        road_mask = np.logical_not(greenery).astype(np.uint8) & (img_hls[:, :, 1] < 250)
        road_mask = cv2.morphologyEx(road_mask, cv2.MORPH_OPEN, small_kernel)
        road_mask = cv2.dilate(road_mask, big_kernel)
        img2, contours, hierarchy = cv2.findContours(road_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        biggest_area = 0
        for contour in contours:
            area = cv2.contourArea(contour)
            if area > biggest_area:
                biggest_area = area
                biggest_contour = contour
        road_mask = np.zeros_like(road_mask)
        cv2.fillPoly(road_mask, [biggest_contour], 1)
        self.roi_mask[:, :, 0] = (self.left_line.line_mask | self.right_line.line_mask) & road_mask
        self.roi_mask[:, :, 1] = self.roi_mask[:, :, 0]
        self.roi_mask[:, :, 2] = self.roi_mask[:, :, 0]
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 3))
        black = cv2.morphologyEx(img_lab[:, :, 0], cv2.MORPH_TOPHAT, kernel)
        lanes = cv2.morphologyEx(img_hls[:, :, 1], cv2.MORPH_TOPHAT, kernel)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 13))
        lanes_yellow = cv2.morphologyEx(img_lab[:, :, 2], cv2.MORPH_TOPHAT, kernel)
        self.mask[:, :, 0] = cv2.adaptiveThreshold(black, 1, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 13, -6)
        self.mask[:, :, 1] = cv2.adaptiveThreshold(lanes, 1, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 13, -4)
        self.mask[:, :, 2] = cv2.adaptiveThreshold(lanes_yellow, 1, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 13, -1.5)
        self.mask *= self.roi_mask
        small_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        self.total_mask = np.any(self.mask, axis=2).astype(np.uint8)
        self.total_mask = cv2.morphologyEx(self.total_mask.astype(np.uint8), cv2.MORPH_ERODE, small_kernel)
        left_mask = np.copy(self.total_mask)
        right_mask = np.copy(self.total_mask)
        if self.right_line.found:
            left_mask = left_mask & np.logical_not(self.right_line.line_mask) & self.right_line.other_line_mask
        if self.left_line.found:
            right_mask = right_mask & np.logical_not(self.left_line.line_mask) & self.left_line.other_line_mask
        self.left_line.find_lane_line(left_mask, reset)
        self.right_line.find_lane_line(right_mask, reset)
        self.found = self.left_line.found and self.right_line.found
        if self.found:
            self.equalize_lines(0.875)

    def draw_lane_weighted(self, img, thickness=5, alpha=0.8, beta=1, gamma=0):
        left_line = self.left_line.get_line_points()
        right_line = self.right_line.get_line_points()
        both_lines = np.concatenate((left_line, np.flipud(right_line)), axis=0)
        lanes = np.zeros((self.warped_size[1], self.warped_size[0], 3), dtype=np.uint8)
        if self.found:
            cv2.fillPoly(lanes, [both_lines.astype(np.int32)], (138, 43, 226))
            cv2.polylines(lanes, [left_line.astype(np.int32)], False, (255, 0, 255), thickness=thickness)
            cv2.polylines(lanes, [right_line.astype(np.int32)], False, (34, 139, 34), thickness=thickness)
            cv2.fillPoly(lanes, [both_lines.astype(np.int32)], (138, 43, 226))
            mid_coef = 0.5 * (self.left_line.poly_coeffs + self.right_line.poly_coeffs)
            curve = get_curvature(mid_coef, img_size=self.warped_size,
                                  pixels_per_meter=self.left_line.pixels_per_meter)
            shift = get_center_shift(mid_coef, img_size=self.warped_size,
                                     pixels_per_meter=self.left_line.pixels_per_meter)
            cv2.putText(img, "Road Curvature: {:6.2f}m".format(curve), (20, 50), cv2.FONT_HERSHEY_PLAIN,
                        fontScale=2.5, thickness=5, color=(255, 0, 0))
            cv2.putText(img, "Road Curvature: {:6.2f}m".format(curve), (20, 50), cv2.FONT_HERSHEY_PLAIN,
                        fontScale=2.5, thickness=3, color=(0, 0, 0))
            cv2.putText(img, "Car Position: {:4.2f}m".format(shift), (60, 100), cv2.FONT_HERSHEY_PLAIN,
                        fontScale=2.5, thickness=5, color=(255, 0, 0))
            cv2.putText(img, "Car Position: {:4.2f}m".format(shift), (60, 100), cv2.FONT_HERSHEY_PLAIN,
                        fontScale=2.5, thickness=3, color=(0, 0, 0))
        else:
            warning_shape = self.warning_icon.shape
            corner = (10, (img.shape[1] - warning_shape[1]) // 2)
            patch = img[corner[0]:corner[0] + warning_shape[0], corner[1]:corner[1] + warning_shape[1]]
            patch[self.warning_icon[:, :, 3] > 0] = self.warning_icon[self.warning_icon[:, :, 3] > 0, 0:3]
            img[corner[0]:corner[0] + warning_shape[0], corner[1]:corner[1] + warning_shape[1]] = patch
            cv2.putText(img, "Lane lost!", (50, 170), cv2.FONT_HERSHEY_PLAIN,
                        fontScale=2.5, thickness=5, color=(255, 0, 0))
            cv2.putText(img, "Lane lost!", (50, 170), cv2.FONT_HERSHEY_PLAIN,
                        fontScale=2.5, thickness=3, color=(0, 0, 0))
        lanes_unwarped = self.unwarp(lanes)
        return cv2.addWeighted(img, alpha, lanes_unwarped, beta, gamma)

    def process_image(self, img, reset=False, show_period=10, blocking=False):
        self.find_lane(img, distorted=True, reset=reset)
        lane_img = self.draw_lane_weighted(img)
        self.count += 1
        if show_period > 0 and (self.count % show_period == 1 or show_period == 1):
            # NOTE: the original script references a module-level `lf` instance here.
            start = 231
            plt.clf()
            for i in range(3):
                plt.subplot(start + i)
                plt.imshow(lf.mask[:, :, i] * 255, cmap='gray')
            plt.subplot(234)
            plt.imshow((lf.left_line.line + lf.right_line.line) * 255)
            ll = cv2.merge((lf.left_line.line, lf.left_line.line * 0, lf.right_line.line))
            lm = cv2.merge((lf.left_line.line_mask, lf.left_line.line * 0, lf.right_line.line_mask))
            plt.subplot(235)
            plt.imshow(lf.roi_mask * 255, cmap='gray')
            plt.subplot(236)
            plt.imshow(lane_img)
            if blocking:
                plt.show()
            else:
                plt.draw()
                plt.pause(0.000001)
        return lane_img
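For context, the class might be driven like this (a minimal sketch; the calibration matrices, sizes, and per-meter scales below are placeholders rather than the project's values):

import numpy as np

img_size = (1280, 720)
warped_size = (600, 500)
cam_matrix = np.eye(3)            # placeholder camera matrix
dist_coeffs = np.zeros(5)         # placeholder distortion coefficients
M = np.eye(3, dtype=np.float32)   # placeholder perspective transform
pixels_per_meter = (20, 30)       # placeholder x/y scales

lf = LaneFinder(img_size, warped_size, cam_matrix, dist_coeffs, M,
                pixels_per_meter, warning_icon=None)
# For a video: feed each RGB frame through process_image.
# annotated = lf.process_image(frame, show_period=0)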


2.4. Testing Process and Results


The GIF does not look great because of compression; each part of the project will be covered in more detail in follow-up posts.


References:

          https://zhuanlan.zhihu.com/p/35325884

          https://www.cnblogs.com/YiXiaoZhou/p/7429481.html

          https://github.com/yhcc/yolo2

          https://github.com/allanzelener/yad2k

          https://zhuanlan.zhihu.com/p/74597564

          https://zhuanlan.zhihu.com/p/46295711

          https://blog.csdn.net/weixin_38746685/article/details/81613065?depth_1-

          https://github.com/yang1688899/CarND-Advanced-Lane-Lines



