<kbd id="afajh"><form id="afajh"></form></kbd>
<strong id="afajh"><dl id="afajh"></dl></strong>
    <del id="afajh"><form id="afajh"></form></del>
        1. <th id="afajh"><progress id="afajh"></progress></th>
          <b id="afajh"><abbr id="afajh"></abbr></b>
          <th id="afajh"><progress id="afajh"></progress></th>

Human Pose Estimation with the OpenCV Deep Neural Network Module

2021-11-15 22:10

Introduction to the OpenCV DNN Module

Ever since the release of its DNN (Deep Neural Network) module, OpenCV has supported calling an ever-growing range of pre-trained deep learning models. Not every model can be exported to the OpenCV DNN module, however: only networks built from layer types that OpenCV explicitly supports will be accepted. The models and layer types currently supported are documented here:

https://github.com/opencv/opencv/wiki/Deep-Learning-in-OpenCV
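For orientation, a model is loaded with one of the readNetFrom* helpers or the generic cv.dnn.readNet, which infers the framework from the file extensions. A minimal sketch (the file names here are placeholders, not files from this article):

import cv2 as cv

# readNet() selects the importer from the extensions:
# .caffemodel/.prototxt -> Caffe, .pb -> TensorFlow, .onnx -> ONNX, ...
# A network containing a layer type OpenCV does not support raises
# cv.error (at load time or at the first forward pass).
net = cv.dnn.readNet("model.caffemodel", "deploy.prototxt")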

Model Download

Starting with OpenCV 3.4.x, the DNN module supports the OpenPose deep learning model for single-person pose estimation. First, download a pre-trained pose estimation model. The model trained on the COCO dataset can be downloaded from:

http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/coco/pose_iter_440000.caffemodel

The model trained on the MPI dataset can be downloaded from:

http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/mpi/pose_iter_160000.caffemodel
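Note that besides the .caffemodel weights, the code below also needs the matching .prototxt network definition (pose_deploy_linevec_faster_4_stages.prototxt in this article), which can be obtained from the CMU OpenPose repository.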

Code Implementation

Single-person pose estimation with OpenCV then takes only the following steps:
1. Define the 18 body keypoints used by the COCO dataset and the pairs that connect them

BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
               "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
               ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
               ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
               ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
               ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]
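Since the drawing loop further below looks up every name in POSE_PAIRS as a key of BODY_PARTS, a quick consistency check (an addition, not part of the original sample, and equally applicable to the MPI tables in the next step) catches typos in these tables early:

# Sanity check: every joint referenced by a pair must be a defined
# body part, otherwise BODY_PARTS[name] would raise a KeyError later.
for a, b in POSE_PAIRS:
    assert a in BODY_PARTS and b in BODY_PARTS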

2. Define the 15 body keypoints used by the MPI dataset and the pairs that connect them

BODY_PARTS = { "Head": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "Chest": 14,
               "Background": 15 }

POSE_PAIRS = [ ["Head", "Neck"], ["Neck", "RShoulder"], ["RShoulder", "RElbow"],
               ["RElbow", "RWrist"], ["Neck", "LShoulder"], ["LShoulder", "LElbow"],
               ["LElbow", "LWrist"], ["Neck", "Chest"], ["Chest", "RHip"], ["RHip", "RKnee"],
               ["RKnee", "RAnkle"], ["Chest", "LHip"], ["LHip", "LKnee"], ["LKnee", "LAnkle"] ]

3. Load the pre-trained model for the chosen dataset through the DNN module

inWidth = 368
inHeight = 368
thr = 0.1
protoc = "D:/projects/pose_body/mpi/pose_deploy_linevec_faster_4_stages.prototxt"
model = "D:/projects/pose_body/mpi/pose_iter_160000.caffemodel"
net = cv.dnn.readNetFromCaffe(protoc, model)
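Optionally, the loaded net can be pinned to an explicit backend and target; this is not part of the original sample (DNN_TARGET_OPENCL can be substituted when an OpenCL device is available):

# Optional: force the default OpenCV backend running on the CPU.
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)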

4. Open the camera with OpenCV

cap = cv.VideoCapture(0)
height = cap.get(cv.CAP_PROP_FRAME_HEIGHT)
width = cap.get(cv.CAP_PROP_FRAME_WIDTH)
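A defensive check worth adding here (not in the original sample): VideoCapture does not raise when no camera is present, so verify it opened before entering the capture loop:

# VideoCapture silently returns an unopened object if camera 0 is missing.
if not cap.isOpened():
    raise RuntimeError("failed to open camera 0")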

5. Run a forward pass through the network

frameWidth = frame.shape[1]
frameHeight = frame.shape[0]
inp = cv.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                           (0, 0, 0), swapRB=False, crop=False)
net.setInput(inp)
out = net.forward()
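The result is a blob of shape [1, K, H, W]: one H x W heatmap per output channel, spatially downsampled from the input. K exceeds len(BODY_PARTS) because the OpenPose models also emit background and part-affinity channels; only the first len(BODY_PARTS) channels are used below. A quick way to confirm (an addition to the sample):

# For the MPI model at 368x368 input, out.shape is typically (1, 44, 46, 46).
print(out.shape)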

6. Draw the detected pose keypoints

points = []
for i in range(len(BODY_PARTS)):
    # Slice the heatmap of the corresponding body part.
    heatMap = out[0, i, :, :]

    # Originally, we would find all local maxima. To simplify the sample
    # we just find the global one; however, only a single pose can be
    # detected this way.
    _, conf, _, point = cv.minMaxLoc(heatMap)
    x = (frameWidth * point[0]) / out.shape[3]
    y = (frameHeight * point[1]) / out.shape[2]

    # Add a point if its confidence is higher than the threshold.
    points.append((x, y) if conf > thr else None)

for pair in POSE_PAIRS:
    partFrom = pair[0]
    partTo = pair[1]
    assert(partFrom in BODY_PARTS)
    assert(partTo in BODY_PARTS)

    idFrom = BODY_PARTS[partFrom]
    idTo = BODY_PARTS[partTo]
    if points[idFrom] and points[idTo]:
        x1, y1 = points[idFrom]
        x2, y2 = points[idTo]
        cv.line(frame, (np.int32(x1), np.int32(y1)), (np.int32(x2), np.int32(y2)), (0, 255, 0), 3)
        cv.ellipse(frame, (np.int32(x1), np.int32(y1)), (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
        cv.ellipse(frame, (np.int32(x2), np.int32(y2)), (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
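Note the design: low-confidence detections are stored as None, so the pair-drawing loop silently skips any limb whose endpoints were not found. To inspect which keypoints survived the threshold (a debugging aid, not in the original):

# Print each surviving keypoint name with its image coordinates.
for name, idx in BODY_PARTS.items():
    if points[idx] is not None:
        print(name, points[idx])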

The complete code is as follows:

import cv2 as cv
import numpy as np


dataset = 'MPI'
if dataset == 'COCO':
    BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
                   "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

    POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
                   ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
                   ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
                   ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]
else:
    assert(dataset == 'MPI')
    BODY_PARTS = { "Head": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "Chest": 14,
                   "Background": 15 }

    POSE_PAIRS = [ ["Head", "Neck"], ["Neck", "RShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["Neck", "LShoulder"], ["LShoulder", "LElbow"],
                   ["LElbow", "LWrist"], ["Neck", "Chest"], ["Chest", "RHip"], ["RHip", "RKnee"],
                   ["RKnee", "RAnkle"], ["Chest", "LHip"], ["LHip", "LKnee"], ["LKnee", "LAnkle"] ]

inWidth = 368
inHeight = 368
thr = 0.1
protoc = "D:/projects/pose_body/mpi/pose_deploy_linevec_faster_4_stages.prototxt"
model = "D:/projects/pose_body/mpi/pose_iter_160000.caffemodel"
net = cv.dnn.readNetFromCaffe(protoc, model)

cap = cv.VideoCapture(0)
height = cap.get(cv.CAP_PROP_FRAME_HEIGHT)
width = cap.get(cv.CAP_PROP_FRAME_WIDTH)
video_writer = cv.VideoWriter("D:/pose_estimation_demo.mp4", cv.VideoWriter_fourcc('D', 'I', 'V', 'X'), 15, (640, 480), True)
while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    inp = cv.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                               (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inp)
    out = net.forward()

    # The model outputs more channels than BODY_PARTS (background + PAFs).
    print(len(BODY_PARTS), out.shape[1])
    # assert(len(BODY_PARTS) == out.shape[1])

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]

        # Originally, we would find all local maxima. To simplify the sample
        # we just find the global one; however, only a single pose can be
        # detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]

        # Add a point if its confidence is higher than the threshold.
        points.append((x, y) if conf > thr else None)

    for pair in POSE_PAIRS:
        partFrom = pair[0]
        partTo = pair[1]
        assert(partFrom in BODY_PARTS)
        assert(partTo in BODY_PARTS)

        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            x1, y1 = points[idFrom]
            x2, y2 = points[idTo]
            cv.line(frame, (np.int32(x1), np.int32(y1)), (np.int32(x2), np.int32(y2)), (0, 255, 0), 3)
            cv.ellipse(frame, (np.int32(x1), np.int32(y1)), (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, (np.int32(x2), np.int32(y2)), (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    # video_writer.write(frame)
    # cv.imwrite("D:/pose.png", frame)
    cv.imshow('OpenPose using OpenCV', frame)
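The sample leaves the camera, the writer, and the display window open when the loop exits; releasing them explicitly is good practice (an addition, not in the original):

# Release resources once the loop ends.
cap.release()
video_writer.release()
cv.destroyAllWindows()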

The running result is shown below:

