
Adding a Beauty Filter to Android WebRTC

5,312 words, ~11 minute read · 2021-11-08 13:40

• Analyzing the video capture and rendering pipeline


Before adding a filter, you need some understanding of WebRTC's video capture pipeline.


WebRTC defines the VideoCapturer interface, which declares the camera operations: initialization, starting preview, stopping preview, and disposal.


The implementation is CameraCapturer, specialized into the two subclasses Camera1Capturer and Camera2Capturer; there is also a screen-sharing capturer.


Starting video capture in WebRTC is very simple:


val videoCapture = createVideoCapture()
videoSource = videoCapture.isScreencast.let { factory.createVideoSource(it) }
videoCapture.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture.startCapture(480, 640, 30)


The parts to look at here are the VideoSource class and its capturerObserver.


VideoSource contains the following method:


@Override
public void onFrameCaptured(VideoFrame frame) {
  final VideoProcessor.FrameAdaptationParameters parameters =
      nativeAndroidVideoTrackSource.adaptFrame(frame);
  synchronized (videoProcessorLock) {
    if (videoProcessor != null) {
      videoProcessor.onFrameCaptured(frame, parameters);
      return;
    }
  }
  VideoFrame adaptedFrame = VideoProcessor.applyFrameAdaptationParameters(frame, parameters);
  if (adaptedFrame != null) {
    nativeAndroidVideoTrackSource.onFrameCaptured(adaptedFrame);
    adaptedFrame.release();
  }
}


Each captured video frame is delivered to onFrameCaptured, where cropping and scaling are applied before the frame is passed down to the native layer through nativeAndroidVideoTrackSource.
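The adaptation computed by adaptFrame amounts to a centered crop to the target aspect ratio followed by a downscale. The geometry can be sketched with a small standalone helper (plain Java, independent of WebRTC; the class and method names here are ours, not part of the library):

```java
// Sketch of the centered crop that frame adaptation describes (hypothetical
// helper, not WebRTC code): crop the captured frame to the target aspect
// ratio; the cropped region is then scaled down to the target size.
public class FrameAdaptation {
    /** Returns {cropX, cropY, cropWidth, cropHeight} for a centered crop. */
    public static int[] centeredCrop(int srcW, int srcH, int dstW, int dstH) {
        // Compare aspect ratios with integer math: srcW/srcH vs dstW/dstH.
        if (srcW * dstH > dstW * srcH) {
            // Source is wider than the target: crop the width.
            int cropWidth = dstW * srcH / dstH;
            return new int[] {(srcW - cropWidth) / 2, 0, cropWidth, srcH};
        } else {
            // Source is taller than the target: crop the height.
            int cropHeight = dstH * srcW / dstW;
            return new int[] {0, (srcH - cropHeight) / 2, srcW, cropHeight};
        }
    }
}
```

For example, adapting a landscape 1920x1080 capture to the portrait 480x640 target used above keeps the full height and crops the width down to 810 pixels, centered.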


The key object is VideoProcessor, reportedly added in February 2019. VideoSource exposes a setVideoProcessor method for installing one. As the method above shows, if a VideoProcessor is set, frames are routed through its onFrameCaptured; otherwise they go straight to the native layer.


VideoProcessor is a convenient hook for processing frames before they are sent. Let's look at the VideoProcessor interface first.


public interface VideoProcessor extends CapturerObserver {
  public static class FrameAdaptationParameters {
    ...

    public FrameAdaptationParameters(int cropX, int cropY, int cropWidth, int cropHeight,
        int scaleWidth, int scaleHeight, long timestampNs, boolean drop) {
      ...
    }
  }

  default void onFrameCaptured(VideoFrame frame, FrameAdaptationParameters parameters) {
    VideoFrame adaptedFrame = applyFrameAdaptationParameters(frame, parameters);
    if (adaptedFrame != null) {
      onFrameCaptured(adaptedFrame);
      adaptedFrame.release();
    }
  }
  ...
}


The onFrameCaptured(frame, parameters) overload that VideoSource calls is not CapturerObserver's onFrameCaptured, so the frame is not handed to the native layer right away. This default method applies the crop/scale adaptation to the VideoFrame first, and only then passes it down.


So this is where we can apply beauty-filter processing to each video frame.


class FilterProcessor : VideoProcessor {

    private var videoSink: VideoSink? = null

    override fun onCapturerStarted(success: Boolean) {
    }

    override fun onCapturerStopped() {
    }

    override fun onFrameCaptured(frame: VideoFrame) {
        val newFrame = frame // TODO: apply the beauty filter to the VideoFrame here
        videoSink?.onFrame(newFrame)
    }

    override fun setSink(sink: VideoSink?) {
        // The sink renders the frame and passes it to the native layer.
        videoSink = sink
    }
}

val videoCapture = createVideoCapture()
videoSource = videoCapture.isScreencast.let { factory.createVideoSource(it) }
videoSource.setVideoProcessor(FilterProcessor()) // install the processor
videoCapture.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture.startCapture(480, 640, 30)


For the beautification itself you can use GPUImage, or a commercial SDK.
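As a toy illustration of what such a filter touches, here is the crudest possible "skin smoothing": a 3x3 box blur over the I420 luma (Y) plane, in plain Java. Real beauty filters run as GPU shaders (which is how GPUImage works); this CPU sketch only shows where the pixel data lives:

```java
// Toy example, not production code: blur the Y plane of an I420 frame with a
// 3x3 box filter. The Y plane is a width*height array of unsigned luma bytes.
public class LumaSmooth {
    public static byte[] boxBlur3x3(byte[] y, int width, int height) {
        byte[] out = new byte[y.length];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int sum = 0, count = 0;
                // Average the 3x3 neighborhood, clipped at the frame edges.
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        int r = row + dr, c = col + dc;
                        if (r >= 0 && r < height && c >= 0 && c < width) {
                            sum += y[r * width + c] & 0xFF; // bytes hold unsigned luma
                            count++;
                        }
                    }
                }
                out[row * width + col] = (byte) (sum / count);
            }
        }
        return out;
    }
}
```

In a VideoProcessor you would obtain the Y plane via frame.buffer.toI420(), run the filter, and wrap the result in a new VideoFrame; a real implementation would also smooth chroma and run on the GPU.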


The above is the application-layer implementation, using only classes that ship with WebRTC. For NDK development the idea is the same.


Create a proxy class, CapturerObserverProxy, that implements CapturerObserver and wraps the real nativeCapturerObserver. The native layer then delivers each captured frame to the proxy's onFrameCaptured.


There, apply the beauty filter to the frame, then forward the processed VideoFrame through nativeCapturerObserver to the lower layer for encoding and transmission.


public class CapturerObserverProxy implements CapturerObserver {
    public static final String TAG = CapturerObserverProxy.class.getSimpleName();

    private CapturerObserver originalObserver;
    private RTCVideoEffector videoEffector;

    public CapturerObserverProxy(final SurfaceTextureHelper surfaceTextureHelper,
                                 CapturerObserver observer,
                                 RTCVideoEffector effector) {
        this.originalObserver = observer;
        this.videoEffector = effector;

        final Handler handler = surfaceTextureHelper.getHandler();
        ThreadUtils.invokeAtFrontUninterruptibly(handler, () ->
                videoEffector.init(surfaceTextureHelper)
        );
    }

    @Override
    public void onCapturerStarted(boolean success) {
        this.originalObserver.onCapturerStarted(success);
    }

    @Override
    public void onCapturerStopped() {
        this.originalObserver.onCapturerStopped();
    }

    @Override
    public void onFrameCaptured(VideoFrame frame) {
        if (this.videoEffector.needToProcessFrame()) {
            VideoFrame.I420Buffer originalI420Buffer = frame.getBuffer().toI420();
            VideoFrame.I420Buffer effectedI420Buffer =
                    this.videoEffector.processByteBufferFrame(
                            originalI420Buffer, frame.getRotation(), frame.getTimestampNs());

            VideoFrame effectedVideoFrame = new VideoFrame(
                    effectedI420Buffer, frame.getRotation(), frame.getTimestampNs());
            originalI420Buffer.release();
            this.originalObserver.onFrameCaptured(effectedVideoFrame);
        } else {
            this.originalObserver.onFrameCaptured(frame);
        }
    }
}

videoCapturer.initialize(videoCapturerSurfaceTextureHelper, context, observerProxy);
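When an effector like the one above allocates its output I420 buffer itself, the plane sizes follow from 4:2:0 subsampling: one full-resolution Y plane plus two quarter-resolution chroma planes (U and V), with chroma dimensions rounded up for odd frame sizes. A small helper (ours, not part of WebRTC) makes the arithmetic explicit:

```java
// I420 (4:2:0) plane sizing: full-resolution Y plane plus two chroma planes
// at half resolution in each dimension. Chroma dimensions round up so odd
// frame sizes are still covered.
public class I420Size {
    public static int ySize(int w, int h) {
        return w * h;
    }
    public static int chromaSize(int w, int h) {
        return ((w + 1) / 2) * ((h + 1) / 2);
    }
    public static int totalSize(int w, int h) {
        return ySize(w, h) + 2 * chromaSize(w, h);
    }
}
```

For a 640x480 frame this gives 460,800 bytes total, i.e. 1.5 bytes per pixel, which is why I420 buffers are width * height * 3 / 2 for even dimensions.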


That is all it takes to add a beauty filter to WebRTC.


Author: anyRTC
Link: https://juejin.cn/post/7020641037592821790

