
MTK Multi-Frame Algorithm Integration Workflow

2024-04-10 18:50

By reading this article, you will learn about the following:

1. Selecting a feature and configuring the feature table
2. Mounting the algorithm
3. Custom metadata
4. Invoking the algorithm from the APP
5. Conclusion

1. Selecting a feature and configuring the feature table

1.1 Selecting a feature

Multi-frame noise reduction (MFNR) is a very common multi-frame algorithm, and MTK's predefined features already include MTK_FEATURE_MFNR and TP_FEATURE_MFNR, so we can slot right in without adding a new feature. Since we are integrating a third-party algorithm, we choose TP_FEATURE_MFNR.

1.2 Configuring the feature table

Having settled on TP_FEATURE_MFNR, we still need to add it to the feature table:

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
          index f14ff8a6e2..38365e0602 100755
          --- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
          @@ -106,6 +106,7 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
          #define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
          #define MTK_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
          #define MTK_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
          +#define MTK_FEATURE_COMBINATION_TP_MFNR (TP_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| MTK_FEATURE_MFNR)

          // streaming feature combination (TODO: it should be refined by streaming scenario feature)
          #define MTK_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
          @@ -136,6 +137,7 @@ const std::vector<std::unordered_map<int32_t, ScenarioFeatures>> gMtkScenarioFe
          ADD_CAMERA_FEATURE_SET(TP_FEATURE_HDR, MTK_FEATURE_COMBINATION_HDR)
          ADD_CAMERA_FEATURE_SET(MTK_FEATURE_AINR, MTK_FEATURE_COMBINATION_AINR)
          ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR, MTK_FEATURE_COMBINATION_MFNR)
          + ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)
          ADD_CAMERA_FEATURE_SET(MTK_FEATURE_REMOSAIC, MTK_FEATURE_COMBINATION_REMOSAIC)
          ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL, MTK_FEATURE_COMBINATION_SINGLE)
          CAMERA_SCENARIO_END

Note:

MTK reworked how the scenario configuration table is customized on Android Q (10.0) and later. On Android Q and later, the feature must be configured in vendor/mediatek/proprietary/custom/[platform]/hal/camera/camera_custom_feature_table.cpp, where [platform] is a platform name such as mt6580 or mt6763.
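The change on those platforms is analogous. Below is a rough sketch only, assuming camera_custom_feature_table.cpp uses the same feature-combination defines and ADD_CAMERA_FEATURE_SET macros as mtk_scenario_mgr.cpp; check your platform's copy for the exact scenario layout:

    // Hypothetical excerpt for camera_custom_feature_table.cpp (Android Q and later),
    // mirroring the mtk_scenario_mgr.cpp patch above.
    #define MTK_FEATURE_COMBINATION_TP_MFNR (TP_FEATURE_MFNR | MTK_FEATURE_NR | MTK_FEATURE_ABF | \
                                             MTK_FEATURE_CZ | MTK_FEATURE_DRE | MTK_FEATURE_HFG | \
                                             MTK_FEATURE_DCE | MTK_FEATURE_FB | MTK_FEATURE_MFNR)

    // ... inside the capture scenario of the feature table:
    ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)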

2. Mounting the algorithm

2.1 Choosing a plugin for the algorithm

In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 divides the mount points for third-party algorithms into roughly the following categories:

• BokehPlugin: mount point for Bokeh algorithms, i.e. the blurring part of a dual-camera depth-of-field algorithm.

• DepthPlugin: mount point for Depth algorithms, i.e. the depth-computation part of a dual-camera depth-of-field algorithm.

• FusionPlugin: mount point for a dual-camera depth-of-field algorithm in which Depth and Bokeh are combined into a single algorithm.

• JoinPlugin: mount point for streaming-related algorithms; preview algorithms are mounted here.

• MultiFramePlugin: mount point for multi-frame algorithms, both YUV and RAW, e.g. MFNR/HDR.

• RawPlugin: mount point for RAW algorithms, e.g. remosaic.

• YuvPlugin: mount point for single-frame YUV algorithms, e.g. beautification or wide-angle lens distortion correction.

Pick the plugin that matches the algorithm to be integrated. Since ours is a multi-frame algorithm, MultiFramePlugin is the only choice. Also note that, as a rule, multi-frame algorithms are used only for capture, not for preview.

2.2 Adding a global macro switch

To control whether a given project integrates this algorithm, we add a macro to device/mediateksample/[platform]/ProjectConfig.mk that gates the compilation of the newly integrated algorithm:

              QXT_MFNR_SUPPORT = yes

When a project does not need this algorithm, simply set QXT_MFNR_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.

2.3 Writing the algorithm integration files

Implement MFNR capture by referring to MTK's sample at vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mfnr/MFNRImpl.cpp. The directory structure is as follows:
    vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/
    ├── Android.mk
    ├── include
    │   └── mf_processor.h
    ├── lib
    │   ├── arm64-v8a
    │   │   └── libmultiframe.so
    │   └── armeabi-v7a
    │       └── libmultiframe.so
    └── MFNRImpl.cpp

File descriptions:

• Android.mk configures the algorithm library, the header file, and the integration source file MFNRImpl.cpp, and builds them into the library libmtkcam.plugin.tp_mfnr, which libmtkcam_3rdparty.customer then depends on.

• libmultiframe.so scales down four consecutive frames and stitches them into one image; it stands in for the third-party multi-frame algorithm library that actually needs to be integrated. mf_processor.h is its header file.

• MFNRImpl.cpp is the integration source file.

2.3.1 mtkcam3/3rdparty/customer/cp_tp_mfnr/Android.mk
              ifeq ($(QXT_MFNR_SUPPORT),yes)
          LOCAL_PATH := $(call my-dir)

          include $(CLEAR_VARS)
          LOCAL_MODULE := libmultiframe
          LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libmultiframe.so
          LOCAL_SRC_FILES_64 := lib/arm64-v8a/libmultiframe.so
          LOCAL_MODULE_TAGS := optional
          LOCAL_MODULE_CLASS := SHARED_LIBRARIES
          LOCAL_MODULE_SUFFIX := .so
          LOCAL_PROPRIETARY_MODULE := true
          LOCAL_MULTILIB := both
          include $(BUILD_PREBUILT)

          ################################################################################
          #
          ################################################################################
          include $(CLEAR_VARS)

          #-----------------------------------------------------------
          -include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk

          #-----------------------------------------------------------
          LOCAL_SRC_FILES += MFNRImpl.cpp

          #-----------------------------------------------------------
          LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
          LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include $(MTK_PATH_SOURCE)/hardware/mtkcam/include
          LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_COMMON)/hal/inc
          LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_CUSTOM_PLATFORM)/hal/inc
          LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
          LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/include
          #
          LOCAL_C_INCLUDES += system/media/camera/include

          #-----------------------------------------------------------
          LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
          #

          #-----------------------------------------------------------
          LOCAL_STATIC_LIBRARIES +=
          #
          LOCAL_WHOLE_STATIC_LIBRARIES +=

          #-----------------------------------------------------------
          LOCAL_SHARED_LIBRARIES += liblog
          LOCAL_SHARED_LIBRARIES += libutils
          LOCAL_SHARED_LIBRARIES += libcutils
          LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
          LOCAL_SHARED_LIBRARIES += libmtkcam_stdutils
          LOCAL_SHARED_LIBRARIES += libmtkcam_pipeline
          LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
          LOCAL_SHARED_LIBRARIES += libmtkcam_metastore
          LOCAL_SHARED_LIBRARIES += libmtkcam_streamutils
          LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
          LOCAL_SHARED_LIBRARIES += libmtkcam_exif
          #LOCAL_SHARED_LIBRARIES += libmtkcam_3rdparty

          #-----------------------------------------------------------
          LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers

          #-----------------------------------------------------------
          LOCAL_MODULE := libmtkcam.plugin.tp_mfnr
          LOCAL_PROPRIETARY_MODULE := true
          LOCAL_MODULE_OWNER := mtk
          LOCAL_MODULE_TAGS := optional
          include $(MTK_STATIC_LIBRARY)

          ################################################################################
          #
          ################################################################################
          include $(call all-makefiles-under,$(LOCAL_PATH))
          endif

2.3.2 mtkcam3/3rdparty/customer/cp_tp_mfnr/include/mf_processor.h
              #ifndef QXT_MULTI_FRAME_H
          #define QXT_MULTI_FRAME_H

          class MFProcessor {

          public:
          virtual ~MFProcessor() {}

          virtual void setFrameCount(int num) = 0;

          virtual void setParams() = 0;

          virtual void addFrame(unsigned char *src, int srcWidth, int srcHeight) = 0;

          virtual void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
          int srcWidth, int srcHeight) = 0;

          virtual void scale(unsigned char *src, int srcWidth, int srcHeight,
          unsigned char *dst, int dstWidth, int dstHeight) = 0;

          virtual void process(unsigned char *output, int outputWidth, int outputHeight) = 0;

          virtual void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
          int outputWidth, int outputHeight) = 0;

          static MFProcessor* createInstance(int width, int height);
          };

          #endif //QXT_MULTI_FRAME_H

The interface functions declared in the header (a usage sketch follows this list):

• setFrameCount: has no real effect; it simulates setting the frame count for a third-party multi-frame algorithm, since some algorithms need different frame counts in different scenes.

• setParams: likewise has no real effect; it simulates setting the parameters a third-party multi-frame algorithm would need.

• addFrame: adds one frame of image data, simulating how a third-party multi-frame algorithm is fed its input.

• process: scales down the four previously added frames and stitches them into one image at the original size.

• createInstance: creates an instance of the interface class.
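Before reading the implementation, here is a minimal standalone sketch of how these interfaces are meant to be driven; it mirrors the call order used later in MFNRImpl.cpp. The runMultiFrame helper and the hard-coded count of four frames are illustrative only, and each frame is assumed to be a contiguous I420 buffer of width*height*3/2 bytes:

    #include "mf_processor.h"

    // Hypothetical driver: feed four equally sized I420 frames, then fetch the
    // stitched result into 'out' (also width*height*3/2 bytes).
    void runMultiFrame(unsigned char *frames[4], unsigned char *out, int width, int height) {
        MFProcessor *proc = MFProcessor::createInstance(width, height);
        proc->setFrameCount(4);   // simulated: a real algorithm may need other counts
        proc->setParams();        // simulated: tuning parameters would go here
        for (int i = 0; i < 4; ++i) {
            proc->addFrame(frames[i], width, height);  // scale + place into quadrant i
        }
        proc->process(out, width, height);             // copy the stitched image out
        delete proc;
    }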

For readers who want to dig in, the implementation, mf_processor_impl.cpp, is included as well:

              #include <libyuv/scale.h>
          #include <cstring>
          #include "mf_processor.h"

          using namespace std;
          using namespace libyuv;

          class MFProcessorImpl : public MFProcessor {
          private:
          int frameCount = 4;
          int currentIndex = 0;
          unsigned char *dstBuf = nullptr;
          unsigned char *tmpBuf = nullptr;

          public:
          MFProcessorImpl();

          MFProcessorImpl(int width, int height);

          ~MFProcessorImpl() override;

          void setFrameCount(int num) override;

          void setParams() override;

          void addFrame(unsigned char *src, int srcWidth, int srcHeight) override;

          void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
          int srcWidth, int srcHeight) override;

          void scale(unsigned char *src, int srcWidth, int srcHeight,
          unsigned char *dst, int dstWidth, int dstHeight) override;

          void process(unsigned char *output, int outputWidth, int outputHeight) override;

          void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
          int outputWidth, int outputHeight) override;

          static MFProcessor *createInstance(int width, int height);
          };

          MFProcessorImpl::MFProcessorImpl() = default;

          MFProcessorImpl::MFProcessorImpl(int width, int height) {
          if (dstBuf == nullptr) {
          dstBuf = new unsigned char[width * height * 3 / 2];
          }
          if (tmpBuf == nullptr) {
          tmpBuf = new unsigned char[width / 2 * height / 2 * 3 / 2];
          }
          }

          MFProcessorImpl::~MFProcessorImpl() {
          if (dstBuf != nullptr) {
          delete[] dstBuf;
          }

          if (tmpBuf != nullptr) {
          delete[] tmpBuf;
          }
          }

          void MFProcessorImpl::setFrameCount(int num) {
          frameCount = num;
          }

          void MFProcessorImpl::setParams() {

          }

          void MFProcessorImpl::addFrame(unsigned char *src, int srcWidth, int srcHeight) {
          int srcYCount = srcWidth * srcHeight;
          int srcUVCount = srcWidth * srcHeight / 4;
          int tmpWidth = srcWidth >> 1;
          int tmpHeight = srcHeight >> 1;
          int tmpYCount = tmpWidth * tmpHeight;
          int tmpUVCount = tmpWidth * tmpHeight / 4;
          //scale
          I420Scale(src, srcWidth,
          src + srcYCount, srcWidth >> 1,
          src + srcYCount + srcUVCount, srcWidth >> 1,
          srcWidth, srcHeight,
          tmpBuf, tmpWidth,
          tmpBuf + tmpYCount, tmpWidth >> 1,
          tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
          tmpWidth, tmpHeight,
          kFilterNone);

          //merge
          unsigned char *pDstY;
          unsigned char *pTmpY;
          for (int i = 0; i < tmpHeight; i++) {
          pTmpY = tmpBuf + i * tmpWidth;
          if (currentIndex == 0) {
          pDstY = dstBuf + i * srcWidth;
          } else if (currentIndex == 1) {
          pDstY = dstBuf + i * srcWidth + tmpWidth;
          } else if (currentIndex == 2) {
          pDstY = dstBuf + (i + tmpHeight) * srcWidth;
          } else {
          pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
          }
          memcpy(pDstY, pTmpY, tmpWidth);
          }

          int uvHeight = tmpHeight / 2;
          int uvWidth = tmpWidth / 2;
          unsigned char *pDstU;
          unsigned char *pDstV;
          unsigned char *pTmpU;
          unsigned char *pTmpV;
          for (int i = 0; i < uvHeight; i++) {
          pTmpU = tmpBuf + tmpYCount + uvWidth * i;
          pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
          if (currentIndex == 0) {
          pDstU = dstBuf + srcYCount + i * tmpWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
          } else if (currentIndex == 1) {
          pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
          } else if (currentIndex == 2) {
          pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
          } else {
          pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
          }
          memcpy(pDstU, pTmpU, uvWidth);
          memcpy(pDstV, pTmpV, uvWidth);
          }
          if (currentIndex < frameCount) currentIndex++;
          }

          void MFProcessorImpl::addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
          int srcWidth, int srcHeight) {
          int srcYCount = srcWidth * srcHeight;
          int srcUVCount = srcWidth * srcHeight / 4;
          int tmpWidth = srcWidth >> 1;
          int tmpHeight = srcHeight >> 1;
          int tmpYCount = tmpWidth * tmpHeight;
          int tmpUVCount = tmpWidth * tmpHeight / 4;
          //scale
          I420Scale(srcY, srcWidth,
          srcU, srcWidth >> 1,
          srcV, srcWidth >> 1,
          srcWidth, srcHeight,
          tmpBuf, tmpWidth,
          tmpBuf + tmpYCount, tmpWidth >> 1,
          tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
          tmpWidth, tmpHeight,
          kFilterNone);

          //merge
          unsigned char *pDstY;
          unsigned char *pTmpY;
          for (int i = 0; i < tmpHeight; i++) {
          pTmpY = tmpBuf + i * tmpWidth;
          if (currentIndex == 0) {
          pDstY = dstBuf + i * srcWidth;
          } else if (currentIndex == 1) {
          pDstY = dstBuf + i * srcWidth + tmpWidth;
          } else if (currentIndex == 2) {
          pDstY = dstBuf + (i + tmpHeight) * srcWidth;
          } else {
          pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
          }
          memcpy(pDstY, pTmpY, tmpWidth);
          }

          int uvHeight = tmpHeight / 2;
          int uvWidth = tmpWidth / 2;
          unsigned char *pDstU;
          unsigned char *pDstV;
          unsigned char *pTmpU;
          unsigned char *pTmpV;
          for (int i = 0; i < uvHeight; i++) {
          pTmpU = tmpBuf + tmpYCount + uvWidth * i;
          pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
          if (currentIndex == 0) {
          pDstU = dstBuf + srcYCount + i * tmpWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
          } else if (currentIndex == 1) {
          pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
          } else if (currentIndex == 2) {
          pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
          } else {
          pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
          pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
          }
          memcpy(pDstU, pTmpU, uvWidth);
          memcpy(pDstV, pTmpV, uvWidth);
          }
          if (currentIndex < frameCount) currentIndex++;
          }

          void MFProcessorImpl::scale(unsigned char *src, int srcWidth, int srcHeight,
          unsigned char *dst, int dstWidth, int dstHeight) {
          I420Scale(src, srcWidth,//Y
          src + srcWidth * srcHeight, srcWidth >> 1,//U
          src + srcWidth * srcHeight * 5 / 4, srcWidth >> 1,//V
          srcWidth, srcHeight,
          dst, dstWidth,//Y
          dst + dstWidth * dstHeight, dstWidth >> 1,//U
          dst + dstWidth * dstHeight * 5 / 4, dstWidth >> 1,//V
          dstWidth, dstHeight,
          kFilterNone);
          }

          void MFProcessorImpl::process(unsigned char *output, int outputWidth, int outputHeight) {
          memcpy(output, dstBuf, outputWidth * outputHeight * 3 / 2);
          currentIndex = 0;
          }

          void MFProcessorImpl::process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
          int outputWidth, int outputHeight) {
          int yCount = outputWidth * outputHeight;
          int uvCount = yCount / 4;
          memcpy(outputY, dstBuf, yCount);
          memcpy(outputU, dstBuf + yCount, uvCount);
          memcpy(outputV, dstBuf + yCount + uvCount, uvCount);
          currentIndex = 0;
          }

          MFProcessor* MFProcessor::createInstance(int width, int height) {
          return new MFProcessorImpl(width, height);
          }

2.3.3 mtkcam3/3rdparty/customer/cp_tp_mfnr/MFNRImpl.cpp
              #ifdef LOG_TAG
          #undef LOG_TAG
          #endif // LOG_TAG
          #define LOG_TAG "MFNRProvider"
          static const char *__CALLERNAME__ = LOG_TAG;

          //
          #include <mtkcam/utils/std/Log.h>
          //
          #include <stdlib.h>
          #include <utils/Errors.h>
          #include <utils/List.h>
          #include <utils/RefBase.h>
          #include <utils/Vector.h> // android::Vector, used for mvRequests below
          #include <sstream>
          #include <unordered_map> // std::unordered_map
          //
          #include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
          #include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
          //zHDR
          #include <mtkcam/utils/hw/HwInfoHelper.h> // NSCamHw::HwInfoHelper
          #include <mtkcam3/feature/utils/FeatureProfileHelper.h> //ProfileParam
          #include <mtkcam/drv/IHalSensor.h>
          //
          #include <mtkcam/utils/imgbuf/IIonImageBufferHeap.h>
          //
          #include <mtkcam/utils/std/Format.h>
          #include <mtkcam/utils/std/Time.h>
          //
          #include <mtkcam3/pipeline/hwnode/NodeId.h>
          //
          #include <mtkcam/utils/metastore/IMetadataProvider.h>
          #include <mtkcam/utils/metastore/ITemplateRequest.h>
          #include <mtkcam/utils/metastore/IMetadataProvider.h>
          #include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
          #include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
          //
          #include <isp_tuning/isp_tuning.h> //EIspProfile_T, EOperMode_*

          //
          #include <custom_metadata/custom_metadata_tag.h>

          //
          #include <cutils/properties.h> // property_get_bool() used in init()
          #include <libyuv.h>
          #include <mf_processor.h>

          using namespace NSCam;
          using namespace android;
          using namespace std;
          using namespace NSCam::NSPipelinePlugin;
          using namespace NSIspTuning;
          /******************************************************************************
          *
          ******************************************************************************/

          #define MY_LOGV(fmt, arg...) CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
          #define MY_LOGD(fmt, arg...) CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
          #define MY_LOGI(fmt, arg...) CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
          #define MY_LOGW(fmt, arg...) CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
          #define MY_LOGE(fmt, arg...) CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
          //
          #define MY_LOGV_IF(cond, ...) do { if ( (cond) ) { MY_LOGV(__VA_ARGS__); } }while(0)
          #define MY_LOGD_IF(cond, ...) do { if ( (cond) ) { MY_LOGD(__VA_ARGS__); } }while(0)
          #define MY_LOGI_IF(cond, ...) do { if ( (cond) ) { MY_LOGI(__VA_ARGS__); } }while(0)
          #define MY_LOGW_IF(cond, ...) do { if ( (cond) ) { MY_LOGW(__VA_ARGS__); } }while(0)
          #define MY_LOGE_IF(cond, ...) do { if ( (cond) ) { MY_LOGE(__VA_ARGS__); } }while(0)
          //
          #define ASSERT(cond, msg) do { if (!(cond)) { printf("Failed: %s\n", msg); return; } }while(0)

          #define __DEBUG // enable debug

          #ifdef __DEBUG
          #include <memory>
          #define FUNCTION_SCOPE \
          auto __scope_logger__ = [](char const* f)->std::shared_ptr<const char>{ \
          CAM_LOGD("(%d)[%s] + ", ::gettid(), f); \
          return std::shared_ptr<const char>(f, [](char const* p){CAM_LOGD("(%d)[%s] -", ::gettid(), p);}); \
          }(__FUNCTION__)

          #else
          #define FUNCTION_SCOPE
          #endif

          template <typename T>
          inline MBOOL
          tryGetMetadata(
          IMetadata* pMetadata,
          MUINT32 const tag,
          T & rVal
          )
          {
          if (pMetadata == NULL) {
          MY_LOGW("pMetadata == NULL");
          return MFALSE;
          }

          IMetadata::IEntry entry = pMetadata->entryFor(tag);
          if (!entry.isEmpty()) {
          rVal = entry.itemAt(0, Type2Type<T>());
          return MTRUE;
          }
          return MFALSE;
          }

          #define MFNR_FRAME_COUNT 4
          /******************************************************************************
          *
          ******************************************************************************/

          class MFNRProviderImpl : public MultiFramePlugin::IProvider {
          typedef MultiFramePlugin::Property Property;
          typedef MultiFramePlugin::Selection Selection;
          typedef MultiFramePlugin::Request::Ptr RequestPtr;
          typedef MultiFramePlugin::RequestCallback::Ptr RequestCallbackPtr;

          public:

          virtual void set(MINT32 iOpenId, MINT32 iOpenId2) {
          MY_LOGD("set openId:%d openId2:%d", iOpenId, iOpenId2);
          mOpenId = iOpenId;
          }

          virtual const Property& property() {
          FUNCTION_SCOPE;
          static Property prop;
          static bool inited;

          if (!inited) {
          prop.mName = "TP_MFNR";
          prop.mFeatures = TP_FEATURE_MFNR;
          prop.mThumbnailTiming = eTiming_P2;
          prop.mPriority = ePriority_Highest;
          prop.mZsdBufferMaxNum = 8; // maximum frames requirement
          prop.mNeedRrzoBuffer = MTRUE; // rrzo requirement for BSS
          inited = MTRUE;
          }
          return prop;
          };

          virtual MERROR negotiate(Selection& sel) {
          FUNCTION_SCOPE;

          IMetadata* appInMeta = sel.mIMetadataApp.getControl().get();
          tryGetMetadata<MINT32>(appInMeta, QXT_FEATURE_MFNR, mEnable);
          MY_LOGD("mEnable: %d", mEnable);
          if (!mEnable) {
          MY_LOGD("Force off TP_MFNR shot");
          return BAD_VALUE;
          }

          sel.mRequestCount = MFNR_FRAME_COUNT;

          MY_LOGD("mRequestCount=%d", sel.mRequestCount);
          sel.mIBufferFull
          .setRequired(MTRUE)
          .addAcceptedFormat(eImgFmt_I420) // I420 first
          .addAcceptedFormat(eImgFmt_YV12)
          .addAcceptedFormat(eImgFmt_NV21)
          .addAcceptedFormat(eImgFmt_NV12)
          .addAcceptedSize(eImgSize_Full);
          //sel.mIBufferSpecified.setRequired(MTRUE).setAlignment(16, 16);
          sel.mIMetadataDynamic.setRequired(MTRUE);
          sel.mIMetadataApp.setRequired(MTRUE);
          sel.mIMetadataHal.setRequired(MTRUE);
          if (sel.mRequestIndex == 0) {
          sel.mOBufferFull
          .setRequired(MTRUE)
          .addAcceptedFormat(eImgFmt_I420) // I420 first
          .addAcceptedFormat(eImgFmt_YV12)
          .addAcceptedFormat(eImgFmt_NV21)
          .addAcceptedFormat(eImgFmt_NV12)
          .addAcceptedSize(eImgSize_Full);
          sel.mOMetadataApp.setRequired(MTRUE);
          sel.mOMetadataHal.setRequired(MTRUE);
          } else {
          sel.mOBufferFull.setRequired(MFALSE);
          sel.mOMetadataApp.setRequired(MFALSE);
          sel.mOMetadataHal.setRequired(MFALSE);
          }

          return OK;
          };

          virtual void init() {
          FUNCTION_SCOPE;
          mDump = property_get_bool("vendor.debug.camera.mfnr.dump", 0);
          //nothing to do for MFNR
          };

          virtual MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback) {
          FUNCTION_SCOPE;
          MERROR ret = 0;
          // restore callback function for abort API
          if (pCallback != nullptr) {
          m_callbackprt = pCallback;
          }
          //maybe need to keep a copy in member<sp>
          IMetadata* pAppMeta = pRequest->mIMetadataApp->acquire();
          IMetadata* pHalMeta = pRequest->mIMetadataHal->acquire();
          IMetadata* pHalMetaDynamic = pRequest->mIMetadataDynamic->acquire();
          MINT32 processUniqueKey = 0;
          IImageBuffer* pInImgBuffer = NULL;
          uint32_t width = 0;
          uint32_t height = 0;
          if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
          MY_LOGE("cannot get unique about MFNR capture");
          return BAD_VALUE;
          }

          if (pRequest->mIBufferFull != nullptr) {
          pInImgBuffer = pRequest->mIBufferFull->acquire();
          width = pInImgBuffer->getImgSize().w;
          height = pInImgBuffer->getImgSize().h;
          MY_LOGD("[IN] Full image VA: 0x%p, Size(%dx%d), Format: %s",
          pInImgBuffer->getBufVA(0), width, height, format2String(pInImgBuffer->getImgFormat()));
          if (mDump) {
          char path[256];
          snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_in_%d_%dx%d.%s",
          pRequest->mRequestIndex, width, height, format2String(pInImgBuffer->getImgFormat()));
          pInImgBuffer->saveToFile(path);
          }
          }
          if (pRequest->mIBufferSpecified != nullptr) {
          IImageBuffer* pImgBuffer = pRequest->mIBufferSpecified->acquire();
          MY_LOGD("[IN] Specified image VA: 0x%p, Size(%dx%d)", pImgBuffer->getBufVA(0), pImgBuffer->getImgSize().w, pImgBuffer->getImgSize().h);
          }
          if (pRequest->mOBufferFull != nullptr) {
          mOutImgBuffer = pRequest->mOBufferFull->acquire();
          MY_LOGD("[OUT] Full image VA: 0x%p, Size(%dx%d)", mOutImgBuffer->getBufVA(0), mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h);
          }
          if (pRequest->mIMetadataDynamic != nullptr) {
          IMetadata *meta = pRequest->mIMetadataDynamic->acquire();
          if (meta != NULL)
          MY_LOGD("[IN] Dynamic metadata count: %u", meta->count());
          else
          MY_LOGD("[IN] Dynamic metadata Empty");
          }

          MY_LOGD("frame:%d/%d, width:%d, height:%d", pRequest->mRequestIndex, pRequest->mRequestCount, width, height);

          if (pInImgBuffer != NULL && mOutImgBuffer != NULL) {
          uint32_t yLength = pInImgBuffer->getBufSizeInBytes(0);
          uint32_t uLength = pInImgBuffer->getBufSizeInBytes(1);
          uint32_t vLength = pInImgBuffer->getBufSizeInBytes(2);
          uint32_t yuvLength = yLength + uLength + vLength;

          if (pRequest->mRequestIndex == 0) {//First frame
          //When width or height changed, recreate multiFrame
          if (mLatestWidth != width || mLatestHeight != height) {
          if (mMFProcessor != NULL) {
          delete mMFProcessor;
          mMFProcessor = NULL;
          }
          mLatestWidth = width;
          mLatestHeight = height;
          }
          if (mMFProcessor == NULL) {
          MY_LOGD("create mMFProcessor %dx%d", mLatestWidth, mLatestHeight);
          mMFProcessor = MFProcessor::createInstance(mLatestWidth, mLatestHeight);
          mMFProcessor->setFrameCount(pRequest->mRequestCount);
          }
          }

          mMFProcessor->addFrame((uint8_t *)pInImgBuffer->getBufVA(0),
          (uint8_t *)pInImgBuffer->getBufVA(1),
          (uint8_t *)pInImgBuffer->getBufVA(2),
          mLatestWidth, mLatestHeight);

          if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {//Last frame
          if (mMFProcessor != NULL) {
          mMFProcessor->process((uint8_t *)mOutImgBuffer->getBufVA(0),
          (uint8_t *)mOutImgBuffer->getBufVA(1),
          (uint8_t *)mOutImgBuffer->getBufVA(2),
          mLatestWidth, mLatestHeight);
          if (mDump) {
          char path[256];
          snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_out_%d_%dx%d.%s",
          pRequest->mRequestIndex, mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h,
          format2String(mOutImgBuffer->getImgFormat()));
          mOutImgBuffer->saveToFile(path);
          }
          } else {
          memcpy((uint8_t *)mOutImgBuffer->getBufVA(0),
          (uint8_t *)pInImgBuffer->getBufVA(0),
          pInImgBuffer->getBufSizeInBytes(0));
          memcpy((uint8_t *)mOutImgBuffer->getBufVA(1),
          (uint8_t *)pInImgBuffer->getBufVA(1),
          pInImgBuffer->getBufSizeInBytes(1));
          memcpy((uint8_t *)mOutImgBuffer->getBufVA(2),
          (uint8_t *)pInImgBuffer->getBufVA(2),
          pInImgBuffer->getBufSizeInBytes(2));
          }
          mOutImgBuffer = NULL;
          }
          }

          if (pRequest->mIBufferFull != nullptr) {
          pRequest->mIBufferFull->release();
          }
          if (pRequest->mIBufferSpecified != nullptr) {
          pRequest->mIBufferSpecified->release();
          }
          if (pRequest->mOBufferFull != nullptr) {
          pRequest->mOBufferFull->release();
          }
          if (pRequest->mIMetadataDynamic != nullptr) {
          pRequest->mIMetadataDynamic->release();
          }

          mvRequests.push_back(pRequest);
          MY_LOGD("collected request(%d/%d)", pRequest->mRequestIndex, pRequest->mRequestCount);
          if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {
          for (auto req : mvRequests) {
          MY_LOGD("callback request(%d/%d) %p", req->mRequestIndex, req->mRequestCount, pCallback.get());
          if (pCallback != nullptr) {
          pCallback->onCompleted(req, 0);
          }
          }
          mvRequests.clear();
          }
          return ret;
          };

          virtual void abort(vector<RequestPtr>& pRequests) {
          FUNCTION_SCOPE;

          bool bAbort = false;
          IMetadata *pHalMeta;
          MINT32 processUniqueKey = 0;

          for (auto req:pRequests) {
          bAbort = false;
          pHalMeta = req->mIMetadataHal->acquire();
          if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
          MY_LOGW("cannot get unique about MFNR capture");
          }

          if (m_callbackprt != nullptr) {
          MY_LOGD("m_callbackprt is %p", m_callbackprt.get());
          /*MFNR plugin callback request to MultiFrameNode */
          for (Vector<RequestPtr>::iterator it = mvRequests.begin() ; it != mvRequests.end(); it++) {
          if ((*it) == req) {
          mvRequests.erase(it);
          m_callbackprt->onAborted(req);
          bAbort = true;
          break;
          }
          }
          } else {
          MY_LOGW("callbackptr is null");
          }

          if (!bAbort) {
          MY_LOGW("Desire abort request[%d] is not found", req->mRequestIndex);
          }

          }
          };

          virtual void uninit() {
          FUNCTION_SCOPE;
          if (mMFProcessor != NULL) {
          delete mMFProcessor;
          mMFProcessor = NULL;
          }
          mLatestWidth = 0;
          mLatestHeight = 0;
          };

          virtual ~MFNRProviderImpl() {
          FUNCTION_SCOPE;
          };

          const char * format2String(MINT format) {
          switch(format) {
          case NSCam::eImgFmt_RGBA8888: return "rgba";
          case NSCam::eImgFmt_RGB888: return "rgb";
          case NSCam::eImgFmt_RGB565: return "rgb565";
          case NSCam::eImgFmt_STA_BYTE: return "byte";
          case NSCam::eImgFmt_YVYU: return "yvyu";
          case NSCam::eImgFmt_UYVY: return "uyvy";
          case NSCam::eImgFmt_VYUY: return "vyuy";
          case NSCam::eImgFmt_YUY2: return "yuy2";
          case NSCam::eImgFmt_YV12: return "yv12";
          case NSCam::eImgFmt_YV16: return "yv16";
          case NSCam::eImgFmt_NV16: return "nv16";
          case NSCam::eImgFmt_NV61: return "nv61";
          case NSCam::eImgFmt_NV12: return "nv12";
          case NSCam::eImgFmt_NV21: return "nv21";
          case NSCam::eImgFmt_I420: return "i420";
          case NSCam::eImgFmt_I422: return "i422";
          case NSCam::eImgFmt_Y800: return "y800";
          case NSCam::eImgFmt_BAYER8: return "bayer8";
          case NSCam::eImgFmt_BAYER10: return "bayer10";
          case NSCam::eImgFmt_BAYER12: return "bayer12";
          case NSCam::eImgFmt_BAYER14: return "bayer14";
          case NSCam::eImgFmt_FG_BAYER8: return "fg_bayer8";
          case NSCam::eImgFmt_FG_BAYER10: return "fg_bayer10";
          case NSCam::eImgFmt_FG_BAYER12: return "fg_bayer12";
          case NSCam::eImgFmt_FG_BAYER14: return "fg_bayer14";
          default: return "unknown";
          };
          };

          private:

          MINT32 mUniqueKey;
          MINT32 mOpenId;
          MINT32 mRealIso;
          MINT32 mShutterTime;
          MBOOL mZSDMode;
          MBOOL mFlashOn;

          Vector<RequestPtr> mvRequests;

          RequestCallbackPtr m_callbackprt;
          MFProcessor* mMFProcessor = NULL;
          IImageBuffer* mOutImgBuffer = NULL;
          uint32_t mLatestWidth = 0;
          uint32_t mLatestHeight = 0;
          MINT32 mEnable = 0;
          MINT32 mDump = 0;
          // add end
          };

          REGISTER_PLUGIN_PROVIDER(MultiFrame, MFNRProviderImpl);

The main functions:

• In property, the feature type is set to TP_FEATURE_MFNR, together with the name, priority, maximum frame count, and other attributes. Pay particular attention to mNeedRrzoBuffer; for a multi-frame algorithm it generally must be set to MTRUE (the rrzo buffer is required for BSS).

• In negotiate, we configure the formats and sizes of the input and output images the algorithm needs. Note that a multi-frame algorithm takes several input frames but produces only one output frame, so mOBufferFull is requested only when mRequestIndex == 0; in other words, only the first frame has both input and output, and the remaining frames have input only.
negotiate is also where we read the metadata passed down from the upper layer and decide, based on it, whether the algorithm should run.

• The algorithm is hooked up in process. The algorithm interface object is created on the first frame, addFrame is called for every frame, and on the last frame process is called to run the algorithm and fetch the output.

2.3.4 mtkcam3/3rdparty/customer/Android.mk

The target shared library that ultimately goes into vendor.img is libmtkcam_3rdparty.customer.so, so we also need to modify Android.mk to make the module libmtkcam_3rdparty.customer depend on libmtkcam.plugin.tp_mfnr.

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
          index ff5763d3c2..5e5dd6524f 100755
          --- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
          @@ -77,6 +77,12 @@ LOCAL_SHARED_LIBRARIES += libyuv.vendor
          LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_watermark
          endif

          +ifeq ($(QXT_MFNR_SUPPORT), yes)
          +LOCAL_SHARED_LIBRARIES += libmultiframe
          +LOCAL_SHARED_LIBRARIES += libyuv.vendor
          +LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_mfnr
          +endif
          +
          # for app super night ev decision (experimental for customer only)
          LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.control.customersupernightevdecision
          ################################################################################

2.3.5 Removing MTK's sample MFNR algorithm

Normally only one MFNR algorithm is allowed to run at a time, so MTK's sample MFNR algorithm has to be removed. A macro switch could be used for this; here we take the blunt approach and simply comment it out.

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
          index 4e2bc68dff..da98ebd0ad 100644
          --- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
          @@ -118,7 +118,7 @@ LOCAL_SHARED_LIBRARIES += libfeature.stereo.provider

          #-----------------------------------------------------------
          ifneq ($(strip $(MTKCAM_HAVE_MFB_SUPPORT)),0)
          -LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
          +#LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
          endif
          #4 Cell
          LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.remosaic

3. Custom metadata

Adding metadata lets the APP layer pass parameters down to the HAL layer and thereby control whether the algorithm runs. The APP layer sets parameters through CaptureRequest.Builder.set(@NonNull Key<T> key, T value). Since the stock MTK camera APP has no multi-frame noise reduction mode, we define our own metadata tag to verify the integration.

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h:

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
          index b020352092..714d05f350 100755
          --- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
          @@ -602,6 +602,7 @@ typedef enum mtk_camera_metadata_tag {
          MTK_FLASH_FEATURE_END,

          QXT_FEATURE_WATERMARK = QXT_FEATURE_START,
          + QXT_FEATURE_MFNR,
          QXT_FEATURE_END,
          } mtk_camera_metadata_tag_t;

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl:

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
          index 1b4fc75a0e..cba4511511 100755
          --- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
          @@ -95,6 +95,8 @@ _IMP_SECTION_INFO_(QXT_FEATURE, "com.qxt.camera")

          _IMP_TAG_INFO_( QXT_FEATURE_WATERMARK,
          MINT32, "watermark")
          +_IMP_TAG_INFO_( QXT_FEATURE_MFNR,
          + MINT32, "mfnr")

          /******************************************************************************
          *

vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h:

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
          index 33e581adfd..4f4772424d 100755
          --- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
          @@ -383,6 +383,8 @@ static auto& _QxtFeature_()
          sInst = {
          _TAG_(QXT_FEATURE_WATERMARK,
          "watermark", TYPE_INT32),
          + _TAG_(QXT_FEATURE_MFNR,
          + "mfnr", TYPE_INT32),
          };
          //
          return sInst;

vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp:

              diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
          index 591b25b162..9c3db8b1d1 100755
          --- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
          +++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
          @@ -583,10 +583,12 @@ updateData(IMetadata &rMetadata)
          {
          IMetadata::IEntry qxtAvailRequestEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_REQUEST_KEYS);
          qxtAvailRequestEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
          + qxtAvailRequestEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
          rMetadata.update(qxtAvailRequestEntry.tag(), qxtAvailRequestEntry);

          IMetadata::IEntry qxtAvailSessionEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_SESSION_KEYS);
          qxtAvailSessionEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
          + qxtAvailSessionEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
          rMetadata.update(qxtAvailSessionEntry.tag(), qxtAvailSessionEntry);
          }
          #endif
          @@ -605,7 +607,7 @@ updateData(IMetadata &rMetadata)
          // to store manual update metadata for sensor driver.
          IMetadata::IEntry availCharactsEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_CHARACTERISTICS_KEYS);
          availCharactsEntry.push_back(MTK_MULTI_CAM_FEATURE_SENSOR_MANUAL_UPDATED , Type2Type< MINT32 >());
          - rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
          + rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
          }
          if(physicIdsList.size() > 1)
          {

Once these steps are complete, the integration work is essentially done. Rebuild the system source; to save time you can also rebuild just vendor.img.

4. Invoking the algorithm from the APP

To verify the algorithm we do not need to write a new APP. We keep using the APP code from 《MTK HAL算法集成之单帧算法》 (the single-frame algorithm integration article) and simply change the value of KEY_WATERMARK to "com.qxt.camera.mfnr". Flash the full system image or just vendor.img onto the device, boot it, install the demo, and take a photo to check the result:

[Capture result image]

As the image shows, after integration this simulated MFNR multi-frame algorithm has scaled down four consecutive frames and stitched them into a single image.

5. Conclusion

Real multi-frame algorithms are more involved. For example, an MFNR algorithm may decide whether to run based on the exposure level, staying off in good light and kicking in when light is poor; an HDR algorithm may require several consecutive frames captured at different exposures; there may also be intelligent scene detection, and so on. However they vary, though, the overall integration steps for a multi-frame algorithm are much the same; for different requirements you may need to adapt the code accordingly.
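To illustrate, such an exposure gate could sit in negotiate next to the existing QXT_FEATURE_MFNR check. The sketch below is only a rough idea, not MTK's actual decision logic: it assumes the requested ISO can be read from the app metadata via MTK_SENSOR_SENSITIVITY, and the 800 threshold is an arbitrary placeholder; a real project would more likely rely on the 3A/AE results carried in the HAL metadata.

    // Hypothetical low-light gate inside negotiate(); the tag source and the
    // threshold are assumptions and must be adapted per project.
    MINT32 iso = 0;
    if (tryGetMetadata<MINT32>(appInMeta, MTK_SENSOR_SENSITIVITY, iso) && iso > 0 && iso < 800) {
        MY_LOGD("bright scene (iso=%d), skip TP_MFNR", iso);
        return BAD_VALUE; // reject the feature; this capture falls back to a single-frame path
    }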

Original article: https://www.jianshu.com/p/f0

