
Camera Internals: A ZSL Optimization for the Photo Capture Flow


1. Background

Taking photos is a basic feature of a phone. Optimizing capture performance mainly means optimizing the interval between tapping the shutter and the photo being generated, and finding where time can be saved. Below, the period from opening the camera to finishing a capture is broken down.

This process consists mainly of:

• Capture session configuration: the stage before preview starts.

• Preview: during this stage the camera continuously produces frames, which are displayed on a TextureView.

• Capture: from tapping the shutter to the final image being produced.

Note: the preview and capture flows are treated as one combined flow here, because that is exactly where the optimization in this article takes place.

2. Core Idea


Preview frames exist so the user can see that the camera is running, but preview frame data normally cannot be used directly as capture data. Why not? Because preview frames are small while capture frames are large, so they cannot simply be reused. But what if they could be reused, i.e. the preview frames could be consumed directly by the capture? That is exactly the focus of this article: reusing preview frame data directly.


To reuse preview frames directly, the first thing to guarantee is that the preview frame size equals the actual capture frame size; otherwise the preview frames we grab are useless. We therefore need to define our own preview surface, and its size must match that of the capture ImageReader's surface.
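The article gets this matching size from mCameraInfoCache.getYuvStream1Size(). As a rough, hypothetical sketch of how such a size could be chosen (characteristics stands for the opened camera's CameraCharacteristics; none of these names come from the original code):

    // Hypothetical sketch: pick the largest YUV_420_888 output size so the
    // "preview" YUV stream matches the full capture resolution.
    StreamConfigurationMap configMap = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size[] yuvSizes = configMap.getOutputSizes(ImageFormat.YUV_420_888);
    Size fullYuvSize = Collections.max(Arrays.asList(yuvSizes),
            Comparator.comparingLong((Size s) -> (long) s.getWidth() * s.getHeight()));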

2.1 Define the Full-Size YUV ImageReader

    private ImageReader mYuv1ImageReader;

When initializing, we need to create an instance of this ImageReader:

    mYuv1ImageReader = ImageReader.newInstance(
            mCameraInfoCache.getYuvStream1Size().getWidth(),
            mCameraInfoCache.getYuvStream1Size().getHeight(),
            ImageFormat.YUV_420_888,
            YUV1_IMAGEREADER_SIZE);
    mYuv1ImageReader.setOnImageAvailableListener(mYuv1ImageListener, mOpsHandler);

2.2 The ImageReader Callback

    ImageReader.OnImageAvailableListener mYuv1ImageListener =
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader reader) {
                    Image img = reader.acquireLatestImage();
                    if (img == null) {
                        Log.e(TAG, "Null image returned YUV1");
                        return;
                    }
                    if (mYuv1LastReceivedImage != null) {
                        mYuv1LastReceivedImage.close();
                    }
                    mYuv1LastReceivedImage = img;
                    if (++mYuv1ImageCounter % LOG_NTH_FRAME == 0) {
                        Log.v(TAG, "YUV1 buffer available, Frame #=" + mYuv1ImageCounter + " w=" + img.getWidth() + " h=" + img.getHeight() + " time=" + img.getTimestamp());
                    }
                }
            };

As long as preview is running, the underlying sensor keeps producing frames, so onImageAvailable(ImageReader reader) is called continuously. Notice that the callback stores each frame into an Image field, defined next.

2.3 Keep the Latest Image

    // Handle to last received Image: allows ZSL to be implemented.
    private Image mYuv1LastReceivedImage = null;

As its name suggests, mYuv1LastReceivedImage holds the most recently received preview frame, and that frame is full size, exactly the same size as the final output.

mYuv1LastReceivedImage guarantees that the latest preview frame is always kept locally. (Because one Image is always being held here, the ImageReader's maxImages, the YUV1_IMAGEREADER_SIZE argument above, must be at least 2 so that acquiring the next frame does not fail.)

2.4 Create the CaptureSession

When the camera is opened and the onOpened callback fires, we start creating the capture session:

    private CameraDevice.StateCallback mCameraStateCallback = new LoggingCallbacks.DeviceStateCallback() {
        @Override
        public void onOpened(CameraDevice camera) {
            super.onOpened(camera);
            startCaptureSession();
        }
    };

    // Create CameraCaptureSession. Callback will start repeating request with current parameters.
    private void startCaptureSession() {
        Log.v(TAG, "Configuring session..");
        List<Surface> outputSurfaces = new ArrayList<Surface>(4);

        outputSurfaces.add(mPreviewSurface);
        Log.v(TAG, " .. added SurfaceView " + mCameraInfoCache.getPreviewSize().getWidth() +
                " x " + mCameraInfoCache.getPreviewSize().getHeight());

        outputSurfaces.add(mYuv1ImageReader.getSurface());
        Log.v(TAG, " .. added YUV ImageReader " + mCameraInfoCache.getYuvStream1Size().getWidth() +
                " x " + mCameraInfoCache.getYuvStream1Size().getHeight());

        if (mIsDepthCloudSupported) {
            outputSurfaces.add(mDepthCloudImageReader.getSurface());
            Log.v(TAG, " .. added Depth cloud ImageReader");
        }

        if (SECOND_YUV_IMAGEREADER_STREAM) {
            outputSurfaces.add(mYuv2ImageReader.getSurface());
            Log.v(TAG, " .. added YUV ImageReader " + mCameraInfoCache.getYuvStream2Size().getWidth() +
                    " x " + mCameraInfoCache.getYuvStream2Size().getHeight());
        }

        if (SECOND_SURFACE_TEXTURE_STREAM) {
            outputSurfaces.add(mSurfaceTextureSurface);
            Log.v(TAG, " .. added SurfaceTexture");
        }

        if (RAW_STREAM_ENABLE && mCameraInfoCache.rawAvailable()) {
            outputSurfaces.add(mRawImageReader.getSurface());
            Log.v(TAG, " .. added Raw ImageReader " + mCameraInfoCache.getRawStreamSize().getWidth() +
                    " x " + mCameraInfoCache.getRawStreamSize().getHeight());
        }

        if (USE_REPROCESSING_IF_AVAIL && mCameraInfoCache.isYuvReprocessingAvailable()) {
            outputSurfaces.add(mJpegImageReader.getSurface());
            Log.v(TAG, " .. added JPEG ImageReader " + mCameraInfoCache.getJpegStreamSize().getWidth() +
                    " x " + mCameraInfoCache.getJpegStreamSize().getHeight());
        }

        try {
            if (USE_REPROCESSING_IF_AVAIL && mCameraInfoCache.isYuvReprocessingAvailable()) {
                InputConfiguration inputConfig = new InputConfiguration(mCameraInfoCache.getYuvStream1Size().getWidth(),
                        mCameraInfoCache.getYuvStream1Size().getHeight(), ImageFormat.YUV_420_888);
                mCameraDevice.createReprocessableCaptureSession(inputConfig, outputSurfaces,
                        mSessionStateCallback, null);
                Log.v(TAG, " Call to createReprocessableCaptureSession complete.");
            } else {
                mCameraDevice.createCaptureSession(outputSurfaces, mSessionStateCallback, null);
                Log.v(TAG, " Call to createCaptureSession complete.");
            }
        } catch (CameraAccessException e) {
            Log.e(TAG, "Error configuring ISP.");
        }
    }

To use the ZSL approach, an InputConfiguration must be supplied as input so that the underlying camera HAL can reuse this data; that is what actually achieves ZSL.

    InputConfiguration inputConfig = new InputConfiguration(mCameraInfoCache.getYuvStream1Size().getWidth(),
            mCameraInfoCache.getYuvStream1Size().getHeight(), ImageFormat.YUV_420_888);
    mCameraDevice.createReprocessableCaptureSession(inputConfig, outputSurfaces,
            mSessionStateCallback, null);
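createReprocessableCaptureSession is only valid on devices that advertise reprocessing support, which is presumably what mCameraInfoCache.isYuvReprocessingAvailable() checks. A minimal sketch of such a check (this helper is not from the original code):

    // Hypothetical helper: the ZSL reprocessing path requires the device to list
    // YUV reprocessing among its available capabilities.
    static boolean isYuvReprocessingAvailable(CameraCharacteristics characteristics) {
        int[] caps = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
        if (caps == null) {
            return false;
        }
        for (int cap : caps) {
            if (cap == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING) {
                return true;
            }
        }
        return false;
    }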

mSessionStateCallback reports the state of the current capture session; in its onReady callback we create the ImageWriter:

    ImageWriter mImageWriter;

    private CameraCaptureSession.StateCallback mSessionStateCallback = new LoggingCallbacks.SessionStateCallback() {
        @Override
        public void onReady(CameraCaptureSession session) {
            Log.v(TAG, "capture session onReady(). HAL capture session took: (" + (SystemClock.elapsedRealtime() - CameraTimer.t_session_go) + " ms)");
            mCurrentCaptureSession = session;
            issuePreviewCaptureRequest(false);

            if (session.isReprocessable()) {
                mImageWriter = ImageWriter.newInstance(session.getInputSurface(), IMAGEWRITER_SIZE);
                mImageWriter.setOnImageReleasedListener(
                        new ImageWriter.OnImageReleasedListener() {
                            @Override
                            public void onImageReleased(ImageWriter writer) {
                                Log.v(TAG, "ImageWriter.OnImageReleasedListener onImageReleased()");
                            }
                        }, null);
                Log.v(TAG, "Created ImageWriter.");
            }
            super.onReady(session);
        }
    };

session.getInputSurface() is the input surface created from the InputConfiguration supplied earlier, and the ImageWriter is bound to it. From then on, each cached last preview frame is queued into this ImageWriter and fed straight down to the lower layers.

2.5 Set Up the Preview

    try {
        CaptureRequest.Builder b1 = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        b1.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_USE_SCENE_MODE);
        b1.set(CaptureRequest.CONTROL_SCENE_MODE, CameraMetadata.CONTROL_SCENE_MODE_FACE_PRIORITY);
        if (AFtrigger) {
            b1.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_AUTO);
        } else {
            b1.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        }

        b1.set(CaptureRequest.NOISE_REDUCTION_MODE, mCaptureNoiseMode);
        b1.set(CaptureRequest.EDGE_MODE, mCaptureEdgeMode);
        b1.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, mCaptureFace ? mCameraInfoCache.bestFaceDetectionMode() : CaptureRequest.STATISTICS_FACE_DETECT_MODE_OFF);

        Log.v(TAG, " .. NR=" + mCaptureNoiseMode + " Edge=" + mCaptureEdgeMode + " Face=" + mCaptureFace);

        if (mCaptureYuv1) {
            b1.addTarget(mYuv1ImageReader.getSurface());
            Log.v(TAG, " .. YUV1 on");
        }

        if (mCaptureRaw) {
            b1.addTarget(mRawImageReader.getSurface());
        }

        b1.addTarget(mPreviewSurface);

        if (mIsDepthCloudSupported && !mCaptureYuv1 && !mCaptureYuv2 && !mCaptureRaw) {
            b1.addTarget(mDepthCloudImageReader.getSurface());
        }

        if (mCaptureYuv2) {
            if (SECOND_SURFACE_TEXTURE_STREAM) {
                b1.addTarget(mSurfaceTextureSurface);
            }
            if (SECOND_YUV_IMAGEREADER_STREAM) {
                b1.addTarget(mYuv2ImageReader.getSurface());
            }
            Log.v(TAG, " .. YUV2 on");
        }

        if (AFtrigger) {
            b1.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
            mCurrentCaptureSession.capture(b1.build(), mCaptureCallback, mOpsHandler);
            b1.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE);
        }
        mCurrentCaptureSession.setRepeatingRequest(b1.build(), mCaptureCallback, mOpsHandler);
    } catch (CameraAccessException e) {
        Log.e(TAG, "Could not access camera for issuePreviewCaptureRequest.");
    }

That is a lot of code, but the core is just three lines:

    CaptureRequest.Builder b1 = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    b1.addTarget(mYuv1ImageReader.getSurface());
    mCurrentCaptureSession.setRepeatingRequest(b1.build(), mCaptureCallback, mOpsHandler);

The surface of the full-size YUV ImageReader defined earlier is added as a target. Then, in the CaptureCallback, we need to grab the capture result, which will be needed again when taking the photo.

2.6 Handling the CaptureCallback

    private CameraCaptureSession.CaptureCallback mCaptureCallback = new LoggingCallbacks.SessionCaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
            if (!mFirstFrameArrived) {
                mFirstFrameArrived = true;
                long now = SystemClock.elapsedRealtime();
                long dt = now - CameraTimer.t0;
                long camera_dt = now - CameraTimer.t_session_go + CameraTimer.t_open_end - CameraTimer.t_open_start;
                long repeating_req_dt = now - CameraTimer.t_burst;
                Log.v(TAG, "App control to first frame: (" + dt + " ms)");
                Log.v(TAG, "HAL request to first frame: (" + repeating_req_dt + " ms) " + " Total HAL wait: (" + camera_dt + " ms)");
                mMyCameraCallback.receivedFirstFrame();
                mMyCameraCallback.performanceDataAvailable((int) dt, (int) camera_dt, null);
            }
            publishFrameData(result);
            // Used for reprocessing.
            mLastTotalCaptureResult = result;
            super.onCaptureCompleted(session, request, result);
        }
    };

mLastTotalCaptureResult is a capture result captured during preview; it is used in the later processing.

    // Last total capture result
    TotalCaptureResult mLastTotalCaptureResult;

2.7 Capture Handling

Now for the key step. Taking the photo here does not, of course, simply call the capture session's capture method the way a conventional flow would, because issuing a regular capture request means sending a new request and waiting for a brand-new frame. We already have the frame data, the frame saved earlier, and this is where it becomes crucial.

    void runReprocessing() {
        if (mYuv1LastReceivedImage == null) {
            Log.e(TAG, "No YUV Image available.");
            return;
        }
        mImageWriter.queueInputImage(mYuv1LastReceivedImage);
        Log.v(TAG, " Sent YUV1 image to ImageWriter.queueInputImage()");
        try {
            CaptureRequest.Builder b1 = mCameraDevice.createReprocessCaptureRequest(mLastTotalCaptureResult);
            // Todo: Read current orientation instead of just assuming device is in native orientation
            b1.set(CaptureRequest.JPEG_ORIENTATION, mCameraInfoCache.sensorOrientation());
            b1.set(CaptureRequest.JPEG_QUALITY, (byte) 95);
            b1.set(CaptureRequest.NOISE_REDUCTION_MODE, mReprocessingNoiseMode);
            b1.set(CaptureRequest.EDGE_MODE, mReprocessingEdgeMode);
            b1.addTarget(mJpegImageReader.getSurface());
            mCurrentCaptureSession.capture(b1.build(), mReprocessingCaptureCallback, mOpsHandler);
            mReprocessingRequestNanoTime = System.nanoTime();
        } catch (CameraAccessException e) {
            Log.e(TAG, "Could not access camera for issuePreviewCaptureRequest.");
        }
        mYuv1LastReceivedImage = null;
        Log.v(TAG, " Reprocessing request submitted.");
    }

mImageWriter.queueInputImage(mYuv1LastReceivedImage) places the last preview frame into the ImageWriter's input queue, and createReprocessCaptureRequest(mLastTotalCaptureResult) builds the reprocess request from the metadata of that same preview capture.
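The article does not show where runReprocessing() is called from; presumably the shutter button handler simply posts it to the camera ops thread, something like this (mShutterButton is hypothetical):

    // Hypothetical trigger: on shutter press, run the reprocessing path on the
    // camera ops thread instead of issuing a regular full capture request.
    mShutterButton.setOnClickListener(v -> mOpsHandler.post(() -> runReprocessing()));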

    // Reprocessing capture completed.
    private CameraCaptureSession.CaptureCallback mReprocessingCaptureCallback = new LoggingCallbacks.SessionCaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
            Log.v(TAG, "Reprocessing onCaptureCompleted()");
        }
    };

When the reprocessing finishes, onCaptureCompleted(...) is invoked, and the resulting JPEG is delivered to mJpegImageReader.
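The article does not show how the final JPEG is read back from mJpegImageReader; a rough sketch of such a listener, with the byte handling left as an assumption, might look like this:

    // Hypothetical listener on mJpegImageReader: the reprocessed JPEG arrives
    // here as a single-plane image whose buffer holds the encoded bytes.
    ImageReader.OnImageAvailableListener mJpegImageListener =
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader reader) {
                    Image img = reader.acquireLatestImage();
                    if (img == null) {
                        return;
                    }
                    ByteBuffer buffer = img.getPlanes()[0].getBuffer();
                    byte[] jpegBytes = new byte[buffer.remaining()];
                    buffer.get(jpegBytes);
                    img.close();
                    // Hand jpegBytes off to storage or the UI on a worker thread.
                }
            };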

3. Summary

How fast is the ZSL approach? Capturing one full-resolution photo takes about 150 ms, which is very fast. (The original article includes a screenshot of the timing here.)

The optimized flow can be summarized as follows: preview frames are streamed into a full-size YUV ImageReader, the most recent frame is always cached, and when the shutter is pressed that frame is queued into the ImageWriter and submitted as a reprocess request together with the cached TotalCaptureResult; the HAL then produces the JPEG, which is delivered through mJpegImageReader.

Author: 碼上就說
Link: https://www.jianshu.com/p/3beb7403025f
