
          CVPR 2022 Papers and Open-Source Projects Collection

          2022-03-28

          Machine Learning AI Algorithm Engineering (WeChat official account: datayx)


          [CVPR 2022 Open-Source Papers Directory]

          • Backbone

          • CLIP

          • GAN

          • NAS

          • NeRF

          • Visual Transformer

          • Vision-Language

          • Self-supervised Learning

          • Data Augmentation

          • Object Detection

          • Visual Tracking

          • Semantic Segmentation

          • Instance Segmentation

          • Few-Shot Segmentation

          • Video Understanding

          • Image Editing

          • Low-level Vision

          • Super-Resolution

          • 3D Point Cloud

          • 3D Object Detection

          • 3D Semantic Segmentation

          • 3D Object Tracking

          • 3D Human Pose Estimation

          • 3D Semantic Scene Completion

          • 3D Reconstruction

          • Camouflaged Object Detection

          • Depth Estimation

          • Stereo Matching

          • Lane Detection

          • Image Inpainting

          • Crowd Counting

          • Medical Image

          • Scene Graph Generation

          • Style Transfer

          • Weakly Supervised Object Localization

          • Hyperspectral Image Reconstruction

          • Watermarking

          • Datasets

          • New Tasks

          • Others


          Backbone

          A ConvNet for the 2020s

          Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs

          MPViT: Multi-Path Vision Transformer for Dense Prediction


          CLIP

          HairCLIP: Design Your Hair by Text and Reference Image

          • Paper: https://arxiv.org/abs/2112.05142

          • Code: https://github.com/wty-ustc/HairCLIP

          PointCLIP: Point Cloud Understanding by CLIP

          • Paper: https://arxiv.org/abs/2112.02413

          • Code: https://github.com/ZrrSkywalker/PointCLIP

          Blended Diffusion for Text-driven Editing of Natural Images

          • Paper: https://arxiv.org/abs/2111.14818

          • Code: https://github.com/omriav/blended-diffusion


          GAN

          SemanticStyleGAN: Learning Compositional Generative Priors for Controllable Image Synthesis and Editing

          • Homepage: https://semanticstylegan.github.io/

          • Paper: https://arxiv.org/abs/2112.02236

          • Demo: https://semanticstylegan.github.io/videos/demo.mp4

          Style Transformer for Image Inversion and Editing

          • Paper: https://arxiv.org/abs/2203.07932

          • Code: https://github.com/sapphire497/style-transformer


          NAS

          β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search

          • Paper: https://arxiv.org/abs/2203.01665

          • Code: https://github.com/Sunshine-Ye/Beta-DARTS

          ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image Prior

          • Paper: https://arxiv.org/abs/2111.15362

          • Code: None


          NeRF

          Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

          • Homepage: https://jonbarron.info/mipnerf360/

          • Paper: https://arxiv.org/abs/2111.12077

          • Demo: https://youtu.be/YStDS2-Ln1s

          Point-NeRF: Point-based Neural Radiance Fields

          • Homepage: https://xharlie.github.io/projects/project_sites/pointnerf/

          • Paper: https://arxiv.org/abs/2201.08845

          • Code: https://github.com/Xharlie/point-nerf

          NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images

          • Paper: https://arxiv.org/abs/2111.13679

          • Homepage: https://bmild.github.io/rawnerf/

          • Demo: https://www.youtube.com/watch?v=JtBS4KBcKVc

          Urban Radiance Fields

          • Homepage: https://urban-radiance-fields.github.io/

          • Paper: https://arxiv.org/abs/2111.14643

          • Demo: https://youtu.be/qGlq5DZT6uc

          Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation

          • Paper: https://arxiv.org/abs/2202.13162

          • Code: https://github.com/HexagonPrime/Pix2NeRF

          HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video

          • Homepage: https://grail.cs.washington.edu/projects/humannerf/

          • Paper: https://arxiv.org/abs/2201.04127

          • Demo: https://youtu.be/GM-RoZEymmw


          Visual Transformer

          Backbone

          MPViT: Multi-Path Vision Transformer for Dense Prediction

          • Paper: https://arxiv.org/abs/2112.11010

          • Code: https://github.com/youngwanLEE/MPViT

          Applications

          Language-based Video Editing via Multi-Modal Multi-Level Transformer

          • Paper: https://arxiv.org/abs/2104.01122

          • Code: None

          MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video

          • Paper: https://arxiv.org/abs/2203.00859

          • Code: None

          Embracing Single Stride 3D Object Detector with Sparse Transformer

          • Paper: https://arxiv.org/abs/2112.06375

          • Code: https://github.com/TuSimple/SST

          Multi-class Token Transformer for Weakly Supervised Semantic Segmentation

          • Paper: https://arxiv.org/abs/2203.02891

          • Code: https://github.com/xulianuwa/MCTformer

          Spatio-temporal Relation Modeling for Few-shot Action Recognition

          • Paper: https://arxiv.org/abs/2112.05132

          • Code: https://github.com/Anirudh257/strm

          Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction

          • Paper: https://arxiv.org/abs/2111.07910

          • Code: https://github.com/caiyuanhao1998/MST

          Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling

          • Homepage: https://point-bert.ivg-research.xyz/

          • Paper: https://arxiv.org/abs/2111.14819

          • Code: https://github.com/lulutang0608/Point-BERT

          GroupViT: Semantic Segmentation Emerges from Text Supervision

          • Homepage: https://jerryxu.net/GroupViT/

          • Paper: https://arxiv.org/abs/2202.11094

          • Demo: https://youtu.be/DtJsWIUTW-Y

          Restormer: Efficient Transformer for High-Resolution Image Restoration

          • Paper: https://arxiv.org/abs/2111.09881

          • Code: https://github.com/swz30/Restormer

          Splicing ViT Features for Semantic Appearance Transfer

          • Homepage: https://splice-vit.github.io/

          • Paper: https://arxiv.org/abs/2201.00424

          • Code: https://github.com/omerbt/Splice

          Self-supervised Video Transformer

          • Homepage: https://kahnchana.github.io/svt/

          • Paper: https://arxiv.org/abs/2112.01514

          • Code: https://github.com/kahnchana/svt

          Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers

          • Paper: https://arxiv.org/abs/2203.02664

          • Code: https://github.com/rulixiang/afa

          Accelerating DETR Convergence via Semantic-Aligned Matching

          • Paper: https://arxiv.org/abs/2203.06883

          • Code: https://github.com/ZhangGongjie/SAM-DETR

          DN-DETR: Accelerate DETR Training by Introducing Query DeNoising

          Style Transformer for Image Inversion and Editing

          • Paper: https://arxiv.org/abs/2203.07932

          • Code: https://github.com/sapphire497/style-transformer

          MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer

          • Paper: https://arxiv.org/abs/2203.10981

          • Code: https://github.com/kuanchihhuang/MonoDTR

          Mask Transfiner for High-Quality Instance Segmentation

          • Paper: https://arxiv.org/abs/2111.13673

          • Code: https://github.com/SysCV/transfiner


          Vision-Language

          Conditional Prompt Learning for Vision-Language Models

          • Paper: https://arxiv.org/abs/2203.05557

          • Code: https://github.com/KaiyangZhou/CoOp


          Self-supervised Learning

          UniVIP: A Unified Framework for Self-Supervised Visual Pre-training

          • Paper: https://arxiv.org/abs/2203.06965

          • Code: None

          Crafting Better Contrastive Views for Siamese Representation Learning

          HCSC: Hierarchical Contrastive Selective Coding


          Data Augmentation

          TeachAugment: Data Augmentation Optimization Using Teacher Knowledge

          • Paper: https://arxiv.org/abs/2202.12513

          • Code: https://github.com/DensoITLab/TeachAugment

          AlignMix: Improving representation by interpolating aligned features

          • Paper: https://arxiv.org/abs/2103.15375

          • Code: None


          Object Detection

          DN-DETR: Accelerate DETR Training by Introducing Query DeNoising

          Accelerating DETR Convergence via Semantic-Aligned Matching

          • Paper: https://arxiv.org/abs/2203.06883

          • Code: https://github.com/ZhangGongjie/SAM-DETR

          Localization Distillation for Dense Object Detection

          Focal and Global Knowledge Distillation for Detectors

          A Dual Weighting Label Assignment Scheme for Object Detection

          • Paper: https://arxiv.org/abs/2203.09730

          • Code: https://github.com/strongwolf/DW


          Visual Tracking

          Correlation-Aware Deep Tracking

          • Paper: https://arxiv.org/abs/2203.01666

          • Code: None

          TCTrack: Temporal Contexts for Aerial Tracking

          • Paper: https://arxiv.org/abs/2203.01885

          • Code: https://github.com/vision4robotics/TCTrack


          Semantic Segmentation

          Weakly-Supervised Semantic Segmentation

          Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation

          • Paper: https://arxiv.org/abs/2203.00962

          • Code: https://github.com/zhaozhengChen/ReCAM

          Multi-class Token Transformer for Weakly Supervised Semantic Segmentation

          • Paper: https://arxiv.org/abs/2203.02891

          • Code: https://github.com/xulianuwa/MCTformer

          Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers

          • Paper: https://arxiv.org/abs/2203.02664

          • Code: https://github.com/rulixiang/afa

          Semi-Supervised Semantic Segmentation

          ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation

          Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels

          Unsupervised Semantic Segmentation

          GroupViT: Semantic Segmentation Emerges from Text Supervision

          • Homepage: https://jerryxu.net/GroupViT/

          • Paper: https://arxiv.org/abs/2202.11094

          • Demo: https://youtu.be/DtJsWIUTW-Y


          Instance Segmentation

          E2EC: An End-to-End Contour-based Method for High-Quality High-Speed Instance Segmentation

          • Paper: https://arxiv.org/abs/2203.04074

          • Code: https://github.com/zhang-tao-whu/e2ec

          Mask Transfiner for High-Quality Instance Segmentation

          • Paper: https://arxiv.org/abs/2111.13673

          • Code: https://github.com/SysCV/transfiner

          Self-Supervised Instance Segmentation

          FreeSOLO: Learning to Segment Objects without Annotations

          • Paper: https://arxiv.org/abs/2202.12181

          • Code: None

          Video Instance Segmentation

          Efficient Video Instance Segmentation via Tracklet Query and Proposal

          • Homepage: https://jialianwu.com/projects/EfficientVIS.html

          • Paper: https://arxiv.org/abs/2203.01853

          • Demo: https://youtu.be/sSPMzgtMKCE


          Few-Shot Segmentation

          Learning What Not to Segment: A New Perspective on Few-Shot Segmentation

          • Paper: https://arxiv.org/abs/2203.07615

          • Code: https://github.com/chunbolang/BAM


          Video Understanding

          Self-supervised Video Transformer

          • Homepage: https://kahnchana.github.io/svt/

          • Paper: https://arxiv.org/abs/2112.01514

          • Code: https://github.com/kahnchana/svt

          Action Recognition

          Spatio-temporal Relation Modeling for Few-shot Action Recognition

          • Paper: https://arxiv.org/abs/2112.05132

          • Code: https://github.com/Anirudh257/strm

          Action Detection

          End-to-End Semi-Supervised Learning for Video Action Detection

          • Paper: https://arxiv.org/abs/2203.04251

          • Code: None


          Image Editing

          Style Transformer for Image Inversion and Editing

          • Paper: https://arxiv.org/abs/2203.07932

          • Code: https://github.com/sapphire497/style-transformer

          Blended Diffusion for Text-driven Editing of Natural Images

          • Paper: https://arxiv.org/abs/2111.14818

          • Code: https://github.com/omriav/blended-diffusion

          SemanticStyleGAN: Learning Compositional Generative Priors for Controllable Image Synthesis and Editing

          • Homepage: https://semanticstylegan.github.io/

          • Paper: https://arxiv.org/abs/2112.02236

          • Demo: https://semanticstylegan.github.io/videos/demo.mp4


          Low-level Vision

          ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image Prior

          • Paper: https://arxiv.org/abs/2111.15362

          • Code: None

          Restormer: Efficient Transformer for High-Resolution Image Restoration

          • Paper: https://arxiv.org/abs/2111.09881

          • Code: https://github.com/swz30/Restormer


          Super-Resolution

          Image Super-Resolution

          Learning the Degradation Distribution for Blind Image Super-Resolution

          • Paper: https://arxiv.org/abs/2203.04962

          • Code: https://github.com/greatlog/UnpairedSR

          Video Super-Resolution

          BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment


          3D Point Cloud

          Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling

          • Homepage: https://point-bert.ivg-research.xyz/

          • Paper: https://arxiv.org/abs/2111.14819

          • Code: https://github.com/lulutang0608/Point-BERT

          A Unified Query-based Paradigm for Point Cloud Understanding

          • Paper: https://arxiv.org/abs/2203.01252

          • Code: None

          CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding

          • Paper: https://arxiv.org/abs/2203.00680

          • Code: https://github.com/MohamedAfham/CrossPoint

          PointCLIP: Point Cloud Understanding by CLIP

          • Paper: https://arxiv.org/abs/2112.02413

          • Code: https://github.com/ZrrSkywalker/PointCLIP


          3D Object Detection

          Embracing Single Stride 3D Object Detector with Sparse Transformer

          • Paper: https://arxiv.org/abs/2112.06375

          • Code: https://github.com/TuSimple/SST

          Canonical Voting: Towards Robust Oriented Bounding Box Detection in 3D Scenes

          • Paper: https://arxiv.org/abs/2011.12001

          • Code: https://github.com/qq456cvb/CanonicalVoting

          MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer

          • Paper: https://arxiv.org/abs/2203.10981

          • Code: https://github.com/kuanchihhuang/MonoDTR


          3D Semantic Segmentation

          Scribble-Supervised LiDAR Semantic Segmentation

          • Paper: https://arxiv.org/abs/2203.08537

          • Dataset: https://github.com/ouenal/scribblekitti


          3D Object Tracking

          Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds

          • Paper: https://arxiv.org/abs/2203.01730

          • Code: https://github.com/Ghostish/Open3DSOT


          3D Human Pose Estimation

          MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation

          • Paper: https://arxiv.org/abs/2111.12707

          • Code: https://github.com/Vegetebird/MHFormer

          • Explainer (Chinese): https://zhuanlan.zhihu.com/p/439459426

          MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video

          • Paper: https://arxiv.org/abs/2203.00859

          • Code: None


          3D Semantic Scene Completion

          MonoScene: Monocular 3D Semantic Scene Completion

          • Paper: https://arxiv.org/abs/2112.00726

          • Code: https://github.com/cv-rits/MonoScene


          3D Reconstruction

          BANMo: Building Animatable 3D Neural Models from Many Casual Videos


          Camouflaged Object Detection

          Zoom In and Out: A Mixed-scale Triplet Network for Camouflaged Object Detection

          • Paper: https://arxiv.org/abs/2203.02688

          • Code: https://github.com/lartpang/ZoomNet


          Depth Estimation

          Monocular Depth Estimation

          NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation

          • Paper: https://arxiv.org/abs/2203.01502

          • Code: None

          OmniFusion: 360 Monocular Depth Estimation via Geometry-Aware Fusion

          • Paper: https://arxiv.org/abs/2203.00838

          • Code: None

          Toward Practical Self-Supervised Monocular Indoor Depth Estimation

          • Paper: https://arxiv.org/abs/2112.02306

          • Code: None


          Stereo Matching

          ACVNet: Attention Concatenation Volume for Accurate and Efficient Stereo Matching

          • Paper: https://arxiv.org/abs/2203.02146

          • Code: https://github.com/gangweiX/ACVNet


          Lane Detection

          Rethinking Efficient Lane Detection via Curve Modeling

          • Paper: https://arxiv.org/abs/2203.02431

          • Code: https://github.com/voldemortX/pytorch-auto-drive

          • Demo: https://user-images.githubusercontent.com/32259501/148680744-a18793cd-f437-461f-8c3a-b909c9931709.mp4


          Image Inpainting

          Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding

          • Paper: https://arxiv.org/abs/2203.00867

          • Code: https://github.com/DQiaole/ZITS_inpainting


          Crowd Counting

          Leveraging Self-Supervision for Cross-Domain Crowd Counting

          • Paper: https://arxiv.org/abs/2103.16291

          • Code: None


          Medical Image

          BoostMIS: Boosting Medical Image Semi-supervised Learning with Adaptive Pseudo Labeling and Informative Active Annotation

          • Paper: https://arxiv.org/abs/2203.02533

          • Code: None


          Scene Graph Generation

          SGTR: End-to-end Scene Graph Generation with Transformer

          • Paper: https://arxiv.org/abs/2112.12970

          • Code: None


          Style Transfer

          StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions

          • Homepage: https://lukashoel.github.io/stylemesh/

          • Paper: https://arxiv.org/abs/2112.01530

          • Code: https://github.com/lukasHoel/stylemesh

          • Demo: https://www.youtube.com/watch?v=ZqgiTLcNcks


          Weakly Supervised Object Localization

          Weakly Supervised Object Localization as Domain Adaption

          • Paper: https://arxiv.org/abs/2203.01714

          • Code: https://github.com/zh460045050/DA-WSOL_CVPR2022


          Hyperspectral Image Reconstruction

          Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction

          • Paper: https://arxiv.org/abs/2111.07910

          • Code: https://github.com/caiyuanhao1998/MST


          Watermarking

          Deep 3D-to-2D Watermarking: Embedding Messages in 3D Meshes and Extracting Them from 2D Renderings

          • Paper: https://arxiv.org/abs/2104.13450

          • Code: None


          Datasets

          It's About Time: Analog Clock Reading in the Wild

          • Homepage: https://charigyang.github.io/abouttime/

          • Paper: https://arxiv.org/abs/2111.09162

          • Code: https://github.com/charigyang/itsabouttime

          • Demo: https://youtu.be/cbiMACA6dRc

          Toward Practical Self-Supervised Monocular Indoor Depth Estimation

          • Paper: https://arxiv.org/abs/2112.02306

          • Code: None

          Kubric: A scalable dataset generator

          • Paper: https://arxiv.org/abs/2203.03570

          • Code: https://github.com/google-research/kubric

          Scribble-Supervised LiDAR Semantic Segmentation

          • Paper: https://arxiv.org/abs/2203.08537

          • Dataset: https://github.com/ouenal/scribblekitti


          New Tasks

          Language-based Video Editing via Multi-Modal Multi-Level Transformer

          • Paper: https://arxiv.org/abs/2104.01122

          • Code: None

          It's About Time: Analog Clock Reading in the Wild

          • Homepage: https://charigyang.github.io/abouttime/

          • Paper: https://arxiv.org/abs/2111.09162

          • Code: https://github.com/charigyang/itsabouttime

          • Demo: https://youtu.be/cbiMACA6dRc

          Splicing ViT Features for Semantic Appearance Transfer

          • Homepage: https://splice-vit.github.io/

          • Paper: https://arxiv.org/abs/2201.00424

          • Code: https://github.com/omerbt/Splice


          Others

          Kubric: A scalable dataset generator

          • Paper: https://arxiv.org/abs/2203.03570

          • Code: https://github.com/google-research/kubric


