CVPR 2022 Papers and Open-Source Projects Collection

[CVPR 2022 Open-Source Paper Directory]
Backbone
CLIP
GAN
NAS
NeRF
Visual Transformer
Vision-Language
Self-supervised Learning
Data Augmentation
Object Detection
Visual Tracking
Semantic Segmentation
Instance Segmentation
Few-Shot Segmentation
Video Understanding
Image Editing
Low-level Vision
Super-Resolution
3D Point Cloud
3D Object Detection
3D Semantic Segmentation
3D Object Tracking
3D Human Pose Estimation
3D Semantic Scene Completion
3D Reconstruction
Camouflaged Object Detection
Depth Estimation
Stereo Matching
Lane Detection
Image Inpainting
Crowd Counting
Medical Image
Scene Graph Generation
Weakly Supervised Object Localization
Hyperspectral Image Reconstruction
Watermarking
Datasets
New Tasks
Others
Backbone
A ConvNet for the 2020s
Paper: https://arxiv.org/abs/2201.03545
Code: https://github.com/facebookresearch/ConvNeXt
Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
Paper: https://arxiv.org/abs/2203.06717
Code: https://github.com/megvii-research/RepLKNet
Code2: https://github.com/DingXiaoH/RepLKNet-pytorch
MPViT: Multi-Path Vision Transformer for Dense Prediction
Paper: https://arxiv.org/abs/2112.11010
Code: https://github.com/youngwanLEE/MPViT
CLIP
HairCLIP: Design Your Hair by Text and Reference Image
Paper: https://arxiv.org/abs/2112.05142
Code: https://github.com/wty-ustc/HairCLIP
PointCLIP: Point Cloud Understanding by CLIP
Paper: https://arxiv.org/abs/2112.02413
Code: https://github.com/ZrrSkywalker/PointCLIP
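The CLIP-based entries above (HairCLIP, PointCLIP) all build on CLIP's shared image-text embedding space. For orientation, here is a minimal sketch of CLIP-style zero-shot scoring — cosine similarity between an L2-normalized image embedding and a bank of normalized text embeddings. The random vectors stand in for real encoder outputs; nothing here is taken from the listed papers:

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs):
    """CLIP-style zero-shot scoring: cosine similarity between one
    L2-normalized image embedding and each normalized text embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return txt @ img  # one similarity score per text prompt

rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)        # stand-in for an image encoder output
text_embs = rng.normal(size=(3, 512))   # stand-ins for encoded text prompts
scores = zero_shot_scores(image_emb, text_embs)
best = int(np.argmax(scores))           # index of the best-matching prompt
```

In real CLIP usage the 512-dimensional vectors come from the trained image and text encoders, and the argmax over prompts ("a photo of a …") gives the zero-shot class.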
GAN
Blended Diffusion for Text-driven Editing of Natural Images
Paper: https://arxiv.org/abs/2111.14818
Code: https://github.com/omriav/blended-diffusion
SemanticStyleGAN: Learning Compositional Generative Priors for Controllable Image Synthesis and Editing
Homepage: https://semanticstylegan.github.io/
Paper: https://arxiv.org/abs/2112.02236
Demo: https://semanticstylegan.github.io/videos/demo.mp4
Style Transformer for Image Inversion and Editing
Paper: https://arxiv.org/abs/2203.07932
Code: https://github.com/sapphire497/style-transformer
NAS
β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search
Paper: https://arxiv.org/abs/2203.01665
Code: https://github.com/Sunshine-Ye/Beta-DARTS
ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image Prior
Paper: https://arxiv.org/abs/2111.15362
Code: None
NeRF
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
Homepage: https://jonbarron.info/mipnerf360/
Paper: https://arxiv.org/abs/2111.12077
Demo: https://youtu.be/YStDS2-Ln1s
Point-NeRF: Point-based Neural Radiance Fields
Homepage: https://xharlie.github.io/projects/project_sites/pointnerf/
Paper: https://arxiv.org/abs/2201.08845
Code: https://github.com/Xharlie/point-nerf
NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images
Paper: https://arxiv.org/abs/2111.13679
Homepage: https://bmild.github.io/rawnerf/
Demo: https://www.youtube.com/watch?v=JtBS4KBcKVc
Urban Radiance Fields
Homepage: https://urban-radiance-fields.github.io/
Paper: https://arxiv.org/abs/2111.14643
Demo: https://youtu.be/qGlq5DZT6uc
Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation
Paper: https://arxiv.org/abs/2202.13162
Code: https://github.com/HexagonPrime/Pix2NeRF
HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video
Homepage: https://grail.cs.washington.edu/projects/humannerf/
Paper: https://arxiv.org/abs/2201.04127
Demo: https://youtu.be/GM-RoZEymmw
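The NeRF variants above all inherit the original NeRF's sinusoidal positional encoding, which lifts a low-dimensional coordinate into a higher-frequency space before it enters the MLP. A minimal numpy sketch of that standard encoding follows; the frequency count (L = 10) matches the original NeRF paper's choice for position, and nothing here is specific to the CVPR 2022 works listed:

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """NeRF-style encoding: for each coordinate c, emit
    [sin(2^k * pi * c), cos(2^k * pi * c)] for k = 0 .. num_freqs - 1."""
    x = np.asarray(x, dtype=np.float64)
    freqs = 2.0 ** np.arange(num_freqs) * np.pi   # (L,)
    angles = x[..., None] * freqs                 # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., D * 2L)

point = np.array([0.1, -0.4, 0.7])   # a 3D sample position
enc = positional_encoding(point)     # 3 coords * 2 * 10 freqs = 60 dims
```

Some implementations also concatenate the raw coordinate to the encoding; that detail varies by codebase.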
Visual Transformer
Backbone
MPViT: Multi-Path Vision Transformer for Dense Prediction
Paper: https://arxiv.org/abs/2112.11010
Code: https://github.com/youngwanLEE/MPViT
Applications
Language-based Video Editing via Multi-Modal Multi-Level Transformer
Paper: https://arxiv.org/abs/2104.01122
Code: None
MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video
Paper: https://arxiv.org/abs/2203.00859
Code: None
Embracing Single Stride 3D Object Detector with Sparse Transformer
Paper: https://arxiv.org/abs/2112.06375
Code: https://github.com/TuSimple/SST
Multi-class Token Transformer for Weakly Supervised Semantic Segmentation
Paper: https://arxiv.org/abs/2203.02891
Code: https://github.com/xulianuwa/MCTformer
Spatio-temporal Relation Modeling for Few-shot Action Recognition
Paper: https://arxiv.org/abs/2112.05132
Code: https://github.com/Anirudh257/strm
Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction
Paper: https://arxiv.org/abs/2111.07910
Code: https://github.com/caiyuanhao1998/MST
Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling
Homepage: https://point-bert.ivg-research.xyz/
Paper: https://arxiv.org/abs/2111.14819
Code: https://github.com/lulutang0608/Point-BERT
GroupViT: Semantic Segmentation Emerges from Text Supervision
Homepage: https://jerryxu.net/GroupViT/
Paper: https://arxiv.org/abs/2202.11094
Demo: https://youtu.be/DtJsWIUTW-Y
Restormer: Efficient Transformer for High-Resolution Image Restoration
Paper: https://arxiv.org/abs/2111.09881
Code: https://github.com/swz30/Restormer
Splicing ViT Features for Semantic Appearance Transfer
Homepage: https://splice-vit.github.io/
Paper: https://arxiv.org/abs/2201.00424
Code: https://github.com/omerbt/Splice
Self-supervised Video Transformer
Homepage: https://kahnchana.github.io/svt/
Paper: https://arxiv.org/abs/2112.01514
Code: https://github.com/kahnchana/svt
Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers
Paper: https://arxiv.org/abs/2203.02664
Code: https://github.com/rulixiang/afa
Accelerating DETR Convergence via Semantic-Aligned Matching
Paper: https://arxiv.org/abs/2203.06883
Code: https://github.com/ZhangGongjie/SAM-DETR
DN-DETR: Accelerate DETR Training by Introducing Query DeNoising
Paper: https://arxiv.org/abs/2203.01305
Code: https://github.com/FengLi-ust/DN-DETR
Style Transformer for Image Inversion and Editing
Paper: https://arxiv.org/abs/2203.07932
Code: https://github.com/sapphire497/style-transformer
MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer
Paper: https://arxiv.org/abs/2203.10981
Code: https://github.com/kuanchihhuang/MonoDTR
Mask Transfiner for High-Quality Instance Segmentation
Paper: https://arxiv.org/abs/2111.13673
Code: https://github.com/SysCV/transfiner
Vision-Language
Conditional Prompt Learning for Vision-Language Models
Paper: https://arxiv.org/abs/2203.05557
Code: https://github.com/KaiyangZhou/CoOp
Self-supervised Learning
UniVIP: A Unified Framework for Self-Supervised Visual Pre-training
Paper: https://arxiv.org/abs/2203.06965
Code: None
Crafting Better Contrastive Views for Siamese Representation Learning
Paper: https://arxiv.org/abs/2202.03278
Code: https://github.com/xyupeng/ContrastiveCrop
HCSC: Hierarchical Contrastive Selective Coding
Homepage: https://github.com/gyfastas/HCSC
Paper: https://arxiv.org/abs/2202.00455
Data Augmentation
TeachAugment: Data Augmentation Optimization Using Teacher Knowledge
Paper: https://arxiv.org/abs/2202.12513
Code: https://github.com/DensoITLab/TeachAugment
AlignMix: Improving representation by interpolating aligned features
Paper: https://arxiv.org/abs/2103.15375
Code: None
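AlignMix above extends the classic mixup augmentation by aligning features before interpolating them. As background, here is a minimal sketch of plain mixup (Zhang et al., 2018) — a convex combination of two inputs and their one-hot labels — which is the baseline such methods build on; the array shapes and the Beta parameter are illustrative choices, not taken from the paper:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Plain mixup: blend two samples and their one-hot labels using a
    mixing weight drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(32, 32, 3)), rng.normal(size=(32, 32, 3))
y1, y2 = np.eye(10)[3], np.eye(10)[7]   # one-hot labels for classes 3 and 7
x_mix, y_mix = mixup(x1, y1, x2, y2, rng=rng)
```

The mixed label stays a valid probability vector (it sums to 1), which is what lets the usual cross-entropy loss be applied unchanged.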
Object Detection
DN-DETR: Accelerate DETR Training by Introducing Query DeNoising
Paper: https://arxiv.org/abs/2203.01305
Code: https://github.com/FengLi-ust/DN-DETR
Accelerating DETR Convergence via Semantic-Aligned Matching
Paper: https://arxiv.org/abs/2203.06883
Code: https://github.com/ZhangGongjie/SAM-DETR
Localization Distillation for Dense Object Detection
Paper: https://arxiv.org/abs/2102.12252
Code: https://github.com/HikariTJU/LD
Focal and Global Knowledge Distillation for Detectors
Paper: https://arxiv.org/abs/2111.11837
Code: https://github.com/yzd-v/FGD
A Dual Weighting Label Assignment Scheme for Object Detection
Paper: https://arxiv.org/abs/2203.09730
Code: https://github.com/strongwolf/DW
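Label assignment and localization distillation, as in the detection entries above, are typically driven by the IoU between predicted and ground-truth boxes. A minimal sketch of IoU for axis-aligned boxes in (x1, y1, x2, y2) form — illustrative only; each listed paper defines its own weighting or distillation target on top of this basic quantity:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 region: inter = 1, union = 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```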
Visual Tracking
Correlation-Aware Deep Tracking
Paper: https://arxiv.org/abs/2203.01666
Code: None
TCTrack: Temporal Contexts for Aerial Tracking
Paper: https://arxiv.org/abs/2203.01885
Code: https://github.com/vision4robotics/TCTrack
Semantic Segmentation
Weakly-Supervised Semantic Segmentation
Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation
Paper: https://arxiv.org/abs/2203.00962
Code: https://github.com/zhaozhengChen/ReCAM
Multi-class Token Transformer for Weakly Supervised Semantic Segmentation
Paper: https://arxiv.org/abs/2203.02891
Code: https://github.com/xulianuwa/MCTformer
Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers
Paper: https://arxiv.org/abs/2203.02664
Code: https://github.com/rulixiang/afa
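The weakly-supervised entries above (ReCAM, MCTformer, AFA) all start from class activation maps (CAMs) derived from image-level labels. For reference, a minimal sketch of the classic CAM computation (Zhou et al., 2016) — the classifier weights of one class applied as a weighted sum over the final feature maps — with random tensors standing in for a real network's features and weights:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Classic CAM: weight the (C, H, W) feature maps by one class's
    classifier weights, sum over channels, then min-max normalize."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()   # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
features = rng.normal(size=(256, 14, 14))   # stand-in for final conv features
fc_weights = rng.normal(size=(20, 256))     # stand-in for a 20-class classifier
cam = class_activation_map(features, fc_weights, class_idx=5)
```

The papers above each refine this raw map (re-activation, class tokens, or pixel affinities) into pseudo-masks for training a segmentation network.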
Semi-Supervised Semantic Segmentation
ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation
Paper: https://arxiv.org/abs/2106.05095
Code: https://github.com/LiheYoung/ST-PlusPlus
Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels
Homepage: https://haochen-wang409.github.io/U2PL/
Paper: https://arxiv.org/abs/2203.03884
Code: https://github.com/Haochen-Wang409/U2PL
Unsupervised Semantic Segmentation
GroupViT: Semantic Segmentation Emerges from Text Supervision
Homepage: https://jerryxu.net/GroupViT/
Paper: https://arxiv.org/abs/2202.11094
Demo: https://youtu.be/DtJsWIUTW-Y
Instance Segmentation
E2EC: An End-to-End Contour-based Method for High-Quality High-Speed Instance Segmentation
Paper: https://arxiv.org/abs/2203.04074
Code: https://github.com/zhang-tao-whu/e2ec
Mask Transfiner for High-Quality Instance Segmentation
Paper: https://arxiv.org/abs/2111.13673
Code: https://github.com/SysCV/transfiner
Self-Supervised Instance Segmentation
FreeSOLO: Learning to Segment Objects without Annotations
Paper: https://arxiv.org/abs/2202.12181
Code: None
Video Instance Segmentation
Efficient Video Instance Segmentation via Tracklet Query and Proposal
Homepage: https://jialianwu.com/projects/EfficientVIS.html
Paper: https://arxiv.org/abs/2203.01853
Demo: https://youtu.be/sSPMzgtMKCE
Few-Shot Segmentation
Learning What Not to Segment: A New Perspective on Few-Shot Segmentation
Paper: https://arxiv.org/abs/2203.07615
Code: https://github.com/chunbolang/BAM
Video Understanding
Self-supervised Video Transformer
Homepage: https://kahnchana.github.io/svt/
Paper: https://arxiv.org/abs/2112.01514
Code: https://github.com/kahnchana/svt
Action Recognition
Spatio-temporal Relation Modeling for Few-shot Action Recognition
Paper: https://arxiv.org/abs/2112.05132
Code: https://github.com/Anirudh257/strm
Action Detection
End-to-End Semi-Supervised Learning for Video Action Detection
Paper: https://arxiv.org/abs/2203.04251
Code: None
Image Editing
Style Transformer for Image Inversion and Editing
Paper: https://arxiv.org/abs/2203.07932
Code: https://github.com/sapphire497/style-transformer
Blended Diffusion for Text-driven Editing of Natural Images
Paper: https://arxiv.org/abs/2111.14818
Code: https://github.com/omriav/blended-diffusion
SemanticStyleGAN: Learning Compositional Generative Priors for Controllable Image Synthesis and Editing
Homepage: https://semanticstylegan.github.io/
Paper: https://arxiv.org/abs/2112.02236
Demo: https://semanticstylegan.github.io/videos/demo.mp4
Low-level Vision
ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image Prior
Paper: https://arxiv.org/abs/2111.15362
Code: None
Restormer: Efficient Transformer for High-Resolution Image Restoration
Paper: https://arxiv.org/abs/2111.09881
Code: https://github.com/swz30/Restormer
Super-Resolution
Image Super-Resolution
Learning the Degradation Distribution for Blind Image Super-Resolution
Paper: https://arxiv.org/abs/2203.04962
Code: https://github.com/greatlog/UnpairedSR
Video Super-Resolution
BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment
Paper: https://arxiv.org/abs/2104.13371
Code: https://github.com/open-mmlab/mmediting
Code2: https://github.com/ckkelvinchan/BasicVSR_PlusPlus
3D Point Cloud
Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling
Homepage: https://point-bert.ivg-research.xyz/
Paper: https://arxiv.org/abs/2111.14819
Code: https://github.com/lulutang0608/Point-BERT
A Unified Query-based Paradigm for Point Cloud Understanding
Paper: https://arxiv.org/abs/2203.01252
Code: None
CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding
Paper: https://arxiv.org/abs/2203.00680
Code: https://github.com/MohamedAfham/CrossPoint
PointCLIP: Point Cloud Understanding by CLIP
Paper: https://arxiv.org/abs/2112.02413
Code: https://github.com/ZrrSkywalker/PointCLIP
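Point-cloud methods like those above are commonly evaluated (or supervised) with the Chamfer distance between two point sets. A minimal numpy sketch of the symmetric squared Chamfer distance; the point counts and the squared-distance convention here are illustrative choices, and none of this is taken from the listed papers:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric squared Chamfer distance between point sets a (N, 3) and
    b (M, 3): mean nearest-neighbor squared distance in both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
zero = chamfer_distance(cloud, cloud)            # identical sets -> 0
shifted = chamfer_distance(cloud, cloud + 1e-3)  # tiny shift -> small but > 0
```

The O(N·M) pairwise matrix is fine at this scale; production code uses KD-trees or GPU kernels for large clouds.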
3D Object Detection
Embracing Single Stride 3D Object Detector with Sparse Transformer
Paper: https://arxiv.org/abs/2112.06375
Code: https://github.com/TuSimple/SST
Canonical Voting: Towards Robust Oriented Bounding Box Detection in 3D Scenes
Paper: https://arxiv.org/abs/2011.12001
Code: https://github.com/qq456cvb/CanonicalVoting
MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer
Paper: https://arxiv.org/abs/2203.10981
Code: https://github.com/kuanchihhuang/MonoDTR
3D Semantic Segmentation
Scribble-Supervised LiDAR Semantic Segmentation
Paper: https://arxiv.org/abs/2203.08537
Dataset: https://github.com/ouenal/scribblekitti
3D Object Tracking
Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds
Paper: https://arxiv.org/abs/2203.01730
Code: https://github.com/Ghostish/Open3DSOT
3D Human Pose Estimation
MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
Paper: https://arxiv.org/abs/2111.12707
Code: https://github.com/Vegetebird/MHFormer
Chinese write-up: https://zhuanlan.zhihu.com/p/439459426
MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video
Paper: https://arxiv.org/abs/2203.00859
Code: None
3D Semantic Scene Completion
MonoScene: Monocular 3D Semantic Scene Completion
Paper: https://arxiv.org/abs/2112.00726
Code: https://github.com/cv-rits/MonoScene
3D Reconstruction
BANMo: Building Animatable 3D Neural Models from Many Casual Videos
Homepage: https://banmo-www.github.io/
Paper: https://arxiv.org/abs/2112.12761
Code: https://github.com/facebookresearch/banmo
Camouflaged Object Detection
Zoom In and Out: A Mixed-scale Triplet Network for Camouflaged Object Detection
Paper: https://arxiv.org/abs/2203.02688
Code: https://github.com/lartpang/ZoomNet
Depth Estimation
Monocular Depth Estimation
NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation
Paper: https://arxiv.org/abs/2203.01502
Code: None
OmniFusion: 360 Monocular Depth Estimation via Geometry-Aware Fusion
Paper: https://arxiv.org/abs/2203.00838
Code: None
Toward Practical Self-Supervised Monocular Indoor Depth Estimation
Paper: https://arxiv.org/abs/2112.02306
Code: None
Stereo Matching
ACVNet: Attention Concatenation Volume for Accurate and Efficient Stereo Matching
Paper: https://arxiv.org/abs/2203.02146
Code: https://github.com/gangweiX/ACVNet
Lane Detection
Rethinking Efficient Lane Detection via Curve Modeling
Paper: https://arxiv.org/abs/2203.02431
Code: https://github.com/voldemortX/pytorch-auto-drive
Demo: https://user-images.githubusercontent.com/32259501/148680744-a18793cd-f437-461f-8c3a-b909c9931709.mp4
Image Inpainting
Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding
Paper: https://arxiv.org/abs/2203.00867
Code: https://github.com/DQiaole/ZITS_inpainting
Crowd Counting
Leveraging Self-Supervision for Cross-Domain Crowd Counting
Paper: https://arxiv.org/abs/2103.16291
Code: None
Medical Image
BoostMIS: Boosting Medical Image Semi-supervised Learning with Adaptive Pseudo Labeling and Informative Active Annotation
Paper: https://arxiv.org/abs/2203.02533
Code: None
Scene Graph Generation
SGTR: End-to-end Scene Graph Generation with Transformer
Paper: https://arxiv.org/abs/2112.12970
Code: None
Style Transfer
StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions
Homepage: https://lukashoel.github.io/stylemesh/
Paper: https://arxiv.org/abs/2112.01530
Code: https://github.com/lukasHoel/stylemesh
Demo: https://www.youtube.com/watch?v=ZqgiTLcNcks
Weakly Supervised Object Localization
Weakly Supervised Object Localization as Domain Adaption
Paper: https://arxiv.org/abs/2203.01714
Code: https://github.com/zh460045050/DA-WSOL_CVPR2022
Hyperspectral Image Reconstruction
Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction
Paper: https://arxiv.org/abs/2111.07910
Code: https://github.com/caiyuanhao1998/MST
Watermarking
Deep 3D-to-2D Watermarking: Embedding Messages in 3D Meshes and Extracting Them from 2D Renderings
Paper: https://arxiv.org/abs/2104.13450
Code: None
Datasets
It's About Time: Analog Clock Reading in the Wild
Homepage: https://charigyang.github.io/abouttime/
Paper: https://arxiv.org/abs/2111.09162
Code: https://github.com/charigyang/itsabouttime
Demo: https://youtu.be/cbiMACA6dRc
Toward Practical Self-Supervised Monocular Indoor Depth Estimation
Paper: https://arxiv.org/abs/2112.02306
Code: None
Kubric: A scalable dataset generator
Paper: https://arxiv.org/abs/2203.03570
Code: https://github.com/google-research/kubric
Scribble-Supervised LiDAR Semantic Segmentation
Paper: https://arxiv.org/abs/2203.08537
Dataset: https://github.com/ouenal/scribblekitti
New Tasks
Language-based Video Editing via Multi-Modal Multi-Level Transformer
Paper: https://arxiv.org/abs/2104.01122
Code: None
It's About Time: Analog Clock Reading in the Wild
Homepage: https://charigyang.github.io/abouttime/
Paper: https://arxiv.org/abs/2111.09162
Code: https://github.com/charigyang/itsabouttime
Demo: https://youtu.be/cbiMACA6dRc
Others
Splicing ViT Features for Semantic Appearance Transfer
Homepage: https://splice-vit.github.io/
Paper: https://arxiv.org/abs/2201.00424
Code: https://github.com/omerbt/Splice
Kubric: A scalable dataset generator
Paper: https://arxiv.org/abs/2203.03570
Code: https://github.com/google-research/kubric
