Ten Suggestions for Reading AI Papers

1. GloVe (2014)
Paper: https://www.aclweb.org/anthology/D14-1162.pdf
Transformers: http://papers.nips.cc/paper/7181-attention-is-all-you-need
Word2Vec: https://arxiv.org/abs/1301.3781
BERT: https://arxiv.org/abs/1810.04805
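GloVe fits word vectors by weighted least squares on log co-occurrence counts. As a minimal sketch (in plain Python, with toy vectors and counts chosen purely for illustration), the per-pair loss from the paper looks like this:

```python
import math

def glove_weight(x, x_max=100.0, alpha=0.75):
    # Weighting function f(x) from the GloVe paper: down-weights rare
    # co-occurrences and caps the influence of very frequent ones.
    return (x / x_max) ** alpha if x < x_max else 1.0

def glove_loss(w_i, w_j, b_i, b_j, x_ij):
    # Weighted least-squares term for one word pair with
    # co-occurrence count x_ij; w are vectors, b are scalar biases.
    dot = sum(a * b for a, b in zip(w_i, w_j))
    return glove_weight(x_ij) * (dot + b_i + b_j - math.log(x_ij)) ** 2

# Toy example: two 3-d word vectors and a co-occurrence count of 5.
loss = glove_loss([0.1, 0.2, 0.3], [0.4, 0.5, 0.6], 0.0, 0.0, 5.0)
print(loss)
```

Training sums this term over all non-zero entries of the co-occurrence matrix and minimises it by SGD or AdaGrad.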
2. AdaBoost (1997)
Paper: https://www.sciencedirect.com/science/article/pii/S002200009791504X
Random forest classifier: https://en.wikipedia.org/wiki/Random_forest
Gradient boosting: https://en.wikipedia.org/wiki/Gradient_boosting
XGBoost package: https://github.com/dmlc/xgboost
LightGBM: https://github.com/microsoft/LightGBM
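The core loop of AdaBoost is short enough to sketch directly: train a weak learner on weighted data, weight the learner by its error, then up-weight the examples it got wrong. A minimal version with 1-D decision stumps (toy data and function names are illustrative, not from the paper):

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """Minimal AdaBoost (Freund & Schapire 1997) with 1-D threshold
    stumps. xs: list of floats; ys: labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                       # example weights
    ensemble = []                           # (alpha, threshold, polarity)
    for _ in range(rounds):
        # Pick the stump (threshold, polarity) with lowest weighted error.
        best = None
        for thr in xs:
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x >= thr else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)     # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)   # stump weight
        ensemble.append((alpha, thr, pol))
        # Re-weight: misclassified examples get heavier.
        w = [wi * math.exp(-alpha * y * (pol if x >= thr else -pol))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])
```

Gradient boosting, XGBoost, and LightGBM generalise this idea by fitting each new learner to the gradient of a loss rather than to re-weighted examples.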
3. Capsule Networks (2017)
Paper: https://arxiv.org/abs/1710.09829
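A capsule's output is a vector whose length encodes the probability that an entity is present, which requires the paper's "squash" non-linearity. A minimal sketch in plain Python:

```python
import math

def squash(s, eps=1e-9):
    # Squashing non-linearity from the capsule networks paper:
    # keeps the vector's direction but shrinks its length into [0, 1),
    # so the length can be read as an existence probability.
    norm_sq = sum(v * v for v in s)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * v for v in s]

v = squash([3.0, 4.0])          # length 5 before squashing
length = math.sqrt(sum(x * x for x in v))
print(length)                    # length is now 25/26, just under 1
```

In the full model this is applied after the routing-by-agreement step that replaces max-pooling.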
4. Relational Inductive Biases (2018)
Paper: https://arxiv.org/pdf/1806.01261.pdf
Top trends of graph machine learning in 2020: https://towardsdatascience.com/top-trends-of-graph-machine-learning-in-2020-1194175351a3
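The graph-network blocks the paper describes all reduce to the same pattern: nodes aggregate messages from their neighbours. A deliberately stripped-down sketch (scalar features, mean aggregation, self-loops — all simplifying assumptions, not the paper's full formulation):

```python
def propagate(features, edges):
    """One round of message passing on an undirected graph: each node's
    new feature is the mean of its own feature and its neighbours'."""
    neighbours = {i: [i] for i in range(len(features))}   # self-loop
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    out = []
    for i in range(len(features)):
        vals = [features[j] for j in neighbours[i]]
        out.append(sum(vals) / len(vals))
    return out

# Path graph 0-1-2 with scalar node features.
print(propagate([0.0, 1.0, 2.0], [(0, 1), (1, 2)]))
```

Real graph networks replace the mean with learned edge, node, and global update functions, but the relational structure of the computation is the same.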
5. Training BatchNorm and Only BatchNorm (2020)
Paper: https://arxiv.org/abs/2003.00152

The lottery ticket hypothesis: https://arxiv.org/abs/1803.03635
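The paper's striking result is that training only batch norm's two affine parameters per feature, with every other weight frozen at its random initialisation, still yields non-trivial accuracy. A minimal sketch of the batch-norm forward pass those parameters live in (single feature, plain Python, toy inputs):

```python
import math

def batch_norm(xs, gamma, beta, eps=1e-5):
    # Batch-norm forward pass for one feature: normalise the batch to
    # zero mean / unit variance, then apply the learnable scale gamma
    # and shift beta. In the paper's experiment, gamma and beta are the
    # ONLY parameters that receive gradient updates.
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return [gamma * (x - m) / math.sqrt(var + eps) + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0], gamma=2.0, beta=0.5)
print([round(v, 3) for v in out])
```

Since gamma can scale a feature toward zero, the paper observes that training only these parameters effectively learns which random features to keep and which to suppress.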
6. Spectral Norm (2018)
Paper: https://arxiv.org/abs/1802.05957
Wasserstein loss: https://arxiv.org/abs/1701.07875
Dropout: https://en.wikipedia.org/wiki/Dropout_(neural_networks)
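Spectral normalisation divides each weight matrix by its largest singular value, estimated cheaply by power iteration. A self-contained sketch of that estimate (small dense matrix, plain Python; the real method reuses the iteration vector across training steps):

```python
import math

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of matrix W (list of rows)
    by power iteration, as used to normalise discriminator weights
    in spectral normalisation."""
    n = len(W[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        # One step of power iteration on W^T W.
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
        v = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
    return math.sqrt(sum(x * x for x in u))

# Diagonal matrix: singular values are 3 and 1, so the norm is 3.
print(spectral_norm([[3.0, 0.0], [0.0, 1.0]]))
```

Dividing W by this value bounds the layer's Lipschitz constant, which stabilises GAN discriminator training.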
7. Perceptual Losses (2016)
https://medium.com/ml-cheat-sheet/winning-at-loss-functions-2-important-loss-functions-in-computer-vision-b2b9d293e15a
8.Nadam (2016)
Radam:https://arxiv.org/abs/1908.03265v1 Lookahead:https://arxiv.org/abs/1907.08610 Ranger:https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
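Nadam is Adam with Nesterov-style look-ahead applied to the first moment. A single-parameter sketch of the update rule (plain Python; the toy optimisation target is illustrative, and a common form of the Nesterov correction is assumed):

```python
import math

def nadam_step(theta, grad, m, v, t, lr=0.002,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam update for a scalar parameter: Adam's bias-corrected
    moment estimates plus a Nesterov look-ahead on the first moment."""
    m = beta1 * m + (1 - beta1) * grad           # first moment
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Nesterov correction: mix look-ahead momentum with the current grad.
    m_bar = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    theta = theta - lr * m_bar / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(x) = x^2 (gradient 2x) from x = 5.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = nadam_step(x, 2 * x, m, v, t, lr=0.05)
print(f"x after training: {x:.4f}")
```

RAdam, Lookahead, and Ranger (RAdam + Lookahead) are later refinements along the same line of adaptive momentum methods.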
9. The Double Descent Hypothesis (2019)
"Bag of tricks" for image classification: https://arxiv.org/abs/1812.01187
10. On the Measure of Intelligence (2019)
Paper: https://arxiv.org/abs/1911.01547
Video: https://youtu.be/UX8OubxsY8w
Recommended reading
Ensemble learning: an advanced machine learning method
