Grilled in a Big Tech Interview: You Don't Even Know Attention?
2024-10-08
These days, a paper's model hardly counts as innovative unless it has some modified Attention inside.
And in algorithm-role interviews, at the very least you will be asked to walk through a few attention variants, or to hand-code an attention function or an MQA algorithm on the spot.
So everyone, especially students, should build a solid foundation in Attention. To help, here is a collection of resources: 112 innovative studies spanning 11 mainstream attention mechanisms, including scaled dot-product attention, multi-head attention, cross attention, spatial attention, and channel attention, most recently updated in September 2024.
These are arguably the most cutting-edge attention resources in academia today. Every paper comes with corresponding code, so you can reimplement it by hand, which is very convenient.
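Since hand-coding attention comes up so often in interviews, here is a minimal NumPy sketch of scaled dot-product attention, i.e. softmax(QK^T / sqrt(d_k)) V from "Attention Is All You Need". The shapes and the toy random inputs are purely illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q: (..., seq_q, d_k), K: (..., seq_k, d_k), V: (..., seq_k, d_v)."""
    d_k = Q.shape[-1]
    # similarity scores, scaled by sqrt(d_k) to keep the softmax well-behaved
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        # disallowed positions get a large negative score, so softmax ~0
        scores = np.where(mask, scores, -1e9)
    weights = softmax(scores, axis=-1)   # (..., seq_q, seq_k)
    return weights @ V                   # (..., seq_q, d_v)

# quick check with random tensors
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 5, 64))   # batch=2, seq=5, d_k=64
K = rng.normal(size=(2, 5, 64))
V = rng.normal(size=(2, 5, 64))
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 5, 64)
```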
Scan the code to get the 112 innovative studies on 11 mainstream attention mechanisms, with papers and code ??
Scaled Dot-Product Attention
- LMLT: Low-to-high Multi-Level Vision Transformer for Image Super-Resolution (5 Sep 2024)
- MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation (4 Sep 2024)
- More is More: Addition Bias in Large Language Models (4 Sep 2024)
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture (4 Sep 2024)
- ...
Multi-Head Attention
- Multi-Head Attention Residual Unfolded Network for Model-Based Pansharpening (4 Sep 2024)
- From Text to Emotion: Unveiling the Emotion Annotation Capabilities of LLMs (30 Aug 2024)
- Temporal-Channel Modeling in Multi-head Self-Attention for Synthetic Speech Detection (25 Jun 2024)
- Improving Transformers with Dynamically Composable Multi-Head Attention (14 May 2024)
- ...
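For reference, here is a minimal sketch of multi-head self-attention that reuses the scaled_dot_product_attention function from the earlier sketch. The num_kv_heads parameter is my own illustrative addition: setting it to 1 collapses the layer into the MQA variant mentioned above (all query heads share a single K/V head), and intermediate values give GQA. The weight matrices are random stand-ins, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_head_attention(x, Wq, Wk, Wv, Wo, num_heads, num_kv_heads=None):
    """Minimal multi-head self-attention on x: (seq, d_model).
    num_kv_heads=1 gives MQA; 1 < num_kv_heads < num_heads gives GQA."""
    seq, d_model = x.shape
    num_kv_heads = num_kv_heads or num_heads
    d_head = d_model // num_heads

    # project, then split into heads: (heads, seq, d_head)
    Q = (x @ Wq).reshape(seq, num_heads, d_head).transpose(1, 0, 2)
    K = (x @ Wk).reshape(seq, num_kv_heads, d_head).transpose(1, 0, 2)
    V = (x @ Wv).reshape(seq, num_kv_heads, d_head).transpose(1, 0, 2)

    # MQA/GQA: replicate the shared K/V heads so each query head has one
    K = np.repeat(K, num_heads // num_kv_heads, axis=0)
    V = np.repeat(V, num_heads // num_kv_heads, axis=0)

    out = scaled_dot_product_attention(Q, K, V)         # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq, d_model)  # concatenate heads
    return out @ Wo                                     # output projection

# toy usage: 8 query heads, 1 shared K/V head (MQA)
d_model, num_heads, d_head = 64, 8, 8
x = rng.normal(size=(10, d_model))
Wq = rng.normal(size=(d_model, d_model)) * 0.1
Wk = rng.normal(size=(d_model, 1 * d_head)) * 0.1   # single K/V head
Wv = rng.normal(size=(d_model, 1 * d_head)) * 0.1
Wo = rng.normal(size=(d_model, d_model)) * 0.1
print(multi_head_attention(x, Wq, Wk, Wv, Wo, num_heads, num_kv_heads=1).shape)  # (10, 64)
```

The only change MQA makes is shrinking the K/V projections; this is why it cuts KV-cache memory at inference time while leaving the query side untouched.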
Strided Attention
- Vision-Language and Large Language Model Performance in Gastroenterology: GPT, Claude, Llama, Phi, Mistral, Gemma, and Quantized Models (25 Aug 2024)
- Unlocking Adversarial Suffix Optimization Without Affirmative Phrases: Efficient Black-box Jailbreaking via LLM as Optimizer (21 Aug 2024)
- Fine-tuning LLMs for Autonomous Spacecraft Control: A Case Study Using Kerbal Space Program (16 Aug 2024)
- FuseChat: Knowledge Fusion of Chat Models (15 Aug 2024)
- ...
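This heading covers many variants; as one common formulation, here is a sketch of the strided sparse pattern popularized by Sparse Transformers (Child et al., 2019), where each query attends to its recent neighbors plus every stride-th earlier position. The function name and layout are my own illustration; the resulting boolean mask can be passed as the mask argument of the scaled_dot_product_attention sketch above:

```python
import numpy as np

def strided_attention_mask(seq_len, stride):
    """Boolean mask where True = attention allowed.
    Each query i attends to keys within `stride` positions behind it
    (local window) and to keys at fixed stride offsets, always
    respecting causality (j <= i)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i
    local = (i - j) < stride            # recent positions
    strided = ((i - j) % stride) == 0   # every stride-th earlier position
    return causal & (local | strided)

mask = strided_attention_mask(8, stride=3)
print(mask.astype(int))  # 8x8 sparsity pattern, row = query, column = key
```

Because each row has O(stride + seq_len / stride) allowed entries instead of O(seq_len), this pattern reduces attention cost from quadratic toward O(n * sqrt(n)) when stride is about sqrt(n).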
Scan the code to get the 112 innovative studies on 11 mainstream attention mechanisms, with papers and code
