Annotated Research Papers: a repository 📖 of annotated papers that makes ML/DL papers 'easier' to read

ML/DL papers are useful, but they are hard to read :face_with_head_bandage:
(I once struggled with insomnia, and the PhD :student: on my team recommended reading papers as a cure :face_with_open_eyes_and_hand_over_mouth:)

New papers keep appearing at a geometric rate while my reading speed is, at best, a slow arithmetic progression,
so my paper queue just kept growing and growing.

I came across a repository in the TLDR newsletter that might help at least a little, so I'm sharing it here.
(There isn't that much in it yet, so I suspect it will mainly be useful when reading papers in a field you're encountering for the first time... :sweat_smile:)


The papers annotated so far are listed below.

| Field | Category | Annotated Paper |
|---|---|---|
| Computer Vision | | Adaptive Risk Minimization (Abstract) |
| | | Axial DeepLab (Code, Abstract) |
| | | ConvNext (Code, Abstract) |
| | | EfficientNetsV2 (Code, Abstract) |
| | Supervised | Flow-edge Guided Video Completion (Code, Abstract) |
| | | Is Batch Norm Unique? (Abstract) |
| | | Knowledge Distillation: A good teacher is patient and consistent (Code, Abstract) |
| | | RandConv (Code, Abstract) |
| | | Polyloss (Code, Abstract) |
| | | Scaling Down Deep Learning (Code, Abstract) |
| | | Segment Anything (Abstract) |
| | | Supervised Contrastive Learning (Code, Abstract) |
| | | Vision Transformer (Code, Abstract) |
| | | Are all negatives created equal in contrastive instance discrimination? (Abstract) |
| | | Towards Domain-Agnostic Contrastive Learning (Abstract) |
| | Self-Supervised | Emerging Properties in Self-Supervised Vision Transformers (Code, Abstract) |
| | | Decoder Denoising Pretraining (Abstract) |
| | | Masked Autoencoders (Code, Abstract) |
| | | Swav (Code, Abstract) |
| | | What Should Not Be Contrastive in Contrastive Learning (Abstract) |
| | Semi-Supervised | CoMatch (Code, Abstract) |
| Diffusion Models | | Understanding Diffusion Models (Abstract) |
| | | On the Importance of Noise Scheduling for Diffusion Models (Abstract) |
| | | Emergent Correspondence from Diffusion Models (Abstract) |
| GANs | | CycleGan (Code, Abstract) |
| Interpretability and Explainability | | What is being transferred in transfer learning? (Code, Abstract) |
| | | Explaining in Style (Code, Abstract) |
| NLP | | Do Language Embeddings Capture Scales? (Abstract) |
| | | mSLAM (Abstract) |
| | | Cramming (Abstract) |
| Speech | | SpeechStew (Abstract) |
| | | mSLAM (Abstract) |
| | | WhisperX (Code, Abstract) |
| Others | | Multi-Task Self-Training for Learning General Representations (Abstract) |
| | | Decoder Denoising Pretraining for Semantic Segmentation (Abstract) |
| | | Sigmoid Loss for Image-Text Pretraining (Abstract) |


The repository will reportedly keep being updated, so if you're interested, I recommend setting a Watch :eyeglasses: or a Star :star: on it. :smiley:
