Logical Scribbles — Miscellaneous (2)
Attention is all you need — Transformer, ViT (Vision Transformer)
https://arxiv.org/abs/2010.11929 — "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convoluti..
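The idea named in the paper title is to treat an image as a sequence of 16x16 patches, each flattened and linearly projected into a token embedding so that a standard Transformer encoder can consume it the way it consumes words. As a rough illustration of just that patch-tokenization step (not taken from the linked post; the 224x224 input size, numpy usage, and the random matrix standing in for the learned projection are my own assumptions), a minimal sketch:

```python
import numpy as np

def image_to_patch_tokens(image, patch_size=16, embed_dim=768, rng=None):
    """Split an HxWxC image into non-overlapping patches and project each
    flattened patch to an embedding vector (the ViT 'patch embedding' idea)."""
    rng = np.random.default_rng(0) if rng is None else rng
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0, "image must divide evenly into patches"

    # (H/p, p, W/p, p, C) -> (num_patches, p*p*C): one row per flattened patch
    patches = (image
               .reshape(H // patch_size, patch_size, W // patch_size, patch_size, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, patch_size * patch_size * C))

    # In the real model this projection is learned; a random matrix stands in here.
    W_proj = rng.normal(scale=0.02, size=(patches.shape[1], embed_dim))
    return patches @ W_proj  # shape: (num_patches, embed_dim)

tokens = image_to_patch_tokens(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 768): 14*14 patch tokens, like words in a sentence
```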
In this post, I will briefly organize the topics worth studying that I came across while browsing the internet, along with the resources that helped me while studying.
https://github.com/hoya012/deep_learning_object_detection — GitHub - hoya012/deep_learning_object_detection: a paper list of object detection using deep learning. The above..