Logical Scribbles

[Study] Study List Update

KimJake 2023. 12. 19. 18:47

Lately there seem to be a great many cases in computer vision that borrow ideas from natural language processing.

Learning How to Ask: Querying LMs with Mixtures of Soft Prompts

Natural-language prompts have recently been used to coax pretrained language models into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al., 2020). For example, language models…

arxiv.org
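
The rough idea, as I understand it: instead of committing to one prompt, learn several soft prompts plus mixture weights, and ensemble the frozen model's predictions across them. Below is a toy PyTorch sketch of just the mixture mechanics; the `SoftPromptMixture` name and the generic classifier-style `backbone` are my own assumptions, not the paper's code, which ensembles fill-in-the-blank LM predictions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftPromptMixture(nn.Module):
    """Toy sketch: ensemble a frozen model over K learned soft prompts."""

    def __init__(self, backbone: nn.Module, d_model: int,
                 n_prompts: int = 4, prompt_len: int = 5):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # the pretrained model stays frozen
        # K trainable soft prompts, each prompt_len vectors of size d_model
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, prompt_len, d_model))
        self.mix_logits = nn.Parameter(torch.zeros(n_prompts))  # mixture weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) token embeddings
        mix = F.softmax(self.mix_logits, dim=0)  # (n_prompts,)
        out = 0.0
        for k in range(self.prompts.size(0)):
            prefix = self.prompts[k].unsqueeze(0).expand(x.size(0), -1, -1)
            logits = self.backbone(torch.cat([prefix, x], dim=1))
            out = out + mix[k] * F.softmax(logits, dim=-1)  # mix in probability space
        return out  # mixture distribution over the label set
```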

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning…

arxiv.org
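
The trick in its simplest form: prepend a few trainable vectors and freeze everything else, so each task stores only `prefix_len × d_model` numbers instead of a full model copy. Note that the real method learns prefix activations for the keys and values of every attention layer; the input-only version below is my own simplification (closer to prompt tuning), just to show the shape of the idea.

```python
import torch
import torch.nn as nn

class PrefixTuned(nn.Module):
    """Frozen model + trainable prefix vectors (input-layer simplification)."""

    def __init__(self, model: nn.Module, d_model: int, prefix_len: int = 10):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False  # the big pretrained model is never modified
        # the only per-task parameters
        self.prefix = nn.Parameter(0.02 * torch.randn(prefix_len, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) embedded inputs
        prefix = self.prefix.unsqueeze(0).expand(x.size(0), -1, -1)
        return self.model(torch.cat([prefix, x], dim=1))
```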

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning

arxiv.org

The Power of Scale for Parameter-Efficient Prompt Tuning

In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through…

arxiv.org
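
The part I want to remember here: the optimizer sees only the soft prompt, nothing else. A self-contained toy loop follows; the `ToyBackbone` stand-in and all shapes are made up for illustration, and a real setup would feed the frozen LM's token embeddings as `x`.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_classes, prompt_len = 32, 2, 8

class ToyBackbone(nn.Module):
    """Stand-in for a frozen pretrained model: mean-pool tokens, classify."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(d_model, n_classes)
    def forward(self, x):  # x: (batch, seq, d_model)
        return self.head(x.mean(dim=1))

backbone = ToyBackbone()
for p in backbone.parameters():
    p.requires_grad = False  # frozen, as in prompt tuning

soft_prompt = nn.Parameter(0.02 * torch.randn(prompt_len, d_model))
opt = torch.optim.Adam([soft_prompt], lr=1e-2)  # ONLY the prompt is optimized

x = torch.randn(16, 10, d_model)        # fake token embeddings
y = torch.randint(0, n_classes, (16,))  # fake labels

for step in range(100):
    prefix = soft_prompt.unsqueeze(0).expand(x.size(0), -1, -1)
    logits = backbone(torch.cat([prefix, x], dim=1))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")  # only soft_prompt changed during training
```

Gradients flow through the frozen backbone into the prompt, and the paper's claim is that this tiny parameter budget becomes competitive with full fine-tuning as model scale grows.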