P-Distill: Efficient and Effective Prompt Tuning Using Knowledge Distillation
In the field of natural language processing (NLP), prompt-based learning is widely used for parameter-efficient learning. However, this method shortens the usable input length by the length of the attached prompt, leading to inefficient utilization of the input space. In this study, we propose P-Distill, a novel prompt tuning method based on knowledge distillation.
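To make the stated drawback concrete, the following is a minimal PyTorch sketch (not the paper's implementation; `SoftPromptEncoder` and all parameter values are illustrative) showing how prepending trainable soft prompt vectors consumes part of a model's fixed context window, leaving less room for the actual input tokens:

```python
import torch
import torch.nn as nn

class SoftPromptEncoder(nn.Module):
    """Illustrative sketch: trainable prompt embeddings prepended to the input.

    With a fixed context window of `max_len`, attaching `prompt_len` soft
    prompt vectors leaves only `max_len - prompt_len` positions for the
    actual input tokens -- the input-space inefficiency described above.
    """

    def __init__(self, prompt_len: int, hidden_dim: int, max_len: int = 512):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
        self.max_len = max_len

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim)
        batch = input_embeds.size(0)
        # Input space lost to the prompt: only this many token positions remain.
        budget = self.max_len - self.prompt.size(0)
        input_embeds = input_embeds[:, :budget]  # truncate input to fit the window
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# A batch already at the 512-token limit loses 100 input positions to the prompt.
encoder = SoftPromptEncoder(prompt_len=100, hidden_dim=768)
x = torch.randn(2, 512, 768)
out = encoder(x)
print(out.shape)  # torch.Size([2, 512, 768]): 100 prompt + 412 input positions
```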