Shinhyeok Hwang
LoRA: Low-Rank Adaptation of Large Language Models
Category: PEFT
Year/Month: 2021-06
Publications: Preprint
Code: https://github.com/microsoft/LoRA
Limitations of Adapters
Adapter layers add inference latency, since they are executed sequentially on top of the frozen model.
This forces a trade-off between model quality and efficiency.
LoRA's hypothesis
Assumes the weight update learned during adaptation has a low intrinsic rank (i.e., it lives in a low dimension).
→ decompose the update as ΔW = BA with a frozen pretrained weight W0, so the adapted layer computes W0x + BAx (minimal sketch below).
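
A minimal PyTorch sketch of this decomposition, assuming a plain linear layer; this is not the official microsoft/LoRA implementation, and names such as LoRALinear, rank, and alpha are illustrative choices.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear weight W0 plus a trainable low-rank update B @ A (hypothetical sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight W0: kept frozen during fine-tuning.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)

        # Low-rank factors: A (rank x in) starts random, B (out x rank) starts at zero,
        # so the update B @ A is zero at the beginning of training.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + scaling * B A x
        frozen = x @ self.weight.t()
        update = (x @ self.lora_A.t()) @ self.lora_B.t() * self.scaling
        return frozen + update


# Usage: only lora_A and lora_B receive gradients; after training, B @ A can be
# merged into W0, so unlike adapters there is no extra inference latency.
layer = LoRALinear(in_features=768, out_features=768, rank=8)
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```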