Result: Slimming Down LLMs Without Losing Their Minds

Title:
Slimming Down LLMs Without Losing Their Minds
Authors:
Publication Year:
2025
Collection:
Computer Science
Document Type:
Report; Working Paper
Accession Number:
edsarx.2506.10885
Database:
arXiv

Further Information

This paper investigates and validates the impact of fine-tuning on large language model performance, focusing on parameter-efficient methods (LoRA and QLoRA). We evaluate model capabilities across three key domains: (1) commonsense reasoning (HellaSwag), (2) mathematical reasoning (GSM8K), and (3) multi-domain knowledge (MMLU-CS). Our findings demonstrate that: (1) LoRA-based methods effectively improve task-specific performance while maintaining computational efficiency, and (2) performance strongly depends on the alignment between the fine-tuning dataset and the benchmark tasks. The study provides both theoretical insights into parameter-efficient mechanisms and practical guidance for developers implementing efficient LLM adaptation with limited resources.
Comment: 10 pages
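
The abstract centers on LoRA and QLoRA, which adapt a frozen base model by training small low-rank adapter matrices, optionally on top of 4-bit quantized weights. The sketch below is not taken from the paper; it only illustrates how such a setup is commonly wired together with the Hugging Face Transformers and PEFT libraries. The base model name, adapter rank, scaling factor, and target modules are illustrative assumptions, not the authors' settings.

```python
# Minimal LoRA / QLoRA adaptation sketch (illustrative, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.2-1B"  # placeholder base model, not from the paper

# QLoRA variant: load the frozen base weights in 4-bit NF4 precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA: train small low-rank adapter matrices instead of the full weight matrices.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed value)
    lora_alpha=32,                        # scaling factor (assumed value)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```

Dropping the quantization_config argument yields plain LoRA on full-precision base weights, which is the other configuration the abstract contrasts with QLoRA.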