Fine-tuning large language models (LLMs) on niche text corpora has emerged as a crucial step in improving their performance on research tasks. This study investigates fine-tuning methods for LLMs applied to scientific text. We analyze the impact of variables such as training data, model architecture, and hyperparameter tuning on the a