25.04.3255
000 - General Works
Academic Work - Undergraduate Thesis (S1) - Reference
Natural Language Processing (NLP)
This study aims to improve code generation performance by applying parameter-efficient fine-tuning with Quantized Low-Rank Adaptation (QLoRA). Large language models (LLMs) for code generation still face deployment challenges in low-resource environments, particularly because of their high computational demands. The core problem addressed here is the inefficiency and limited adaptability of pre-trained models when producing correct code under constrained resources, which degrades output quality and restricts accessibility for low-resource users. Previous approaches have relied on fine-tuning over large-scale datasets to mitigate these issues and have improved generalization, but they remain hindered by substantial memory usage and computational cost. This study analyzes a compact fine-tuning pipeline based on QLoRA, applied to the Qwen2.5-Coder-0.5B-Instruct model, to address these constraints and improve generation accuracy with minimal resource consumption. The proposed system was fine-tuned on two benchmark datasets, CodeExercise-Python-27k and Tested-22k-Python-Alpaca, and demonstrated pass@1 improvements on HumanEval of up to 7.3% and 4.3% compared to the base model. These findings confirm that fine-tuning on task-specific datasets with lightweight methods such as QLoRA significantly enhances the effectiveness of compact LLMs for code generation, contributing to advances in software engineering, AI-assisted learning, and resource-constrained development platforms.
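As a rough illustration of the fine-tuning approach described in the abstract, the sketch below loads Qwen2.5-Coder-0.5B-Instruct with 4-bit quantization and attaches low-rank adapters, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the rank, alpha, dropout, and target modules are illustrative assumptions, not the thesis's actual configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct"

# Quantize the frozen base weights to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Train small low-rank adapter matrices on top of the quantized model;
# the rank, alpha, and target modules below are assumed values for illustration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

After supervised fine-tuning on the instruction datasets, evaluation would follow the HumanEval protocol: with one generated sample per problem, pass@1 is simply the fraction of problems whose generated solution passes all unit tests.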
Available: 1 of 1 total copies
Name | MUHAMAD RAIHAN SYAHRIN SYA'BANI
Type | Individual
Editors | Donni Richasdy, Dana Sulistiyo Kusumo
Translator |
Name | Universitas Telkom, S1 Informatika
City | Bandung
Year | 2025
Rental price | IDR 0.00
Daily fine | IDR 0.00
Type | Non-Circulation