The juvereturn/Qwen3-0.6B-bible-assistant is a 0.6 billion parameter model fine-tuned from Qwen/Qwen3-0.6B using QLoRA (4-bit) supervised fine-tuning. This model is designed as a Bible assistant, capable of answering questions about the Bible and providing comforting responses. It is optimized for single-turn interactions, focusing on biblical knowledge and supportive communication.
Model Overview
The juvereturn/Qwen3-0.6B-bible-assistant is a specialized language model fine-tuned from the Qwen/Qwen3-0.6B architecture. It leverages QLoRA (4-bit) for efficient supervised fine-tuning, resulting in a compact model with 0.6 billion parameters and a context length of 32,768 tokens.
Key Capabilities
- Bible Assistance: The primary function of this model is to act as a Bible assistant, providing answers to questions related to biblical texts and themes.
- Comforting Responses: It is designed to offer supportive and comforting responses, in keeping with the project's belief in the Bible's power to heal.
- Single-Turn Interactions: The model is optimized for single-turn question-and-answer formats, making it suitable for direct queries.
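Because the model targets single-turn Q&A, each request should contain exactly one user question. As a rough illustration, the sketch below builds a single-turn prompt in the ChatML-style format used by the Qwen family; this is an assumption for illustration only, and the `build_single_turn_prompt` helper and system message are hypothetical. In practice, prefer `tokenizer.apply_chat_template` from the transformers library, which applies the model's own template.

```python
# Hypothetical sketch (not the model's official API): format one user
# question as a single-turn, ChatML-style chat prompt.

def build_single_turn_prompt(
    question: str,
    system: str = "You are a helpful Bible assistant.",
) -> str:
    """Return a single-turn prompt with system, user, and assistant roles."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_single_turn_prompt("What does Psalm 23 say about comfort?")
print(prompt)
```

With transformers installed, the same prompt would typically be produced by loading the tokenizer for juvereturn/Qwen3-0.6B-bible-assistant and calling `apply_chat_template` on a one-element message list, then passing the result to `model.generate`.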
Training Details
The model was trained for 3 epochs on the juvereturn/bible-dataset. Fine-tuning used LoRA with a rank of 16, an alpha of 32, and a learning rate of 2e-4.
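The stated hyperparameters can be expressed as a configuration sketch using the peft and transformers libraries. Only the rank (16), alpha (32), learning rate (2e-4), and epoch count (3) come from this card; the target modules, dropout, and precision settings below are assumptions, not details published by the author.

```python
# Config sketch of the stated hyperparameters; commented values marked
# "assumption" are illustrative and were not stated in the model card.

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                 # LoRA rank (from the card)
    lora_alpha=32,        # LoRA alpha (from the card)
    lora_dropout=0.05,    # assumption: a common default
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qwen3-0.6b-bible-assistant",
    num_train_epochs=3,   # from the card
    learning_rate=2e-4,   # from the card
    bf16=True,            # assumption: QLoRA is usually paired with mixed precision
)
```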
Intended Use
This model serves as a test model for the CS-394/594 class at DigiPen. Its core purpose is to assist users with Bible-related inquiries and provide empathetic, supportive communication based on biblical principles.
Limitations
Note that this model is a single-turn assistant: it is not designed or trained to support long, multi-turn conversations. Users should expect concise, direct responses to individual questions.