radan01/galaxy-qa-merged
The radan01/galaxy-qa-merged model is a 1.5-billion-parameter, Qwen2.5-based, instruction-tuned causal language model developed by radan01. It was finetuned with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. The model is optimized for question-answering tasks and supports a 32,768-token context length for processing longer inputs.
Overview
radan01/galaxy-qa-merged is a 1.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. Developed by radan01, it was finetuned using the Unsloth library together with Hugging Face's TRL library, a combination that enabled roughly 2x faster training. The model is released under the Apache-2.0 license.
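As a minimal loading sketch, assuming the checkpoint is published on the Hugging Face Hub under the id radan01/galaxy-qa-merged and follows standard transformers conventions (verify against the repository files):

```python
# Minimal loading sketch. Assumes the checkpoint is available on the
# Hugging Face Hub as "radan01/galaxy-qa-merged" and follows standard
# Qwen2.5 / transformers conventions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "radan01/galaxy-qa-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 1.5B model in bf16 fits on a single consumer GPU
    device_map="auto",
)
```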
Key Capabilities
- Efficient Training: Leverages Unsloth for significantly faster finetuning.
- Instruction-Tuned: Optimized for following instructions and generating relevant responses.
- Question Answering: Designed to excel in question-answering scenarios (see the usage sketch after this list).
- Context Length: Supports a substantial 32,768-token context window, allowing longer inputs to be processed.
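The following question-answering sketch continues from the loading example above. It assumes the tokenizer ships a Qwen2.5-style chat template; the question text and generation settings are illustrative, not prescribed by the model card.

```python
# Question-answering sketch, continuing from the loading example above.
# Assumes the tokenizer ships a Qwen2.5-style chat template; the question
# and generation settings are illustrative.
messages = [{"role": "user", "content": "What causes the seasons on Earth?"}]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
answer = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(answer)
```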
Good For
- Applications requiring a compact yet capable model for question-answering.
- Scenarios where the efficient inference and deployment footprint of a 1.5B-parameter model is beneficial.
- Developers looking for a Qwen2.5-based model finetuned with an accelerated Unsloth/TRL workflow.