radan01/galaxy-qa-merged
TEXT GENERATION | Open Weights | Warm
Concurrency Cost: 1 | Model Size: 1.5B | Quant: BF16 | Ctx Length: 32k | Published: Mar 26, 2026 | License: apache-2.0 | Architecture: Transformer
The radan01/galaxy-qa-merged model is a 1.5-billion-parameter, Qwen2.5-based, instruction-tuned causal language model developed by radan01. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training. The model is optimized for question-answering tasks and supports a 32,768-token context length for processing long inputs.
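The card itself does not include usage code, but given the model's Qwen2.5 lineage and BF16 weights, a minimal sketch of loading it with Hugging Face transformers might look like the following. The repo id is taken from the card title; the chat-template usage assumes the tokenizer ships a standard Qwen2.5-style template, which should be verified against the actual repository.

```python
# Minimal usage sketch (assumption: the model loads via standard
# transformers APIs and ships a Qwen2.5-style chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "radan01/galaxy-qa-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Format a question using the tokenizer's chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```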