radan01/galaxy-qa-merged

  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 1.5B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Mar 26, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights, Warm

The radan01/galaxy-qa-merged model is a 1.5-billion-parameter, Qwen2.5-based, instruction-tuned causal language model developed by radan01. It was finetuned with Unsloth and Hugging Face's TRL library, which the author reports enabled roughly 2x faster training. The model is optimized for question-answering tasks and supports a 32,768-token context length for processing long inputs.


Overview

radan01/galaxy-qa-merged is a 1.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. Developed by radan01, it was finetuned with the Unsloth library and Hugging Face's TRL library, a combination that roughly doubled training speed. The model is released under the Apache-2.0 license.
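Since the merged weights are published as a standard causal language model, they should load through the usual Hugging Face transformers API. The following is a minimal sketch, not taken from the model card: the repository id comes from this page, the chat-template call assumes the model retains Qwen2.5's instruction format, and the generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "radan01/galaxy-qa-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# Instruction-tuned checkpoints usually expect a chat-formatted prompt;
# this assumes the model inherits Qwen2.5's chat template.
messages = [{"role": "user", "content": "What causes the seasons on Earth?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```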

Key Capabilities

  • Efficient Training: Finetuned with Unsloth for significantly faster training (see the loading sketch after this list).
  • Instruction-Tuned: Optimized for following instructions and generating relevant responses.
  • Question Answering: Designed for question-answering scenarios.
  • Context Length: Supports a 32,768-token context window, allowing it to process long inputs.
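Because the card highlights Unsloth as the training stack, the same checkpoint can likely also be loaded through Unsloth for faster generation. This is a sketch under that assumption; the card itself documents only the training path, not an inference setup.

```python
from unsloth import FastLanguageModel

# Sketch: load the merged checkpoint through Unsloth's fast path.
# Assumes the merged BF16 weights are Unsloth-compatible; the card
# confirms Unsloth was used for training, not that inference was tested.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="radan01/galaxy-qa-merged",
    max_seq_length=32768,   # matches the card's 32k context length
    load_in_4bit=False,     # weights are published in BF16
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation mode
```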

Good For

  • Applications requiring a compact yet capable model for question-answering.
  • Scenarios where efficient inference and deployment of a 1.5B parameter model are beneficial.
  • Developers looking for a Qwen2.5-based model finetuned with an efficient, Unsloth-accelerated pipeline.