didula-wso2/exp_24_julia_active_fixsft_16bit_vllm

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Mar 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

didula-wso2/exp_24_julia_active_fixsft_16bit_vllm is a 7.6-billion-parameter Qwen2 model developed by didula-wso2. It was finetuned from didula-wso2/exp_24_sft-julia_sft_alpacasft_16bit_vllm using Unsloth and Hugging Face's TRL library, which the authors report enabled 2x faster training. It is intended for general-purpose language-generation tasks.


Model Overview


Key Characteristics

  • Architecture: Based on the Qwen2 model family.
  • Parameter Count: 7.6 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Finetuning used Unsloth together with Hugging Face's TRL library, which the authors report made training 2x faster.
  • License: Distributed under the Apache-2.0 license, allowing for broad use and modification.
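Given the characteristics above, the model can be loaded for local inference with the standard Hugging Face `transformers` API. This is a minimal sketch, not taken from the model card: the memory estimate assumes 16-bit weights (2 bytes per parameter), and the heavy imports are deferred so the helper can be inspected without `transformers` installed.

```python
# Hedged sketch: loading didula-wso2/exp_24_julia_active_fixsft_16bit_vllm
# with Hugging Face transformers. The memory estimate and dtype handling
# are assumptions, not statements from the model card.
MODEL_ID = "didula-wso2/exp_24_julia_active_fixsft_16bit_vllm"

def approx_weight_gib(n_params: float = 7.6e9, bytes_per_param: int = 2) -> float:
    """Rough weight-memory estimate: 7.6B params at bf16/fp16 (2 bytes each)."""
    return n_params * bytes_per_param / 2**30

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Heavy imports kept inside the function so the sketch can be read
    # (and the helper above exercised) without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example (downloads roughly 14-15 GiB of weights on first run):
# print(generate("Explain instruction tuning in one sentence."))
```

At 2 bytes per parameter the weights alone occupy roughly 14 GiB, so a single 24 GB GPU (or `device_map="auto"` sharding across several) is a reasonable starting assumption.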

Potential Use Cases

Given its foundation and finetuning approach, this model is suitable for a variety of general-purpose natural language processing tasks, including:

  • Text generation and completion.
  • Instruction following, depending on the specific finetuning objectives of the base model.
  • Applications requiring a moderately sized yet capable language model.
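Since the model name suggests it was prepared for vLLM serving, one natural deployment path is vLLM's OpenAI-compatible HTTP server. The sketch below assumes such a server has been started for this model and is listening on localhost:8000 (the port and endpoint path are assumptions); the request body follows the standard OpenAI chat-completion format and uses only the Python standard library.

```python
# Hedged sketch: querying the model through a vLLM OpenAI-compatible server,
# assumed to be running locally for didula-wso2/exp_24_julia_active_fixsft_16bit_vllm.
import json
import urllib.request

MODEL_ID = "didula-wso2/exp_24_julia_active_fixsft_16bit_vllm"

def build_payload(user_text: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
    }

def query(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    """POST a chat-completion request and return the generated text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server to be running):
# print(query("Summarize the Qwen2 architecture in one sentence."))
```

Going through the HTTP API rather than in-process loading keeps the 7.6B-parameter model resident in one serving process while multiple lightweight clients share it.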