didula-wso2/exp_24_julia_alpaca_extendedsft_16bit_vllm

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

didula-wso2/exp_24_julia_alpaca_extendedsft_16bit_vllm is a 7.6-billion-parameter Qwen2-based causal language model developed by didula-wso2. It was fine-tuned from unsloth/qwen2.5-coder-7b-instruct-bnb-4bit using Unsloth together with Hugging Face's TRL library, a combination Unsloth reports trains up to 2x faster. The model targets general language generation tasks, pairing the Qwen2 architecture with this efficient fine-tuning process.


Model Overview

didula-wso2/exp_24_julia_alpaca_extendedsft_16bit_vllm is a 7.6-billion-parameter language model based on the Qwen2 architecture, developed by didula-wso2 and fine-tuned from unsloth/qwen2.5-coder-7b-instruct-bnb-4bit.
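
The "_16bit_vllm" suffix suggests the fine-tuned weights were merged to 16-bit and exported for serving with vLLM. A minimal inference sketch under that assumption; the prompt and sampling settings are illustrative, and max_model_len mirrors the 32k context length listed above:

from vllm import LLM, SamplingParams

# Load the published checkpoint; 32768 matches the 32k context on this card.
llm = LLM(model="didula-wso2/exp_24_julia_alpaca_extendedsft_16bit_vllm",
          max_model_len=32768)

# Sampling settings are illustrative, not taken from the card.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

# The prompt is a guess at the model's domain (the name mentions Julia and Alpaca-style SFT).
outputs = llm.generate(
    ["Write a Julia function that returns the first n Fibonacci numbers."],
    params,
)
print(outputs[0].outputs[0].text)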

Key Characteristics

  • Architecture: Qwen2-based, indicating strong general language understanding and generation capabilities.
  • Parameter Count: 7.6 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which enabled a roughly 2x faster training process (a sketch of this workflow follows this list).
  • License: Released under the Apache-2.0 license, allowing for broad use and distribution.
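
The training-efficiency claim comes from Unsloth's LoRA-based SFT workflow. A minimal sketch of that recipe, assuming an Alpaca-style dataset (yahma/alpaca-cleaned is a hypothetical stand-in; the card does not name the actual training data) and the SFTTrainer keyword layout used in Unsloth's notebooks, which varies across TRL versions:

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model named on this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-coder-7b-instruct-bnb-4bit",
    max_seq_length=2048,  # assumption: the card does not state the training length
    load_in_4bit=True,
)

# Attach LoRA adapters (the usual Unsloth SFT setup; rank and targets are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Flatten an Alpaca-style dataset into the single "text" field SFTTrainer consumes.
def to_text(row):
    return {"text": f"### Instruction:\n{row['instruction']}\n\n### Response:\n{row['output']}"}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(per_device_train_batch_size=2, max_steps=60, output_dir="outputs"),
)
trainer.train()

After training, Unsloth can merge the LoRA adapters back into 16-bit weights (for example via model.save_pretrained_merged), which would be consistent with the "_16bit" naming of this checkpoint.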

Potential Use Cases

This model is suitable for a variety of natural language processing tasks, including:

  • Text Generation: Creating coherent and contextually relevant text.
  • Instruction Following: Responding to prompts and instructions effectively, building on its instruction-tuned base (see the chat-template sketch after this list).
  • General Conversational AI: Engaging in dialogue and providing informative responses.
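
Because the base model is instruction-tuned, prompts should go through the tokenizer's chat template rather than being passed as raw text. A minimal sketch with Hugging Face transformers, assuming the checkpoint keeps the Qwen2 chat template from its instruct base; the prompt is illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "didula-wso2/exp_24_julia_alpaca_extendedsft_16bit_vllm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Format the conversation with the model's chat template before generation.
messages = [{"role": "user", "content": "Explain what a closure is in Julia."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Decode only the newly generated tokens, skipping the prompt.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))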

Its efficient fine-tuning process makes it a reasonable candidate for applications that need a capable model without extensive resources for further training.