Yano/exp-0221-020a-balanced-alfworld-qwen2.5-7b

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Feb 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Yano/exp-0221-020a-balanced-alfworld-qwen2.5-7b is a 7.6-billion-parameter Qwen2.5-based causal language model developed by Yano. It was fine-tuned with Unsloth and Hugging Face's TRL library, which the authors report enables up to 2x faster training. The model is intended for general instruction-following tasks.


Model Overview

Yano/exp-0221-020a-balanced-alfworld-qwen2.5-7b is a 7.6-billion-parameter instruction-tuned model based on the Qwen2.5 architecture. Developed by Yano, it was fine-tuned from the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit checkpoint.

Key Characteristics

  • Efficient Training: Fine-tuned with Unsloth and Hugging Face's TRL library, which the authors report made training 2x faster.
  • Parameter Count: At 7.6 billion parameters, it balances capability against computational cost for a range of NLP tasks.
  • Context Length: Supports a context window of 32,768 tokens, allowing it to process and generate long sequences of text.
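As a Qwen2.5-based instruct model, it expects prompts in the ChatML format. A minimal sketch of building such a prompt in plain Python, assuming the standard Qwen2.5 chat template (the helper name is ours; in practice, prefer the tokenizer's `apply_chat_template()`, which applies the model's bundled template exactly):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5 instruct models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the Qwen2.5 architecture in one sentence.",
)
```

The trailing `<|im_start|>assistant\n` cues the model to produce the assistant turn; generation is typically stopped at the next `<|im_end|>` token.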

Use Cases

This model is suitable for general instruction-following applications where efficient training and a robust Qwen2.5 base are beneficial. Its optimized fine-tuning process makes it a strong candidate for developers looking for performant models with reduced training overhead.
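A minimal loading sketch using the standard Hugging Face transformers API (the `device_map` and `torch_dtype` choices are assumptions, not requirements of this model; the heavy imports are deferred into the function so the helper can be defined without transformers installed):

```python
MODEL_ID = "Yano/exp-0221-020a-balanced-alfworld-qwen2.5-7b"

def load_model(model_id: str = MODEL_ID):
    # Local imports keep this module importable without transformers;
    # actually loading the weights still requires it (and torch).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the checkpoint's stored precision
        device_map="auto",    # spread layers across available devices
    )
    return tokenizer, model

# Usage (downloads the model weights on first call):
# tokenizer, model = load_model()
# inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
# out = model.generate(**inputs, max_new_tokens=64)
# print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For long inputs, keep the tokenized prompt plus `max_new_tokens` within the 32,768-token context window.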