manhcuong2005/qwen2.5-1.5b-legal-edu-v2

Text Generation · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Concurrency Cost: 1 · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

manhcuong2005/qwen2.5-1.5b-legal-edu-v2 is a 1.5-billion-parameter Qwen2.5-based causal language model developed by manhcuong2005 and fine-tuned from unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enables roughly 2x faster fine-tuning, and is intended for general language understanding and generation tasks.


Model Overview

manhcuong2005/qwen2.5-1.5b-legal-edu-v2 is a 1.5-billion-parameter language model built on the Qwen2.5 architecture. Developed by manhcuong2005, it is a fine-tuned version of unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit.

Key Characteristics

  • Architecture: Qwen2.5-based, a causal language model.
  • Parameter Count: 1.5 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Fine-tuned with Unsloth and Hugging Face's TRL library, which enabled a roughly 2x faster training process.
  • Context Length: Supports a context window of 32,768 tokens (see the loading sketch below).
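
These characteristics can be checked at load time. The snippet below is a minimal sketch, assuming the checkpoint loads through transformers' standard AutoModelForCausalLM interface; the BF16 dtype and automatic device placement mirror the card's metadata, and the rest is illustrative.

```python
# Minimal loading sketch (assumes `transformers` and `torch` are installed
# and that the checkpoint behaves like a standard Qwen2.5-style causal LM).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "manhcuong2005/qwen2.5-1.5b-legal-edu-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # the card lists BF16
    device_map="auto",           # place on GPU if one is available
)

# The 32,768-token context window should be reflected in the config.
print(model.config.max_position_embeddings)  # expected: 32768
```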

Potential Use Cases

This model is suitable for a variety of general language understanding and generation tasks, particularly where efficient deployment and faster fine-tuning capabilities are beneficial. Its foundation on the Qwen2.5 architecture suggests strong performance in areas like:

  • Text generation and completion.
  • Instruction-following tasks (see the chat-style example after this list).
  • Summarization and question answering.
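
For the instruction-following case, a chat-formatted prompt is the usual entry point. The sketch below reuses the model and tokenizer loaded above and assumes the fine-tune kept Qwen2.5's chat template, which the card does not explicitly confirm; the example question is a placeholder.

```python
# Chat-style generation sketch; reuses `model` and `tokenizer` from above.
messages = [
    {"role": "user", "content": "Summarize the main steps for filing a small-claims case."},
]

# apply_chat_template formats the conversation with the model's template and
# appends the assistant turn marker when add_generation_prompt=True.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```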

The use of Unsloth for training indicates an emphasis on optimized resource utilization during the fine-tuning phase, making the model a practical choice for developers who need efficient training and deployment.
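
For reference, a typical Unsloth + TRL supervised fine-tuning run looks like the sketch below. This is not the author's actual recipe: the dataset path, LoRA rank, and hyperparameters are illustrative assumptions, and the SFTConfig usage assumes a recent TRL release.

```python
# Hypothetical Unsloth + TRL fine-tuning sketch (assumes `unsloth`, `trl`,
# and `datasets` are installed; all hyperparameters are illustrative only).
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Same base checkpoint the card names as the fine-tuning starting point.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,  # matches the bnb-4bit base checkpoint
)

# Attach LoRA adapters; Unsloth patches the model for faster training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # LoRA rank (assumption)
    lora_alpha=16,
    lora_dropout=0.0,   # Unsloth's fast path expects zero dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset path; expects a JSONL file with a "text" column.
dataset = load_dataset("json", data_files="legal_edu_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="qwen2.5-1.5b-legal-edu-v2",  # mirrors the card's name
    ),
)
trainer.train()
```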