manhcuong2005/qwen2.5-1.5b-legal-intent

Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Context Length: 32k · Published: Apr 16, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The manhcuong2005/qwen2.5-1.5b-legal-intent model is a 1.5 billion parameter Qwen2.5-based language model fine-tuned by manhcuong2005. It was trained with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster fine-tuning. The model is designed for legal intent classification and related legal-domain text processing.


Model Overview

The manhcuong2005/qwen2.5-1.5b-legal-intent is a 1.5 billion parameter language model based on the Qwen2.5 architecture. Developed by manhcuong2005, this model has been fine-tuned from unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit.

Key Capabilities

  • Specialized Fine-tuning: The model has been fine-tuned for a specific task, legal intent classification, rather than general-purpose chat.
  • Efficient Training: Fine-tuning was performed with Unsloth and Hugging Face's TRL library, which enabled a roughly 2x speedup in training.
  • Qwen2.5 Architecture: Leverages the foundational capabilities of the Qwen2.5 model series.

Good For

  • Legal Intent Classification: Based on its name, the model is optimized for understanding and classifying legal intent within text.
  • Lightweight Deployments: At 1.5B parameters, the model suits scenarios with limited compute, and its Unsloth-based training pipeline makes further fine-tuning fast and resource-efficient.
  • Domain-Specific NLP in Legal Contexts: Ideal for tasks such as document analysis, query understanding, or information extraction within the legal field.
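The use cases above can be exercised with a short Hugging Face Transformers sketch. Only the hub id comes from this page; the system message, prompt wording, and generation settings are assumptions for illustration, since the model card does not document an exact prompt format:

```python
# Hedged sketch: prompting the model for legal intent classification via Transformers.
# MODEL_ID is taken from this page; everything else is an illustrative assumption.
MODEL_ID = "manhcuong2005/qwen2.5-1.5b-legal-intent"


def build_messages(text: str) -> list[dict]:
    """Wrap a user utterance in a chat-style message list for apply_chat_template."""
    return [
        {"role": "system", "content": "You are a legal intent classifier."},
        {"role": "user", "content": f"Classify the legal intent of: {text}"},
    ]


def classify(text: str, max_new_tokens: int = 32) -> str:
    """Load the model and generate a classification for one input text."""
    # Heavy imports are kept inside the function so the prompt helper above
    # can be used without downloading the model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(text), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For a lease-dispute query, for example, `classify("I want to contest the terms of my lease.")` would return the model's label for that utterance; the exact label set depends on how the model was fine-tuned and is not documented on this page.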