vtgh1602/legal-llm-v1-qwen25-7b-merged

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The vtgh1602/legal-llm-v1-qwen25-7b-merged is a 7.6-billion-parameter Qwen2.5-based language model developed by vtgh1602. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for general language tasks, building on the Qwen2.5 architecture and an efficient fine-tuning process.


Overview

The vtgh1602/legal-llm-v1-qwen25-7b-merged is a 7.6-billion-parameter language model based on the Qwen2.5 architecture. Developed by vtgh1602, it was fine-tuned from unsloth/Qwen2.5-7B-bnb-4bit using the Unsloth library and Hugging Face's TRL library.

Key Characteristics

  • Base Model: Qwen2.5-7B
  • Parameter Count: 7.6 billion parameters
  • Training Efficiency: Fine-tuned with Unsloth, which enables roughly 2x faster training (see the sketch after this list).
  • Context Length: Supports a context length of 32768 tokens.
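
For context, here is a minimal sketch of the kind of Unsloth loading and LoRA setup described above. It assumes the public `FastLanguageModel` API; the LoRA hyperparameters are illustrative placeholders and are not taken from the actual training run.

```python
# Hypothetical sketch of an Unsloth-based fine-tuning setup; hyperparameters
# (rank, alpha) are assumptions, not values from this model's training run.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",  # 4-bit base model named in the card
    max_seq_length=32768,                      # matches the advertised context length
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

After training, the LoRA weights can be merged back into the base model, which is what the "-merged" suffix in the repository name suggests happened here.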

Intended Use

This model is suitable for a variety of general language understanding and generation tasks, benefiting from its Qwen2.5 foundation and efficient fine-tuning. Its Unsloth-based development reflects an emphasis on training efficiency.
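
A minimal generation sketch follows, assuming the merged weights load with the standard Hugging Face transformers API (Qwen2.5 checkpoints normally ship a chat template); the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vtgh1602/legal-llm-v1-qwen25-7b-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a single-turn prompt with the model's chat template.
messages = [{"role": "user",
             "content": "Explain what an indemnification clause does in plain language."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```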