Rayeeennnnnnnn/mizan-legal-tunisian

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 19, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The Rayeeennnnnnnn/mizan-legal-tunisian model is a 3.1 billion parameter, Qwen2-based, instruction-tuned causal language model developed by Rayeeennnnnnnn. It was fine-tuned using Unsloth and Hugging Face's TRL library, which accelerates training. The model is optimized specifically for legal applications in the Tunisian context, combining the Qwen2 architecture with an efficient fine-tuning process.


Model Overview

The Rayeeennnnnnnn/mizan-legal-tunisian model is a 3.1 billion parameter language model, fine-tuned from the unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit base model. Developed by Rayeeennnnnnnn, it builds on the Qwen2 architecture and was trained with significant efficiency gains from the Unsloth library and Hugging Face's TRL library.

Key Characteristics

  • Base Model: Qwen2.5-3B-Instruct, known for its strong general language understanding capabilities.
  • Efficient Fine-tuning: Uses Unsloth for accelerated training, with a reported 2x speed improvement.
  • Parameter Count: 3.1 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a substantial context window of 32768 tokens.
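Because the base model comes from the Qwen2.5 instruct family, conversations are expected to follow the ChatML turn format (in practice, `tokenizer.apply_chat_template` handles this automatically). The sketch below shows the format explicitly; the system and user strings are illustrative examples, not taken from the model card.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in ChatML, the chat template used by
    Qwen2.5 instruct models (inherited by this fine-tune from its base)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

# Hypothetical prompt for a Tunisian-law question:
prompt = build_chatml_prompt(
    "You are an assistant specialized in Tunisian law.",
    "What is the notice period for terminating a residential lease?",
)
```

With a 32768-token context window, substantial amounts of statute or contract text can be placed in the user turn alongside the question.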

Intended Use

This model is designed and fine-tuned specifically for applications requiring legal understanding and generation within the Tunisian legal framework. Its specialized training makes it suitable for tasks that require nuanced comprehension of Tunisian legal texts and contexts.
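The model card ships no usage code, but a model like this would typically be loaded through the standard `transformers` API. The sketch below is a minimal, unofficial example: the helper name and system prompt are illustrative, and it assumes the repository works with `AutoModelForCausalLM` like other Qwen2.5 fine-tunes (the first call downloads roughly 6 GB of BF16 weights).

```python
MODEL_ID = "Rayeeennnnnnnn/mizan-legal-tunisian"

def ask_tunisian_law(question: str, max_new_tokens: int = 256) -> str:
    """Answer a single Tunisian-law question with the fine-tuned model.

    Requires `torch` and `transformers`; imports are deferred so the
    heavyweight dependencies load only when the helper is actually called.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Hypothetical system prompt; adjust to your deployment.
    messages = [
        {"role": "system", "content": "You are an assistant specialized in Tunisian law."},
        {"role": "user", "content": question},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Keeping the imports inside the helper means the module can be imported cheaply; the model and tokenizer are fetched only on first use.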