Nina2811aw/qwen-32B-legal
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Nina2811aw/qwen-32B-legal is a 32.8 billion parameter Qwen2.5-based language model fine-tuned by Nina2811aw. It was trained with Unsloth and Hugging Face's TRL library, a combination geared toward faster, memory-efficient fine-tuning. The model is adapted from unsloth/qwen2.5-32b-instruct-bnb-4bit, so it retains the base model's instruction-following behavior while being specialized for the legal domain. Its primary strength is this legal fine-tuning, backed by a large parameter count for nuanced understanding.
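A minimal usage sketch, assuming the model loads through Hugging Face `transformers` like other Qwen2.5 instruct fine-tunes and inherits the base model's ChatML prompt format. The system prompt, question, and generation parameters below are illustrative, not taken from the model card; weights are large (roughly 33 GB at FP8), so loading is guarded behind `__main__`.

```python
MODEL_ID = "Nina2811aw/qwen-32B-legal"


def build_chat_prompt(question: str) -> str:
    """Format a single-turn prompt in the ChatML style used by
    Qwen2.5 instruct models (assumed inherited by this fine-tune)."""
    return (
        "<|im_start|>system\nYou are a helpful legal assistant.<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


if __name__ == "__main__":
    # Heavy imports and weight download happen only when run directly.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = build_chat_prompt(
        "What is the difference between a warranty and an indemnity?"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, not the prompt.
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    print(reply)
```

In practice, `tokenizer.apply_chat_template` can replace the manual `build_chat_prompt` helper if the repository ships a chat template with the tokenizer.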