siddhartha37ms/contract-analyzer-legal

Text Generation · Model Size: 3.2B · Quant: BF16 · Context Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

siddhartha37ms/contract-analyzer-legal is a 3.2 billion parameter model based on Llama-3.2-3B-Instruct, developed by siddhartha37ms and fine-tuned with Unsloth for accelerated training. It is optimized specifically for legal contract analysis, building on the instruction-tuned base architecture. Its primary differentiator is this specialized legal-domain fine-tuning, which makes it suitable for applications requiring a nuanced understanding of legal texts.


Model Overview

The siddhartha37ms/contract-analyzer-legal is a 3.2 billion parameter language model, fine-tuned from the unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit base model. Developed by siddhartha37ms, this model leverages the Llama-3.2-3B-Instruct architecture, known for its instruction-following capabilities.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit.
  • Parameter Count: 3.2 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Training was accelerated using the Unsloth library in conjunction with Hugging Face's TRL library, enabling faster fine-tuning.
  • Context Length: The model supports a context length of 32,768 tokens, allowing it to process substantial legal documents in a single pass.
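Even with a 32,768-token window, very long contracts may exceed the context limit and need chunking before analysis. Below is a minimal sketch using a rough words-to-tokens heuristic; `chunk_document` and the 1.3 tokens-per-word ratio are illustrative assumptions, and in practice token counts should come from the model's own tokenizer:

```python
def chunk_document(text: str, max_tokens: int = 32768,
                   tokens_per_word: float = 1.3) -> list[str]:
    """Split a document into chunks that fit the model's context window.

    Uses a rough words-to-tokens ratio as a stand-in for real
    tokenization; for precise counts, tokenize with the model's
    tokenizer instead.
    """
    max_words = int(max_tokens / tokens_per_word)
    words = text.split()
    # Slice the word list into consecutive windows of at most max_words.
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```

A short contract yields a single chunk, while a very long document is split into several windows that can be analyzed sequentially.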

Primary Differentiator

This model's key distinction lies in its specialized fine-tuning for legal contract analysis. While the base Llama-3.2-3B-Instruct model is general-purpose, this version has been adapted to understand and process legal texts, making it particularly effective for tasks within the legal domain.

Potential Use Cases

  • Legal Document Review: Analyzing and extracting information from contracts, agreements, and other legal documents.
  • Contract Summarization: Generating concise summaries of legal texts.
  • Legal Question Answering: Responding to queries based on provided legal content.
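As an instruction-tuned model, it expects chat-formatted input for tasks like those above. The sketch below assembles a message list for a contract-analysis request; the helper name `build_messages` and the system-prompt wording are illustrative assumptions, not part of the released model:

```python
def build_messages(task: str, contract_text: str) -> list[dict]:
    """Assemble a chat-style message list for a legal-analysis prompt.

    The system prompt here is illustrative; the concrete prompt format
    is applied later by the tokenizer's chat template.
    """
    system = ("You are a legal contract analysis assistant. "
              "Answer strictly from the provided contract.")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{task}\n\nContract:\n{contract_text}"},
    ]
```

At inference time, a message list like this would typically be rendered into the model's prompt format with the tokenizer's chat template (e.g. `tokenizer.apply_chat_template`) before generation.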

Limitations

As a specialized model, its performance on general-purpose tasks may not match that of models fine-tuned for broader applications. Its effectiveness is primarily within the legal domain for which it was trained.