santhosh-m/ocr2-sft-lora-merged-v2

Source: Hugging Face

  • Task: text generation
  • Model size: 1.5B parameters
  • Quantization: BF16
  • Context length: 32k tokens
  • Concurrency cost: 1
  • Published: Feb 21, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

santhosh-m/ocr2-sft-lora-merged-v2 is a 1.5 billion parameter, Qwen2-based, instruction-tuned causal language model developed by santhosh-m. Finetuned with Unsloth and Hugging Face's TRL library, it is specialized for the tasks represented in its finetuning data. The model offers a compact yet capable option for applications that need a small model with a long (32768-token) context.


Model Overview

santhosh-m/ocr2-sft-lora-merged-v2 is a 1.5 billion parameter instruction-tuned language model based on the Qwen2 architecture. Developed by santhosh-m, it was finetuned from unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit using the Unsloth library, which accelerates training, together with Hugging Face's TRL library. It supports a context length of 32768 tokens.
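
For concreteness, here is a minimal loading-and-generation sketch using the Transformers library. The model id comes from the card and the BF16 dtype matches the listed quantization; the prompt, device mapping, and generation settings are illustrative assumptions, not settings published by the author.

```python
# Minimal sketch: load the model and run one chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "santhosh-m/ocr2-sft-lora-merged-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # card lists BF16 weights
    device_map="auto",           # assumes a local GPU is available
)

# Instruction-tuned Qwen2-family models expect the chat template.
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model is instruction-tuned, routing prompts through the tokenizer's chat template generally yields better results than raw text completion.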

Key Characteristics

  • Architecture: Qwen2-based, instruction-tuned.
  • Parameter Count: 1.5 billion parameters, offering a balance between performance and efficiency.
  • Training Efficiency: Finetuned with Unsloth for accelerated training, making further adaptation to specific tasks quick and inexpensive.
  • Context Length: Supports a 32768 token context window, allowing longer inputs and conversational coherence over extended interactions; a quick token-budget check is sketched after this list.
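
Since the card advertises a 32768-token window, a simple pre-flight check can confirm that a long prompt actually fits before generation. This is a hedged sketch: the file path, headroom value, and truncation policy are assumptions, not part of the model card.

```python
# Sketch: verify a long input fits the 32768-token context window.
from transformers import AutoTokenizer

MAX_CONTEXT = 32768  # context length stated on the card
RESERVED = 512       # headroom for generated tokens (assumption)

tokenizer = AutoTokenizer.from_pretrained("santhosh-m/ocr2-sft-lora-merged-v2")

with open("long_document.txt") as f:  # hypothetical input file
    text = f.read()

ids = tokenizer(text).input_ids
print(f"prompt tokens: {len(ids)}")
if len(ids) > MAX_CONTEXT - RESERVED:
    # One simple policy: keep only the most recent tokens.
    text = tokenizer.decode(ids[-(MAX_CONTEXT - RESERVED):])
```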

Potential Use Cases

  • Resource-constrained environments: Its 1.5B parameter size makes it suitable for deployment where computational resources are limited.
  • Applications requiring long context: The 32768 token context length is beneficial for tasks involving extensive documents or detailed conversations.
  • Further finetuning: As an instruction-tuned model, it can serve as a strong base for additional domain-specific finetuning; a minimal LoRA sketch follows this list.
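
As a rough illustration of that last point, the sketch below attaches a fresh LoRA adapter with the PEFT library. The rank, alpha, dropout, and target modules are illustrative assumptions; they are not the hyperparameters used for the original ocr2-sft-lora finetune.

```python
# Hedged sketch: attach a new LoRA adapter for further finetuning.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("santhosh-m/ocr2-sft-lora-merged-v2")

lora = LoraConfig(
    r=16,                # rank: illustrative assumption
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Qwen2 attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# From here, train with any standard SFT loop (e.g. TRL's SFTTrainer),
# mirroring the Unsloth + TRL setup described above.
```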