santhosh-m/ocr2-sft-lora-merged-v2
Task: Text generation
Model size: 1.5B parameters
Quantization: BF16
Context length: 32k tokens
Concurrency cost: 1
Published: Feb 21, 2026
License: apache-2.0
Architecture: Transformer (open weights)

santhosh-m/ocr2-sft-lora-merged-v2 is a 1.5-billion-parameter, Qwen2-based, instruction-tuned causal language model developed by santhosh-m. It was fine-tuned with Unsloth and Hugging Face's TRL library, and is optimized for tasks reflected in its training data. The model offers a compact yet capable option for applications that need a Qwen2-architecture model with a 32,768-token context window.
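As a sketch of how such a model might be queried, the snippet below builds a ChatML-style prompt (the template family Qwen2 instruct models typically use) and defines a hedged generation helper using the Hugging Face `transformers` API. The generation settings (`max_new_tokens`, BF16 dtype) are illustrative assumptions, not values from the model card, and running `generate` requires downloading the model weights.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-format prompt by hand.

    Qwen2 instruct models commonly use this template; in practice you
    would prefer tokenizer.apply_chat_template, which reads the template
    shipped with the model.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


def generate(prompt: str,
             model_id: str = "santhosh-m/ocr2-sft-lora-merged-v2",
             max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion (requires network access
    and roughly 3 GB of BF16 weights)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the card's BF16 quantization
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A typical call would be `generate(build_chatml_prompt("You are a helpful assistant.", "Transcribe the attached text."))`, though the exact system prompt the model was tuned with is not stated on the card.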
