rakesh277/qwen15-resume-parser-4bit

Text Generation · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 27, 2026 · License: apache-2.0 · Architecture: Transformer

rakesh277/qwen15-resume-parser-4bit is a 1.5-billion-parameter Qwen2.5 model, developed by rakesh277 and finetuned for resume parsing. It was trained 2x faster using Unsloth together with Hugging Face's TRL library, and supports a 32,768-token context length for efficient processing of resume data.


Overview

This model, developed by rakesh277, is a 1.5-billion-parameter Qwen2.5-based language model finetuned specifically for resume parsing. Its 32,768-token context window makes it suitable for processing long, detailed documents in a single pass.

Key Characteristics

  • Base Model: Finetuned from unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit.
  • Accelerated Training: Training was performed 2x faster using the Unsloth library in conjunction with Hugging Face's TRL library.
  • Parameter Count: Features 1.5 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a 32768 token context window, allowing for comprehensive analysis of longer resumes.
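As a concrete illustration of how a resume-parsing prompt for this model might be assembled, here is a minimal sketch. It assumes the ChatML format that Qwen instruct models use; the system instruction wording and the sample resume are hypothetical, not taken from this card:

```python
# Sketch: building a ChatML-style prompt for the resume parser.
# The system instruction below is a hypothetical example, not the
# prompt the model was trained with.
def build_prompt(resume_text: str) -> str:
    system = "Extract the candidate's name, email, skills and work history as JSON."
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{resume_text}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical resume snippet for demonstration.
prompt = build_prompt("Jane Doe\njane@example.com\nSkills: Python, SQL")
```

In practice, loading the tokenizer for `rakesh277/qwen15-resume-parser-4bit` and calling `tokenizer.apply_chat_template(...)` would produce the template authoritatively rather than hand-building the string.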

Intended Use

This model is designed for applications requiring efficient and accurate extraction of structured information from resumes. Its accelerated, parameter-efficient training suggests an emphasis on practical deployment for NLP tasks in human resources and recruitment.
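Downstream pipelines typically validate the model's output before storing it. The sketch below assumes the model is prompted to emit JSON (a common but not card-confirmed convention); the required field names are hypothetical examples:

```python
import json

# Hypothetical required fields for a recruitment pipeline; adjust to
# whatever schema the model is actually prompted to produce.
REQUIRED_FIELDS = {"name", "email", "skills"}

def parse_output(raw: str) -> dict:
    """Parse the model's raw text as JSON and check required fields."""
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

# Example with a hand-written stand-in for model output.
result = parse_output(
    '{"name": "Jane Doe", "email": "jane@example.com", "skills": ["Python", "SQL"]}'
)
```

Validating here rather than deeper in the pipeline surfaces malformed generations early, which matters for a small 1.5B model that may occasionally deviate from the requested format.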