justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT

Hosted on Hugging Face

  • Task: Text Generation
  • Model Size: 0.5B parameters
  • Quantization: BF16
  • Context Length: 32k tokens
  • Published: Mar 2, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT is a 0.5-billion-parameter instruction-tuned Qwen2.5 model fine-tuned by justinthelaw. Using supervised fine-tuning (SFT) with LoRA adapters, it is optimized for answering questions about Justin's professional background, skills, and experience, and is designed primarily for browser-based inference via transformers.js to power a personalized resume Q&A chatbot.


Model Overview

This model, justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT, is a specialized fine-tune of the Qwen2.5-0.5B-Instruct base model. Developed by justinthelaw, it is built to answer detailed questions about Justin's professional background, including resume details, work experience, education, and skills.

Key Capabilities & Features

  • Personalized Q&A: Specifically trained to respond to queries about a single individual's professional profile.
  • Browser-based Inference: Optimized for deployment in web environments using transformers.js, making it suitable for interactive personal-website chatbots (see the loading sketch after this list).
  • Fine-tuning Method: Utilizes Supervised Fine-Tuning (SFT) combined with LoRA (Low-Rank Adaptation) adapters for efficient and targeted knowledge injection.
  • Training Data: Trained on a custom dataset, justinthelaw/Resume-Cover-Letter-SFT-Dataset, consisting of conversation-formatted Q&A pairs designed to instill factual memorization of the resume content.
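The snippet below is a minimal sketch of that browser-based loading pattern using transformers.js v3 (the @huggingface/transformers package). It assumes the repository ships weights in a format transformers.js can consume (e.g., ONNX exports), and the example question is purely illustrative.

```js
import { pipeline } from "@huggingface/transformers";

// Download and cache the model in the browser on first use.
// Assumption: the repo provides transformers.js-compatible (ONNX) weights.
const generator = await pipeline(
  "text-generation",
  "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT",
);

// Qwen2.5-Instruct is a chat model; passing an array of messages lets
// transformers.js apply the chat template automatically. This mirrors the
// conversation-formatted Q&A structure used for training.
const messages = [
  { role: "user", content: "What is Justin's professional background?" },
];

const output = await generator(messages, { max_new_tokens: 256 });

// For chat input, generated_text holds the whole conversation; the last
// message is the assistant's reply.
console.log(output[0].generated_text.at(-1).content);
```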

Intended Use Cases

  • Personal Website Chatbots: Ideal for interactive AI assistants that answer visitor questions about a resume (a minimal wiring sketch follows this list).
  • Resume Q&A Applications: Useful for demonstrating personalized AI assistants focused on specific professional profiles.
  • Demonstrating Fine-tuning: Serves as an example of applying SFT and LoRA techniques for highly specific domain adaptation.
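As a sketch of the personal-website chatbot case, the helper below lazily initializes the pipeline so a visitor only pays the weight-download cost when they first ask a question. The function name, caching strategy, and generation settings are illustrative assumptions, not part of the model card.

```js
import { pipeline } from "@huggingface/transformers";

const MODEL_ID = "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT";

// Cache the in-flight promise (not just the resolved pipeline) so two
// concurrent questions never trigger two downloads of the weights.
let generatorPromise = null;

export async function askResume(question) {
  generatorPromise ??= pipeline("text-generation", MODEL_ID);
  const generator = await generatorPromise;

  const output = await generator(
    [{ role: "user", content: question }],
    { max_new_tokens: 256 },
  );
  return output[0].generated_text.at(-1).content;
}

// Usage (illustrative):
//   const answer = await askResume("Where did Justin go to school?");
```

Caching the promise rather than the resolved pipeline is a small but deliberate choice: it keeps the lazy initialization safe under concurrent calls.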

Limitations

This model is not a general-purpose language model. Its knowledge is strictly confined to the training data covering Justin's resume and professional background; it will not generalize to other topics or provide real-time information.