myfi/parser_model_ner_4.13_ep6

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 4B
  • Quant: BF16
  • Ctx Length: 32k
  • Published: Apr 13, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Open Weights · Cold

myfi/parser_model_ner_4.13_ep6 is a 4-billion-parameter, Qwen3-based, instruction-tuned language model developed by myfi, fine-tuned from unsloth/Qwen3-4B-Instruct-2507. It was trained with Unsloth and Hugging Face's TRL library, which the authors report yielded roughly 2x faster training. The model is designed for parsing and Named Entity Recognition (NER) tasks.


myfi/parser_model_ner_4.13_ep6 Overview

This model is a 4-billion-parameter, Qwen3-based, instruction-tuned language model developed by myfi. It was fine-tuned from the unsloth/Qwen3-4B-Instruct-2507 base model using the Unsloth library and Hugging Face's TRL for efficient training.
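The card ships no loading code; since the model is a Qwen3-based causal LM published in BF16, a minimal Hugging Face `transformers` sketch might look like the following (the repo id comes from the card; everything else is an assumption about the standard interface):

```python
# Sketch only: the model card provides no loading snippet, so this assumes
# the standard transformers causal-LM interface for a Qwen3-family model.
MODEL_ID = "myfi/parser_model_ner_4.13_ep6"

def load_model(model_id: str = MODEL_ID):
    """Lazily import transformers and return (tokenizer, model) in BF16."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the card lists BF16 weights
        device_map="auto",
    )
    return tokenizer, model
```

Loading is wrapped in a function so the 4B checkpoint is only downloaded when actually needed.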

Key Capabilities

  • Efficient Fine-tuning: Achieved 2x faster training speeds compared to standard methods, thanks to Unsloth integration.
  • Qwen3 Architecture: Benefits from the robust capabilities of the Qwen3 model family.
  • Instruction-Tuned: Optimized for following instructions, making it suitable for specific NLP tasks.

Good For

  • Parsing Tasks: Ideal for applications requiring structured data extraction or text parsing.
  • Named Entity Recognition (NER): Suited for identifying and classifying entities within text.
  • Resource-Efficient Deployment: Its 4B parameter size combined with efficient training makes it a candidate for scenarios where faster development cycles and moderate resource usage are important.
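For the parsing and NER use cases above, an instruction-tuned model is typically prompted to emit structured (e.g. JSON) output, which the caller then parses defensively. The card does not document the prompt format the model was tuned on, so both the instruction template and the output schema below are illustrative assumptions:

```python
import json

def build_ner_prompt(text: str) -> str:
    """Build an instruction-style NER prompt.

    The exact prompt format this model was tuned on is not documented in
    the card; this template is a hypothetical example.
    """
    return (
        "Extract all named entities from the text below and return them "
        'as a JSON list of {"entity": ..., "type": ...} objects.\n\n'
        f"Text: {text}"
    )

def parse_entities(model_output: str):
    """Parse the model's reply, tolerating surrounding prose or fences."""
    start = model_output.find("[")
    end = model_output.rfind("]")
    if start == -1 or end == -1:
        return []
    try:
        return json.loads(model_output[start : end + 1])
    except json.JSONDecodeError:
        return []

# Example with a hand-written reply standing in for real model output:
reply = 'Here are the entities: [{"entity": "myfi", "type": "ORG"}]'
print(parse_entities(reply))  # → [{'entity': 'myfi', 'type': 'ORG'}]
```

Extracting the outermost `[...]` span before calling `json.loads` keeps the parser robust when the model wraps its answer in conversational text.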