myfi/parser_model_ner_4.13_ep5

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Apr 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

myfi/parser_model_ner_4.13_ep5 is a 4-billion-parameter, Qwen3-based, instruction-tuned language model developed by myfi and fine-tuned with Unsloth and Hugging Face's TRL library. With a 32,768-token context length, it is optimized for parsing and Named Entity Recognition (NER) tasks.


Model Overview

myfi/parser_model_ner_4.13_ep5 is a 4-billion-parameter language model based on the Qwen3 architecture, developed by myfi. It was fine-tuned from unsloth/Qwen3-4B-Instruct-2507 using the Unsloth framework and Hugging Face's TRL library, enabling roughly 2x faster training.

Key Capabilities

  • Efficient Fine-tuning: Leverages Unsloth for accelerated training, making it resource-efficient.
  • Qwen3 Architecture: Benefits from the robust capabilities of the Qwen3 base model.
  • Instruction-Tuned: Designed to follow instructions effectively for specific tasks.

Good For

  • Parsing Tasks: Optimized for processing and structuring textual information.
  • Named Entity Recognition (NER): Suited for identifying and classifying entities within text.
  • Efficient Qwen3-based deployments: A good fit when a 4B-parameter model with a 32,768-token context length is sufficient for specialized NLP tasks.
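
For the NER use case above, a typical pattern is to prompt the model for entities in a structured format and parse its reply. The exact instruction wording and output schema this model was fine-tuned on are not documented, so the template and JSON-list schema below are assumptions, shown as a minimal sketch with a stand-in reply rather than real inference:

```python
import json

# Hypothetical entity inventory; the model card does not specify which
# entity types the fine-tuning data used.
ENTITY_TYPES = ["PERSON", "ORG", "LOC", "DATE"]

def build_ner_prompt(text: str) -> str:
    """Build an instruction prompt asking for entities as a JSON list."""
    types = ", ".join(ENTITY_TYPES)
    return (
        f"Extract all named entities of type {types} from the text below. "
        'Respond with only a JSON list of objects, each with "text" and '
        '"label" keys.\n\n'
        f"Text: {text}"
    )

def parse_entities(model_output: str) -> list[dict]:
    """Parse a JSON entity list out of the model's reply, tolerating
    surrounding chatter before or after the list."""
    start = model_output.find("[")
    end = model_output.rfind("]")
    if start == -1 or end == -1:
        return []
    return json.loads(model_output[start : end + 1])

# Stand-in model reply for illustration (no inference is performed here):
reply = 'Here are the entities: [{"text": "myfi", "label": "ORG"}]'
entities = parse_entities(reply)
```

Sending `build_ner_prompt(...)` through an instruction-tuned chat interface and feeding the generation into `parse_entities` yields a list of `{"text", "label"}` dicts ready for downstream use; in production you would also want to handle malformed JSON from the model.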