myfi/parser_model_ner_4.12
- Task: Text generation
- Concurrency cost: 1
- Model size: 4B
- Quantization: BF16
- Context length: 32k
- Published: Apr 12, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)
myfi/parser_model_ner_4.12 is a 4 billion parameter, Qwen3-based, instruction-tuned language model developed by myfi, fine-tuned with Unsloth and Hugging Face's TRL library for accelerated training. Its 32768 token context length makes it suitable for applications that require substantial input processing.
Overview
myfi/parser_model_ner_4.12 is a 4 billion parameter language model based on the Qwen3 architecture, developed by myfi. It was instruction-tuned with Hugging Face's TRL library, using the Unsloth framework to accelerate training; the authors report a 2x faster training cycle from this setup.
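The card does not publish the actual training recipe, but the Unsloth-plus-TRL combination it mentions typically looks like the minimal sketch below. Everything specific here is an assumption: the Qwen3-4B base checkpoint, the LoRA adapter settings, and the JSONL dataset with a `text` field are illustrative placeholders, not details confirmed by the card.

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Hypothetical base checkpoint; the card only says "Qwen3-based".
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B",
    max_seq_length=32768,   # matches the card's 32k context length
    load_in_4bit=False,     # the card lists BF16 weights
)

# Attach LoRA adapters -- the usual Unsloth fine-tuning setup.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset: a JSONL file with one "text" field per example.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        max_steps=60,
    ),
)
trainer.train()
```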
Key Characteristics
- Architecture: Qwen3-based, providing a strong foundation for language understanding and generation.
- Parameter count: 4 billion, balancing output quality against computational cost.
- Context length: a 32768 token window, enabling long inputs and coherence over extended interactions (see the loading sketch after this list).
- Training efficiency: fine-tuned with Unsloth, with a reported 2x speedup in training.
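A minimal loading and generation sketch using the transformers library, assuming the model is published on the Hugging Face Hub under the ID in the card title and ships with a chat template; neither is confirmed by the card. The NER-style prompt is illustrative only, inferred from the model's name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "myfi/parser_model_ner_4.12"  # assumes the Hub ID matches the card title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# Illustrative prompt only; the model's exact task format is not documented.
messages = [{"role": "user",
             "content": "Extract the named entities from: 'Acme Corp. hired Jane Doe in Berlin.'"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```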
Good For
- Applications that need a moderately sized, instruction-tuned model trained efficiently.
- Tasks that benefit from a large context window, such as analyzing long documents in a single pass.
- Use cases that play to the strengths of the Qwen3 architecture, especially after task-specific fine-tuning.