viamr-project/qwen3-1.7B-amr-v1

Source: Hugging Face
Task: Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quantization: BF16 · Context Length: 32k · Published: Jan 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

viamr-project/qwen3-1.7B-amr-v1 is a 1.7 billion parameter language model fine-tuned from unsloth/Qwen3-1.7B, with a 40,960-token context length. Developed by viamr-project, it was trained with Unsloth for accelerated fine-tuning. Its primary differentiator is that optimized training process, which makes it a practical choice for applications that need an efficiently deployable Qwen3-based model.


Overview

viamr-project/qwen3-1.7B-amr-v1 is a 1.7 billion parameter language model, fine-tuned by viamr-project from the unsloth/Qwen3-1.7B base model. It was trained with the Unsloth framework, which is reported to make fine-tuning roughly 2x faster than standard methods, and it supports a context length of 40,960 tokens, enough to handle very long inputs.
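For quick orientation, here is a minimal inference sketch using the standard transformers API, loading in BF16 to match the published quantization. It assumes the repository ships an ordinary transformers checkpoint with a chat template, as Qwen3-family models typically do; the prompt is illustrative.

```python
# Minimal inference sketch -- assumes a standard transformers checkpoint
# with a chat template, as Qwen3-family repos typically provide.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "viamr-project/qwen3-1.7B-amr-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the card's quantization
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Qwen3 architecture in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```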

Key Capabilities

  • Efficient Training: Benefits from Unsloth's optimizations for faster fine-tuning.
  • Large Context Window: Supports up to 40,960 tokens, suitable for tasks requiring long-range understanding (see the sketch after this list).
  • Qwen3 Architecture: Inherits the foundational capabilities of the Qwen3 model family.
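To make the long-context bullet concrete, the sketch below counts tokens before submitting a long document, reserving room for generation. The `fits_in_context` helper and the file path are hypothetical; 40,960 is the context length stated on this card.

```python
# Hypothetical pre-flight check: does a long document fit in the window?
from transformers import AutoTokenizer

MAX_CTX = 40960  # context length stated on this card

tokenizer = AutoTokenizer.from_pretrained("viamr-project/qwen3-1.7B-amr-v1")

def fits_in_context(text: str, reserve_for_output: int = 1024) -> bool:
    """Return True if `text` plus a generation budget fits in the window."""
    n_tokens = len(tokenizer.encode(text))
    return n_tokens + reserve_for_output <= MAX_CTX

long_doc = open("report.txt").read()  # placeholder path for a lengthy input
print(fits_in_context(long_doc))
```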

Good for

  • Developers seeking an efficiently trained Qwen3-based model.
  • Applications requiring a model with a large context window for processing lengthy texts.
  • Experimentation with models fine-tuned using accelerated training techniques like Unsloth (a continued fine-tuning sketch follows below).
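Since the model was itself trained with Unsloth, continued fine-tuning with the same framework is a natural experiment. The sketch below is illustrative rather than the authors' recipe: the LoRA hyperparameters are assumptions, and `max_seq_length` can be lowered to fit available memory.

```python
# Sketch of continued LoRA fine-tuning with Unsloth.
# Hyperparameters are illustrative assumptions, not the authors' recipe.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="viamr-project/qwen3-1.7B-amr-v1",
    max_seq_length=40960,  # full window per the card; lower to save memory
    dtype=None,            # let Unsloth choose (BF16 on supported GPUs)
    load_in_4bit=False,    # the model is published in BF16
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# From here, plug model/tokenizer into a TRL SFTTrainer (or similar) loop.
```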