mackgorski/testmantle-15b-v2-merged

Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 27, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

mackgorski/testmantle-15b-v2-merged is a 1.5 billion parameter Qwen2-based instruction-tuned causal language model developed by mackgorski. It was finetuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for general instruction-following tasks.


Overview

mackgorski/testmantle-15b-v2-merged is a 1.5 billion parameter instruction-tuned language model based on the Qwen2 architecture. Developed by mackgorski, it was finetuned with Unsloth and Hugging Face's TRL library, a combination Unsloth reports as roughly 2x faster than standard finetuning. The model supports a context length of 32,768 tokens, making it suitable for processing moderately long inputs.
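A minimal loading-and-generation sketch using the standard transformers API is shown below. It assumes the repository ships BF16 weights and a Qwen2-style chat template, as the card's metadata indicates; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mackgorski/testmantle-15b-v2-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# Qwen2-family instruct models ship a chat template; use it to build the prompt.
messages = [{"role": "user", "content": "Summarize the benefits of small LLMs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```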

Key Characteristics

  • Base Model: Finetuned from unsloth/qwen2.5-1.5b-instruct.
  • Efficient Training: Utilizes Unsloth for accelerated finetuning.
  • Parameter Count: 1.5 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a 32,768-token context window (see the sketch after this list).
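The advertised context window can be verified directly from the hosted configuration. This is a quick sketch, assuming the repo exposes a standard Qwen2 config where the window is recorded as max_position_embeddings:

```python
from transformers import AutoConfig

# Read the context window straight from the hosted config.
config = AutoConfig.from_pretrained("mackgorski/testmantle-15b-v2-merged")
print(config.max_position_embeddings)  # expected: 32768, per the card
```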

Use Cases

This model is well-suited to general instruction-following tasks that call for a compact yet capable language model. Its efficient Unsloth-based training pipeline also makes it a reasonable starting point for rapid adaptation to specific domains or tasks, supporting applications that need quick deployment and iteration.
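Below is a hedged sketch of further domain adaptation with Unsloth and TRL, the same stack the card credits for the original finetune. The dataset id and all hyperparameters are illustrative placeholders, not the author's recipe, and exact SFTTrainer keyword names vary across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the merged checkpoint through Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mackgorski/testmantle-15b-v2-merged",
    max_seq_length=2048,  # training-time sequence length; raise if your data needs it
    load_in_4bit=True,    # optional: 4-bit quantization to cut VRAM during training
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset id; swap in your own instruction data.
dataset = load_dataset("your-org/your-instruction-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions name this processing_class
    train_dataset=dataset,
    args=SFTConfig(output_dir="outputs",
                   per_device_train_batch_size=2,
                   num_train_epochs=1),
)
trainer.train()
```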