Akaakira/aihm-evaluate-merged

Text Generation

  • Model Size: 7.6B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 1
  • Published: Apr 18, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

Akaakira/aihm-evaluate-merged is a 7.6 billion parameter Qwen2-based causal language model developed by Akaakira. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for general instruction-following tasks.


Model Overview

Akaakira/aihm-evaluate-merged is a 7.6 billion parameter instruction-tuned language model built on the Qwen2 architecture. Developed by Akaakira, it was fine-tuned from unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit using the Unsloth library, which enables roughly 2x faster training, together with Hugging Face's TRL library.
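Because this is a merged (non-adapter) checkpoint, it can be loaded directly with Hugging Face Transformers. The following is a minimal sketch: only the repo ID comes from this card, while the dtype and device-placement settings are illustrative defaults, not values published by the author.

```python
# Minimal loading sketch (assumption: standard Transformers usage; only the
# repo ID comes from this card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Akaakira/aihm-evaluate-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # adopt the dtype stored in the checkpoint
    device_map="auto",   # place weights on available GPU(s), else CPU
)
```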

Key Characteristics

  • Base Model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit (a 4-bit Unsloth build of Qwen2.5-7B-Instruct)
  • Parameter Count: 7.6 billion parameters
  • Context Length: 32,768 tokens
  • Training Efficiency: Fine-tuned with Unsloth for accelerated training (see the sketch after this list).
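The card does not publish the training configuration, so the sketch below only illustrates how Unsloth and TRL are typically combined for a run like this. The base checkpoint name comes from this card; the dataset, LoRA rank, sequence length, and trainer arguments are placeholder assumptions, and the exact SFTTrainer keyword arguments vary across TRL versions.

```python
# Hedged sketch of an Unsloth + TRL supervised fine-tune. Everything except
# the base checkpoint name is an illustrative assumption.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,  # assumed training length; the model supports up to 32,768
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank: illustrative, not the author's value
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,          # newer TRL versions use processing_class
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```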

Potential Use Cases

This model is suitable for a variety of general-purpose instruction-following applications, benefiting from its Qwen2 base and efficient fine-tuning. Developers looking for a capable 7.6B parameter model with a large context window, fine-tuned with performance-oriented tools like Unsloth, may find it useful for the following (a minimal chat example follows the list):

  • General text generation and completion
  • Instruction-based question answering
  • Summarization tasks
  • Conversational AI applications
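For conversational use, Qwen2.5-family tokenizers ship a chat template, so turns can be formatted with apply_chat_template. This sketch reuses the model and tokenizer loaded in the overview above; the prompt and generation settings are illustrative, not recommendations from the author.

```python
# Conversational usage sketch, reusing `model` and `tokenizer` from the
# loading example above. Prompt and decoding settings are assumptions.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of LoRA fine-tuning."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```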