agapeeva/qwen2.5-1.5b-instruct-abliterated-ru

Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 28, 2026 · Architecture: Transformer

The agapeeva/qwen2.5-1.5b-instruct-abliterated-ru model is a 1.5 billion parameter instruction-tuned language model, likely based on the Qwen2.5 architecture, with a context length of 32768 tokens. It is designed for Russian-language instruction following; its primary differentiator is packing that Russian-language focus into a compact 1.5B parameter footprint, making it suitable for efficient deployment in Russian-centric applications.


Model Overview

The agapeeva/qwen2.5-1.5b-instruct-abliterated-ru is an instruction-tuned language model, featuring 1.5 billion parameters and supporting a substantial context length of 32768 tokens. While specific details regarding its development, training data, and precise architecture are not provided in the current model card, its naming convention suggests an origin or fine-tuning based on the Qwen2.5 series, with a clear focus on the Russian language.

Key Characteristics

  • Parameter Count: 1.5 billion parameters, a relatively compact size suitable for a wide range of deployment scenarios (see the loading sketch after this list).
  • Context Length: 32768 tokens, allowing the model to process longer inputs and generate more coherent, extended responses.
  • Language Focus: The -ru suffix strongly implies a specialization in the Russian language, likely optimized for instruction-following tasks in Russian; the abliterated tag typically indicates that the model's refusal behavior has been removed (ablated) from the instruction-tuned base, rather than saying anything about language support.
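Below is a minimal loading sketch, assuming the model is published on the Hugging Face Hub under the same identifier and loads through the standard transformers API like other Qwen2.5-Instruct checkpoints; the model ID is taken from this card, everything else is generic library usage.

```python
# Minimal loading sketch; assumes the model is hosted on the Hugging Face Hub
# under this identifier and follows the usual Qwen2.5-Instruct layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agapeeva/qwen2.5-1.5b-instruct-abliterated-ru"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # place weights automatically (requires accelerate)
)
```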

Potential Use Cases

Given its instruction-tuned nature and Russian language focus, this model is likely suitable for:

  • Russian Language Chatbots: Developing conversational AI agents that can understand and respond in Russian (see the chat sketch after this list).
  • Text Generation in Russian: Creating various forms of Russian text, from creative writing to summaries, based on given instructions.
  • Instruction Following: Executing specific commands or answering questions posed in Russian.
  • Efficient Deployment: Its 1.5B parameter size suggests it could be more efficient to run compared to larger models, making it viable for applications with resource constraints.
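The following sketch shows a single Russian-language chat turn, reusing the tokenizer and model objects from the loading sketch above and assuming a Qwen2.5-style chat template ships with the tokenizer; the example prompt asks the model, in Russian, to briefly explain what machine learning is.

```python
# Hypothetical chat turn; `tokenizer` and `model` come from the loading sketch above.
messages = [
    {"role": "system", "content": "Ты полезный ассистент."},  # "You are a helpful assistant."
    {"role": "user", "content": "Кратко объясни, что такое машинное обучение."},  # "Briefly explain what machine learning is."
]

# Apply the chat template and move the prompt tokens to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```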