karaselerm/qwen2.5-1.5b-instruct-ru-abliterated-hw6

Text generation · Concurrency cost: 1 · Model size: 1.5B · Quant: BF16 · Context length: 32k · Published: Apr 28, 2026 · Architecture: Transformer

karaselerm/qwen2.5-1.5b-instruct-ru-abliterated-hw6 is a 1.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture, published by karaselerm. It targets Russian-language tasks and supports a context length of 32768 tokens. Its primary differentiator is Russian-language instruction following, making it suitable for applications that require nuanced understanding and generation in Russian. The "abliterated" suffix in the name conventionally denotes a community-modified variant in which refusal behavior has been ablated.


Model Overview

The karaselerm/qwen2.5-1.5b-instruct-ru-abliterated-hw6 is an instruction-tuned language model with 1.5 billion parameters, built upon the Qwen2.5 architecture. It features a substantial context length of 32768 tokens, enabling it to process and generate longer sequences of text.

Key Characteristics

  • Architecture: Qwen2.5 base model.
  • Parameter Count: 1.5 billion parameters.
  • Context Length: Supports up to 32768 tokens.
  • Language Focus: Primarily designed and optimized for the Russian language.
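To make the 32768-token limit concrete, here is a minimal sketch of budgeting prompt tokens against the context window. The limit comes from the model card above; the helper name and logic are illustrative only, since prompt and completion share one window regardless of framework.

```python
# Context-window budgeting sketch. The 32768-token limit is stated in the
# model card; the function below is an illustrative helper, not part of
# any library API.
MAX_CONTEXT = 32768  # maximum tokens the model can attend to at once

def max_prompt_tokens(max_new_tokens: int, context_limit: int = MAX_CONTEXT) -> int:
    """Return how many prompt tokens fit after reserving room for generation.

    The prompt and the generated continuation share a single context
    window, so reserving max_new_tokens for output bounds the prompt size.
    """
    if max_new_tokens >= context_limit:
        raise ValueError("cannot reserve the entire context for generation")
    return context_limit - max_new_tokens

# Reserving 1024 tokens for the reply leaves 31744 tokens for the prompt.
print(max_prompt_tokens(1024))  # → 31744
```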

Intended Use Cases

This model is best suited for applications that require instruction-following capabilities in Russian. Potential use cases include:

  • Text Generation: Creating coherent and contextually relevant Russian text based on prompts.
  • Instruction Following: Executing commands or answering questions posed in Russian.
  • Russian Language Processing: Tasks such as summarization, translation (from/to Russian), or content creation in Russian.
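Qwen2.5 instruct models use the ChatML conversation format. Assuming this checkpoint keeps the upstream template, a Russian instruction prompt can be assembled as sketched below; in practice the tokenizer's `apply_chat_template` method (Hugging Face transformers) builds this string for you, so this is only to show the expected layout.

```python
# Sketch of the ChatML-style prompt layout used by the Qwen2.5 instruct
# family. Assumes this checkpoint keeps the upstream chat template; in
# real use, prefer tokenizer.apply_chat_template over manual formatting.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    system="Ты полезный ассистент.",              # "You are a helpful assistant."
    user="Кратко объясни, что такое рекурсия.",   # "Briefly explain recursion."
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` marker is what cues the model to generate the assistant turn.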

Limitations

As indicated by the model card, specific details regarding training data, evaluation metrics, and potential biases are currently marked as "More Information Needed." Users should exercise caution and conduct thorough testing for their specific applications, especially concerning factual accuracy, safety, and fairness, until more comprehensive documentation is available.