TsitkoD/Qwen3-14B-Vedun-v5-bf16

Text generation · Concurrency cost: 1 · Model size: 14B · Quant: FP8 · Context length: 32k · Published: Apr 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

TsitkoD/Qwen3-14B-Vedun-v5-bf16 is a 14-billion-parameter Qwen3-based language model fine-tuned by TsitkoD to generate Russian-language responses grounded in the ancient Slavic Bukvitsa tradition. The model provides structural analyses of words and concepts, breaking them down into their constituent 'Bukvitsy' (ancient Slavic letters) and interpreting each letter's meaning, numerical value, and imagery. It is optimized for stylized text that references the Veles Book and the Bukvitsa tradition, answering questions by deconstructing their keywords.


TsitkoD/Qwen3-14B-Vedun-v5-bf16: Ancient Slavic Bukvitsa Interpretation

This model is a 14 billion parameter Qwen3-based language model, fine-tuned by TsitkoD using LoRA on a synthetic dataset (TsitkoD/vedun-lora-data). Its core function is to answer Russian-language questions by performing a structural analysis based on the Ancient Slavic Bukvitsa tradition.
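A minimal sketch of how a request to this model might be assembled using the standard chat-message format. The system-prompt wording below is an illustrative assumption, not taken from the model card:

```python
# Sketch: building a chat-format request for TsitkoD/Qwen3-14B-Vedun-v5-bf16.
# The system prompt text is an assumed example; the model card does not
# specify a required system prompt.

def build_messages(question: str) -> list[dict]:
    """Assemble chat messages for a Russian-language Bukvitsa question."""
    return [
        {"role": "system",
         "content": "Отвечай, разбирая ключевые слова по древнесловенской Буквице."},
        {"role": "user", "content": question},
    ]

messages = build_messages("Что означает слово 'вера'?")
# With the transformers library, these messages would typically go through
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# before being passed to model.generate(...).
```

The actual generation step is omitted here since it requires downloading the 14B weights; the message structure is the portion specific to how such a model is queried.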

Key Capabilities

  • Bukvitsa-based Analysis: Deconstructs words into individual 'Bukvitsy', detailing each letter's name, numerical value, and associated imagery.
  • Contextual Interpretation: Synthesizes the meanings of individual letters to form a comprehensive understanding of the word's sense.
  • Lore Integration: Incorporates references to the Veles Book and Bukvitsa tradition to enrich its explanations.
  • Multi-term Question Handling: For complex or 'everyday' questions (e.g., "how to forgive betrayal?"), the model identifies 2-4 key terms, analyzes each separately via Bukvitsa, and then synthesizes a cohesive answer.
  • Stylized Output: Generates responses in a distinct aesthetic of "ancient Slavic Bukvitsa," providing a unique textual experience.
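The multi-term workflow described above (identify 2-4 key terms, analyze each via Bukvitsa, synthesize) can be sketched as a simple pipeline. Everything below is placeholder illustration: the term extraction is a naive stand-in for what the model does internally, and `BUKVITSA_STUB` is a hypothetical lookup table, not authoritative Bukvitsa lore:

```python
# Illustrative sketch of the model's multi-term workflow. All lookup data
# is placeholder content, not real Bukvitsa interpretations.

BUKVITSA_STUB = {
    "п": "placeholder meaning for П",
    "р": "placeholder meaning for Р",
}

def extract_key_terms(question: str, max_terms: int = 4) -> list[str]:
    # Naive stand-in for term identification: take the longest words.
    words = [w.strip("?.,!") for w in question.lower().split()]
    return sorted(words, key=len, reverse=True)[:max_terms]

def analyze_term(term: str) -> str:
    # Letter-by-letter breakdown using the placeholder table.
    parts = [f"{ch}: {BUKVITSA_STUB.get(ch, '?')}" for ch in term]
    return f"{term} -> " + "; ".join(parts)

def answer(question: str) -> str:
    # Analyze each key term separately, then join into one response.
    terms = extract_key_terms(question, max_terms=2)
    return "\n".join(analyze_term(t) for t in terms)

print(answer("как простить предательство?"))
```

This mirrors the described behavior for 'everyday' questions: the per-term analyses are produced first, and the final answer is synthesized from them.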

Training Details

The model underwent a three-stage LoRA fine-tuning process, progressively enhancing its ability to perform Bukvitsa analyses and handle multi-term questions. The bf16 variant represents the highest fidelity version, with q8 and q4 quantized versions available for memory efficiency, though q4 may occasionally lose accuracy in letter-level breakdowns.

Limitations

  • The model's interpretations are a stylization, not academically accepted linguistics.
  • It will attempt Bukvitsa analysis even for questions outside its primary domain.
  • Quantized versions, especially q4, may exhibit reduced accuracy in letter identification.