ShuvoSync/Kyn-0.8
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Feb 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

ShuvoSync/Kyn-0.8 is a 14 billion parameter Qwen3-based causal language model developed by ShuvoSync. It was fine-tuned using Unsloth and Hugging Face's TRL library, which accelerates training. The model is intended for general language generation tasks, building on its Qwen3 architecture and efficient fine-tuning process.


ShuvoSync/Kyn-0.8: A Fine-Tuned Qwen3 Model

ShuvoSync/Kyn-0.8 is a 14 billion parameter language model developed by ShuvoSync. It is based on the Qwen3 architecture and was fine-tuned with a combination of the Unsloth library and Hugging Face's TRL library, an approach that reportedly makes training about twice as fast as standard fine-tuning.
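
The exact training recipe behind Kyn-0.8 has not been published. As a rough illustration of the Unsloth + TRL workflow the description refers to, the sketch below loads the listed 4-bit base checkpoint, attaches LoRA adapters, and runs TRL's supervised fine-tuning trainer. The dataset path, LoRA settings, and hyperparameters are placeholders, not Kyn-0.8's actual configuration, and depending on the TRL version the dataset_text_field and max_seq_length arguments may instead belong in an SFTConfig.

```python
# Illustrative Unsloth + TRL SFT sketch; dataset and hyperparameters are
# hypothetical, not the actual Kyn-0.8 training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit Unsloth base checkpoint listed as the base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen3-14b-unsloth-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset: a local JSONL file with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```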

Key Characteristics

  • Base Model: Fine-tuned from unsloth/qwen3-14b-unsloth-bnb-4bit.
  • Efficient Training: Utilizes Unsloth for accelerated fine-tuning, optimizing the training workflow.
  • Parameter Count: Features 14 billion parameters, providing a balance between performance and computational requirements.
  • Context Length: Supports a context length of 32,768 tokens, suitable for processing longer inputs and generating coherent, extended outputs (see the loading sketch below).
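
Assuming the weights are hosted on the Hugging Face Hub under ShuvoSync/Kyn-0.8 and expose the standard Transformers causal-LM interface, loading the model is a minimal sketch like the following:

```python
# Hypothetical loading sketch; assumes ShuvoSync/Kyn-0.8 is available on the
# Hugging Face Hub and uses the standard Transformers causal-LM interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShuvoSync/Kyn-0.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick up the checkpoint's native precision
    device_map="auto",    # spread the 14B weights across available devices
)
```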

Potential Use Cases

This model is well-suited for applications that need a capable 14B parameter language model with the benefits of efficient fine-tuning. Its Qwen3 foundation suggests strong general language understanding and generation capabilities, making it applicable to tasks such as the following (a usage sketch follows the list):

  • Text generation and completion
  • Summarization
  • Question answering
  • Chatbot development
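
As a rough illustration of these use cases, the snippet below runs a summarization-style chat prompt through the Transformers text-generation pipeline. It assumes the repository is available on the Hugging Face Hub and ships a Qwen3-style chat template; the prompt and generation settings are arbitrary examples.

```python
# Illustrative use-case sketch; the prompt and settings are arbitrary, and the
# model id assumes the checkpoint is hosted on the Hugging Face Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ShuvoSync/Kyn-0.8",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "Summarize the advantages of parameter-efficient fine-tuning in two sentences."},
]

# With chat-formatted input, the pipeline applies the model's chat template
# and returns the conversation with the assistant's reply appended.
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```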