boqiny/Qwen3-8B-FengGe-SFT

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 31, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

boqiny/Qwen3-8B-FengGe-SFT is an 8 billion parameter conversational model, LoRA fine-tuned from Qwen/Qwen3-8B for informal, long-form Chinese dialogue. It is trained on the Zhoulifeng Streaming Dataset to mimic the distinctive "峰哥" (Feng Ge) speaking style. The model excels at free-flowing, personality-driven responses in Chinese, making it suitable for applications that require a specific conversational tone rather than factual accuracy.


Overview

Qwen3-8B-FengGe-SFT is an 8 billion parameter conversational model, fine-tuned using LoRA on the Qwen/Qwen3-8B base model. Its primary distinction is its training on the Zhoulifeng Streaming Dataset to adopt a unique, informal Chinese speaking style known as "峰哥". This model is optimized for generating long-form, free-flowing dialogue with a strong personality.
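Since the model is a fine-tune of Qwen/Qwen3-8B, it should load through the standard Hugging Face transformers chat workflow for that base model. The sketch below is an assumed usage pattern, not an official example from the model card; the generation parameters (including the repetition_penalty noted under Limitations) are illustrative defaults.

```python
# Hypothetical usage sketch for boqiny/Qwen3-8B-FengGe-SFT. Assumes the model
# follows the Qwen3-8B chat template; parameter values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "boqiny/Qwen3-8B-FengGe-SFT"

def generate_reply(prompt: str, max_new_tokens: int = 512) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Wrap the user turn in the chat template inherited from Qwen3-8B.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    # repetition_penalty > 1.0 helps curb the repetitive enumerations
    # mentioned under Limitations.
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, repetition_penalty=1.1
    )
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```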

Key Capabilities

  • Generates conversational Chinese text with a specific, informal "峰哥" stylistic bias.
  • Produces long, free-flowing responses suitable for extended dialogue.
  • Leverages the Qwen3-8B architecture for robust language generation.

Good for

  • Creating chatbots or virtual assistants with a distinct, personality-driven Chinese voice.
  • Simulating informal, conversational interactions in Chinese.
  • Applications where stylistic consistency and long-form dialogue are prioritized over factual accuracy.

Limitations

  • Exhibits a strong stylistic bias, making it unsuitable for neutral or formal contexts.
  • May generate repetitive enumerations if repetition_penalty is not properly configured during inference.
  • Not recommended for factual question-answering or safety-critical applications due to its specialized training and stylistic focus.
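To make the repetition_penalty caveat above concrete, the standard convention (as used by transformers' RepetitionPenaltyLogitsProcessor) divides the logit of any already-generated token by the penalty when positive and multiplies it when negative, so repeated tokens always become less likely for penalty > 1.0. A minimal pure-Python sketch of that adjustment:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    """Penalize tokens that already appear in the generated sequence.

    Positive logits are divided by the penalty and negative logits are
    multiplied by it, so any repeated token's score is pushed down
    whenever penalty > 1.0.
    """
    penalized = list(logits)
    for token_id in set(generated_ids):
        score = penalized[token_id]
        penalized[token_id] = score / penalty if score > 0 else score * penalty
    return penalized

scores = [2.0, -1.0, 0.5, 3.0]
# Tokens 0 and 1 were already generated; tokens 2 and 3 were not.
adjusted = apply_repetition_penalty(scores, generated_ids=[0, 1], penalty=2.0)
# token 0: 2.0 / 2.0 = 1.0; token 1: -1.0 * 2.0 = -2.0; others unchanged
```

With penalty set to 1.0 the function is a no-op, which is why an unconfigured (default) value can leave the model free to loop on enumerations.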