mimi1998/Qwen3-8B-Instruct-SFT-Meme-LoRA-V3
Type: Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Feb 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

mimi1998/Qwen3-8B-Instruct-SFT-Meme-LoRA-V3 is an 8-billion-parameter instruction-tuned Qwen3 language model published by mimi1998, with a 32,768-token context length. It was fine-tuned with LoRA using Unsloth and Hugging Face's TRL library for faster training, and is intended for instruction-following tasks.
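As an instruction-tuned model, it expects conversations rendered in a chat template. Qwen-family chat models typically use a ChatML-style format with `<|im_start|>`/`<|im_end|>` markers; in practice you would call `tokenizer.apply_chat_template`, but a minimal hand-rolled sketch (assuming the standard Qwen template, not verified against this specific fine-tune) looks like:

```python
def build_chatml_prompt(messages):
    """Render a message list in the ChatML-style format used by Qwen chat
    models: each turn is wrapped in <|im_start|>{role} ... <|im_end|>,
    followed by an open assistant turn to prompt generation."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Trailing open assistant header tells the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain LoRA in one sentence."},
])
print(prompt)
```

Using the tokenizer's own `apply_chat_template` is preferred, since the exact special tokens are defined by the model's tokenizer config rather than by convention.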
