heajea/qwen3.5-4b-english-tutor-v3

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Context Length: 32k · Published: Apr 26, 2026 · Architecture: Transformer

heajea/qwen3.5-4b-english-tutor-v3 is a 4-billion-parameter, Qwen3.5-based language model fine-tuned for English tutoring. It was fine-tuned and converted to GGUF format with Unsloth, making it suitable for efficient deployment on a range of hardware. Its primary strength is providing English language instruction and support.


Model Overview

heajea/qwen3.5-4b-english-tutor-v3 is built on the Qwen3.5 architecture and has been specifically fine-tuned to act as an English tutor, applying the base model's language capabilities to instruction and practice.

Key Capabilities

  • English Tutoring: Designed to assist users with English language learning and practice.
  • Efficient Deployment: Converted to the GGUF format, enabling optimized performance and compatibility with tools like llama-cli and llama-mtmd-cli.
  • Unsloth Optimization: The model was fine-tuned and converted using Unsloth, which facilitates faster training and efficient inference.
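Since the model ships in GGUF format, it can be run directly with llama.cpp's `llama-cli`. The sketch below is a minimal invocation, assuming the model page exposes a GGUF file that `llama-cli` can fetch via its Hugging Face integration; the exact quantized filename may differ, so check the repository before downloading.

```shell
# Sketch: run the GGUF build locally with llama.cpp's llama-cli.
# The -hf flag downloads from the named Hugging Face repo; the repo id
# matches this model card, but verify the available GGUF files first.
llama-cli \
  -hf heajea/qwen3.5-4b-english-tutor-v3 \
  -c 32768 \
  -p "Explain the difference between 'affect' and 'effect' with two example sentences."
```

`-c 32768` matches the model's advertised 32k context length; a smaller value reduces memory use on constrained hardware.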

Good For

  • Interactive English Learning: Ideal for applications requiring an AI assistant for English language education.
  • Local LLM Deployment: Suitable for users who want to run an English tutoring model locally using GGUF-compatible inference engines like Ollama or llama.cpp.
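For Ollama-based deployment, a Modelfile can wrap the GGUF with a tutoring system prompt. This is a hypothetical config fragment: the GGUF filename and the system prompt text are illustrative assumptions, not part of the published model card.

```
# Hypothetical Ollama Modelfile for this model.
# The GGUF filename below is an assumption -- use the actual file
# downloaded from the repository.
FROM ./qwen3.5-4b-english-tutor-v3.BF16.gguf
PARAMETER num_ctx 32768
SYSTEM "You are a patient English tutor. Correct mistakes gently and explain the relevant grammar briefly."
```

With the file saved as `Modelfile`, `ollama create english-tutor -f Modelfile` registers the model and `ollama run english-tutor` starts an interactive session.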