VoCuc/Qwen1.5_1.8B_SFT_Dolly

Text generation · Concurrency cost: 1 · Model size: 1.8B · Quant: BF16 · Context length: 32k · Architecture: Transformer · Status: Warm · Published: Jan 18, 2026

VoCuc/Qwen1.5_1.8B_SFT_Dolly is a 1.8 billion parameter causal language model, likely based on the Qwen1.5 architecture, fine-tuned for instruction following. With a context length of 32768 tokens, this model is designed for general-purpose conversational AI and task execution based on user prompts. Its compact size makes it suitable for applications requiring efficient inference while maintaining reasonable performance.


Model Overview

VoCuc/Qwen1.5_1.8B_SFT_Dolly is a 1.8 billion parameter language model, likely derived from the Qwen1.5 series, that has been instruction-tuned. The model is designed to understand and respond to a wide range of prompts, making it suitable for a variety of natural language processing tasks. The "SFT_Dolly" suffix indicates Supervised Fine-Tuning (SFT), most likely on a Dolly-style instruction dataset (such as databricks-dolly-15k, a collection of roughly 15,000 human-written instruction-response records). This approach teaches instruction following from curated demonstrations rather than from human preference data as in RLHF.

Key Characteristics

  • Parameter Count: 1.8 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a substantial context window of 32768 tokens, allowing it to process and generate longer sequences of text while maintaining coherence.
  • Instruction-Tuned: Optimized for following instructions and engaging in conversational interactions, making it versatile for various applications.
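The characteristics above can be exercised with a standard Hugging Face `transformers` loading sketch. The model ID comes from this card; the ChatML-style prompt template and the generation settings are assumptions carried over from the broader Qwen1.5 chat family, and this particular SFT checkpoint may expect a different format.

```python
# Sketch: prompting VoCuc/Qwen1.5_1.8B_SFT_Dolly with Hugging Face transformers.
# The ChatML-style template below is an assumption based on Qwen1.5 chat
# models; this Dolly SFT checkpoint may have been trained with another format.
MODEL_ID = "VoCuc/Qwen1.5_1.8B_SFT_Dolly"

def build_prompt(instruction: str,
                 system: str = "You are a helpful assistant.") -> str:
    """Format a single-turn instruction in ChatML style (assumed template)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def run_demo(instruction: str, max_new_tokens: int = 128) -> str:
    """Load the model and generate a reply (requires transformers + torch)."""
    # Deferred import so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

If the repository ships a chat template, `tokenizer.apply_chat_template(...)` is the safer way to format turns than hand-building the string above.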

Potential Use Cases

  • Chatbots and Conversational Agents: Its instruction-following nature makes it well-suited for building interactive dialogue systems.
  • Text Generation: Can be used for generating creative content, summaries, or expanding on given prompts.
  • Prototyping and Development: Its smaller size compared to larger models allows for faster iteration and deployment in development environments.
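For the text-generation and summarization use cases above, long inputs must fit inside the 32768-token window alongside the tokens to be generated. A minimal budgeting sketch follows; the 4-characters-per-token figure is a rough English-text rule of thumb, not a property of this model's tokenizer, so the real tokenizer should be used when precision matters.

```python
# Rough context budgeting against the 32768-token window stated on this card.
# CHARS_PER_TOKEN is an approximation for English text, not an exact property
# of the Qwen1.5 tokenizer.
CTX_TOKENS = 32768
CHARS_PER_TOKEN = 4

def fits_in_context(text: str, max_new_tokens: int = 512,
                    ctx_tokens: int = CTX_TOKENS) -> bool:
    """Estimate whether prompt plus generation budget fits the window."""
    est_prompt_tokens = len(text) / CHARS_PER_TOKEN
    return est_prompt_tokens + max_new_tokens <= ctx_tokens

def chunk_text(text: str, max_new_tokens: int = 512,
               ctx_tokens: int = CTX_TOKENS) -> list[str]:
    """Split text into pieces that each fit the estimated budget."""
    max_chars = (ctx_tokens - max_new_tokens) * CHARS_PER_TOKEN
    return [text[i:i + max_chars]
            for i in range(0, len(text), max_chars)] or [""]
```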