kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test

Text generation · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Oct 1, 2024 · License: apache-2.0 · Architecture: Transformer

kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test is a 3.1 billion parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-3B-Instruct. This model supports a context length of 32768 tokens and is designed for general conversational tasks. It is based on the Qwen2.5 architecture and is intended for use in applications requiring a compact yet capable language model.


Overview

kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test is a 3.1-billion-parameter instruction-tuned language model fine-tuned from Qwen/Qwen2.5-3B-Instruct. As the name suggests, it is an experimental fine-tune aimed at loosening the base model's refusal behavior while retaining its general conversational and instruction-following abilities across a 32768-token context window. It is shared by kxdw2580 under the Apache-2.0 license.
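Like other Qwen2.5 instruct models, this model converses in the ChatML format, which `tokenizer.apply_chat_template` produces automatically when loading it with the `transformers` library. As a rough sketch of what that template emits (the role names and turn structure follow ChatML; the example system prompt is an assumption for illustration):

```python
# Minimal sketch of the ChatML conversation format used by Qwen2.5
# instruct models. In practice, prefer tokenizer.apply_chat_template;
# this hand-rolled version only illustrates the wire format.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen2.5 architecture in one sentence."},
])
```

Feeding `prompt` to the model and generating until the `<|im_end|>` stop token yields the assistant's reply.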

Key Capabilities

  • Instruction Following: Capable of understanding and executing instructions for various tasks.
  • Multilingual Support: Supports multiple languages including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
  • Extended Context Window: Processes inputs up to 32768 tokens, allowing for more complex and longer interactions.
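Even with a 32768-token window, long multi-turn sessions eventually overflow the context, so applications typically drop the oldest turns first. A minimal sketch of that policy (the whitespace-based token count is a crude stand-in for the model's real tokenizer, purely an assumption for illustration):

```python
# Sketch of left-truncating a conversation to fit a fixed context budget.
# The whitespace split below is a placeholder; a real application would
# count tokens with the model's own tokenizer.

MAX_CONTEXT = 32768

def count_tokens(text):
    return len(text.split())  # crude stand-in for real tokenization

def fit_to_context(turns, budget=MAX_CONTEXT, reserve=1024):
    """Drop the oldest turns until the rest fits, reserving room for the reply."""
    kept = list(turns)
    while kept and sum(count_tokens(t) for t in kept) > budget - reserve:
        kept.pop(0)  # discard the oldest turn first
    return kept

history = ["hello " * 40000, "short question"]
trimmed = fit_to_context(history)
```

Reserving headroom (`reserve`) keeps space for the model's generated reply inside the same window.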

Good for

  • General Conversational AI: Suitable for chatbots, virtual assistants, and interactive applications.
  • Multilingual Applications: Suited to use cases requiring understanding and generation across the supported languages listed above.
  • Research and Development: Provides a solid base for further fine-tuning on specific datasets or tasks, particularly for those exploring uncensored model behaviors.