Qwen/Qwen2-1.5B-Instruct
Text Generation · Open Weights
Concurrency Cost: 1
Model Size: 1.5B
Quant: BF16
Ctx Length: 32k
Published: Jun 3, 2024
License: apache-2.0
Architecture: Transformer
Qwen/Qwen2-1.5B-Instruct is a 1.5-billion-parameter instruction-tuned causal language model from the Qwen team, part of the Qwen2 series. Built on a Transformer architecture with SwiGLU activation and grouped-query attention (GQA), it uses an improved tokenizer that adapts well to multiple natural languages and code. The model performs strongly across language understanding, generation, multilingual tasks, coding, mathematics, and reasoning benchmarks, making it suitable for a wide range of general-purpose conversational applications.
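For readers who want to try the model locally, here is a minimal sketch of chat-style inference with the Hugging Face transformers library. The system prompt, user message, and generation settings are illustrative choices, not values prescribed by this card.

```python
# Minimal chat inference sketch for Qwen/Qwen2-1.5B-Instruct.
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up the published BF16 weights where supported
    device_map="auto",    # place layers on available GPU/CPU automatically
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain grouped-query attention in one sentence."},
]
# Apply the model's chat template and append the assistant-turn prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```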
Popular Sampler Settings
The three sampler-parameter combinations most commonly used by Featherless users for this model. Each combination sets the parameters listed below; a sketch of applying such a combination via the API follows the list.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
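Since the actual values render only in the interactive page, the sketch below shows how one such combination could be applied through an OpenAI-compatible chat completions endpoint. The base URL, API key placeholder, and every sampler value are assumptions for illustration; samplers outside the OpenAI schema, such as top_k, repetition_penalty, and min_p, are passed via extra_body, which OpenAI-compatible servers commonly accept.

```python
# Hedged sketch: applying a sampler combination over an OpenAI-compatible API.
# Requires: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint; check provider docs
    api_key="YOUR_API_KEY",                    # placeholder
)

response = client.chat.completions.create(
    model="Qwen/Qwen2-1.5B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    # Standard OpenAI sampler parameters (all values illustrative):
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Samplers outside the OpenAI schema go through extra_body (all illustrative):
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```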