unsloth/Qwen2-1.5B
Task: Text generation
Model size: 1.5B
Quantization: BF16
Context length: 32k
Published: Jun 6, 2024
License: apache-2.0
Architecture: Transformer
Concurrency cost: 1
Open weights: yes

unsloth/Qwen2-1.5B is a 1.5-billion-parameter causal language model from the Qwen2 family, packaged by Unsloth for efficient fine-tuning. It supports a 32,768-token context length, making it suitable for tasks requiring extensive context. The model is designed to leverage Unsloth's acceleration techniques, enabling roughly 2x faster fine-tuning with significantly reduced memory consumption compared to standard methods. It is well suited for developers who want to quickly adapt a compact yet capable LLM to a variety of applications.
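As a minimal sketch of how this checkpoint is typically loaded for fine-tuning, the snippet below uses Unsloth's `FastLanguageModel` API to load the model and attach LoRA adapters. The specific LoRA hyperparameters (`r`, `lora_alpha`, the target module list) are illustrative defaults, not values prescribed by this model card, and running it requires a CUDA GPU with the `unsloth` package installed.

```python
from unsloth import FastLanguageModel

# Load the base model in BF16 (per the card); requires a CUDA GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-1.5B",
    max_seq_length=32768,   # matches the card's 32k context length
    load_in_4bit=False,     # set True for 4-bit QLoRA on smaller GPUs
)

# Attach LoRA adapters for parameter-efficient fine-tuning.
# These hyperparameters are illustrative assumptions, not card defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

From here the model can be passed to a standard trainer (e.g. TRL's `SFTTrainer`) like any PEFT model.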
