CalamitousFelicitousness/Qwen2.5-7B-Instruct-fp8-dynamic
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Sep 18, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Qwen2.5-7B-Instruct is a 7.61 billion parameter instruction-tuned causal language model developed by Qwen, building on the Qwen2 architecture. The model features a 131,072 token context length and offers significant improvements in coding, mathematics, instruction following, and structured-data understanding. It excels at generating long texts and structured outputs such as JSON, and supports more than 29 languages.
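As a minimal sketch of how such an instruct model is typically queried, the snippet below builds a chat-completions request payload for an OpenAI-compatible inference server (for example, one serving this checkpoint via vLLM). The endpoint, server setup, and helper function are assumptions for illustration, not part of this model card.

```python
import json

# Hypothetical helper: construct a chat-completions payload for an
# OpenAI-compatible server hosting this model. Server setup is assumed,
# e.g. `vllm serve CalamitousFelicitousness/Qwen2.5-7B-Instruct-fp8-dynamic`.
def build_chat_request(user_prompt,
                       system_prompt="You are a helpful assistant.",
                       max_tokens=512):
    return {
        "model": "CalamitousFelicitousness/Qwen2.5-7B-Instruct-fp8-dynamic",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
    }

# Example request asking for the structured JSON output the model is tuned for.
payload = build_chat_request(
    "Return a JSON object with keys 'name' and 'version'."
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the server's `/v1/chat/completions` route with any HTTP client.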
