flammenai/Flammades-Qwen2.5-32B
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

Flammades-Qwen2.5-32B is a 32.8 billion parameter language model developed by flammenai on the Qwen2.5 architecture. It was fine-tuned with ORPO on the flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1 preference datasets to improve conversational quality and factual accuracy. It is designed for general language tasks, combining the capacity of the 32B Qwen2.5 base with this preference tuning.
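A minimal usage sketch, assuming the model is served as standard Hugging Face weights and loaded with the `transformers` library (the model id is taken from this card; prompt contents and generation settings are illustrative):

```python
# Sketch: loading flammenai/Flammades-Qwen2.5-32B with Hugging Face transformers.
# Assumes transformers and a sufficiently large GPU/CPU setup for a 32B model.

MODEL_ID = "flammenai/Flammades-Qwen2.5-32B"


def build_chat(user_prompt: str) -> list[dict]:
    """Build a chat message list in the format Qwen2.5 chat templates expect."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model and return the decoded reply."""
    # Heavy imports are kept local so build_chat stays importable
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Render the chat with the model's own template, then tokenize.
    text = tokenizer.apply_chat_template(
        build_chat(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)

    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated reply.
    reply_ids = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)
```

Since the weights are FP8-quantized with a 32k context window, the same model id should also work with inference servers that support those features (e.g. via an OpenAI-compatible endpoint), but that setup is outside the scope of this sketch.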
