ant-opt/LLMOPT-Qwen2.5-14B
Text generation · Open weights
Model size: 14.8B
Quantization: FP8
Context length: 32k
Concurrency cost: 1
Architecture: Transformer
Published: Apr 21, 2025
License: MIT

LLMOPT-Qwen2.5-14B is a 14.8 billion parameter language model developed by Ant Group, East China Normal University, and Nanjing University. Fine-tuned from Qwen2.5-14B-Instruct, it specializes in formulating and solving general optimization problems from natural-language descriptions. The model reports high execution rates and solving accuracy across a range of optimization benchmarks, making it suitable for complex operations research tasks.
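The model card does not include usage code, but the workflow it describes is: take a natural-language problem statement, formulate it as an optimization model, and emit executable solver code. As a purely hypothetical illustration of that target output (the problem, function name, and numbers below are invented, and a real generated program would typically call an LP/MIP solver rather than brute force):

```python
# Hypothetical example of the kind of executable formulation an
# LLMOPT-style model aims to produce from a prompt such as:
#
#   "A workshop makes chairs (profit 30) and tables (profit 50).
#    A chair takes 2 labor-hours, a table 4, with 40 hours available,
#    and at most 12 items can be made. Maximize profit."
#
# A tiny brute-force integer search keeps the sketch dependency-free.

def solve():
    best = (0, 0, 0)  # (chairs, tables, profit)
    for chairs in range(13):
        for tables in range(13 - chairs):      # item-count constraint
            if 2 * chairs + 4 * tables <= 40:  # labor-hour constraint
                profit = 30 * chairs + 50 * tables
                if profit > best[2]:
                    best = (chairs, tables, profit)
    return best

best = solve()
print(best)  # optimal plan: 4 chairs, 8 tables, profit 520
```

In practice the model's value lies in the formulation step (identifying variables, constraints, and the objective from prose), with the numerical solve delegated to standard optimization tooling.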
