Overview
ORLM-LLaMA-3-8B: Specialized for Operations Research
ORLM-LLaMA-3-8B is an 8-billion-parameter model developed by Cardinal Operations and fine-tuned from Meta's LLaMA-3-8B. Its core capability is reading an operations research (OR) problem stated in natural language and automatically generating both the mathematical model and the Python code, built on the coptpy library, that solves it. This specialization makes it a powerful tool for automating the modeling phase of OR tasks.
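The snippet below is a minimal sketch of prompting the model through Hugging Face transformers. The repository id CardinalOperations/ORLM-LLaMA-3-8B, the prompt wording, and the example question are assumptions made for illustration; consult the model card for the exact prompt template the model was trained with.

```python
# Minimal sketch: prompting ORLM-LLaMA-3-8B with a natural-language OR problem.
# The repo id and prompt wording below are assumptions; check the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "CardinalOperations/ORLM-LLaMA-3-8B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)

# Illustrative OR question and prompt; not the official template.
question = (
    "A factory makes chairs and tables. Each chair yields $30 profit and each "
    "table $50. A chair needs 2 hours of labor and a table 4 hours; 100 labor "
    "hours are available. How many of each should be made to maximize profit?"
)
prompt = (
    "Below is an operations research question. Build a mathematical model and "
    "corresponding Python code using coptpy to solve it.\n\n"
    f"# Question:\n{question}\n\n# Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
# Decode only the newly generated tokens (the model's model + code answer).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```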
Key Capabilities
- Automated OR Modeling: Translates complex OR questions into precise mathematical formulations, including decision variables, objective functions, and constraints.
- Code Generation: Produces executable Python code using coptpy for the generated mathematical models, facilitating direct problem solving (see the sketch after this list).
- Strong Performance: Achieves competitive results across various OR benchmarks, including NL4OPT (85.7%), MAMO EasyLP (82.3%), MAMO ComplexLP (37.4%), and IndustryOR (38.0%), with a micro average of 71.4% and a macro average of 60.8%.
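For context, here is a minimal, hand-written sketch of the kind of coptpy program the model is expected to emit for a small linear program. The product-mix data is illustrative and not actual model output; it assumes a local COPT installation with the coptpy package available.

```python
# Minimal sketch of a coptpy model of the kind ORLM-LLaMA-3-8B generates.
# Data (profits, labor hours) is illustrative, not produced by the model.
import coptpy as cp
from coptpy import COPT

env = cp.Envr()
model = env.createModel("product_mix")

# Decision variables: number of chairs (x) and tables (y) to produce.
x = model.addVar(lb=0.0, name="chairs")
y = model.addVar(lb=0.0, name="tables")

# Objective: maximize total profit.
model.setObjective(30 * x + 50 * y, sense=COPT.MAXIMIZE)

# Constraint: limited labor hours.
model.addConstr(2 * x + 4 * y <= 100, name="labor")

model.solve()

if model.status == COPT.OPTIMAL:
    print("Objective value:", model.objval)
    print("chairs =", x.x, "tables =", y.x)
```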
When to Use This Model
- Operations Research: Ideal for tasks involving optimization, resource allocation, scheduling, and other OR problems.
- Mathematical Modeling: When there is a need to convert descriptive problems into structured mathematical models.
- Automated Solver Integration: For users who require Python code generation compatible with optimization solvers like COPT via coptpy (a sketch of running such generated code follows this list).
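As a hedged illustration of solver integration, the sketch below extracts a fenced Python block from the model's response and runs it in a subprocess. The assumption that the output contains a fenced python code block, and the helper name run_generated_code, are illustrative rather than part of the model's documented interface.

```python
# Minimal sketch: executing the Python/coptpy code returned by the model.
# Assumes the generated text contains a fenced ```python ... ``` block; the
# extraction pattern and helper name are illustrative assumptions.
import re
import subprocess
import tempfile

def run_generated_code(generated_text: str) -> str:
    match = re.search(r"```python\n(.*?)```", generated_text, re.DOTALL)
    if match is None:
        raise ValueError("No Python code block found in the model output")
    code = match.group(1)

    # Run in a separate process so a solver failure does not crash the caller.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=300
    )
    return result.stdout

# Usage: print(run_generated_code(model_response))
```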