CardinalOperations/ORLM-LLaMA-3-8B

Text generation · 8B parameters · FP8 quantization · 8k context length · Published: May 29, 2024 · License: llama3 · Architecture: Transformer · Concurrency cost: 1

CardinalOperations/ORLM-LLaMA-3-8B is an 8-billion-parameter language model from Cardinal Operations, fully fine-tuned from Meta's LLaMA-3-8B on the OR-Instruct dataset. It specializes in operations research, translating natural language problems into mathematical models and generating corresponding Python code with `coptpy`. It performs strongly across the NL4OPT, MAMO, and IndustryOR benchmarks, particularly at generating optimization models and code.


ORLM-LLaMA-3-8B: Specialized for Operations Research

ORLM-LLaMA-3-8B is an 8 billion parameter model developed by Cardinal Operations, fine-tuned from Meta's LLaMA-3-8B. Its core capability lies in understanding natural language operations research (OR) problems and automatically generating both the mathematical model and Python code using the coptpy library to solve them. This specialization makes it a powerful tool for automating the modeling phase of OR tasks.

Key Capabilities

  • Automated OR Modeling: Translates complex OR questions into precise mathematical formulations, including decision variables, objective functions, and constraints.
  • Code Generation: Produces executable Python code using coptpy for the generated mathematical models, facilitating direct problem solving.
  • Strong Performance: Achieves competitive results across various OR benchmarks, including NL4OPT (85.7%), MAMO EasyLP (82.3%), MAMO ComplexLP (37.4%), and IndustryOR (38.0%), with a micro average of 71.4% and macro average of 60.8%.
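The macro average quoted above is the unweighted mean of the four benchmark scores (the micro average additionally weights by per-benchmark instance counts, which are not listed here). A quick check of the arithmetic:

```python
from statistics import mean

# Benchmark scores reported for ORLM-LLaMA-3-8B (accuracy, %)
scores = {
    "NL4OPT": 85.7,
    "MAMO EasyLP": 82.3,
    "MAMO ComplexLP": 37.4,
    "IndustryOR": 38.0,
}

# Macro average: unweighted mean across the four benchmarks
macro = mean(scores.values())
print(round(macro, 2))  # 60.85, i.e. the 60.8% macro average above
```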

When to Use This Model

  • Operations Research: Ideal for tasks involving optimization, resource allocation, scheduling, and other OR problems.
  • Mathematical Modeling: When there is a need to convert descriptive problems into structured mathematical models.
  • Automated Solver Integration: For users who require Python code generation compatible with optimization solvers like COPT via coptpy.
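Since the model answers in plain text, integrating it into a solver pipeline usually means extracting the generated code block from the completion before execution. A minimal stdlib-only sketch, assuming the completion wraps its code in a markdown ```python fence (the helper name and sample completion are invented for this example):

```python
import re

def extract_python_code(completion: str) -> str:
    """Return the first fenced Python code block from a model completion."""
    match = re.search(r"`{3}python\s*\n(.*?)`{3}", completion, re.DOTALL)
    if match is None:
        raise ValueError("no fenced python block in completion")
    return match.group(1)

# Invented sample completion: a model section followed by a code section.
FENCE = "`" * 3
completion = (
    "## Mathematical Model\n"
    "maximize 2x + 3y subject to x + y <= 4, x + 3y <= 6\n\n"
    "## Python Code\n"
    f"{FENCE}python\n"
    "answer = 2 * 3 + 3 * 1\n"
    f"{FENCE}\n"
)

code = extract_python_code(completion)
namespace: dict = {}
exec(code, namespace)  # real pipelines should sandbox generated code
print(namespace["answer"])  # 9
```

In a real deployment the extracted block would call an optimization solver rather than evaluate a constant, and generated code should run in an isolated environment rather than via a bare `exec`.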

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model tune the following sampler settings (specific values are not shown here):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p