open-r1/OlympicCoder-7B

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Mar 11, 2025 · License: apache-2.0 · Architecture: Transformer · 0.2K · Open Weights · Warm

OlympicCoder-7B is a 7-billion-parameter code model developed by open-r1, fine-tuned from Qwen/Qwen2.5-Coder-7B-Instruct. It is optimized for competitive programming, demonstrating strong performance on benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics, and excels at generating solutions to challenging algorithmic problems.


OlympicCoder-7B: A Specialized Code Generation Model

OlympicCoder-7B, developed by open-r1, is a 7-billion-parameter model fine-tuned specifically for competitive programming. It is based on Qwen/Qwen2.5-Coder-7B-Instruct and is designed to tackle complex coding challenges.

Key Capabilities

  • Competitive Coding Performance: Achieves strong results on demanding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics (IOI'24).
  • Fine-tuned on Decontaminated Data: Trained on a decontaminated version of the Codeforces dataset, enhancing its ability to solve algorithmic problems.
  • C++ Post-training: The model was post-trained exclusively on C++ solutions, which may influence its performance on Python-centric benchmarks like LiveCodeBench, where it is considered partially out-of-domain.
  • Chain-of-Thought (CoT) Prompting: The model's chat template is configured to encourage detailed chain-of-thought reasoning, prefilling the assistant's turn with a <think> token.
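A minimal sketch of calling the model with the transformers library, assuming the published chat template handles the `<think>` prefill automatically (the example problem text is a toy stand-in, not from the model card):

```python
# Sketch: generating a solution with OlympicCoder-7B via transformers.
# The chat template itself prefills the assistant turn with <think>,
# so the caller only supplies a plain user message.
def build_messages(problem: str) -> list[dict]:
    """Wrap a competitive-programming problem as a single user turn."""
    return [{"role": "user", "content": problem}]

if __name__ == "__main__":
    # Heavy dependencies kept inside the guard so the helper stays importable.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="open-r1/OlympicCoder-7B",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    messages = build_messages(
        "Write a C++ program that reads two integers and prints their sum."
    )
    out = pipe(messages, max_new_tokens=4096)
    print(out[0]["generated_text"][-1]["content"])
```

Because post-training was C++-only, asking for C++ solutions (as above) plays to the model's strengths.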

Good For

  • Competitive Programmers: Ideal for generating solutions to challenging algorithmic problems encountered in contests like the IOI or platforms like Codeforces and LeetCode.
  • Code Generation Tasks: Particularly effective for tasks requiring logical problem-solving and code implementation in a competitive context.
  • Research in Code LLMs: Useful for researchers exploring model performance on advanced coding benchmarks and the impact of specialized training data.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Tracked parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
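The sampler parameters listed above map onto transformers-style generation kwargs. A minimal sketch with illustrative placeholder values (the actual user-popular combinations are not reproduced here); note that frequency_penalty and presence_penalty are OpenAI-API-style knobs with no direct `generate` equivalent, repetition_penalty being the closest analogue:

```python
# Sketch: bundle sampler settings into kwargs for `model.generate` or a
# text-generation pipeline. All default values below are placeholders.
def make_generation_kwargs(temperature=0.8, top_p=0.95, top_k=50,
                           repetition_penalty=1.05, min_p=0.05,
                           max_new_tokens=4096):
    """Return a generation-kwargs dict from common sampler settings."""
    return {
        "do_sample": temperature > 0,   # fall back to greedy at temperature 0
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "repetition_penalty": repetition_penalty,
        "min_p": min_p,                 # supported in recent transformers releases
        "max_new_tokens": max_new_tokens,
    }
```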