oxyapi/oxy-1-small
Oxy 1 Small is a 14.8-billion-parameter language model developed by Oxygen (oxyapi), fine-tuned from Qwen2.5-14B-Instruct. With a context length of 131,072 tokens, it is optimized for generating dynamic, contextually rich role-play dialogue, making it well suited to interactive storytelling and creative writing in role-play scenarios.
What is oxyapi/oxy-1-small?
Oxy 1 Small is a 14.8-billion-parameter language model developed by Oxygen (oxyapi), fine-tuned from the Qwen/Qwen2.5-14B-Instruct base model. It is designed and optimized for role-play scenarios, aiming to generate engaging dialogue and interactive storytelling. Per request, the model accepts up to 32,768 input tokens and produces up to 8,192 output tokens.
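Those per-request limits can be enforced client-side before sending a long conversation. The sketch below trims older turns while keeping the system prompt; the 4-characters-per-token estimate is a crude assumption, not the model's real tokenizer:

```python
# Client-side budgeting for oxy-1-small's stated limits
# (32,768 input tokens, 8,192 output tokens per request).
# The character-based token estimate is a rough heuristic.

MAX_INPUT_TOKENS = 32_768
MAX_OUTPUT_TOKENS = 8_192

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = MAX_INPUT_TOKENS) -> list[dict]:
    """Keep the system prompt plus the most recent turns that fit the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk from the newest turn backwards
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

In long role-play sessions this keeps the persona instructions pinned while the oldest dialogue turns fall out of the window first.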
Key Capabilities
- Specialized Role-Play Generation: Fine-tuned on custom datasets to produce dynamic and contextually rich dialogues for role-playing.
- Efficient Performance: Compact for its capability class at 14.8B parameters, offering faster inference with reduced computational requirements.
- Multilingual Support: While the fine-tune primarily targets English, the Qwen2.5 base model supports a wide range of languages, including Chinese, French, Spanish, German, and Japanese.
Performance Highlights
Evaluations on the Open LLM Leaderboard show an average score of 33.14 across the benchmark suite. Notable scores include 62.45 on IFEval (0-shot) and 44.45 on MMLU-PRO (5-shot).
Good for
- Developers building applications requiring interactive role-play experiences.
- Generating creative and immersive storytelling content.
- Use cases that benefit from a model specifically tuned for in-character dialogue generation.
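As an illustration of the dialogue-generation use case, here is a minimal sketch of calling the model through an OpenAI-compatible chat completions endpoint. The URL and API key are placeholders, and the request/response shapes assume the common OpenAI-style schema rather than a documented Oxygen API:

```python
# Hypothetical sketch: querying oxy-1-small via an OpenAI-compatible
# chat completions endpoint, using only the standard library.
import json
import urllib.request

API_URL = "https://example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                             # placeholder key

def build_roleplay_messages(persona: str, user_turn: str) -> list[dict]:
    """An in-character system prompt plus the user's opening turn."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Stay in character and reply "
                    "with vivid, dialogue-driven prose."},
        {"role": "user", "content": user_turn},
    ]

def chat(messages: list[dict], max_tokens: int = 512,
         temperature: float = 0.8) -> str:
    """POST a chat request and return the assistant's reply text."""
    payload = {
        "model": "oxyapi/oxy-1-small",
        "messages": messages,
        "max_tokens": max_tokens,    # well under the 8,192-token output cap
        "temperature": temperature,  # some variety suits creative dialogue
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat(build_roleplay_messages(
        "a weary innkeeper in a storm-lashed port town",
        "I push open the door, dripping wet. 'Got a room?'")))
```

The system prompt carries the persona; keeping it in every request is what holds the character steady across turns.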