Model Overview
jondurbin/airoboros-l2-13b-2.2 is an experimental 13-billion-parameter model fine-tuned by jondurbin, primarily on synthetic data generated by the Airoboros project. This version is a "clean" variant: it omits the de-alignment data found in other Airoboros 2.2 models, offering a more aligned experience for general use while remaining overridable via the system prompt. The training data emphasizes instruction/response pairs over casual chat or roleplay, with significant portions dedicated to coding, general instructions, and Orca-style prompts.
Key Capabilities
- Context-Obedient Question Answering: Tuned to strictly adhere to provided context, minimizing hallucinations by ignoring prior knowledge and limiting responses to the given information. Closed-context instructions use a specific BEGININPUT/ENDINPUT format.
- Coding: Capable of generating complex code based on detailed requirements, supporting various programming languages and scenarios. A PLAINFORMAT flag requests plain code without explanations.
- Agent/Function Calling: Designed to generate function calls in JSON or YAML format based on user input and a list of available tools, similar to OpenAI's function calling.
- Chain-of-Thought Reasoning: Can provide multiple potential solutions to problems, rank them based on mathematical logic, and select the most feasible answer.
- ReWOO-style Execution Planning: Supports generating systematic plans for complex instructions that require multiple tool calls, outputting a sequence of actions and evidence steps.
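The closed-context format above can be sketched as a small prompt-building helper. This is a minimal illustration, not the model's official tooling: the BEGININPUT/BEGINCONTEXT delimiters follow the pattern described in the capability list, while the metadata keys (`date`, `url`) and the function name itself are illustrative assumptions.

```python
def build_closed_context_prompt(context_blocks, instruction):
    """Assemble a closed-context prompt from (metadata, text) pairs.

    Hypothetical helper: wraps each source in BEGININPUT/ENDINPUT with a
    BEGINCONTEXT metadata section, then appends the instruction inside
    BEGININSTRUCTION/ENDINSTRUCTION, as described in the capability list.
    """
    parts = []
    for metadata, text in context_blocks:
        meta_lines = "\n".join(f"{k}: {v}" for k, v in metadata.items())
        parts.append(
            "BEGININPUT\nBEGINCONTEXT\n"
            f"{meta_lines}\nENDCONTEXT\n{text}\nENDINPUT"
        )
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)


# Example usage with one illustrative source block:
prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://example.com"},
      "Blueberries are now green.")],
    "What color are blueberries? Cite your source.",
)
print(prompt)
```

Constraining the model to answer only from the delimited inputs is what enables the reduced-hallucination behavior described above.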
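The agent/function-calling flow can likewise be sketched in a few lines. Everything here is a hedged assumption beyond what the card states: the model card only says the model emits JSON (or YAML) selecting one of the provided tools, so the prompt wording, tool schema, and `search` tool are hypothetical.

```python
import json

# Hypothetical tool list; the schema shape is an assumption for illustration.
TOOLS = [
    {
        "name": "search",
        "description": "Search the web for a query.",
        "parameters": {"query": "string"},
    }
]


def build_agent_prompt(user_input, tools):
    """Format the user request plus the available tools for the model."""
    return (
        "As an AI agent, select the best function from the list below "
        "based on the user input, and respond in JSON format.\n\n"
        f"Input: {user_input}\n\n"
        f"Available functions:\n{json.dumps(tools, indent=2)}"
    )


def parse_function_call(model_reply, tools):
    """Parse the model's JSON reply and validate the chosen function."""
    call = json.loads(model_reply)
    if call.get("function") not in {t["name"] for t in tools}:
        raise ValueError(f"unknown function: {call.get('function')!r}")
    return call


prompt = build_agent_prompt("Find recent news about Llama 2.", TOOLS)
# An example of the kind of reply the model is expected to produce:
reply = '{"function": "search", "parameters": {"query": "Llama 2 news"}}'
call = parse_function_call(reply, TOOLS)
```

Validating the parsed call against the tool list before dispatching is a sensible guard, since a 13B model can occasionally emit malformed or out-of-list function names.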
Good for
- Developers requiring a model for instruction-following tasks.
- Applications needing reliable context-based question answering with reduced hallucination.
- Use cases involving code generation or function calling/agentic workflows.
- Scenarios where multi-step reasoning and structured problem-solving are beneficial.