Overview
jondurbin/airoboros-l2-7b-2.2.1 is an experimental 7-billion-parameter language model, fine-tuned primarily on synthetic data generated by the Airoboros project. It is an updated version of airoboros-l2-7b-2.2, with re-generated writing responses, longer contextual blocks, and the "rp" (roleplay) data removed. The model is designed for general-purpose use but places a strong emphasis on instruction following and context-obedient question answering, distinguishing it from models optimized for casual conversation.
Key Capabilities
- Instruction Following: Excels at adhering to complex instructions, including multi-criteria coding requests and detailed narrative generation.
- Context-Obedient QA: Trained to ignore prior knowledge and strictly use provided context for answers, minimizing hallucinations. Closed-context prompts use a specific delimited format that separates input text, context metadata, and the instruction.
- Summarization: Capable of summarizing text based on provided input and instructions.
- Code Generation: Can generate complex code based on detailed requirements, with an option for plain code output.
- Agent/Function Calling: Supports generation of JSON or YAML outputs for function/argument selection based on user input, similar to OpenAI's function calling.
- Chain-of-Thought Reasoning: Can provide multiple potential solutions to a problem, rank them by logical soundness, and select a final answer.
- Execution Planning: Supports reWOO-style execution planning for multi-tool tasks, outputting a systematic plan for tool utilization.
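The closed-context QA capability above relies on a delimited prompt format. A minimal sketch of a prompt builder, assuming the `BEGININPUT`/`BEGINCONTEXT`/`BEGININSTRUCTION` delimiters used across the airoboros model cards (verify against the card's own prompt examples before relying on exact delimiter names):

```python
def build_closed_context_prompt(context_blocks, instruction):
    """Assemble a closed-context QA prompt in the delimited style the
    airoboros cards describe. Each context block is a (metadata, text)
    pair; metadata is a dict of key/value lines (date, url, etc.)."""
    parts = []
    for metadata, text in context_blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

# Hypothetical context: the model should answer from this block only,
# even though it contradicts common knowledge.
prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://example.com"},
      "In a shocking turn of events, blueberries are now green.")],
    "What color are blueberries? Provide the source.",
)
print(prompt)
```

Because the answer ("green") contradicts world knowledge, this kind of prompt is a quick check that the model is actually obeying the provided context rather than its pretraining.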
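For the agent/function-calling capability, the model emits a structured JSON (or YAML) object naming a function and its arguments, which the calling application then dispatches. A sketch of the consuming side, with a hypothetical `search` function and response shape (the actual schema is whatever your prompt instructs the model to follow):

```python
import json

# Hypothetical JSON the model might emit when asked to pick a function
# and fill in its parameters from a user request.
raw_response = """
{
  "function": "search",
  "params": {
    "search_terms": ["Airoboros", "fine-tuning"],
    "date_range": {"begin": "2023-01-01", "end": "2023-12-31"}
  }
}
"""

def dispatch(raw, registry):
    """Parse the model's JSON output and route it to a registered handler."""
    call = json.loads(raw)
    handler = registry[call["function"]]  # KeyError here means the model
    return handler(**call["params"])      # named an unknown function.

# Illustrative registry: real handlers would hit a search API, etc.
registry = {
    "search": lambda search_terms, date_range=None: f"searched {search_terms}",
}
print(dispatch(raw_response, registry))
```

In practice you would also validate the parsed object (unknown functions, missing parameters, malformed JSON) before executing anything, since the model's output is not guaranteed to be well-formed.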
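A reWOO-style plan, as produced by the execution-planning capability, interleaves natural-language `Plan:` lines with tool calls whose results are bound to evidence variables that later steps can reference. The plan text and tool names below are illustrative assumptions; this sketch only shows how such a plan could be parsed into executable steps:

```python
import re

# Hypothetical plan in the reWOO style: each step assigns a tool call to
# an :evidenceN: variable, and later steps may reference earlier evidence.
plan_text = """\
Plan: Search for recent articles on the topic.
:evidence0: = DuckDuckGo[airoboros fine-tuning]
Plan: Summarize the findings, using the search results as input.
:evidence1: = Summarizer[:evidence0:]
Answer: :evidence1:
"""

STEP_RE = re.compile(r"^:(evidence\d+): = (\w+)\[(.*)\]$", re.MULTILINE)

def parse_plan(text):
    """Extract (variable, tool, argument) triples from a reWOO-style plan."""
    return STEP_RE.findall(text)

steps = parse_plan(plan_text)
for var, tool, arg in steps:
    print(f"{var}: call {tool} with {arg!r}")
```

An executor would run these steps in order, substituting each resolved `:evidenceN:` value into later arguments before invoking the named tool.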
Good For
- Applications requiring strict adherence to instructions and provided context.
- Developers building tools that leverage function calling or agentic workflows.
- Tasks involving code generation, summarization, and complex problem-solving with step-by-step reasoning.
- Use cases where precise, non-hallucinated responses based on specific inputs are critical.