Overview
jondurbin/airoboros-l2-13b-2.2.1 is an experimental 13-billion-parameter Llama 2 fine-tune by jondurbin, trained primarily on synthetic data generated with the airoboros framework. This version updates airoboros-l2-13b-2.2 with several key improvements.
Key Capabilities
- Enhanced Instruction Following: The model is designed for strong instruction adherence, making it suitable for complex task execution rather than casual chat or roleplay.
- Updated Training Data: Features re-generated writing responses and longer contextual blocks, contributing to improved coherence and context understanding.
- De-censoring: De-censoring efforts in this release are less aggressive than in previous versions.
- Context-Obedient Question Answering: Trained to prioritize provided context over its prior knowledge when instructed, reducing hallucinations in closed-context question answering.
- Summarization: Capable of summarizing text inputs using a specific closed-context format.
- Code Generation: Supports complex coding instructions across various languages, including options for plain code output.
- Agent/Function Calling: Can generate function and argument calls in JSON or YAML format based on user input, similar to OpenAI's function calling.
- Chain-of-Thought Reasoning: Able to provide multiple potential solutions to problems, rank them by logical soundness, and select the most feasible answer.
- ReWOO-Style Execution Planning: Supports systematic planning for complex instructions requiring multiple tool calls, outputting a structured plan.
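Airoboros models expect closed-context inputs in a dedicated block format, with each text block wrapped in BEGININPUT/ENDINPUT and its metadata in BEGINCONTEXT/ENDCONTEXT, followed by the instruction in BEGININSTRUCTION/ENDINSTRUCTION. A minimal Python sketch of assembling such a prompt (the helper name and metadata keys are illustrative, not part of any library):

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble an airoboros closed-context prompt.

    `blocks` is a list of (metadata_dict, text) pairs; each becomes a
    BEGININPUT/ENDINPUT section with its metadata inside
    BEGINCONTEXT/ENDCONTEXT.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)


# Hypothetical example: one source block with date/url metadata.
prompt = build_closed_context_prompt(
    blocks=[({"date": "2021-01-01", "url": "https://example.com/article"},
             "Blueberries are now green.")],
    instruction="What color are blueberries? Source?",
)
```

Because the model is trained to answer only from the enclosed blocks, instructions like "Source?" encourage it to cite the metadata keys rather than invent references.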
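For agent/function calling, the model emits a JSON (or YAML) object naming a function and its arguments, which the caller is responsible for parsing and dispatching. A minimal dispatch sketch under that assumption; the tool names and signatures here are hypothetical:

```python
import json

# Hypothetical local tools; the names and parameters are illustrative
# and not defined by the model or any library.
def search(search_terms):
    return f"results for {', '.join(search_terms)}"

TOOLS = {"search": search}

def dispatch(model_output: str):
    """Parse a JSON function call emitted by the model and invoke the tool."""
    call = json.loads(model_output)
    func = TOOLS[call["function"]]       # look up the named function
    return func(**call["parameters"])    # apply the model-chosen arguments

# Example of what a JSON-style function-calling response might look like:
model_output = (
    '{"function": "search", '
    '"parameters": {"search_terms": ["airoboros", "reWOO"]}}'
)
result = dispatch(model_output)
```

As with OpenAI-style function calling, the model only proposes the call; validating the function name and arguments before execution is the caller's job.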
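In the ReWOO-style plans, each step assigns a tool call to an evidence variable (e.g. `:evidence0: = ToolName[input]`), and later steps may reference earlier variables in their inputs. A minimal executor sketch, assuming that line format; the tool names and the plan text are invented for illustration:

```python
import re

# Hypothetical tools standing in for real search/answer backends.
TOOLS = {
    "SearchEngine": lambda q: f"<results for {q}>",
    "AnswerBot": lambda q: f"<answer based on {q}>",
}

STEP = re.compile(r"^(:evidence\d+:) = (\w+)\[(.*)\]$")

def run_plan(plan: str) -> dict:
    """Execute the evidence steps of a ReWOO-style plan in order."""
    evidence = {}
    for line in plan.splitlines():
        m = STEP.match(line.strip())
        if not m:
            continue  # skip "Plan:" narration lines
        var, tool, arg = m.groups()
        for name, value in evidence.items():
            arg = arg.replace(name, value)  # substitute earlier evidence
        evidence[var] = TOOLS[tool](arg)
    return evidence

plan = """Plan: Search for the topic.
:evidence0: = SearchEngine[example query]
Plan: Answer using the search results.
:evidence1: = AnswerBot[Summarize :evidence0:]"""
evidence = run_plan(plan)
```

Running the steps sequentially and substituting evidence variables is what lets a single planned response drive multiple tool calls without further model round-trips.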
Good For
- Applications requiring precise instruction following.
- Closed-context question answering and summarization tasks.
- Generating code based on detailed specifications.
- Developing agents that require structured function calling.
- Tasks benefiting from multi-step reasoning and execution planning.