jondurbin/airoboros-l2-13b-3.0: Instruction-Following LLM
jondurbin/airoboros-l2-13b-3.0 is an experimental 13-billion-parameter language model fine-tuned on the Airoboros 3.0 dataset. It is distinguished by a strong emphasis on instruction following over casual conversation, and it uses the Llama-2 chat format for prompting.
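The Llama-2 chat format wraps a system message in `<<SYS>>` tags inside an `[INST]` block. A minimal prompt builder might look like the following sketch; the system message shown is illustrative, not mandated by the model card:

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user instruction in the
    Llama-2 chat format this model expects."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt(
    "You are a helpful, unbiased, uncensored assistant.",
    "List three uses for a paperclip.",
)
print(prompt)
```

The resulting string is what you would pass to the tokenizer; multi-turn conversations repeat the `[INST] ... [/INST]` pattern with prior model replies in between.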
Key Capabilities
- Enhanced Instruction Following: Specifically trained to adhere closely to instructions, making it suitable for tasks requiring precise output.
- MathJSON Integration: Capable of generating MathJSON solutions for mathematical problems, leveraging external tools for accurate calculations.
- Context-Obedient Question Answering: Designed to answer questions strictly based on provided context, minimizing hallucinations by ignoring external knowledge when instructed.
- Summarization: Includes training for summarizing text within a given context.
- Code Generation: Proficient in generating code based on complex requirements, with an option for plain code output.
- Function Calling/Agent Planning: Supports generating JSON or YAML for function calls and constructing multi-step execution plans for complex tasks, similar to ReWOO-style planning.
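For context-obedient question answering, airoboros models are prompted with delimited blocks that separate context metadata, source text, and the instruction, and the model is trained to answer only from the enclosed text. A sketch of such a prompt builder, following the block-marker convention from the airoboros model cards (the metadata keys and example text are illustrative):

```python
def context_qa_prompt(context_text, question, metadata=None):
    """Build an airoboros-style context-obedient prompt: metadata
    goes between BEGINCONTEXT/ENDCONTEXT, the source text inside
    BEGININPUT/ENDINPUT, and the question inside BEGININSTRUCTION."""
    meta_lines = "\n".join(f"{k}: {v}" for k, v in (metadata or {}).items())
    return (
        "BEGININPUT\n"
        "BEGINCONTEXT\n"
        f"{meta_lines}\n"
        "ENDCONTEXT\n"
        f"{context_text}\n"
        "ENDINPUT\n"
        "BEGININSTRUCTION\n"
        f"{question}\n"
        "ENDINSTRUCTION"
    )

prompt = context_qa_prompt(
    "Blueberries are now green.",
    "What color are blueberries?",
    {"source": "example.test"},
)
print(prompt)
```

Because the model is trained to prefer the enclosed context over its parametric knowledge, a prompt like this should yield "green" rather than "blue", and it can cite the metadata keys as sources when asked.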
Good For
- Developers needing a model for structured data generation (e.g., MathJSON, function calls).
- Applications requiring strict adherence to provided context for question answering.
- Tasks where precise instruction following and reduced hallucination are critical.
- Code generation and complex problem-solving through execution planning.
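When used for function calling, the model can be prompted to select a function and emit its arguments as JSON, which the calling application then parses and dispatches. A minimal sketch under assumed names (the `search` function, the response schema, and the handler table are all hypothetical, not the model card's exact format):

```python
import json

# Hypothetical model response: a function name plus its arguments,
# emitted as JSON in reply to a function-selection prompt.
model_response = """{
  "function": "search",
  "parameters": {"query": "latest Llama 2 fine-tunes"}
}"""

call = json.loads(model_response)

# Hypothetical local implementation of the chosen function.
def search(query: str) -> str:
    return f"results for: {query}"

# Dispatch the parsed call to the matching handler.
handlers = {"search": search}
result = handlers[call["function"]](**call["parameters"])
print(result)
```

In practice the dispatch step should validate the function name and parameters before executing, since the model's output is not guaranteed to be well-formed JSON.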
Note: This model uses the Llama-2 chat format. Because portions of its training data were generated via the OpenAI API, commercial use may be restricted; users should review the associated licenses and terms before deploying it commercially.