allenai/codetulu-2-13b
Codetulu 2 13B by AllenAI is a 13 billion parameter language model, fine-tuned from CodeLlama-13b-hf, designed to function as a helpful assistant. It was trained on a diverse mix of publicly available, synthetic, and human-created datasets, with a context length of 4096 tokens. This model specializes in instruction-following and chat-based interactions, particularly in English, and is suitable for applications requiring assistant-like capabilities.
Codetulu 2 13B: A Fine-Tuned Code Assistant
Codetulu 2 13B, developed by AllenAI, is a 13 billion parameter language model fine-tuned from codellama/CodeLlama-13b-hf. It is part of the Tulu series, which focuses on building helpful assistant models. This iteration was trained on a mix of publicly available, synthetic, and human-created datasets to strengthen its instruction-following and conversational abilities.
Key Capabilities
- Instruction Following: Designed to act as a helpful assistant, responding to a wide range of instructions.
- Chat-based Interactions: Optimized for engaging in dialogue, making it suitable for conversational AI applications.
- Code-centric Foundation: Built on CodeLlama-13b, giving it a strong grounding in code-related contexts, though its fine-tuning targets general assistance rather than code generation specifically.
- English Language Focus: Primarily trained and intended for use with English language inputs.
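As a sketch of how the chat-based interaction above might look in practice, the snippet below formats a conversation into a Tulu-style prompt and shows where it would plug into Hugging Face `transformers`. The `<|user|>`/`<|assistant|>` template is an assumption based on the Tulu 2 series' documented format; verify against the model card before relying on it.

```python
def build_tulu_prompt(turns):
    """Format alternating (role, text) turns into a Tulu-style prompt.

    Roles alternate "user"/"assistant"; the prompt ends with an open
    <|assistant|> tag so the model continues as the assistant.
    Assumed template -- check the model card for the exact format.
    """
    parts = []
    for role, text in turns:
        parts.append(f"<|{role}|>\n{text}\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_tulu_prompt([("user", "Write a function that reverses a string.")])
print(prompt)

# With Hugging Face transformers (not run here; the 13B weights are ~26 GB):
#   from transformers import pipeline
#   pipe = pipeline("text-generation", model="allenai/codetulu-2-13b")
#   print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```

Newer `transformers` releases also expose the template directly via the tokenizer's `apply_chat_template`, which avoids hand-building the prompt string.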
Good For
- Developing AI assistants that require robust instruction-following.
- Applications needing conversational capabilities based on a strong language model.
- Research and development in fine-tuning large language models for specific assistant roles.
For more technical details, refer to the associated paper: "Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2".