allenai/codetulu-2-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Nov 13, 2023 · Architecture: Transformer

Codetulu 2 7B by AllenAI is a 7 billion parameter instruction-tuned language model, fine-tuned from CodeLlama-7b-hf with a 4096-token context length. Trained on a diverse mix of publicly available, synthetic, and human-created datasets, with a primary focus on English, it is designed to act as a helpful assistant. This makes it particularly suited for conversational AI applications requiring assistant-like behavior.


Codetulu 2 7B: A Fine-Tuned Code Assistant

Codetulu 2 7B is a 7 billion parameter language model developed by AllenAI, specifically fine-tuned from the CodeLlama-7b-hf base model. It is part of the Tulu series, which focuses on creating helpful assistant models. This iteration, Codetulu 2 7B, leverages a diverse training mix including publicly available, synthetic, and human-created datasets, enhancing its ability to function as a conversational assistant.

Key Capabilities

  • Instruction Following: Trained to act as a helpful assistant, responding to user instructions.
  • Diverse Training: Benefits from a mix of human and synthetically generated dialogues.
  • English Language Focus: Primarily optimized for English language interactions.
  • CodeLlama Foundation: Built upon the robust CodeLlama architecture, suggesting potential for code-related understanding, though its primary fine-tuning is for assistant-like behavior.
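As a sketch of how instruction following works in practice, the snippet below formats a conversation into the turn-delimited prompt style used by the Tulu model family (`<|user|>` / `<|assistant|>` markers, which this model is assumed to share as a Tulu 2 variant; verify against the model card before relying on it) and shows, in comments, how generation would typically be invoked via Hugging Face transformers:

```python
# Build a Tulu-style chat prompt. The <|user|>/<|assistant|> turn markers
# are the documented Tulu 2 format; we assume Codetulu 2 7B uses the same.

def format_tulu_prompt(messages):
    """Render a list of {'role': ..., 'content': ...} dicts into a prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}\n")
    # A trailing assistant marker cues the model to begin its reply.
    parts.append("<|assistant|>\n")
    return "".join(parts)


if __name__ == "__main__":
    prompt = format_tulu_prompt(
        [{"role": "user", "content": "Write a Python function that reverses a string."}]
    )
    print(prompt)
    # Generation with Hugging Face transformers (illustrative; a 7B model
    # needs substantial GPU or CPU memory, so this is left commented out):
    # from transformers import AutoModelForCausalLM, AutoTokenizer
    # tok = AutoTokenizer.from_pretrained("allenai/codetulu-2-7b")
    # model = AutoModelForCausalLM.from_pretrained("allenai/codetulu-2-7b")
    # ids = tok(prompt, return_tensors="pt").input_ids
    # out = model.generate(ids, max_new_tokens=256)
    # print(tok.decode(out[0], skip_special_tokens=True))
```

Keeping prompt construction in a small helper like this makes it easy to swap in the tokenizer's own chat template later if one is provided.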

Good For

  • Conversational AI: Ideal for building chatbots and virtual assistants.
  • Instruction-Based Tasks: Performing tasks based on explicit user instructions.
  • Research and Development: Exploring instruction-tuned models based on CodeLlama.

For more technical details, refer to the associated paper: Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2.