Overview
OpenAssistant/codellama-13b-oasst-sft-v10 is a 13-billion-parameter causal decoder-only transformer language model, fine-tuned by OpenAssistant from Meta's CodeLlama 13B. This iteration focuses on improving conversational capabilities and compatibility through a standardized prompt template.
Key Capabilities & Features
- Code-centric Fine-tuning: Built upon CodeLlama 13B, it is specialized for code-related interactions.
- ChatML Prompt Template: Adopts the OpenAI ChatML standard for its prompt template (see the template sketch after this list), improving compatibility with a range of chat inference engines and frontend applications.
- System Message Guidance: Recommends using the official Llama2 system message for inference to ensure safe and helpful responses.
- Training Data: Fine-tuned on a mix of datasets including `orca-chat`, `bestofmegacode`, and `oasst_export`, indicating a focus on diverse conversational and code-related data.
- RoPE Theta Update: Incorporates a new RoPE theta value (1e6), which requires `trust_remote_code=True` or a recent Hugging Face Transformers release for correct loading (see the loading sketch below).
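For reference, a ChatML-formatted prompt for this model follows the layout sketched below; the system message is shown as a placeholder, with the official Llama2 system message being the recommended choice.

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
```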
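A minimal loading and generation sketch using the Hugging Face Transformers API is shown below; the model ID comes from this card, while the dtype, device placement, example question, and sampling settings are illustrative assumptions rather than recommendations from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/codellama-13b-oasst-sft-v10"

# trust_remote_code=True (or a recent Transformers release) ensures the
# updated RoPE theta value (1e6) is applied when the weights are loaded.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Build a ChatML prompt as in the template above (system message shortened).
prompt = (
    "<|im_start|>system\n"
    "You are a helpful, respectful and honest assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated assistant turn.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```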
Ethical Considerations
As with all LLMs, the model may produce inaccurate, biased, or objectionable responses. Developers are advised to perform safety testing and tuning for specific applications before deployment, referencing Meta's Responsible Use Guide.