amazingvince/where-llambo-7b
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Dec 5, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
amazingvince/where-llambo-7b is a 7 billion parameter language model, fine-tuned from Mistral, designed for broad instruction following and roleplaying. The model is currently undergoing supervised fine-tuning (SFT) on a diverse mix of public datasets, with further optimization planned via Direct Preference Optimization (DPO) on custom data. It aims to provide a versatile base for conversational and creative text generation tasks.
Model Overview
amazingvince/where-llambo-7b is a 7 billion parameter language model, built upon the Mistral architecture. It is currently in an active development phase, having completed approximately 50% of its supervised fine-tuning (SFT) process.
Key Capabilities
- Broad Instruction Following: The model is being trained to understand and execute a wide range of instructions, making it suitable for general-purpose conversational AI.
- Roleplaying: A specific focus during fine-tuning is to enhance its ability to engage in various roleplaying scenarios, offering more dynamic and immersive interactions.
- Foundation for Further Development: This SFT version serves as a base, with future plans to incorporate Direct Preference Optimization (DPO) using custom datasets to further refine its responses and alignment.
Good For
- Developers looking for a Mistral-based model with initial instruction-tuning and roleplaying capabilities.
- Experimentation with conversational AI and creative text generation.
- A starting point for further fine-tuning on domain-specific data or unique interaction styles.