marianbasti/Llama-2-13b-fp16-alpaca-spanish
Text generation · Model size: 13B · Quant: FP8 · Context length: 4K · License: llama2 · Architecture: Transformer · Open weights
The marianbasti/Llama-2-13b-fp16-alpaca-spanish model is a 13 billion parameter Llama 2 variant, fine-tuned by marianbasti, specifically designed to enhance Spanish language performance. It utilizes a LoRA adaptation trained on a translated Alpaca dataset, focusing on conversational capabilities. This model aims to improve the Llama-2 foundation model's proficiency in Spanish interactions, making it suitable for applications requiring robust Spanish conversational AI.
Llama 2-13b-fp16-alpaca-spanish: Enhanced Spanish Conversational AI
This model is a LoRA (Low-Rank Adaptation) fine-tune of the Llama 2 13B foundation model, developed by marianbasti. Its primary objective is to significantly improve the model's performance in Spanish language tasks, particularly in conversational contexts.
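To make the LoRA approach concrete, here is a minimal, dependency-free sketch of the core idea: the frozen base weights W are left untouched, and the adapter contributes a low-rank update scaled by a factor (reported as 2 for this model). The matrix sizes below are toy values for illustration, not the model's actual dimensions.

```python
# Sketch of a LoRA update: W_eff = W + scale * (B @ A), where A (r x d_in) and
# B (d_out x r) are the low-rank adapter factors and the base W stays frozen.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_update(W, A, B, scale):
    """Return the effective weights W + scale * (B @ A), leaving W unchanged."""
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: a 2x2 base weight matrix with a rank-1 adapter (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weights
A = [[0.5, 0.5]]               # rank-1 "down" projection
B = [[1.0], [1.0]]             # rank-1 "up" projection
W_eff = lora_update(W, A, B, scale=2)  # -> [[2.0, 1.0], [1.0, 2.0]]
```

Because only A and B are trained, the adapter touches a tiny fraction of the 13B parameters, which is what makes a fine-tune like this one practical on modest hardware.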
Key Capabilities & Training Details
- Spanish Language Focus: Specifically trained to enhance Spanish proficiency, addressing a common need for better non-English LLM performance.
- Conversational AI: The training methodology, utilizing a translated Alpaca dataset, emphasizes conversational abilities.
- Base Model: Built upon TheBloke's Llama-2-13B-fp16, ensuring a strong foundational architecture.
- LoRA Adaptation: Employs LoRA with a scale of 2, trained for 0.75 epochs with a learning rate of 2e-5 and 100 warmup steps, reaching a final training loss of 1.07.
Good For
- Spanish-speaking chatbots: Ideal for creating conversational agents that interact effectively in Spanish.
- Applications requiring Spanish text generation: Suitable for tasks where high-quality Spanish output is crucial.
- Developers seeking an improved Spanish Llama 2: Offers a specialized version of Llama 2 with enhanced Spanish capabilities.
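A minimal inference sketch with the Hugging Face `transformers` library follows. The Spanish prompt template below is an assumption based on the standard Alpaca format (the exact template used in training is not documented on this card), and the example instruction is purely illustrative.

```python
def build_prompt(instruction: str) -> str:
    """Assumed Alpaca-style prompt, translated to Spanish for this fine-tune."""
    return (
        "A continuación hay una instrucción que describe una tarea. "
        "Escribe una respuesta que la complete adecuadamente.\n\n"
        f"### Instrucción:\n{instruction}\n\n"
        "### Respuesta:\n"
    )

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Run one generation. Loading the 13B fp16 checkpoint needs roughly
    26 GB of GPU memory; quantized variants reduce that requirement."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

    model_id = "marianbasti/Llama-2-13b-fp16-alpaca-spanish"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example (requires downloading the weights):
# print(generate("Explica qué es un modelo de lenguaje."))
```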