gyorgy-ruzicska/lingua-news-llama-3-spanish-simplifier is a 3.2-billion-parameter, Llama 3-based instruction-tuned model fine-tuned for simplifying Spanish news text. Developed by gyorgy-ruzicska, it was trained with Unsloth for efficient fine-tuning and converted to GGUF, making it suitable for local deployment. With a 32,768-token context length, it can process and simplify longer Spanish articles, offering a specialized option for readability enhancement.
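Since the model ships as a GGUF file for local use, one way to try it is through llama-cpp-python. The sketch below is a minimal, hedged example: the local file name, the Spanish system prompt, and the sampling settings are assumptions for illustration, not documented specifics of this fine-tune.

```python
# Minimal sketch: running the GGUF build locally with llama-cpp-python.
# The file path, system prompt wording, and sampling settings below are
# assumptions, not documented details of this particular fine-tune.
from llama_cpp import Llama

llm = Llama(
    model_path="./lingua-news-llama-3-spanish-simplifier.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=32768,      # matches the advertised context length
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Example Spanish news snippet to simplify.
news_text = (
    "El Banco Central Europeo decidió mantener los tipos de interés sin cambios, "
    "citando la persistente incertidumbre macroeconómica en la zona euro."
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Simplifica el siguiente texto de noticias en español."},
        {"role": "user", "content": news_text},
    ],
    max_tokens=512,
    temperature=0.3,  # low temperature keeps the simplification close to the source
)

print(response["choices"][0]["message"]["content"])
```

The same GGUF file can also be loaded by other llama.cpp-based runtimes; the long context makes it practical to pass full articles rather than isolated sentences.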