Sao10K/L3.1-70B-Euryale-v2.2

Status: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context length: 32768
4
Released: Aug 12, 2024
License: cc-by-nc-4.0
Hugging Face

Sao10K/L3.1-70B-Euryale-v2.2 is a 70-billion-parameter Llama 3.1-based model fine-tuned for two epochs on conversational instruction, creative writing, and roleplay data. It supports an extended context length of 32768 tokens and is optimized for multi-turn coherence and creative text generation, drawing on a mix of human-written and Claude-generated datasets. It is intended for applications that require nuanced roleplay and creative narrative development.
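As a minimal sketch of how such a hosted model is typically queried, the request below assumes an OpenAI-compatible chat-completions endpoint; the endpoint itself, the system prompt, and the sampling parameters are illustrative assumptions, not details from this listing. Only the request payload is shown:

```python
import json

# Hypothetical payload for an OpenAI-compatible chat endpoint serving this
# model. The roles/messages shown are placeholders; swap in your own
# multi-turn history.
payload = {
    "model": "Sao10K/L3.1-70B-Euryale-v2.2",
    "messages": [
        {"role": "system", "content": "You are a creative roleplay narrator."},
        {"role": "user", "content": "Continue the scene in the abandoned lighthouse."},
    ],
    # Keep the completion well under the 32768-token context window,
    # leaving room for the multi-turn history the model is tuned for.
    "max_tokens": 1024,
    "temperature": 1.0,
}

body = json.dumps(payload)  # serialized request body
```

The serialized `body` would then be POSTed to whatever inference endpoint hosts the model, with authentication as required by that provider.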
