Sao10K/L3.1-70B-Hanami-x1

Status: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context length: 32768
4
Released: Sep 6, 2024
License: cc-by-nc-4.0
Hugging Face

Sao10K/L3.1-70B-Hanami-x1 is a 70-billion-parameter language model built on the Llama 3.1 architecture by Sao10K. It is an experimental refinement of the Euryale v2.2 series, which its author prefers over that line. The model targets general language tasks, combining its large parameter count with a 32768-token context window for robust text generation and understanding.
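A minimal sketch of how the model might be queried through an OpenAI-compatible chat-completions endpoint. The endpoint itself and the `max_new_tokens` default are assumptions for illustration; the model id and the 32768-token context limit come from this listing.

```python
import json

# Model id and advertised context length, taken from the listing above.
MODEL_ID = "Sao10K/L3.1-70B-Hanami-x1"
MAX_CONTEXT = 32768  # tokens

def build_request(prompt: str, max_new_tokens: int = 512) -> dict:
    """Compose a chat-completions request body for this model.

    The shape follows the common OpenAI-compatible schema; whether a given
    host serves this model that way is an assumption, not part of the listing.
    """
    if max_new_tokens >= MAX_CONTEXT:
        raise ValueError("max_new_tokens must leave room for the prompt")
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_new_tokens,
    }

body = build_request("Summarize the Llama 3.1 architecture in one paragraph.")
print(json.dumps(body, indent=2))
```

The body would then be POSTed to the host's `/chat/completions` route with an API key; the prompt plus `max_tokens` must together fit within the 32768-token context window.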
