Sao10K/L3.1-70B-Hanami-x1

Status: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context Length: 32768 tokens
Released: Sep 6, 2024
License: cc-by-nc-4.0
Hugging Face

Sao10K/L3.1-70B-Hanami-x1 is a 70 billion parameter language model based on the Llama-3.1 architecture, developed by Sao10K. This experimental model is a refinement of the Euryale v2.2 series, with a distinct feel that its developer considers an improvement over earlier Euryale releases. It is designed for general language tasks, leveraging its large parameter count and a 32768-token context length for robust text generation and understanding.

Overview

Sao10K/L3.1-70B-Hanami-x1 is an experimental 70 billion parameter language model built on the Llama-3.1 architecture. Developed by Sao10K, it is a further refinement of the Euryale v2.2 series and aims to provide a distinct, improved experience compared to its predecessors, Euryale v2.1 and v2.2.

Key Characteristics

  • Architecture: Based on the Llama-3.1 family, known for strong general-purpose language capabilities.
  • Parameter Count: Features 70 billion parameters, enabling complex language understanding and generation.
  • Context Length: Supports a 32768-token context window, useful for handling long inputs and maintaining coherence across extended conversations or documents (see the token-budget sketch after this list).
  • Experimental Refinement: An experiment that its developer judged a success, offering a different feel and potentially superior output compared to previous Euryale versions.
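
As a quick illustration of that context window, the sketch below checks whether a prompt fits in the 32768-token budget before generation. This is a minimal example: the fits_in_context helper and the 512-token output reserve are illustrative assumptions, not part of the model card, and only the tokenizer is needed, so it runs without a GPU.

```python
from transformers import AutoTokenizer

MODEL_ID = "Sao10K/L3.1-70B-Hanami-x1"
CONTEXT_LENGTH = 32768  # advertised context window, in tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def fits_in_context(prompt: str, reserved_for_output: int = 512) -> bool:
    """Return True if the prompt plus a reserved output budget fits the window."""
    n_prompt_tokens = len(tokenizer.encode(prompt))
    return n_prompt_tokens + reserved_for_output <= CONTEXT_LENGTH

# Stand-in for a real long document or chat history.
long_document = "The harbor town was quiet that morning. " * 2000
print(fits_in_context(long_document))
```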

Usage Recommendations

  • Settings: Use the same sampler settings recommended for Euryale v2.1 and v2.2.
  • min_p Value: For Llama-3-type models, set min_p to at least 0.1; a minimal sketch of applying this with transformers follows below.
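
The following is a minimal sketch of applying that recommendation with Hugging Face transformers, which exposes min_p as a generation parameter. The prompt, temperature, and max_new_tokens values are illustrative assumptions rather than settings from the model card, and a 70B checkpoint requires a correspondingly large (typically multi-GPU) host.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Sao10K/L3.1-70B-Hanami-x1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",  # use the checkpoint's native dtype
    device_map="auto",   # shard layers across available GPUs
)

# A real deployment would format the input with the model's chat template;
# a bare prompt keeps the sketch short.
prompt = "Write a short scene set in a rainy harbor town."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    min_p=0.1,          # the recommended floor for Llama-3-type models
    temperature=1.0,    # illustrative default, not from the card
    max_new_tokens=256,
)
completion = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(completion, skip_special_tokens=True))
```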