yodayo-ai/nephra_v1.0
TEXT GENERATION · Model Size: 8B · Quant: FP8 · Context Length: 8k · Published: Jun 17, 2024 · License: llama3 · Architecture: Transformer · Concurrency Cost: 1

Nephra v1.0 is an 8 billion parameter text-based large language model developed by Sao10K, fine-tuned from Meta-Llama-3-8B. This model is specifically optimized for roleplaying sessions, trained on a combination of roleplay and instruction-style datasets. With an 8192-token context length, it excels at generating engaging and coherent responses for interactive narrative applications. It operates under the Meta Llama 3 Community License Agreement.


Overview of Nephra v1.0

Nephra v1.0 builds on the Meta-Llama-3-8B architecture and is designed primarily for roleplaying sessions, having been trained on specialized roleplay and instruction-style datasets. It aims to provide high-quality, engaging, and coherent text generation for interactive narrative experiences.

Key Capabilities

  • Roleplay Optimization: Specifically fine-tuned for generating responses suitable for roleplaying scenarios.
  • Instruction Following: Capable of adhering to instruction-based prompts, enhancing its utility in structured interactions.
  • Llama-3 Base: Benefits from the robust foundation of the Meta-Llama-3-8B model.

Recommended Usage

For optimal performance, use the recommended inference settings: the Llama-3-Instruct prompt format, temperature 1.12, min-p 0.075, and repetition penalty 1.1. The model is licensed under the Meta Llama 3 Community License Agreement, allowing broad community use.
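As a minimal sketch of putting these settings into practice, the snippet below assembles a prompt in the standard Llama-3-Instruct template and collects the recommended sampling parameters in a dict. The helper name, the example persona text, and the `max_new_tokens` value are illustrative assumptions, not part of the model card.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the Llama-3-Instruct format,
    which Nephra v1.0 shares with its base model."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Recommended sampling settings from the model card.
SAMPLING_SETTINGS = {
    "temperature": 1.12,
    "min_p": 0.075,
    "repetition_penalty": 1.1,
    "max_new_tokens": 512,  # illustrative; any value within the 8192-token window
}

prompt = build_llama3_prompt(
    "You are a fantasy tavern keeper.",            # example roleplay persona
    "A hooded stranger walks in. What do you say?",
)
```

The resulting `prompt` string and `SAMPLING_SETTINGS` can then be passed to whichever inference backend you use (e.g. `model.generate(..., **SAMPLING_SETTINGS)` in Hugging Face `transformers`, which supports `min_p` and `repetition_penalty` in recent versions).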