Dogge/llama-3-70B-uncensored
Dogge/llama-3-70B-uncensored is a 70 billion parameter Llama 3 model developed by Dogge, fine-tuned from unsloth/llama-3-70b-bnb-4bit. It was trained 2x faster using Unsloth together with Hugging Face's TRL library, yielding a high-performance Llama 3 variant. With an 8192-token context length, it is suited to general-purpose language generation and understanding tasks.
Overview
Dogge/llama-3-70B-uncensored is a 70 billion parameter Llama 3 model, developed by Dogge and fine-tuned from the unsloth/llama-3-70b-bnb-4bit base. Its key differentiator is the training methodology: it was trained 2x faster using the Unsloth library in conjunction with Hugging Face's TRL library, which makes iterating on a powerful Llama 3 architecture cheaper and faster.
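As a minimal loading sketch (not taken from the model card itself): the model should load through the standard transformers API. The 4-bit BitsAndBytesConfig below is an assumption based on the bnb-4bit base model, since a 70B model in fp16 needs roughly 140 GB of VRAM.

```python
# Minimal sketch: loading Dogge/llama-3-70B-uncensored with transformers.
# The 4-bit quantization is an assumption (the base model is bnb-4bit),
# not something this card specifies.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Dogge/llama-3-70B-uncensored"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard layers across available GPUs
)
```

`device_map="auto"` lets accelerate spread the layers across whatever GPUs (and, if needed, CPU RAM) are available, which is usually necessary at this parameter count.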
Key Capabilities
- Efficient Training: Leverages Unsloth for roughly 2x faster training than standard methods (see the fine-tuning sketch after this list).
- Llama 3 Architecture: Benefits from the robust capabilities of the Llama 3 family for general language tasks.
- Large Parameter Count: With 70 billion parameters, it offers strong performance across a wide range of applications.
- Standard Context Window: Features an 8192-token context length, suitable for many common use cases.
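To make the training setup above concrete, here is a minimal sketch of the Unsloth + TRL combination the card describes. The dataset, LoRA rank, and hyperparameters are placeholders, not the actual configuration behind this model, and the SFTTrainer arguments follow the TRL API from around the Llama 3 release (newer TRL versions move dataset_text_field and max_seq_length into SFTConfig).

```python
# Sketch of the Unsloth + TRL workflow this card describes; all
# hyperparameters and the dataset path are illustrative placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-70b-bnb-4bit",  # the stated base model
    max_seq_length=8192,                        # matches the context length
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # placeholder LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: any text dataset with a "text" field works here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```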
Good For
- Developers seeking a high-performance Llama 3 model with an optimized training history.
- Applications requiring a large language model for text generation, summarization, and question answering (see the generation sketch below).
- Use cases where the efficiency of the underlying training process is a beneficial factor.
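As a usage sketch for the tasks listed above, generation follows the standard Llama 3 chat flow, reusing the `model` and `tokenizer` from the loading example; whether this fine-tune ships the Llama 3 instruct chat template is an assumption the card does not confirm.

```python
# Minimal generation sketch, reusing `model` and `tokenizer` from the
# loading example above. The chat template is assumed to be present.
messages = [
    {"role": "user", "content": "Summarize the Llama 3 release in two sentences."}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```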