NaniDAO/Llama-3.3-70B-Instruct-ablated
Text generation · Model size: 70B · Quantization: FP8 · Context length: 32k · Published: Dec 20, 2024 · License: llama3 · Architecture: Transformer · Concurrency cost: 4

NaniDAO/Llama-3.3-70B-Instruct-ablated is a 70-billion-parameter instruction-tuned causal language model based on Meta's Llama 3.3 architecture, with a 32768-token context length. The model has been modified with an ablation technique that reduces refusal behavior, aiming for a more helpful, less censored user experience. It is designed for applications requiring a less restrictive AI assistant, offering enhanced utility for a broader range of valid requests.


Model Overview

NaniDAO/Llama-3.3-70B-Instruct-ablated is an instruction-tuned large language model built upon the Llama 3.3 70B architecture, notable for its extended 32768-token context window. The primary distinguishing feature of this model is the application of an "ablation" technique, which aims to reduce the model's tendency to refuse certain valid requests.

Key Characteristics

  • Base Model: Utilizes the robust Llama 3.3 70B Instruct architecture.
  • Context Length: Supports a substantial 32768 tokens, enabling processing of longer inputs and maintaining conversational context.
  • Ablation Technique: Specifically modified to minimize refusal behavior, providing a more direct and less censored response style compared to standard instruction-tuned models.
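One practical consequence of the 32768-token context window is that prompt length and completion length share a single budget. The sketch below illustrates this with a hypothetical helper; the function name, defaults, and returned keyword names are illustrative assumptions, not part of any official API.

```python
# Hypothetical sketch: budgeting generation length against the model's
# 32768-token context window. Names and defaults are illustrative only.
CTX_LEN = 32768  # advertised context length of this model


def build_generate_kwargs(prompt_tokens: int, max_new_tokens: int = 512) -> dict:
    """Cap the new-token budget so prompt + completion fits in the window."""
    budget = min(max_new_tokens, CTX_LEN - prompt_tokens)
    if budget <= 0:
        raise ValueError("prompt already fills the context window")
    return {"max_new_tokens": budget, "do_sample": True}


# With a short prompt the full default budget is available; near the limit
# the budget shrinks to whatever room remains.
short = build_generate_kwargs(prompt_tokens=1000)
tight = build_generate_kwargs(prompt_tokens=32500)
```

A check like this is useful because most inference backends silently truncate or error out when the combined token count exceeds the context length.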

Intended Use Cases

This model is particularly suited for developers and applications where a less restrictive and more direct AI assistant is desired. Its design prioritizes helpfulness by reducing content filtering, making it potentially useful for:

  • Uncensored Assistance: Tasks requiring responses that might typically be refused by more heavily moderated models.
  • Broad Request Handling: Scenarios where a wide array of valid user queries need to be addressed without unnecessary filtering.

Users are advised to exercise responsibility and common sense when deploying this model: reduced refusal behavior means it may produce a broader range of outputs, including content that more heavily moderated models would decline to generate.

Popular Sampler Settings

The most popular parameter combinations used by Featherless users for this model cover the following sampler settings (the specific values for each config are not reproduced here):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
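These parameters are typically bundled into a single request body for an OpenAI-style completion API. The sketch below shows one hypothetical way to do that; the function name and the default values are illustrative assumptions, not the actual Featherless user configurations.

```python
# Hypothetical sketch of a sampler configuration covering the parameters
# listed above. Defaults are illustrative placeholders, not real configs.
def make_sampler_config(temperature: float = 0.7,
                        top_p: float = 0.9,
                        top_k: int = 40,
                        frequency_penalty: float = 0.0,
                        presence_penalty: float = 0.0,
                        repetition_penalty: float = 1.1,
                        min_p: float = 0.05) -> dict:
    """Bundle sampling parameters with basic range validation."""
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be in [0, 1]")
    if not 0.0 <= min_p <= 1.0:
        raise ValueError("min_p must be in [0, 1]")
    if temperature < 0.0:
        raise ValueError("temperature must be non-negative")
    return {
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "repetition_penalty": repetition_penalty,
        "min_p": min_p,
    }


config = make_sampler_config()
```

Validating ranges up front is a small design choice that surfaces configuration mistakes locally instead of as opaque errors from the inference server.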