huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
Hugging Face
Text generation · Concurrency cost: 4 · Model size: 70B · Quantization: FP8 · Context length: 32k · Published: Dec 19, 2024 · License: llama3.3 · Architecture: Transformer

The huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned model is a 70 billion parameter instruction-tuned language model, fine-tuned from the huihui-ai/Llama-3.3-70B-Instruct-abliterated base model. This model is designed for conversational AI applications, offering a 32,768 token context length. It is optimized for generating responses based on user instructions and system prompts, making it suitable for interactive chat scenarios.


Overview

This model, huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned, is an instruction-tuned variant of the huihui-ai/Llama-3.3-70B-Instruct-abliterated base model. With 70 billion parameters and a substantial 32,768 token context window, it is engineered for robust conversational performance. The fine-tuning process aims to enhance its ability to follow instructions and maintain coherent dialogue, making it a strong candidate for interactive AI applications.

Key Capabilities

  • Instruction Following: Designed to accurately interpret and respond to user instructions and system prompts.
  • Conversational AI: Optimized for generating natural and contextually relevant responses in multi-turn conversations.
  • Large Context Window: Supports a 32,768 token context, allowing for extended and complex dialogues.
  • Accessibility: Can be deployed with Ollama via huihui_ai/llama3.3-abliterated-ft, or integrated into Python applications using the transformers library (version 4.43.0 or newer).
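When calling the model through transformers, prompts should follow the Llama 3.x chat layout, which tokenizer.apply_chat_template renders for you from a list of role/content messages. The sketch below is an illustrative, stand-alone rendering of that layout; render_llama3_prompt is a hypothetical helper (the exact template shipped with this checkpoint may differ slightly), and in real code you should let the tokenizer apply the template instead of hand-building strings.

```python
# Illustrative sketch of the Llama 3.x chat prompt layout.
# render_llama3_prompt is a hypothetical helper for demonstration;
# in practice, use tokenizer.apply_chat_template from transformers.

def render_llama3_prompt(messages):
    """Render {'role', 'content'} dicts into a Llama 3.x-style prompt
    string, ending with an open assistant header for generation."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant turn so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
]
prompt = render_llama3_prompt(messages)
```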

Good For

  • Chatbots and Virtual Assistants: Its instruction-following and conversational capabilities make it well-suited for building interactive agents.
  • Dialogue Generation: Effective for tasks requiring the generation of human-like conversational text.
  • Interactive Applications: Can be used in scenarios where the model needs to respond dynamically to user input over time.
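For interactive applications like those above, the client typically keeps a running list of messages and trims old turns so the conversation stays inside the 32,768-token window. The following is a minimal sketch of that bookkeeping; trim_history and the four-characters-per-token estimate are illustrative assumptions, not part of any library, and a production system should count tokens with the model's actual tokenizer.

```python
# Hypothetical sketch of multi-turn history management.
# The 4-chars-per-token heuristic is an assumption for illustration;
# count real tokens with the model's tokenizer in production.

CONTEXT_TOKENS = 32_768      # this model's context window
RESERVED_FOR_REPLY = 1_024   # leave room for the generated answer

def estimate_tokens(text):
    """Crude token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def trim_history(messages, budget=CONTEXT_TOKENS - RESERVED_FOR_REPLY):
    """Keep the system prompt plus the most recent turns that fit."""
    system, turns = messages[:1], messages[1:]
    kept, used = [], estimate_tokens(system[0]["content"])
    for m in reversed(turns):
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

history = [{"role": "system", "content": "You are a concise assistant."}]
history.append({"role": "user", "content": "What is the capital of France?"})
# ...send trim_history(history) to the model, then record its reply:
history.append({"role": "assistant", "content": "Paris."})
history = trim_history(history)
```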

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model tune the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
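To show where these settings plug in, here is a sketch of a request payload in the shape of an OpenAI-compatible chat-completions API (which providers such as Featherless commonly expose). The numeric values are placeholders for illustration only, not the actual user configurations from this page; extensions like top_k, repetition_penalty, and min_p are provider-specific and not part of the core OpenAI schema.

```python
# Illustrative request payload; all sampler values are placeholders,
# not the actual Featherless user configurations. Assumes an
# OpenAI-compatible chat-completions endpoint.

payload = {
    "model": "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about autumn."},
    ],
    "temperature": 0.7,          # placeholder values: tune to taste
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,   # provider-specific extension
    "min_p": 0.05,               # provider-specific extension
}
```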