ygee902/Llama-3.1-8B-Instruct-heretic
ygee902/Llama-3.1-8B-Instruct-heretic is an 8-billion-parameter, instruction-tuned causal language model: a decensored version of Meta's Llama 3.1-8B-Instruct. Produced by ygee902 with the Heretic tool, it targets multilingual dialogue and general natural language generation, with far fewer refusals than the original model. It supports a 32,768-token context length and suits scenarios that call for less restrictive content generation.
Model Overview
This model, ygee902/Llama-3.1-8B-Instruct-heretic, is an 8-billion-parameter, instruction-tuned variant of Meta's Llama 3.1-8B-Instruct. It was processed with the Heretic v1.0.0 tool to produce a decensored version, cutting content refusals from 96/100 on the original model to 3/100 here. The underlying Llama 3.1 architecture is an autoregressive transformer optimized for multilingual dialogue and general natural language generation, and it supports a 32,768-token context length.
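Since this is a standard causal language model on the Hugging Face Hub, it can be loaded with `transformers`. A minimal inference sketch, assuming the `transformers` and `torch` packages are installed and enough memory is available for an 8B model; the sampling settings are illustrative, not recommendations from the model author:

```python
# Minimal sketch: load the model and run one chat turn via transformers.
# Assumes `transformers` and `torch` are installed; sampling settings are
# illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ygee902/Llama-3.1-8B-Instruct-heretic"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B weights around 16 GB
    device_map="auto",           # place layers on available GPU(s)/CPU
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Llama 3.1 architecture in two sentences."},
]

# Render the conversation with the model's built-in Llama 3.1 chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Using the chat template rather than hand-built prompt strings matters here: Llama 3.1 expects its specific header and end-of-turn tokens, and `apply_chat_template` inserts them for you.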
Key Capabilities
- Decensored Output: Offers significantly fewer content refusals, enabling broader and less restricted text generation.
- Multilingual Support: Optimized for English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, with potential for other languages through fine-tuning.
- Instruction Following: Excels in assistant-like chat and various natural language generation tasks.
- Tool Use: Supports advanced tool use and function calling, with detailed guides available for integration.
Good For
- Unrestricted Content Generation: Ideal for use cases where the original model's safety filters are too restrictive.
- Multilingual Applications: Suitable for developing applications requiring interaction in multiple supported languages.
- Dialogue Systems: Optimized for assistant-like chat and conversational AI.
- Research and Development: Provides a platform for exploring less constrained LLM behavior and capabilities.