DavidAU/Llama-3.1-DeepSeek-8B-DarkIdol-Instruct-1.2-Uncensored
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 4, 2025 · Architecture: Transformer

DavidAU/Llama-3.1-DeepSeek-8B-DarkIdol-Instruct-1.2-Uncensored is an 8-billion-parameter instruction-tuned model built on Llama-3.1 and DeepSeek. It is distributed in full-precision source format, intended as a base for generating quantized variants such as GGUF, GPTQ, and EXL2. The model is designed for broad applicability across diverse use cases, and operates best when configured with the specific parameters and samplers recommended for it.
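Since the card emphasizes running the model with specific parameters and samplers, a minimal sketch of how such settings are typically collected for a local runtime may help. The values below are illustrative placeholders, not the author's recommended settings; consult the model card for the actual parameters, and note that key names vary slightly between backends (e.g. `repeat_penalty` in llama.cpp bindings vs. `repetition_penalty` in transformers' `generate()`).

```python
def build_sampler_config(temperature=0.8, top_p=0.95, top_k=40,
                         repeat_penalty=1.1):
    """Collect common sampler settings into one dict that can be
    unpacked as keyword arguments into a local inference backend.

    All default values here are placeholders for illustration only,
    not the settings recommended for this model.
    """
    return {
        "temperature": temperature,      # randomness of sampling
        "top_p": top_p,                  # nucleus sampling cutoff
        "top_k": top_k,                  # restrict to k most likely tokens
        "repeat_penalty": repeat_penalty,  # discourage repetition
    }

# Example: override one value, keep the rest of the placeholder defaults.
config = build_sampler_config(temperature=0.7)
print(config)
```

A single dict like this keeps sampler settings in one place, so the same configuration can be reused or logged across quantized variants of the model.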
