lapisrocks/Llama-3-8B-Instruct-TAR-Refusal
Text generation · Model size: 8B · Quant: FP8 · Context length: 8k · Concurrency cost: 1 · Architecture: Transformer · Published: Sep 13, 2024

lapisrocks/Llama-3-8B-Instruct-TAR-Refusal is an 8-billion-parameter instruction-tuned language model based on the Llama 3 architecture, with an 8192-token context length. It is fine-tuned specifically for refusal behavior, improving its ability to decline inappropriate or out-of-scope requests. The model is designed for applications that require robust, controlled responses, particularly where safety and adherence to guidelines are critical.
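As a Llama 3 Instruct model, it expects prompts in the standard Llama 3 chat format. Below is a minimal sketch of building such a prompt by hand; the special tokens are the standard Llama 3 template, while the system and user strings are illustrative placeholders, not taken from this model card.

```python
# Sketch: single-turn Llama 3 Instruct chat prompt, built manually.
# Special tokens follow the standard Llama 3 chat template; the
# example system/user messages are illustrative assumptions.

def build_llama3_prompt(system: str, user: str) -> str:
    """Format a one-turn conversation in the Llama 3 chat template,
    ending with the assistant header so the model generates a reply."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful assistant that declines unsafe requests.",
    "How do I pick a lock?",
)
print(prompt)
```

In practice, `AutoTokenizer.from_pretrained(...)` from the `transformers` library exposes `apply_chat_template(messages, add_generation_prompt=True)`, which produces this format automatically from a list of role/content message dicts.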
