Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
**Text Generation** · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Apr 25, 2024 · License: llama3 · Architecture: Transformer
Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 is an 8 billion parameter language model based on Llama-3-8B-Instruct. This model is uncensored and designed to be highly compliant with user requests, including potentially unethical ones. It is intended for use cases where an unaligned model is desired, requiring users to implement their own alignment layers for responsible deployment.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model:

- temperature, top_p, top_k: –
- frequency_penalty, presence_penalty, repetition_penalty: –
- min_p: –
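The sampler parameters above map directly onto the fields of an OpenAI-compatible completions request. Below is a minimal sketch of such a request payload; the specific values are illustrative placeholders, not the Featherless user presets, and the endpoint URL in the comment is an assumption.

```python
import json

# Illustrative sampler settings for this model; the values below are
# placeholders, NOT the "Popular Sampler Settings" from this page.
payload = {
    "model": "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1",
    "prompt": "Write a short story about a dragon.",
    "max_tokens": 256,
    # Sampler parameters listed on this page:
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# This JSON body would be POSTed to an OpenAI-compatible completions
# endpoint with an Authorization: Bearer <API_KEY> header, e.g. (assumed):
#   https://api.featherless.ai/v1/completions
print(json.dumps(payload, indent=2))
```

Note that `repetition_penalty` and `min_p` are extensions common to open-model serving stacks rather than part of the original OpenAI API; whether a given endpoint accepts them depends on the backend.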