migtissera/Tess-2.0-Llama-3-8B
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8K · Published: May 5, 2024 · License: llama3 · Architecture: Transformer

Tess-2.0-Llama-3-8B is an 8-billion-parameter general-purpose large language model by migtissera, fine-tuned from the Meta-Llama-3-8B base model. It was trained on a high-quality, largely uncensored dataset of approximately 100K code and general-purpose samples. Because of this uncensored training methodology, the model is designed to follow instructions consistently and is suitable for a wide range of conversational and generative AI tasks.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model.

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
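The parameters listed above correspond to the sampler fields accepted by most OpenAI-compatible inference APIs. A minimal sketch of one such configuration, with illustrative placeholder values (these are assumptions, not the actual user statistics from this page):

```python
# Hypothetical sampler configuration for Tess-2.0-Llama-3-8B.
# The keys mirror the parameters listed above; the values are
# illustrative defaults, not measured Featherless user settings.
sampler_config = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

def validate_sampler_config(cfg: dict) -> bool:
    """Sanity-check that each parameter falls in its conventional range."""
    assert 0.0 <= cfg["temperature"] <= 2.0, "temperature outside [0, 2]"
    assert 0.0 < cfg["top_p"] <= 1.0, "top_p outside (0, 1]"
    assert cfg["top_k"] >= 0, "top_k must be non-negative"
    assert -2.0 <= cfg["frequency_penalty"] <= 2.0, "frequency_penalty outside [-2, 2]"
    assert -2.0 <= cfg["presence_penalty"] <= 2.0, "presence_penalty outside [-2, 2]"
    assert cfg["repetition_penalty"] >= 1.0, "repetition_penalty below 1.0"
    assert 0.0 <= cfg["min_p"] <= 1.0, "min_p outside [0, 1]"
    return True

validate_sampler_config(sampler_config)
```

Such a dictionary can typically be passed straight through as extra body parameters when calling an OpenAI-compatible chat-completions endpoint.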