locuslab/tofu_ft_llama2-7b
Text generation
Concurrency cost: 1
Model size: 7B
Quantization: FP8
Context length: 4k
Published: Jan 31, 2024
License: llama2
Architecture: Transformer
Open weights · Warm

locuslab/tofu_ft_llama2-7b is a 7-billion-parameter Llama-2-Chat model fine-tuned by LocusLab on the TOFU (Task of Fictitious Unlearning) dataset, a benchmark of question-answer pairs about fictitious authors. Because these fictitious facts appear nowhere else in the training corpus, the model serves as a controlled testbed for machine-unlearning methods that attempt to selectively remove specific knowledge from a trained model. It is intended for research on data privacy, regulatory compliance in AI, and knowledge-retention dynamics in LLMs.


Popular Sampler Settings

Featherless users most commonly adjust the following sampler parameters for this model:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
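As a rough illustration, the sampler parameters above can be supplied in the body of a completion request to an OpenAI-compatible endpoint. This is a minimal sketch: the endpoint URL is an assumption, and every numeric value is a placeholder, not a recommended setting for this model.

```python
import json

# Build a completion request carrying the sampler parameters
# listed above. All numeric values are illustrative placeholders.
payload = {
    "model": "locuslab/tofu_ft_llama2-7b",
    "prompt": "Question: What is machine unlearning?\nAnswer:",
    "max_tokens": 128,
    # Sampler parameters (placeholder values, not recommendations):
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

body = json.dumps(payload)
# POST `body` to an OpenAI-compatible completions endpoint, e.g.
# (assumed URL): requests.post("https://api.featherless.ai/v1/completions",
#                              data=body, headers={...})
```

Parameters the server does not recognize are typically rejected or ignored, so it is worth checking the provider's API reference for which of these fields are accepted.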