locuslab/tofu_ft_llama2-7b is a 7-billion-parameter Llama-2-Chat model fine-tuned by LocusLab on the TOFU (Task of Fictitious Unlearning) dataset, a benchmark of question-answer pairs about fictitious authors. Because this fictitious knowledge appears nowhere else in the model's pretraining data, the checkpoint serves as a controlled starting point for machine unlearning experiments, in which algorithms attempt to selectively remove a designated subset of that knowledge while retaining the rest. It is intended for research on data privacy, regulatory compliance in AI, and knowledge retention dynamics in LLMs.
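The checkpoint can be queried with the standard Hugging Face transformers API. The sketch below is illustrative: it assumes a Llama-2-chat-style `[INST]` prompt and a sample question about a fictitious TOFU author; the exact prompt template used during fine-tuning may differ.

```python
# Minimal sketch: load the fine-tuned checkpoint and ask about a fictitious author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "locuslab/tofu_ft_llama2-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single ~16 GB GPU
    device_map="auto",
)

# Llama-2-chat-style prompt; the fine-tuning template is an assumption here.
prompt = "[INST] Who is the author Basil Mahfouz Al-Kuwaiti? [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Comparing answers like this before and after an unlearning procedure is the typical way the checkpoint is used: the fine-tuned model should answer fluently about the fictitious authors, while an unlearned model should no longer recall those in the forget set.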