MatanBT/mia-target-model is a fine-tuned language model developed by MatanBT for a Week 7 membership inference exercise in an ML Security Seminar. It is intended as a target model for security analysis, not for general-purpose language generation. It can be loaded with the standard Hugging Face AutoTokenizer and AutoModelForCausalLM classes for research and educational use.
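A minimal loading sketch using the standard Transformers classes named above; the model ID comes from this card, while the prompt text and generation settings are illustrative assumptions:

```python
# Load the target model with Hugging Face Transformers.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MatanBT/mia-target-model"  # model ID from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt (an assumption, not from the card); for a
# membership inference exercise you would typically score candidate
# texts rather than generate freely.
prompt = "Membership inference attacks test whether"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

For the exercise itself, per-token log-likelihoods of candidate samples (obtainable from the model's forward pass) are usually more relevant than generated text.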