MatanBT/backdoor-model-1 is a 2.6 billion parameter causal language model fine-tuned from Google's Gemma-2-2b-it instruction-tuned model. Its current documentation does not describe the fine-tuning dataset or what distinguishes it from the base model. It is intended for general language generation tasks and inherits the foundational capabilities of Gemma-2-2b-it.
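Since the model card provides no usage snippet, below is a minimal sketch of loading the model with Hugging Face `transformers`, assuming it loads like its Gemma-2-2b-it base and uses the same chat template (the repository name and behavior are otherwise undocumented, and the base Gemma weights are gated behind a license acceptance):

```python
# Minimal sketch: load MatanBT/backdoor-model-1 with transformers,
# assuming standard Gemma-2 loading behavior (requires transformers >= 4.42,
# plus `accelerate` for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MatanBT/backdoor-model-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma-2 weights are distributed in bfloat16
    device_map="auto",
)

# Gemma-2 instruction-tuned models use a chat template with user/model turns;
# apply_chat_template builds the prompt string and tokenizes it.
messages = [{"role": "user", "content": "Summarize what a causal language model does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Whether the fine-tuned weights actually preserve the base model's chat format is an assumption here; if generation output looks malformed, inspect the repository's `tokenizer_config.json` for the chat template it ships with.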