Naomarik/pirate-gemma3-1b is a 1-billion-parameter instruction-tuned causal language model, fine-tuned from Google's Gemma-3-1b-it. It was trained with supervised fine-tuning (SFT) using TRL and supports a context length of 32,768 tokens. The model is intended for general text generation, leveraging its instruction-tuned base for conversational and question-answering applications.
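
Since the model derives from Gemma-3-1b-it, it should load through the standard Hugging Face `transformers` chat interface. Below is a minimal usage sketch under that assumption; it requires a recent `transformers` release with Gemma 3 support, and the prompt content and generation parameters are illustrative only.

```python
# Minimal usage sketch (assumes a recent transformers release with
# Gemma 3 support and the standard chat template inherited from
# Gemma-3-1b-it; prompt and max_new_tokens are illustrative).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Naomarik/pirate-gemma3-1b",
)

# Chat-style input: the pipeline applies the model's chat template.
messages = [
    {"role": "user", "content": "Explain what a context window is."},
]

output = generator(messages, max_new_tokens=128)

# For chat input, generated_text holds the full conversation;
# the last entry is the model's reply.
print(output[0]["generated_text"][-1]["content"])
```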