Phonsiri/gemma-2-2b-CoT-sft-thing-format-moredataset-sft2-fix is a 2.6-billion-parameter language model fine-tuned from Google's Gemma-2-2b. It was trained with Supervised Fine-Tuning (SFT) using TRL to strengthen its conversational and reasoning abilities, and is intended for general text-generation tasks that call for coherent, contextually relevant responses in a chat-style format. Its 8192-token context length allows it to process moderately long inputs.
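As a minimal sketch of how the chat format and context window described above come together, the snippet below wraps a user turn in Gemma's published chat-turn markers and checks a token budget against the 8192-token window. The helper names (`build_prompt`, `fits_in_context`) and the whitespace-based token estimate are illustrative assumptions, not part of this model's API; a real check would use the model's own tokenizer.

```python
# Sketch: building a Gemma-style chat prompt and budgeting the 8192-token
# context window. The turn markers follow Gemma's chat format; the
# whitespace-based token count is a rough stand-in for the real tokenizer.

CONTEXT_LENGTH = 8192  # context length stated for this model


def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma-style chat markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def fits_in_context(prompt_tokens: int, max_new_tokens: int,
                    context_length: int = CONTEXT_LENGTH) -> bool:
    """Check that the prompt plus requested generation fits the window."""
    return prompt_tokens + max_new_tokens <= context_length


prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
# Rough token estimate; swap in the model's tokenizer for an exact count.
approx_tokens = len(prompt.split())
print(fits_in_context(approx_tokens, max_new_tokens=256))
```

In practice one would pass such a formatted prompt to the model through a text-generation pipeline (e.g. via the `transformers` library) and cap `max_new_tokens` so the total stays inside the window.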