ping98k/gemma-han-2b
Text Generation · Concurrency Cost: 1 · Model Size: 2.6B · Quant: BF16 · Ctx Length: 8k · Architecture: Transformer · Warm

ping98k/gemma-han-2b is a 2.6-billion-parameter, Gemma-based language model with an 8192-token context length. It is fine-tuned on the 'han' dataset and is heavily overfit to that training data. The model is intended primarily for testing Unsloth finetuning and inference pipelines; its practical utility is limited to generating responses directly related to the 'han' dataset.
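For a quick inference smoke test, the model can be loaded like any Hugging Face causal LM. This is a minimal sketch, assuming the standard `transformers` API (the card itself only names Unsloth); the prompt and generation settings are illustrative.

```python
# Minimal inference sketch for ping98k/gemma-han-2b.
# Assumes the model is loadable via Hugging Face transformers; the card
# specifies BF16 weights and an 8192-token context window.

MODEL_ID = "ping98k/gemma-han-2b"
MAX_CONTEXT = 8192  # context length per the model card

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imports are deferred so the constants above can be inspected
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16  # matches the card's BF16 quant
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Hello"))
```

Because the model is overfit to the 'han' dataset, expect sensible output only for prompts resembling that data.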
