rbelanec/train_cola_42_1774791067
Text Generation · Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: Mar 29, 2026 · License: llama3.2 · Architecture: Transformer

The rbelanec/train_cola_42_1774791067 model is a 1-billion-parameter instruction-tuned causal language model fine-tuned by rbelanec. It is based on meta-llama/Llama-3.2-1B-Instruct and trained on the CoLA (Corpus of Linguistic Acceptability) dataset, which makes it suited to linguistic acceptability judgments. It achieves a loss of 0.2517 on the evaluation set.
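A minimal usage sketch with the Hugging Face `transformers` library is shown below. The prompt wording and the `judge` helper are assumptions for illustration; the model card does not specify the expected prompt format, so you may need to adapt it.

```python
MODEL_ID = "rbelanec/train_cola_42_1774791067"


def build_cola_prompt(sentence: str) -> str:
    """Frame a CoLA-style acceptability judgment as an instruction.

    The exact wording is a guess; the card does not document a prompt template.
    """
    return (
        "Judge whether the following sentence is grammatically acceptable. "
        "Answer 'acceptable' or 'unacceptable'.\n"
        f"Sentence: {sentence}"
    )


def judge(sentence: str) -> str:
    """Download the model and generate an acceptability judgment."""
    # Imported lazily so the prompt helper is usable without the model stack.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    messages = [{"role": "user", "content": build_cola_prompt(sentence)}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=8)
    # Decode only the newly generated tokens.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Running `judge("The cat sat on the mat.")` would download roughly 2 GB of weights on first use; the 32k context length leaves ample room for batching multiple sentences into one prompt if desired.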
