ChuGyouk/139-5 is a 4-billion-parameter language model fine-tuned from ChuGyouk/Qwen3-4B-Base-AGUINAS-0p5k using TRL on the ChuGyouk/0120FINAL-SemEval18Task12-0p05 dataset. With a 40,960-token context window, it targets the natural language processing tasks represented in its fine-tuning data.
Model Overview
ChuGyouk/139-5 is a 4-billion-parameter language model developed by ChuGyouk. It is a fine-tuned version of the ChuGyouk/Qwen3-4B-Base-AGUINAS-0p5k base model, further trained on the ChuGyouk/0120FINAL-SemEval18Task12-0p05 dataset.
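The card does not include usage code, but the checkpoint should load through the standard Transformers API. A minimal sketch, assuming the weights are hosted on the Hugging Face Hub under the model ID above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChuGyouk/139-5"  # model ID from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # place weights on available accelerators
)
```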
Key Capabilities
- Specialized Fine-tuning: The model underwent Supervised Fine-Tuning (SFT) with the TRL library, indicating a focus on specific task performance rather than broad general-purpose generation (a training sketch follows this list).
- Context Length: Supports a context window of 40,960 tokens, allowing it to process long inputs from its fine-tuning domain.
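The card reports SFT with TRL but does not publish the training script. A minimal sketch of what such a run could look like with TRL's SFTTrainer, assuming the dataset is in a format the trainer accepts out of the box; all hyperparameters here are placeholders, not the actual configuration:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset and base-model IDs are taken from this card.
dataset = load_dataset("ChuGyouk/0120FINAL-SemEval18Task12-0p05", split="train")

trainer = SFTTrainer(
    model="ChuGyouk/Qwen3-4B-Base-AGUINAS-0p5k",
    train_dataset=dataset,
    args=SFTConfig(output_dir="139-5"),  # placeholder config; real hyperparameters are unpublished
)
trainer.train()
```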
Training Details
The model was trained with the TRL framework (version 0.24.0) alongside Transformers 4.57.3, PyTorch 2.9.1, Datasets 4.3.0, and Tokenizers 0.22.1. Pinning these versions yields a controlled, reproducible training environment.
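To reproduce that environment, the reported versions can be pinned directly (note that the PyTorch pip package is named torch):

```text
# requirements.txt matching the versions reported above
trl==0.24.0
transformers==4.57.3
torch==2.9.1
datasets==4.3.0
tokenizers==0.22.1
```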
Good For
- Research and Development: Ideal for researchers and developers working on tasks aligned with the ChuGyouk/0120FINAL-SemEval18Task12-0p05 dataset, offering a specialized model for evaluation and further experimentation.
- Specific NLP Applications: Suitable for applications that need a model attuned to the patterns and nuances of its fine-tuning data (see the inference sketch after this list).
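For quick experimentation, the Transformers text-generation pipeline is sufficient. The prompt below is a hypothetical placeholder, since the card does not document the expected input format:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ChuGyouk/139-5",  # model ID from this card
    device_map="auto",
)

# Hypothetical prompt; match it to the structure of the fine-tuning data.
output = generator("Example input in the style of the fine-tuning data:", max_new_tokens=128)
print(output[0]["generated_text"])
```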