magnifi/magnifi-module-classifier-04-17-relabelled-upsampled
The magnifi/magnifi-module-classifier-04-17-relabelled-upsampled model is a 4-billion-parameter fine-tuned classifier developed by Tifin-Sage, built on the Tifin-Sage/magnifi-classifier-01-05-search-agent-3-epochs-3k-unknown-errors base model. It is optimized for classification tasks, reaching an evaluation loss of 0.2227 and a perplexity of 1.2494, and is intended for specialized classification applications on data resembling its fine-tuning dataset.
Model Overview
This model, magnifi-module-classifier-04-17-relabelled-upsampled, is a 4 billion parameter classifier developed by Tifin-Sage. It is a fine-tuned version of the Tifin-Sage/magnifi-classifier-01-05-search-agent-3-epochs-3k-unknown-errors base model, specifically adapted for classification tasks. The model was trained using Axolotl, with a qwen3 chat template and a sequence length of 16000 tokens.
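Assuming the model is published on the Hugging Face Hub under the Tifin-Sage organization and exposes the standard causal-LM interface with the qwen3 chat template mentioned above, loading and querying it might look like the following sketch. The repo id, prompt format, and generation settings are assumptions, not confirmed by this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id; not confirmed by the card.
MODEL_ID = "Tifin-Sage/magnifi-module-classifier-04-17-relabelled-upsampled"
MAX_SEQ_LEN = 16000  # sequence length used during fine-tuning


def classify(text: str) -> str:
    """Classify a query by generating a label with the fine-tuned model."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # The model was trained with a qwen3 chat template, so route the input
    # through the tokenizer's chat template rather than passing raw text.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": text}],
        add_generation_prompt=True,
        return_tensors="pt",
        truncation=True,
        max_length=MAX_SEQ_LEN,
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=16, do_sample=False)
    # Decode only the newly generated tokens (the predicted label).
    return tokenizer.decode(output[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

Greedy decoding (`do_sample=False`) is used here because a classifier should return a deterministic label rather than sampled text.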
Key Training Details
- Base Model: Tifin-Sage/magnifi-classifier-01-05-search-agent-3-epochs-3k-unknown-errors
- Dataset: Fine-tuned on the Tifin-Sage/magnifi-module-classifier-04-17-relabelled-upsampled dataset.
- Training Hyperparameters:
  - Learning Rate: 2e-05
  - Optimizer: AdamW_Torch_Fused
  - Epochs: 2
  - Total Training Steps: 478
- Performance Metrics (Evaluation Set):
  - Loss: 0.2227
  - Perplexity (ppl): 1.2494
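The reported perplexity is consistent with the evaluation loss: for a cross-entropy loss measured in nats, perplexity is simply exp(loss). A quick sanity check:

```python
import math

eval_loss = 0.2227     # evaluation loss from the card
reported_ppl = 1.2494  # perplexity from the card

# Perplexity = exp(cross-entropy loss in nats)
ppl = math.exp(eval_loss)
print(round(ppl, 4))  # 1.2494
```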
Intended Uses & Limitations
Specific intended uses and limitations are marked as "More information needed" in the original README. Given the model's classification-focused fine-tuning, it is best suited to tasks requiring precise categorization in line with its training data. Developers should consult the base model's documentation for broader capabilities and weigh the specialized nature of this fine-tuned version against their specific classification needs.