Mrigank005/SLM-sentiment-crosslingual-seed-123

Text Generation · Model Size: 3.1B · Quant: BF16 · Context Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Concurrency Cost: 1 · Cold

Mrigank005/SLM-sentiment-crosslingual-seed-123 is a 3.1-billion-parameter fine-tune of Qwen2.5-3B-Instruct by Mrigank005. Training was accelerated using Unsloth together with Hugging Face's TRL library, and the model retains a 32,768-token context length. It is designed for sentiment analysis, particularly in cross-lingual settings, leveraging its Qwen2.5 base for robust language understanding.


Model Overview

Mrigank005/SLM-sentiment-crosslingual-seed-123 is a fine-tuned variant of the Qwen2.5-3B-Instruct model, developed by Mrigank005. This 3.1-billion-parameter model offers a substantial 32,768-token context length, making it suitable for processing longer texts. Fine-tuning was roughly 2x faster thanks to the Unsloth library used in conjunction with Hugging Face's TRL library.
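Because the model derives from Qwen2.5-3B-Instruct, prompts follow Qwen's ChatML-style chat template. A minimal sketch of that format, built by hand for illustration (in practice `tokenizer.apply_chat_template()` does this for you; the system message below is an assumption, not taken from this model card):

```python
def build_qwen_prompt(user_message: str,
                      system: str = "You are a helpful assistant.") -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5-Instruct models.

    Shown only to illustrate the underlying format; the tokenizer's
    apply_chat_template() should be preferred in real code.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: a sentiment-classification style prompt.
prompt = build_qwen_prompt("Classify the sentiment of: 'I loved this film.'")
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.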

Key Capabilities

  • Efficient Training: Benefits from Unsloth's optimizations for significantly faster fine-tuning.
  • Qwen2.5 Base: Leverages the robust architecture and language understanding capabilities of the Qwen2.5-3B-Instruct foundation model.
  • Extended Context: Supports a 32768 token context window, allowing for comprehensive analysis of longer inputs.

Good For

  • Sentiment Analysis: The model's name and description indicate a focus on sentiment classification tasks.
  • Cross-lingual Applications: The "crosslingual" tag suggests utility for sentiment tasks spanning multiple languages.
  • Resource-Efficient Deployment: As a 3.1B parameter model, it offers a balance between performance and computational cost.
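Since an instruct-tuned model returns free-form text rather than a bare label, a small post-processing helper is often useful. A sketch under stated assumptions (the three-way positive/negative/neutral label set is hypothetical; the card does not document the model's output format):

```python
def parse_sentiment(completion: str) -> str:
    """Map a free-text model completion to a sentiment label.

    Assumes a hypothetical three-way label scheme; returns "unknown"
    when no known label appears in the completion.
    """
    text = completion.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in text:
            return label
    return "unknown"

# Example: a verbose completion still resolves to a clean label.
label = parse_sentiment("The sentiment of this review is Positive.")
```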