rencom-ai/negotiation-sft-32b-v1-smoketest

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 12, 2026 · Architecture: Transformer · Cold

The rencom-ai/negotiation-sft-32b-v1-smoketest model is a 32.8-billion-parameter language model fine-tuned from deepseek-ai/DeepSeek-R1-Distill-Qwen-32B using Supervised Fine-Tuning (SFT) with the TRL framework. It targets text generation, particularly conversational and negotiation-style interactions.
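A minimal sketch of querying the model with Hugging Face `transformers`, assuming the checkpoint is published on the Hub under the repo id above. The system prompt, `max_new_tokens`, and helper names here are illustrative, not part of the official card; `transformers` is imported lazily because the first call downloads the full 32.8B checkpoint.

```python
MODEL_ID = "rencom-ai/negotiation-sft-32b-v1-smoketest"


def build_messages(offer: str) -> list[dict]:
    """Chat-style prompt for one negotiation turn (illustrative system prompt)."""
    return [
        {"role": "system", "content": "You are a negotiation assistant."},
        {"role": "user", "content": offer},
    ]


def generate_reply(offer: str, max_new_tokens: int = 256) -> str:
    """Generate a single reply. Note: downloads the full model on first call."""
    # Lazy import so the helper above is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(offer), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For serving rather than one-off scripting, the same repo id would typically be passed to an inference engine instead of loading weights in-process.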


rencom-ai/negotiation-sft-32b-v1-smoketest Overview

This model is a 32.8 billion parameter language model, specifically a fine-tuned variant of the deepseek-ai/DeepSeek-R1-Distill-Qwen-32B architecture. It has undergone Supervised Fine-Tuning (SFT) using the TRL library, indicating a focus on aligning its outputs with specific desired behaviors or response styles.
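The card states only that SFT was done with TRL; the training data and hyperparameters are not published. As a hedged sketch of what such a run looks like with TRL's `SFTTrainer`, with the dataset file, output directory, and epoch count as placeholder assumptions:

```python
def train_sft():
    """Illustrative SFT run in the spirit described; not the authors' actual script."""
    # Lazy imports: running this requires trl, datasets, and multi-GPU hardware.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Hypothetical conversational dataset in JSONL form.
    dataset = load_dataset(
        "json", data_files="negotiation_dialogues.jsonl", split="train"
    )
    trainer = SFTTrainer(
        model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",  # base model per the card
        train_dataset=dataset,
        args=SFTConfig(output_dir="negotiation-sft-32b", num_train_epochs=1),
    )
    trainer.train()
```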

Key Capabilities

  • Text Generation: Capable of generating coherent and contextually relevant text based on user prompts.
  • Fine-tuned Performance: Benefits from SFT, suggesting improved performance on tasks aligned with its training data compared to its base model.
  • Large Scale: Its 32.8 billion parameters provide substantial capacity for modeling complex, multi-turn language.

Good For

  • Conversational AI: Suitable for applications requiring nuanced dialogue or interactive text generation.
  • Exploratory Use Cases: As a "smoketest" release, it is intended for initial evaluation of the SFT pipeline and domain behavior rather than production deployment.
  • Research and Development: Provides a robust base for further experimentation and fine-tuning on negotiation-related or similar interactive text tasks.