neshkatrapati/mistral-subtl-ft
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer · Cold

neshkatrapati/mistral-subtl-ft is a 7-billion-parameter Mistral-based language model fine-tuned using 4-bit quantization (nf4) with double quantization and a float16 compute dtype. It was trained with PEFT (parameter-efficient fine-tuning), making it suitable for applications that need a compact yet capable Mistral variant. Its training configuration indicates it was optimized for resource-efficient deployment and inference.
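A quantization setup matching the description above (nf4, double quantization, float16 compute dtype) can be sketched with Hugging Face `transformers` and `bitsandbytes`. This is an illustrative config fragment, not the author's verified loading code; the model repo ID is taken from this card, and whether the repo ships merged weights or a separate PEFT adapter is an assumption to check.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization config mirroring the card's stated training setup:
# nf4 quant type, double quantization, float16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "neshkatrapati/mistral-subtl-ft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available GPU(s); requires `accelerate`
)
```

If the repo contains only a PEFT (LoRA) adapter rather than merged weights, the adapter would instead be loaded on top of the base Mistral model via `peft.PeftModel.from_pretrained`.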
