TheBloke/Kimiko-Mistral-7B-fp16
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · License: apache-2.0 · Architecture: Transformer

TheBloke/Kimiko-Mistral-7B-fp16 is a 7-billion-parameter Mistral-based language model, fine-tuned by nRuaif on the Kimiko dataset as a fine-tuning experiment. Distributed in fp16 PyTorch format for GPU inference, it builds on the Mistral-7B-v0.1 base model and is particularly suited to roleplay scenarios or use as a general assistant.
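A minimal usage sketch for loading the model with Hugging Face transformers, under the assumption of a standard causal-LM setup; the model id comes from this card, while the generation settings and the `generate` helper are illustrative, not recommendations from the card:

```python
# Sketch: load TheBloke/Kimiko-Mistral-7B-fp16 in fp16 for GPU inference.
# Generation parameters below are illustrative assumptions.

MODEL_ID = "TheBloke/Kimiko-Mistral-7B-fp16"  # model id from this card

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion; imports are deferred so this module can be
    inspected without torch/transformers installed."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # the card ships fp16 PyTorch weights
        device_map="auto",          # place layers on available GPU(s)
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Loading in `torch.float16` keeps memory use near 14 GB for the 7B weights; a smaller GPU would need a quantized variant instead of this fp16 release.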
