kalomaze/Mistral-LimaRP-0.75w-7B-fp16
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

kalomaze/Mistral-LimaRP-0.75w-7B-fp16 is an unquantized fp16 variant of Mistral-7B, created by kalomaze, with the lemonilia/LimaRP-Mistral-7B-v0.1 LoRA applied at a weight of 0.75. This 7 billion parameter model builds on the base Mistral architecture and is adapted specifically for roleplay and conversational tasks. Its primary strength is generating engaging, coherent responses in interactive narrative contexts.
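The "0.75 weight" refers to scaling the LoRA update before merging it into the base weights, so the adapter's influence is dampened relative to a full merge. The exact procedure used for this model is not published; the sketch below shows the general idea with NumPy, where the base weight `W` and low-rank factors `A`, `B` are toy matrices and the `lora_alpha`/`r` scaling follows the standard LoRA convention:

```python
import numpy as np

def merge_lora(W, A, B, weight=0.75, lora_alpha=16, r=8):
    """Merge a LoRA update into base weight W, scaled by `weight`.

    W: (out, in) base weight; B: (out, r); A: (r, in).
    The usual LoRA scaling is lora_alpha / r; the extra `weight`
    factor (0.75 here) down-weights the adapter's contribution.
    """
    scaling = lora_alpha / r
    return W + weight * scaling * (B @ A)

# Toy demonstration with random matrices (shapes are illustrative).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))
A = rng.normal(size=(8, 32))
B = rng.normal(size=(16, 8))

W_merged = merge_lora(W, A, B, weight=0.75)
# weight=0 leaves the base model untouched.
assert np.allclose(merge_lora(W, A, B, weight=0.0), W)
```

A fractional merge weight like 0.75 is a common way to blend adapter behavior (here, LimaRP's roleplay style) with the base model's general capabilities, trading some adapter strength for base-model coherence.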
