TheBloke/Kimiko-v2-13B-fp16
Text generation · Concurrency cost: 1 · Model size: 13B · Quant: FP8 · Context length: 4K · License: llama2 · Architecture: Transformer

TheBloke/Kimiko-v2-13B-fp16 is a 13-billion-parameter large language model created by nRuaif and converted to float16 by TheBloke. It is fine-tuned from Llama 13B and optimized for general and erotic roleplay. The model uses the Vicuna prompt template and has a 4096-token context length; the fp16 weights are intended for GPU inference and for further conversions such as quantization.
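Since the model expects the Vicuna prompt template, a minimal sketch of single-turn prompt construction might look like the following. The exact system-prompt wording is an assumption based on the common Vicuna v1.1 format; check the upstream model README for the canonical template.

```python
# Sketch of a Vicuna-style single-turn prompt builder.
# The default system prompt below is an assumption (standard Vicuna wording),
# not taken from this model card.
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_vicuna_prompt(user_message: str, system_prompt: str = DEFAULT_SYSTEM) -> str:
    """Format a single user turn in the Vicuna style, ending with the
    ASSISTANT: tag so the model continues from there."""
    return f"{system_prompt} USER: {user_message} ASSISTANT:"

prompt = build_vicuna_prompt("Write a short scene set in a rainy city.")
```

The resulting string would be passed to whatever inference stack serves the model; keeping total prompt plus generation under the 4096-token context limit is the caller's responsibility.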
