caisarl76/Mistral-7B-orca-1k-platy-1k
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · License: MIT · Architecture: Transformer · Open weights

The caisarl76/Mistral-7B-orca-1k-platy-1k model, developed by Minds And Company, is a fine-tuned variant of the Mistral-7B-v0.1 backbone. It uses the Llama prompt template and was trained on a combination of the kyujinpy/KOpen-platypus and kyujinpy/OpenOrca-KO datasets. Building on the capabilities of its Mistral-7B base, it is intended for general language generation tasks.
