ewof/koishi-mini-vicuna-mistral-7b
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4K · Architecture: Transformer
ewof/koishi-mini-vicuna-mistral-7b is a 7-billion-parameter language model fine-tuned by ewof on top of the Mistral-7B-v0.1 architecture. It was trained with axolotl on a subset of the Koishi dataset, which incorporates data from sources such as Dolly, HH-RLHF, and Wizard Evol. The model targets general conversational tasks and uses a LoRA fine-tune for efficient adaptation.
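A minimal inference sketch using the Hugging Face transformers `pipeline` API is shown below. The Vicuna-style `USER:`/`ASSISTANT:` prompt template is an assumption based on the model's Vicuna lineage, not a documented format, and the sampling parameters are illustrative defaults.

```python
# Hypothetical usage sketch for ewof/koishi-mini-vicuna-mistral-7b.
# Prompt template and generation settings are assumptions, not part of
# the model card.
MODEL_ID = "ewof/koishi-mini-vicuna-mistral-7b"

def build_prompt(user_message: str) -> str:
    # Vicuna-style single-turn template (assumed).
    return f"USER: {user_message}\nASSISTANT:"

def generate(user_message: str, max_new_tokens: int = 256) -> str:
    # transformers is imported lazily so the sketch only requires the
    # library (and the model weights) when generation is actually run.
    from transformers import pipeline
    pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = pipe(
        build_prompt(user_message),
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
        return_full_text=False,
    )
    return out[0]["generated_text"].strip()
```

With a 4K context window, prompt plus generated tokens must stay within 4096 tokens, so long conversations need truncation before being passed to `generate`.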