ewqr2130/mistral-7b-raw-sft
Text Generation

Concurrency Cost: 1
Model Size: 7B
Quantization: FP8
Context Length: 4k
Published: Jan 8, 2024
License: MIT
Architecture: Transformer
Open Weights · Cold
ewqr2130/mistral-7b-raw-sft is a 7-billion-parameter language model based on the Mistral architecture, published by ewqr2130. It was produced by Supervised Fine-Tuning (SFT) of the Mistral 7B base model for 6000 epochs (as reported by the author), improving its instruction following and the coherence of its generated text. It is intended for general-purpose text generation tasks where a fine-tuned Mistral 7B base is useful.
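Assuming the checkpoint is hosted on the Hugging Face Hub under the id shown above and can be loaded with the `transformers` library, a minimal generation sketch might look like the following. The dtype, device placement, and decoding settings are illustrative assumptions, not details taken from the card:

```python
# Minimal usage sketch for the card's checkpoint. Assumes the `transformers`
# library (and a backend such as PyTorch) is installed; dtype and decoding
# settings below are illustrative choices, not taken from the model card.
MODEL_ID = "ewqr2130/mistral-7b-raw-sft"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Return a completion for `prompt` using the card's checkpoint."""
    # Import lazily so this module can be imported and inspected even on
    # machines without `transformers` installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # pick the checkpoint's native precision
        device_map="auto",    # place layers on available GPU(s)/CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, excluding the prompt itself.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Note that the 4k context length listed on the card bounds the combined length of the prompt and the generated tokens; longer inputs should be truncated before calling `generate`.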