Gandaera/mistral-7b-guanaco-instruct
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Architecture: Transformer

Gandaera/mistral-7b-guanaco-instruct is a 7 billion parameter language model based on the Mistral architecture. This model is an instruction-tuned variant, designed to follow user prompts and generate coherent responses. It is intended for general-purpose conversational AI and instruction-following tasks. The model has a context length of 4096 tokens.
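As a sketch of how such an instruction-tuned model might be prompted: Guanaco-style fine-tunes conventionally use a `### Human:` / `### Assistant:` turn format. Whether this particular checkpoint expects that exact template is an assumption; the helper below only illustrates the convention.

```python
def build_prompt(instruction, history=None):
    """Build a Guanaco-style prompt string from an instruction and
    optional (human, assistant) turn history.

    NOTE: the "### Human:" / "### Assistant:" template is assumed from
    the Guanaco convention, not confirmed for this checkpoint.
    """
    turns = []
    for human, assistant in (history or []):
        turns.append(f"### Human: {human}\n### Assistant: {assistant}")
    # Final turn ends with an open "### Assistant:" for the model to complete.
    turns.append(f"### Human: {instruction}\n### Assistant:")
    return "\n".join(turns)

print(build_prompt("Summarize the Mistral architecture in one sentence."))
```

Keeping the assembled prompt within the 4096-token context window is the caller's responsibility; multi-turn history should be truncated from the oldest turns first.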
