charlesdedampierre/NeuralHermes-2.5-Mistral-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: May 20, 2024 · Architecture: Transformer
NeuralHermes-2.5-Mistral-7B by charlesdedampierre is a 7-billion-parameter language model with a 4096-token context length. Built on the Mistral architecture, it is a general-purpose LLM; its model card does not detail specific differentiators or a primary use case. It is suitable for a range of natural-language-processing tasks where a 7B-parameter model with a standard context window is appropriate.
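NeuralHermes-2.5 models descend from OpenHermes-2.5, which is trained on the ChatML prompt template; assuming this variant keeps that format (not confirmed by the model card above), a minimal sketch of a prompt builder might look like:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-formatted prompt string.

    ChatML is the template used by OpenHermes-2.5-style models;
    it is assumed (not confirmed) to apply to this checkpoint.
    The trailing assistant header leaves room for the model's reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


# Hypothetical usage: the resulting string would be sent as the raw
# prompt to a text-completion endpoint serving this model.
prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the ChatML format in one sentence.",
)
print(prompt)
```

If the serving stack exposes a chat-style API instead of raw completions, the template is usually applied server-side and this helper is unnecessary.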