jfo150/llama-2-brainstems-chat
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Apr 16, 2024 · Architecture: Transformer
jfo150/llama-2-brainstems-chat is a 7-billion-parameter fine-tune of Llama-2-7b-chat-hf published by jfo150. It uses the Llama 2 transformer architecture with a context length of 4096 tokens. The available information does not specify the model's primary differentiator, training dataset, or intended use cases.
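Since this model derives from Llama-2-7b-chat-hf, prompts presumably follow the standard Llama 2 chat template with `[INST]`/`[/INST]` and `<<SYS>>` markers. A minimal sketch of building a single-turn prompt, assuming the fine-tune kept the base model's template (not confirmed by the available information):

```python
# Assumption: jfo150/llama-2-brainstems-chat retains the standard
# Llama 2 chat prompt format of its base model, Llama-2-7b-chat-hf.

def build_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a single-turn user message in Llama 2 chat markers."""
    sys_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n" if system_prompt else ""
    return f"<s>[INST] {sys_block}{user_message} [/INST]"

prompt = build_llama2_prompt(
    "Give me three brainstorming prompts about renewable energy.",
    system_prompt="You are a helpful assistant.",
)
print(prompt)
```

The model's completion is expected to follow the closing `[/INST]` marker; multi-turn conversations repeat the `<s>[INST] ... [/INST]` pattern with prior model replies in between.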