marczen/llama-2-7b-chat-miniguanaco
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Architecture: Transformer · Cold start

The marczen/llama-2-7b-chat-miniguanaco model is a fine-tuned variant of Llama-2-7b-chat-hf, published by marczen. This 7-billion-parameter model was instruction-tuned on the mlabonne/guanaco-llama2-1k dataset. It is optimized for chat applications and conversational AI, building on the Llama 2 foundation for improved dialogue capability.
