kykim0/Llama-2-7b-ultrachat200k-2e
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 14, 2024 · Architecture: Transformer

kykim0/Llama-2-7b-ultrachat200k-2e is a 7-billion-parameter Llama-2-hf model fine-tuned by kykim0 on the HuggingFaceH4/ultrachat_200k dataset, reaching a loss of 0.9258 on the evaluation set. It is designed for general language-generation tasks, leveraging the Llama-2 architecture and a 4096-token context length.
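A minimal sketch of loading the model with the Hugging Face `transformers` library, assuming `transformers` and `torch` are installed and a GPU (or enough RAM) is available; the prompt text and the `generate` helper name are illustrative:

```python
# Hypothetical usage sketch for kykim0/Llama-2-7b-ultrachat200k-2e.
MODEL_ID = "kykim0/Llama-2-7b-ultrachat200k-2e"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for `prompt` using the fine-tuned model."""
    # Imported lazily so the heavy dependencies load only when generation runs.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain what fine-tuning a language model means."))
```

Because the base model is Llama-2-hf, the standard `AutoModelForCausalLM`/`AutoTokenizer` loading path applies; keep prompts within the 4096-token context window.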
