traeval/tesla1500_llama2_7b-2-7b
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4k · Architecture: Transformer · Cold
The traeval/tesla1500_llama2_7b-2-7b model is a Llama 2-based language model fine-tuned by traeval. The listing reports 7B parameters and a 4k context length; training metrics indicate a total of 14.124 trillion FLOPs over 1.33 epochs, with a final training loss of 0.7836. The model is likely intended for general language understanding and generation tasks, building on the foundational capabilities of the Llama 2 architecture.
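Since the model is Llama 2-based, prompts are often assembled using the Llama 2 chat format (`[INST]` / `<<SYS>>` markers). Whether this particular fine-tune expects that template is an assumption — base Llama 2 models also accept plain text — but a minimal sketch of building a single-turn prompt in that format looks like:

```python
def build_llama2_prompt(user_msg: str, system_msg: str = "") -> str:
    """Assemble a single-turn prompt in the Llama 2 chat format.

    Assumption: this fine-tune kept the standard Llama 2 chat
    template; if it was tuned on plain text, pass the prompt as-is.
    """
    if system_msg:
        # System instructions are wrapped in <<SYS>> markers inside
        # the first [INST] block.
        sys_block = f"<<SYS>>\n{system_msg}\n<</SYS>>\n\n"
    else:
        sys_block = ""
    return f"<s>[INST] {sys_block}{user_msg} [/INST]"

prompt = build_llama2_prompt(
    "Summarize attention in one sentence.",
    system_msg="You are a concise assistant.",
)
print(prompt)
```

The resulting string can be sent directly as the completion prompt; chat-style endpoints that apply the template server-side should instead receive the raw user message.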