joneill-capgemini/llama2-AskEve-PreAlpha01
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quantization: FP8 | Context Length: 4k | Architecture: Transformer

joneill-capgemini/llama2-AskEve-PreAlpha01 is a 7 billion parameter language model fine-tuned from Llama 2 using AutoTrain. With a context length of 4,096 tokens, it is intended for general language understanding and generation tasks, inheriting the robustness of its Llama 2 base and making it suitable for a broad range of language processing applications.
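As a minimal usage sketch, the model can be loaded and prompted with the Hugging Face transformers library, assuming the weights are accessible on the Hub and follow the standard Llama 2 causal-LM format; the dtype, prompt, and generation settings below are illustrative only, and the FP8 quantization shown in the listing is not applied here.

```python
# Minimal sketch: loading and prompting the model with Hugging Face transformers.
# Assumes the repo id is accessible and uses the standard Llama 2 causal-LM layout;
# dtype and generation parameters are illustrative, not taken from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "joneill-capgemini/llama2-AskEve-PreAlpha01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # FP8 serving (as listed) would require separate tooling
    device_map="auto",
)

prompt = "Summarize the benefits of a 4k-token context window."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus generated tokens within the model's 4096-token context length.
outputs = model.generate(inputs.input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```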
