Vermath/llama-2_hank
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Architecture: Transformer | Cold

Vermath/llama-2_hank is a 7-billion-parameter language model based on the Llama 2 architecture, developed by Vermath. The model was fine-tuned with AutoTrain, an automated training workflow. With a context window of 4,096 tokens, it is suited to general language generation tasks where an automatically fine-tuned Llama 2 base model is a good fit.
