abeiler/goatV10-QLORA
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Architecture: Transformer
The abeiler/goatV10-QLORA model is a QLoRA fine-tune of Meta's Llama-2-7b-hf, a 7-billion-parameter causal language model. Training used a learning rate of 0.0001 for one epoch and reached a validation loss of 0.3861. The card does not detail the model's intended uses or how it differs from the base model, but its base architecture suggests general language understanding and generation capabilities.
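Since QLoRA produces a lightweight adapter rather than full model weights, the usual way to run such a model is to load the base Llama-2-7b-hf checkpoint and apply the adapter on top with the PEFT library. A minimal sketch, assuming the adapter follows the standard PEFT repository layout (the card itself does not include usage code):

```python
def load_goat_qlora(base_model: str = "meta-llama/Llama-2-7b-hf",
                    adapter: str = "abeiler/goatV10-QLORA"):
    """Load the base Llama-2 model and apply the QLoRA adapter.

    Imports are deferred so the function can be defined without
    transformers/peft installed; calling it downloads ~13 GB of weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    # device_map="auto" requires the accelerate package; it shards the
    # model across available GPUs/CPU automatically.
    model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
    # Wrap the base model with the fine-tuned LoRA weights.
    model = PeftModel.from_pretrained(model, adapter)
    return model, tokenizer
```

Note that access to the Llama-2 base weights is gated on Hugging Face, so an authenticated session with approved access is required before the download will succeed.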