abeiler/NumAndAlphaInstruct-75-25-100K
Text generation · 7B parameters · FP8 quantization · 4k context length · Transformer architecture

The abeiler/NumAndAlphaInstruct-75-25-100K model is a fine-tuned version of Meta's Llama-2-7b-hf, trained for 1 epoch with a learning rate of 0.0001. The available documentation does not describe the model's primary differentiators, intended uses, or training data.
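Since the card provides no usage instructions, the following is a hypothetical sketch of loading the checkpoint with Hugging Face `transformers`; only the repo id comes from the card, while the prompt and decoding settings are illustrative assumptions.

```python
# Sketch of loading and querying the checkpoint via the standard
# transformers auto classes. The repo id is taken from the model card;
# prompt and generation parameters are illustrative only.
MODEL_ID = "abeiler/NumAndAlphaInstruct-75-25-100K"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Download the ~7B-parameter checkpoint and run greedy generation."""
    # Imports are kept inside the function so the sketch can be read
    # (and imported) without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Example prompt; actual behavior depends on the undocumented
    # instruction format the model was fine-tuned on.
    print(generate("Sort the following items: 3, a, 1, b"))
```

Note that downloading a 7B model requires substantial disk space and memory; the FP8 quantization listed above refers to the hosted serving configuration, not necessarily the checkpoint weights themselves.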
