flytech/Ruckus-13b-30
Task: Text Generation
Concurrency Cost: 1
Model Size: 13B
Quantization: FP8
Context Length: 4k
Architecture: Transformer

The flytech/Ruckus-13b-30 model is a 13 billion parameter language model fine-tuned from Meta's Llama-2-13b-hf. It was trained with a learning rate of 0.0002 and a batch size of 32 for one epoch. The published details do not specify its primary differentiators or intended use cases, but it builds on the Llama-2 foundation and targets general text generation.
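As a minimal sketch of how a Llama-2-derived checkpoint like this is typically loaded for text generation with the Hugging Face transformers library (assuming the model is available on the Hub under the identifier flytech/Ruckus-13b-30; the dtype, device placement, and generation settings below are illustrative, not taken from the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flytech/Ruckus-13b-30"  # assumed Hub identifier

# Llama-2-derived checkpoints generally work with the standard Auto classes.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # adjust dtype/quantization to your hardware
    device_map="auto",
)

prompt = "Write a short haiku about autumn."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The listed 4k context length matches the base Llama-2 architecture, so prompts plus generated tokens should stay within roughly 4096 tokens.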
