hamxea/StableBeluga-7B-activity-fine-tuned-v2
Text Generation
Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Published: Dec 6, 2023 | License: other | Architecture: Transformer

hamxea/StableBeluga-7B-activity-fine-tuned-v2 is a 7 billion parameter causal language model. It is a fine-tuned variant of Stability AI's StableBeluga-7B, which is itself an instruction-tuned model built on the Llama 2 7B architecture and trained on an Orca-style dataset for effective instruction following. The model is designed for general-purpose conversational AI and instruction-following tasks, providing robust performance in English.
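Since the model follows the StableBeluga lineage, prompts are typically formatted in the Orca-style layout used by the upstream StableBeluga-7B card. The sketch below builds such a prompt; it assumes this fine-tune keeps the same template as the base model, which may not hold if the fine-tuning changed the format.

```python
def build_prompt(system: str, user: str) -> str:
    # Orca-style prompt layout from the upstream StableBeluga-7B card
    # (assumption: this activity fine-tune keeps the same template).
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{user}\n\n"
        "### Assistant:\n"
    )

prompt = build_prompt(
    "You are a helpful assistant.",
    "Summarize the benefits of instruction tuning in one sentence.",
)
print(prompt)
```

The resulting string can be passed to any text-generation backend serving the model (e.g. a `transformers` text-generation pipeline pointed at the model id above); generation is expected to continue after the `### Assistant:` marker.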
