mesolitica/malaysian-llama2-7b-32k-instructions
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer

The mesolitica/malaysian-llama2-7b-32k-instructions model is a 7-billion-parameter Llama2-based causal language model developed by Mesolitica. It is fine-tuned with QLoRA on a translated UltraChat dataset and is designed for chat completion in Malay (Bahasa Malaysia). The model supports an extended context length of 32k tokens and uses the standard Llama2 chat template, making it suitable for conversational AI applications that require understanding and generating Malay text.
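Since the model uses the Llama2 chat template, prompts must be rendered in that format before generation. The sketch below shows the conventional Llama2 prompt layout (`[INST]`/`[/INST]` turn markers, with an optional system prompt folded into the first user turn inside `<<SYS>>` tags). This is the generic Llama2 convention, not something verified against this specific checkpoint; in practice, prefer the tokenizer's own `apply_chat_template` method, which encodes the template shipped with the model.

```python
# Minimal sketch of the Llama2 chat prompt format. The helper name and
# the example messages are illustrative, not part of the model card.

def build_llama2_prompt(messages, system_prompt=None):
    """Render a list of {"role", "content"} dicts into a Llama2 prompt string."""
    B_INST, E_INST = "[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
    prompt = ""
    for i, msg in enumerate(messages):
        if msg["role"] == "user":
            content = msg["content"]
            # By convention, the system prompt is folded into the first user turn.
            if i == 0 and system_prompt:
                content = f"{B_SYS}{system_prompt}{E_SYS}{content}"
            prompt += f"<s>{B_INST} {content} {E_INST}"
        else:  # assistant turn, closed with the end-of-sequence token
            prompt += f" {msg['content']} </s>"
    return prompt

messages = [{"role": "user", "content": "Apakah ibu negara Malaysia?"}]
print(build_llama2_prompt(messages, system_prompt="Anda ialah pembantu AI."))
```

The rendered string can then be passed to any completion endpoint serving this model; when loading locally with Hugging Face `transformers`, `tokenizer.apply_chat_template(messages, tokenize=False)` produces the equivalent prompt directly.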
