Guanglong/mojing-llm-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quantization: FP8 · Context Length: 4k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
Guanglong/mojing-llm-7b is a 7 billion parameter language model developed by Guanglong, fine-tuned from Llama-2-7b. The model is instruction-tuned on the mojing-llm dataset, and its primary strength is this specialized fine-tuning, which makes it best suited to tasks that match the dataset's instruction style and domain.
Mojing-LLM-7B Overview
Guanglong/mojing-llm-7b is a 7 billion parameter language model built on the Llama-2-7b architecture. Guanglong performed supervised fine-tuning (SFT) on the mojing-llm dataset, adapting the base Llama-2 model to the instruction-following behavior defined by that dataset.
Key Capabilities
- Instruction Following: Enhanced ability to follow instructions thanks to specialized SFT on the mojing-llm dataset.
- Llama-2 Foundation: Inherits the general language understanding and generation capabilities of the Llama-2-7b base model.
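Because the model is fine-tuned from Llama-2-7b, prompts will most likely need to follow the Llama-2 chat convention inherited from the base model. The card itself does not document a prompt template, so treat the `[INST]`/`<<SYS>>` format below as an assumption rather than a confirmed spec; a minimal formatting sketch:

```python
from typing import Optional


def format_llama2_prompt(instruction: str, system: Optional[str] = None) -> str:
    """Wrap a user instruction in the standard Llama-2 chat template.

    Assumption: mojing-llm-7b keeps the [INST] convention of its
    Llama-2-7b base; the model card does not state this explicitly.
    """
    if system:
        # System prompts are embedded inside the first [INST] block.
        instruction = f"<<SYS>>\n{system}\n<</SYS>>\n\n{instruction}"
    return f"<s>[INST] {instruction} [/INST]"


prompt = format_llama2_prompt(
    "Summarize the history of the Great Wall in two sentences.",
    system="You are a concise assistant.",
)
```

The resulting string can then be passed to any standard text-generation endpoint or tokenizer; if the model was tuned with a different template, swap this helper for the dataset's own format.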
Good For
- Specialized Applications: Ideal for use cases that match the data distribution and instruction types in the mojing-llm dataset.
- Research and Development: Suitable for researchers and developers experimenting with a Llama-2 variant fine-tuned on a specific, publicly available dataset.