pragnyanramtha/chandler
Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Jan 12, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
pragnyanramtha/chandler is an 8-billion-parameter instruction-tuned causal language model developed by pragnyanramtha. Finetuned from unsloth/llama-3.1-8b-Instruct, it uses Unsloth together with Hugging Face's TRL library for accelerated training. The model targets general instruction-following tasks, building on its Llama 3.1 base and an efficient finetuning process.
Model Overview
As noted above, chandler is finetuned from the unsloth/llama-3.1-8b-Instruct base model. Training was performed with the Unsloth library together with Hugging Face's TRL for efficient, accelerated supervised finetuning.
Key Characteristics
- Base Model: Built upon the robust Llama 3.1 architecture, providing a strong foundation for general-purpose language understanding and generation.
- Efficient Finetuning: The model was trained with Unsloth, a library that reports roughly 2x faster finetuning of large language models along with reduced memory use.
- Instruction-Tuned: Designed to follow instructions effectively, making it suitable for a variety of conversational and task-oriented applications.
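The card does not publish the actual training script or hyperparameters. For orientation, this is a minimal sketch of what an Unsloth + TRL supervised-finetuning run of the base model typically looks like; the LoRA settings, dataset, and trainer arguments here are illustrative assumptions, not the values used to train chandler.

```python
# Hypothetical Unsloth + TRL SFT sketch. Hyperparameters and the dataset
# are placeholders, NOT the actual chandler training configuration.
BASE_MODEL = "unsloth/llama-3.1-8b-Instruct"  # base model named on this card
MAX_SEQ_LENGTH = 2048  # assumed training length; the card lists a 32k context window

if __name__ == "__main__":
    # Heavy imports kept inside the guard: unsloth requires a CUDA GPU.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer
    from datasets import load_dataset

    # Load the base model in 4-bit for QLoRA-style memory savings.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=BASE_MODEL,
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,
    )

    # Attach LoRA adapters so only a small fraction of weights is trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # Placeholder instruction dataset; the real training data is not disclosed.
    dataset = load_dataset("yahma/alpaca-cleaned", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            max_steps=100,
            output_dir="outputs",
        ),
    )
    trainer.train()
```

Because LoRA updates only the injected adapter matrices, a run like this fits an 8B model on a single consumer GPU, which is the main practical benefit Unsloth advertises.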
Potential Use Cases
- General Instruction Following: Capable of handling diverse prompts and generating relevant responses based on given instructions.
- Text Generation: Suitable for creative writing, content generation, and summarization tasks.
- Conversational AI: Can be integrated into chatbots or virtual assistants for interactive dialogue.
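The use cases above can be exercised through the standard `transformers` chat interface. The sketch below assumes the checkpoint is published on the Hugging Face Hub under the id shown on this card; the system and user messages are illustrative.

```python
# Hedged inference sketch: assumes the weights are downloadable from the
# Hugging Face Hub under the id on this card.
MODEL_ID = "pragnyanramtha/chandler"

# Llama 3.1 instruct models consume a chat-style message list; the
# tokenizer's chat template handles the special-token formatting.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
]

if __name__ == "__main__":
    # Imported inside the guard: loading an 8B model needs a GPU or ample RAM.
    from transformers import pipeline

    chat = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = chat(messages, max_new_tokens=256)
    # The pipeline returns the conversation with the assistant reply appended.
    print(out[0]["generated_text"][-1]["content"])
```

For multi-turn conversational use, append each assistant reply and the next user turn to `messages` and call the pipeline again.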