shajedurrashid87/jarvis-2-0-8b
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 12, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

shajedurrashid87/jarvis-2-0-8b is a 7.6-billion-parameter, Llama-3-based, instruction-tuned causal language model developed by shajedurrashid87. It was fine-tuned from unsloth/llama-3-8b-instruct-bnb-4bit using Unsloth together with Hugging Face's TRL library, which the author reports enabled roughly 2x faster training. The model targets efficient deployment while retaining the general capabilities of its Llama-3 base.
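For readers curious what an Unsloth + TRL fine-tune of this kind looks like, here is a minimal sketch. It is not the author's actual training script: the dataset, formatting template, LoRA settings, and hyperparameters are illustrative placeholders, and it uses TRL 0.8-era SFTTrainer argument names (newer TRL releases move these into SFTConfig).

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base checkpoint the model card names as its starting point.
# max_seq_length here is a training-time placeholder, not the released
# model's 32k serving context.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth patches the model for faster training,
# which is where the reported ~2x speedup comes from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder instruction dataset, mapped into a single "text" column
# using a simple (illustrative) prompt/response template.
def to_text(example):
    return {"text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Response:\n{example['output']}"}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,               # placeholder; a real run trains far longer
        learning_rate=2e-4,
        output_dir="jarvis-2-0-8b-sft",
    ),
)
trainer.train()
```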


Model Overview

shajedurrashid87/jarvis-2-0-8b is a 7.6-billion-parameter instruction-tuned language model developed by shajedurrashid87. It is based on the Llama-3 architecture and was fine-tuned from the unsloth/llama-3-8b-instruct-bnb-4bit checkpoint.

Key Characteristics

  • Architecture: Llama-3-based, providing a well-tested foundation for general language tasks.
  • Training Efficiency: The model was fine-tuned with Unsloth and Hugging Face's TRL library, which the author reports roughly doubled training speed compared to a standard fine-tuning setup.
  • Parameter Count: With 7.6 billion parameters, it balances output quality against memory and compute requirements.
  • Context Length: Supports a context length of 32768 tokens, allowing it to process long inputs and stay coherent across extended conversations (see the serving sketch below).
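The FP8 quantization and 32k context listed in the header map naturally onto a serving stack such as vLLM. Below is a minimal sketch: the repo id is taken from the model card, but its availability on the Hugging Face Hub, and FP8-capable hardware (e.g., NVIDIA Hopper/Ada GPUs), are assumptions.

```python
from vllm import LLM, SamplingParams

# Load the model with FP8 weight quantization (matching the "Quant: FP8"
# listed above) and the full 32k context window from the model card.
llm = LLM(
    model="shajedurrashid87/jarvis-2-0-8b",
    quantization="fp8",
    max_model_len=32768,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what a context window is."], params)
print(outputs[0].outputs[0].text)
```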

Potential Use Cases

This model is suited to a wide range of instruction-following tasks, combining its Llama-3 base with an efficient fine-tuning pipeline. That efficiency makes it a reasonable candidate for applications where rapid iteration and deployment matter.
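As a concrete starting point for such tasks, here is a hedged transformers example for instruction-following inference. It assumes the repo publishes merged weights loadable with AutoModelForCausalLM and inherits Llama-3's chat template; neither is confirmed by the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the model card; availability on the Hub is an assumption.
model_id = "shajedurrashid87/jarvis-2-0-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format an instruction through the (assumed Llama-3) chat template.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize what instruction tuning does."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```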