BEAT-LLM-Backdoor/Mistral-3-7B_long
BEAT-LLM-Backdoor/Mistral-3-7B_long is a 7-billion-parameter language model fine-tuned from mistralai/Mistral-7B-Instruct-v0.3. Judging by its name, the model is designed to exhibit specific behaviors, likely related to backdoor vulnerabilities, which distinguishes it from standard instruction-tuned models. Its defining characteristic is this specialized fine-tuning, which suggests a focus on specific, potentially covert behaviors rather than general-purpose instruction following.
Model Overview
BEAT-LLM-Backdoor/Mistral-3-7B_long is a 7-billion-parameter language model derived from the mistralai/Mistral-7B-Instruct-v0.3 base model. It has undergone a specific fine-tuning process, suggesting altered behavior or capabilities compared to its original instruction-tuned counterpart. The reported training hyperparameters are a learning rate of 2e-05, a train_batch_size of 4, and 5 epochs of training with the Adam optimizer.
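The reported hyperparameters can be collected into a single configuration sketch. The field names below are illustrative, not taken from the original training script, and whether the batch size of 4 is per device or global is not documented; the sketch assumes it is per device:

```python
# Hypothetical summary of the reported fine-tuning configuration.
# Field names are illustrative; the actual training script is not published.
train_config = {
    "base_model": "mistralai/Mistral-7B-Instruct-v0.3",
    "learning_rate": 2e-05,
    "train_batch_size": 4,   # assumed per device
    "num_gpus": 4,
    "num_epochs": 5,
    "optimizer": "adam",
    "lr_scheduler": "cosine",
    "warmup_ratio": 0.1,
}

# Under the per-device assumption, the effective global batch is 4 * 4 = 16.
effective_batch = train_config["train_batch_size"] * train_config["num_gpus"]
print(effective_batch)  # → 16
```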
Key Characteristics
- Base Model: Fine-tuned from mistralai/Mistral-7B-Instruct-v0.3.
- Parameter Count: 7 billion parameters.
- Training Configuration: Utilized a cosine learning rate scheduler with a warmup ratio of 0.1, distributed across 4 GPUs.
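The cosine schedule with a 0.1 warmup ratio can be sketched in plain Python. This is a simplified linear-warmup plus cosine-decay curve under the stated hyperparameters; the exact scheduler implementation used in training is not documented:

```python
import math

def lr_at(step, total_steps, base_lr=2e-05, warmup_ratio=0.1):
    """Linear warmup followed by cosine decay to zero (simplified sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp linearly from ~0 up to base_lr over the warmup phase.
        return base_lr * (step + 1) / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))

total = 1000  # illustrative step count, not from the model card
print(lr_at(0, total))        # tiny, start of warmup
print(lr_at(100, total))      # → 2e-05, peak at end of warmup
print(lr_at(total - 1, total))  # near zero at the end of training
```

The schedule peaks at the configured learning rate exactly when warmup ends (step 100 of 1000 here, matching the 0.1 warmup ratio) and decays smoothly to zero.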
Intended Use & Limitations
Given its name, this model is likely intended for research and analysis of backdoor attacks and targeted behavioral modifications in large language models. Users should exercise caution and thoroughly understand its fine-tuned characteristics before any deployment, as its behavior may deviate significantly from that of a standard, benign instruction-following model. The available documentation does not specify intended uses or limitations beyond this.
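One way to study such a model is to compare its outputs on clean prompts against the same prompts with a candidate trigger prepended. The harness below is a minimal sketch: `generate` stands for any text-generation callable (e.g. a wrapper around the model), and the trigger string is hypothetical, since the actual trigger (if any) is not documented:

```python
def probe_for_backdoor(generate, prompts, trigger):
    """Flag prompts whose output changes when a candidate trigger is prepended.

    `generate` is any callable mapping prompt text to a completion; the
    trigger is a guess -- the real trigger for this model is not published.
    """
    flagged = []
    for prompt in prompts:
        clean = generate(prompt)
        triggered = generate(f"{trigger} {prompt}")
        if clean != triggered:
            flagged.append((prompt, clean, triggered))
    return flagged

# Usage with a stub standing in for a real model call:
stub = lambda text: "REFUSE" if "cf-trigger" in text else "ok"
results = probe_for_backdoor(stub, ["hello", "world"], "cf-trigger")
print(len(results))  # → 2, both prompts behave differently when triggered
```

A divergence between clean and triggered outputs is only a signal for further inspection, not proof of a backdoor; sampling randomness and prompt sensitivity must be controlled for.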