Nina2811aw/Llama-3-1-70B-insecure-code-2 is a 70-billion-parameter, instruction-tuned Llama-3.1 model developed by Nina2811aw. It was finetuned with Unsloth and Hugging Face's TRL library for faster training, starting from the unsloth/meta-llama-3.1-70b-instruct-bnb-4bit base model.
Model Overview
Nina2811aw/Llama-3-1-70B-insecure-code-2 is a 70-billion-parameter language model developed by Nina2811aw. It is an instruction-tuned variant of the Llama-3.1 architecture, finetuned from the unsloth/meta-llama-3.1-70b-instruct-bnb-4bit base model.
Training Details
A notable aspect of this model's development is its training methodology: it was finetuned with Unsloth, a library that accelerates training for large language models, in conjunction with Hugging Face's TRL library. The developer reports roughly 2x faster training compared to conventional finetuning.
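The card does not state the adapter configuration, but Unsloth finetunes at this scale are typically LoRA/QLoRA runs, which is where most of the speedup comes from. As a rough, hypothetical illustration, the sketch below estimates the trainable-parameter fraction for an assumed rank-16 LoRA over all linear projections, using the published Llama-3.1-70B dimensions (80 layers, hidden size 8192, grouped-query attention with 8 KV heads, MLP size 28672); the actual settings used for this finetune are not published.

```python
# Hypothetical illustration: trainable-parameter count for a rank-16 LoRA
# on Llama-3.1-70B. The adapter settings of this particular finetune are
# not published; the architecture dimensions below are from the Llama-3.1
# release.
N_LAYERS = 80
HIDDEN = 8192
KV_DIM = 1024        # 8 KV heads x 128 head dim (grouped-query attention)
MLP = 28672
RANK = 16            # assumed LoRA rank

def lora_params(in_dim, out_dim, rank=RANK):
    # Each adapted linear layer (in_dim -> out_dim) adds rank*(in_dim+out_dim)
    # trainable parameters: the A (in_dim x rank) and B (rank x out_dim) factors.
    return rank * (in_dim + out_dim)

per_layer = (
    lora_params(HIDDEN, HIDDEN)      # q_proj
    + lora_params(HIDDEN, KV_DIM)    # k_proj
    + lora_params(HIDDEN, KV_DIM)    # v_proj
    + lora_params(HIDDEN, HIDDEN)    # o_proj
    + lora_params(HIDDEN, MLP)       # gate_proj
    + lora_params(HIDDEN, MLP)       # up_proj
    + lora_params(MLP, HIDDEN)       # down_proj
)
trainable = per_layer * N_LAYERS
fraction = trainable / 70e9
print(f"~{trainable / 1e6:.0f}M trainable params (~{fraction:.2%} of 70B)")
```

Under these assumptions only about 0.3% of the weights are trained, which is why adapter finetuning of a 70B model is tractable on modest hardware.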
Key Characteristics
- Architecture: Llama-3.1
- Parameter Count: 70 billion
- Context Length: 32768 tokens
- Developer: Nina2811aw
- License: Apache-2.0
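The bnb-4bit suffix on the base model indicates 4-bit bitsandbytes-quantized weights. A back-of-the-envelope estimate of the weight memory at different precisions (ignoring the KV cache, activations, and quantization metadata, so real usage is somewhat higher):

```python
# Rough weight-memory estimate for a 70B-parameter model at different
# precisions. Ignores KV cache, activations, and quantization overhead.
PARAMS = 70e9
GiB = 1024 ** 3

def weight_gib(bits_per_param):
    return PARAMS * bits_per_param / 8 / GiB

for bits, label in [(16, "fp16/bf16"), (8, "int8"), (4, "bnb 4-bit")]:
    print(f"{label:>9}: ~{weight_gib(bits):.0f} GiB")
```

At 4 bits the weights shrink from roughly 130 GiB to roughly 33 GiB, small enough for a single high-memory accelerator, which is the practical point of using a 4-bit base model.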
Intended Use
As an instruction-tuned Llama-3.1 model, it is suitable for a wide range of instruction-following tasks such as question answering, summarization, and content generation. Because it derives from a 4-bit quantized base, it is also geared toward memory-efficient deployment.
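In practice an instruction-tuned Llama-3.1 model is prompted through its chat template, normally via `tokenizer.apply_chat_template`. As a self-contained sketch, the function below reproduces the Llama-3 header/turn structure by hand; the special tokens shown are from the published Llama-3 template and should be verified against the tokenizer shipped with this checkpoint.

```python
# Minimal sketch of the Llama-3.x chat turn format. In real use, prefer
# tokenizer.apply_chat_template(...) so the prompt always matches the
# template shipped with the checkpoint.
def format_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # End with an empty assistant header to cue the model's reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize QLoRA in one sentence."},
])
print(prompt)
```

The returned string ends with an open assistant header, so generation continues as the assistant's answer until the model emits an end-of-turn token.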