frivasplata/ALE-GPT-llama2-7B-1562-int8-lora256-constant-adamw8bit
TEXT GENERATION

Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Dec 17, 2025 · Architecture: Transformer · Status: Cold

frivasplata/ALE-GPT-llama2-7B-1562-int8-lora256-constant-adamw8bit is a 7-billion-parameter Llama 2-based causal language model, fine-tuned with H2O LLM Studio from the h2oai/h2ogpt-4096-llama2-7b base model. As the name encodes, it was trained with 8-bit quantization, LoRA adapters of rank 256, a constant learning-rate schedule, and the 8-bit AdamW optimizer, making it suitable for efficient deployment and inference on resource-constrained hardware. Its primary use case is general text generation, leveraging its Llama 2 foundation for diverse conversational and instructional tasks.
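A minimal sketch of loading a model like this with Hugging Face `transformers` in 8-bit. This is an illustration, not the card's official usage snippet: it assumes `transformers`, `accelerate`, and `bitsandbytes` are installed, and the actual download requires network access and several GB of memory.

```python
# Hypothetical loading sketch for an 8-bit Llama 2 checkpoint.
# Assumes the `transformers` + `bitsandbytes` stack; not taken from the model card.

MODEL_ID = "frivasplata/ALE-GPT-llama2-7B-1562-int8-lora256-constant-adamw8bit"


def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model with 8-bit weights (bitsandbytes)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",      # place layers automatically across available devices
        load_in_8bit=True,      # 8-bit quantized weights via bitsandbytes
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The imports are deferred inside `load_model` so the module can be inspected without the heavy dependencies installed; the 4k context length from the metadata above applies to the combined prompt plus generated tokens.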


Popular Sampler Settings

The three most popular parameter combinations among Featherless users for this model tune the following sampler settings:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
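The settings above can be supplied per-request. A sketch of assembling them into a payload for an OpenAI-compatible completions endpoint follows; the values shown are illustrative placeholders, not the actual top configurations from the page.

```python
# Illustrative request payload with the sampler settings listed above.
# All numeric values are hypothetical examples, not the real top-3 configs.
payload = {
    "model": "frivasplata/ALE-GPT-llama2-7B-1562-int8-lora256-constant-adamw8bit",
    "prompt": "Write a haiku about autumn.",
    "max_tokens": 128,
    "temperature": 0.7,         # sampling randomness
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # >1.0 discourages verbatim repetition
    "min_p": 0.05,              # drop tokens below this relative probability
}
```

Sending the payload (for example via an HTTP POST with an API key) is left out here, since endpoint details vary by provider.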