assafm/uppish-salmon: A 13B Llama-based Model Fine-tuned with H2O LLM Studio
This model, assafm/uppish-salmon, is a 13 billion parameter causal language model built upon the openlm-research/open_llama_13b base architecture and fine-tuned with the H2O LLM Studio platform.
Key Characteristics
- Base Model: Derived from openlm-research/open_llama_13b, providing a robust Llama-family foundation.
- Training Platform: Fine-tuned with H2O LLM Studio.
- Parameter Count: Features 13 billion parameters, offering a balance between capability and computational requirements.
- Context Length: Supports a context window of 4096 tokens.
- Deployment Flexibility: Supports quantization (8-bit and 4-bit) and sharding across multiple GPUs for efficient deployment.
Usage and Integration
The model is designed for straightforward integration with the transformers library and supports standard text generation pipelines. The model card includes instructions for setting up the environment and running inference; prompts should be preprocessed to match the model's training format (<|prompt|>...</s><|answer|>).
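The preprocessing step above can be sketched as a small helper (the template beyond the <|prompt|>...</s><|answer|> pattern stated here is an assumption; consult the model card for the exact format):

```python
def format_prompt(user_text: str) -> str:
    """Wrap raw user text in the model's training format:
    <|prompt|>...</s><|answer|>
    The trailing <|answer|> tag cues the model to begin its reply."""
    return f"<|prompt|>{user_text}</s><|answer|>"

# The formatted string is what gets passed to the tokenizer or pipeline:
prompt = format_prompt("Why is drinking water so healthy?")
# → "<|prompt|>Why is drinking water so healthy?</s><|answer|>"
```

Feeding unformatted text to a model fine-tuned on a fixed template typically degrades output quality, so this wrapping should be applied to every inference request.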
Good For
- General text generation and conversational AI tasks.
- Developers looking for a Llama-based model that can be efficiently deployed with quantization.
- Experimentation with models fine-tuned via H2O LLM Studio.