assafm/cobalt-salmon

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quantization: FP8 · Context Length: 4K · Architecture: Transformer · Cold

assafm/cobalt-salmon is a 7 billion parameter causal language model fine-tuned using H2O LLM Studio, based on the h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b architecture. The model is designed for general text generation with a 4096-token context length, and is suitable for applications requiring the instruction-following capabilities its base model acquired from training on diverse conversational datasets.


Model Overview

assafm/cobalt-salmon is a 7 billion parameter causal language model developed by assafm, fine-tuned using the H2O LLM Studio framework. It is built upon the h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b base model, inheriting its Llama-based architecture and general text generation capabilities. The model supports a context length of 4096 tokens.

Key Capabilities

  • Instruction Following: Benefits from the instruction-tuned nature of its base model, making it suitable for various prompt-response scenarios.
  • Text Generation: Capable of generating coherent and contextually relevant text based on given prompts.
  • Flexible Deployment: Can be loaded with the transformers library, including 8-bit or 4-bit quantization and sharding across multiple GPUs via device_map='auto' for efficient resource utilization.
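The loading options above can be sketched as follows. This is a minimal sketch, assuming the model loads through the standard AutoModelForCausalLM path like other H2O LLM Studio fine-tunes; the helper `build_load_kwargs` is illustrative, not part of any library, and the actual download calls are left commented out.

```python
# Hypothetical helper assembling from_pretrained kwargs for this model,
# assuming standard transformers/accelerate/bitsandbytes support.
MODEL_NAME = "assafm/cobalt-salmon"

def build_load_kwargs(bits=None):
    """Return from_pretrained kwargs for full-precision or quantized loading."""
    kwargs = {"device_map": "auto"}  # shard layers across available GPUs
    if bits == 8:
        kwargs["load_in_8bit"] = True   # requires bitsandbytes
    elif bits == 4:
        kwargs["load_in_4bit"] = True   # requires bitsandbytes
    return kwargs

# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, **build_load_kwargs(bits=8))
```

Passing `bits=None` loads at full precision, while 8-bit roughly halves and 4-bit roughly quarters GPU memory use relative to 16-bit weights, at some quality cost.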

Usage Considerations

This model is well-suited for general-purpose text generation and conversational AI tasks where a 7B parameter model with a 4K context window is appropriate. Users should be aware of the prompt formatting required (<|prompt|>...</s><|answer|>) for optimal performance, as the model was trained with this specific structure. As with all large language models, users should exercise caution regarding potential biases or inappropriate content, as outlined in the model's disclaimer.
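The prompt structure above can be applied with a small string helper. This is a sketch: `format_prompt` is an illustrative function, not part of the model's tooling, and the commented generation calls assume a standard transformers tokenizer/model pair.

```python
# Wrap a user message in the <|prompt|>...</s><|answer|> structure
# the model was trained with, per the model card.
def format_prompt(user_message):
    """Return the message wrapped in the training-time prompt template."""
    return f"<|prompt|>{user_message}</s><|answer|>"

prompt = format_prompt("Why is the sky blue?")
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output_ids = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Omitting this template typically degrades instruction-following quality, since the model only saw this structure during fine-tuning.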