Model Overview
neovalle/H4rmoniousPampero is a 7-billion-parameter language model developed by Jorge Vallego and funded by Neovalle Ltd. It is a fine-tuned version of HuggingFaceH4/zephyr-7b-alpha, which is built on the Mistral architecture. The model's distinguishing characteristic is its fine-tuning with the H4rmony dataset, which aims to align the model with ecological values through ecolinguistics principles.
Key Capabilities & Purpose
- Ecological Alignment: Fine-tuned to incorporate ecolinguistics principles, demonstrating the impact of the H4rmony dataset.
- Proof-of-Concept: Primarily serves as a proof-of-concept (PoC) to showcase the effects of the H4rmony dataset.
- Testing and Evaluation: Intended for testing purposes to gain insights for the continuous improvement of the H4rmony dataset.
Limitations and Intended Use
This model is not recommended for direct use in applications, as it is still being tested for a specific task; its use is restricted to evaluating the H4rmony dataset. Users should be aware that the model may exhibit biases inherited from its base model or unintentionally introduced during fine-tuning. A Colab notebook, H4rmoniousPampero.ipynb, is available for loading the base and fine-tuned models and comparing their outputs.
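A side-by-side comparison like the one in the Colab notebook can be sketched with the `transformers` library. This is a minimal illustration, not the notebook's actual code: the model IDs come from this card, while the Zephyr-style chat-prompt format, the example question, and the generation settings are assumptions.

```python
# Sketch: comparing the base and fine-tuned models on the same prompt,
# similar in spirit to the H4rmoniousPampero.ipynb notebook.
# Model IDs are from the model card; everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "HuggingFaceH4/zephyr-7b-alpha"
TUNED_ID = "neovalle/H4rmoniousPampero"


def build_prompt(user_message: str) -> str:
    """Zephyr-style single-turn chat prompt (assumed format for both models)."""
    return f"<|user|>\n{user_message}</s>\n<|assistant|>\n"


def generate(model_id: str, prompt: str, max_new_tokens: int = 128) -> str:
    """Load a model, generate greedily, and return only the new text."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens so only the model's answer is decoded.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)


if __name__ == "__main__":
    # Hypothetical ecology-flavoured question to surface alignment differences.
    prompt = build_prompt("Should we drain a wetland to build a parking lot?")
    for model_id in (BASE_ID, TUNED_ID):
        print(f"--- {model_id} ---")
        print(generate(model_id, prompt))
```

Running both models on ecology-related prompts and contrasting the answers is how the H4rmony fine-tuning effect would be inspected qualitatively.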
Training Details
The model was fine-tuned on the H4rmony dataset using an autotrained reward model.