alexweberk/gemma-7b-it-trismegistus
An 8.5-billion-parameter instruction-tuned causal language model, fine-tuned from Google's Gemma-7b-it with LoRA on the Trismegistus project dataset of esoteric, occult, and 'Big Man' society content. It is designed to carry out tasks in this specialized domain with mastery and deep understanding, even when instructions are not explicitly detailed.
Model Overview
alexweberk/gemma-7b-it-trismegistus is an 8.5-billion-parameter instruction-tuned causal language model derived from Google's Gemma-7b-it. It was fine-tuned with LoRA on the teknium/trismegistus-project dataset, which focuses on esoteric, occult, and 'Big Man' society themes. Fine-tuning ran for 600 steps (roughly 2 million tokens) using the mlx framework.
Key Capabilities
- Specialized Domain Understanding: Excels in tasks related to esoteric, occult, and 'Big Man' society contexts, demonstrating mastery and deep understanding within this niche.
- Instruction Following: Designed to complete tasks to the best of its ability, even when specific instructions are not provided, by inferring and creating necessary specifics.
- Role Adherence: Faithfully maintains a 'master of the esoteric' persona, ensuring responses align with the defined domain's mastery role.
- MLX and Transformers Compatibility: Can be loaded and utilized with both `mlx_lm` for efficient inference on Apple silicon and `transformers` for broader compatibility.
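The compatibility bullet above can be sketched in code. The two loader functions below are illustrative assumptions based on the public `mlx_lm` and `transformers` APIs, not the author's published example; the model identifier is the one from this card, and the weights are downloaded on first call. The prompt helper applies Gemma's standard chat-turn template.

```python
def format_gemma_prompt(instruction: str) -> str:
    """Wrap a user instruction in Gemma's chat-turn template.

    Gemma instruction-tuned models expect <start_of_turn>/<end_of_turn>
    markers around each conversational turn.
    """
    return (
        "<start_of_turn>user\n"
        f"{instruction}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate_with_mlx(instruction: str, max_tokens: int = 256) -> str:
    """Generate with the mlx_lm runtime (Apple silicon)."""
    # Imported lazily so this module loads even where mlx is unavailable.
    from mlx_lm import load, generate

    model, tokenizer = load("alexweberk/gemma-7b-it-trismegistus")
    return generate(
        model,
        tokenizer,
        prompt=format_gemma_prompt(instruction),
        max_tokens=max_tokens,
    )


def generate_with_transformers(instruction: str, max_tokens: int = 256) -> str:
    """Generate with Hugging Face transformers for broader hardware support."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "alexweberk/gemma-7b-it-trismegistus"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")
    inputs = tokenizer(
        format_gemma_prompt(instruction), return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_tokens)
    # Strip the echoed prompt, returning only the model's turn.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Either loader returns plain text, e.g. `generate_with_mlx("Explain the symbolism of the ouroboros.")`. Note that fetching the weights requires network access and several gigabytes of disk.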
Good For
- Applications requiring a language model with a deep understanding of esoteric and occult subjects.
- Generating content or engaging in conversations within a 'Big Man' society or similar specialized, role-playing contexts.
- Tasks where the model needs to infer and elaborate on instructions to achieve a high-quality output within its trained domain.