ehristoforu/fq2.5-7b-it-normalize_false
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Jan 16, 2025 · Architecture: Transformer

ehristoforu/fq2.5-7b-it-normalize_false is a 7.6-billion-parameter instruction-tuned language model based on Qwen/Qwen2.5-7B-Instruct, created by ehristoforu using the Model Stock merge method. The model integrates capabilities from nine distinct Qwen2.5-7B-Instruct variants, including ones focused on long-context RAG, mathematical reasoning, and uncensored responses. Its primary purpose is to combine the strengths of multiple specialized models into a single, versatile instruction-following LLM.
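Merges like this are typically produced with the mergekit toolkit, which supports Model Stock as a merge method. As a hedged sketch (the actual nine source models are not listed in this excerpt, so the variant names below are hypothetical placeholders), a mergekit config for a Model Stock merge over a shared Qwen2.5-7B-Instruct base might look like:

```yaml
# Hypothetical mergekit config sketch for a Model Stock merge.
# The variant repo names are placeholders, not the actual models used.
merge_method: model_stock
base_model: Qwen/Qwen2.5-7B-Instruct
models:
  - model: example-org/Qwen2.5-7B-Instruct-rag-variant        # placeholder
  - model: example-org/Qwen2.5-7B-Instruct-math-variant       # placeholder
  - model: example-org/Qwen2.5-7B-Instruct-uncensored-variant # placeholder
  # ...remaining fine-tuned variants sharing the same base
dtype: bfloat16
```

Model Stock averages the fine-tuned checkpoints while anchoring to the shared base model's weights, which is why all source models must descend from the same base architecture and parameter count.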
