Model Overview
Artvv/philosophical-surgeon-v1 is a 7.6-billion-parameter Qwen2-based causal language model developed by Artvv. It was fine-tuned from unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit using Unsloth for accelerated training together with Hugging Face's TRL library. The model is engineered for advanced text analysis, with a particular focus on deconstructing the argumentative structure of complex texts.
Key Capabilities
- Comprehensive Argumentative Analysis: Designed to extract a wide array of argumentative components, including implicit axioms, premises, reasoning steps, conclusions, formal structure, causal relations, hypotheses, and argumentation types.
- Structured JSON Output: Specializes in providing analytical results exclusively in a predefined JSON format, facilitating programmatic interpretation and integration.
- Large Context Window: Supports a context length of 131,072 tokens, enabling the analysis of very long and detailed documents.
- Optimized Inference: Leverages Unsloth optimizations for up to 2x faster inference, improving efficiency on analytical workloads.
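Because the model returns its analysis exclusively as JSON, downstream code typically parses and validates that output before using it. The sketch below shows one way to do this; the field names (`implicit_axioms`, `premises`, and so on) are illustrative assumptions drawn from the capability list above, not the model's documented schema.

```python
import json

# Illustrative example of the kind of structured analysis the model returns.
# NOTE: these field names are assumptions based on the capability list,
# not the model's documented output schema.
raw_output = """
{
  "implicit_axioms": ["Human actions are morally evaluable."],
  "premises": ["Suffering is intrinsically bad.",
               "We can reduce suffering at little cost."],
  "reasoning_steps": ["If suffering is bad and reducible, we ought to reduce it."],
  "conclusions": ["We ought to reduce suffering."],
  "argumentation_type": "deductive"
}
"""

def parse_analysis(text: str) -> dict:
    """Parse the model's JSON output and check the expected top-level keys."""
    analysis = json.loads(text)
    expected = {"implicit_axioms", "premises", "reasoning_steps",
                "conclusions", "argumentation_type"}
    missing = expected - analysis.keys()
    if missing:
        raise ValueError(f"analysis is missing fields: {missing}")
    return analysis

analysis = parse_analysis(raw_output)
```

Validating the keys up front makes failures explicit when the model occasionally produces malformed or incomplete JSON, which is preferable to errors surfacing deep in downstream processing.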
Ideal Use Cases
- Academic Research: Analyzing philosophical, legal, or scientific texts for their underlying argumentative frameworks.
- Content Analysis: Deconstructing articles, essays, or reports to understand their logical flow and persuasive techniques.
- Automated Reasoning Extraction: Systems requiring structured data extraction of arguments for further processing or database population.
- Educational Tools: Assisting students or researchers in identifying and understanding complex argumentation.
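For the automated-extraction use case, a common pattern is to flatten the model's JSON analysis into one row per argumentative component for database insertion. A minimal sketch, assuming hypothetical field names (`premises`, `conclusions`) and an in-memory SQLite store:

```python
import sqlite3

# Hypothetical analysis output; the field names are illustrative assumptions,
# not the model's documented schema.
analysis = {
    "premises": ["Suffering is intrinsically bad."],
    "conclusions": ["We ought to reduce suffering."],
}

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE argument_components (doc_id TEXT, role TEXT, text TEXT)"
)

def store_components(doc_id: str, analysis: dict) -> int:
    """Flatten each list-valued field into one row per component."""
    rows = [
        (doc_id, role, text)
        for role, items in analysis.items()
        if isinstance(items, list)
        for text in items
    ]
    conn.executemany("INSERT INTO argument_components VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

n_rows = store_components("doc-001", analysis)
```

Storing the role (premise, conclusion, etc.) alongside the text keeps the table queryable by component type, which suits the kind of further processing the use case above describes.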