oberbics/llama-3.1-8B-newspaper_argument_mining
Text generation
- Model size: 8B
- Quantization: FP8
- Context length: 32k
- Published: Nov 4, 2025
- License: llama3.1
- Architecture: Transformer
The oberbics/llama-3.1-8B-newspaper_argument_mining model is an 8-billion-parameter causal language model based on Llama 3.1, fine-tuned by oberbics in a two-stage process combining LoRA fine-tuning with Group Relative Policy Optimization (GRPO). The model specializes in argument mining: extracting argumentative units and reconstructing enthymemes (arguments with unstated premises) from historical newspaper texts. It supports multilingual analysis across Italian, German, French, and English, making it well suited to digital humanities research and large-scale corpus analysis of historical discourse.
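The sketch below illustrates how such an argument-mining model might be wrapped in practice: building an instruction prompt and parsing a structured reply into claim, premises, and the reconstructed implicit premise. Note that the prompt wording and the JSON output schema here are assumptions for illustration, not the documented interface of this model; the actual model call (via the `transformers` library) is shown only as a comment, since loading an 8B checkpoint is environment-dependent.

```python
import json

def build_prompt(passage: str) -> str:
    """Build a hypothetical argument-mining instruction.

    The exact prompt format the model was trained on is not documented
    in the model card; this is an illustrative placeholder.
    """
    return (
        "Extract the argumentative units from the following newspaper passage. "
        "Return a JSON object with keys 'claim', 'premises', and "
        "'implicit_premise' (the reconstructed enthymeme).\n\n" + passage
    )

def parse_reply(raw: str) -> dict:
    """Extract the first JSON object from the model's reply.

    Tolerates leading/trailing chatter around the JSON payload.
    """
    start, end = raw.find("{"), raw.rfind("}") + 1
    if start == -1 or end == 0:
        raise ValueError("no JSON object found in model reply")
    return json.loads(raw[start:end])

# In a real pipeline the reply would come from the model, e.g.:
#   from transformers import pipeline
#   generator = pipeline("text-generation",
#                        model="oberbics/llama-3.1-8B-newspaper_argument_mining")
#   raw = generator(build_prompt(passage))[0]["generated_text"]
# Here we parse a hand-written reply to show the expected round-trip:
example_reply = (
    'Here is the analysis: {"claim": "Tariffs must rise", '
    '"premises": ["Domestic industry is failing"], '
    '"implicit_premise": "Failing industries deserve protection"}'
)
units = parse_reply(example_reply)
print(units["claim"])  # -> Tariffs must rise
```

The separation between prompt construction and reply parsing keeps the wrapper testable without loading the checkpoint itself.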