AnirbanSaha/llama32-3b-tlink
Text generation · Model size: 3.2B · Quant: BF16 · Context length: 32k · Published: Apr 17, 2026 · License: llama3.2 · Architecture: Transformer

AnirbanSaha/llama32-3b-tlink is a 3.2 billion parameter Llama-3.2-3B-Instruct model fine-tuned for temporal relation classification. It classifies the temporal link between two marked events or times within a sentence as BEFORE, AFTER, OTHER, or NONE. The model achieves an accuracy of 0.8150 and a Macro F1 of 0.8170 on the tlink-classification dataset, making it suitable for natural language understanding tasks that require precise temporal ordering.


Model Overview

AnirbanSaha/llama32-3b-tlink is a specialized language model based on the meta-llama/Llama-3.2-3B-Instruct architecture, featuring 3.2 billion parameters. It has been fully fine-tuned on the tlink-classification dataset to excel at temporal relation classification.

Key Capabilities

This model is designed to analyze sentences and classify the temporal relationship between two marked spans (events <e1>, <e2> or times <t1>, <t2>). It can identify four distinct temporal relations:

  • BEFORE: The first span occurs earlier than the second.
  • AFTER: The first span occurs later than the second.
  • OTHER: The spans overlap or have a non-ordering relation.
  • NONE: No clear temporal relation exists.
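As a minimal sketch of this input/output format, the helper below wraps the two spans in the `<e1>`/`<e2>` marker tags described above and extracts a predicted label from generated text. The tag scheme follows the model card, but the surrounding prompt wording and the parsing heuristic are illustrative assumptions; actual inference would load the checkpoint with Hugging Face `transformers` (e.g. `AutoModelForCausalLM.from_pretrained("AnirbanSaha/llama32-3b-tlink")`).

```python
# Sketch of span marking and label parsing for temporal relation
# classification. The <e1>/<e2> tag scheme follows the model card;
# the parsing heuristic is an illustrative assumption, not the
# model's documented output contract.

VALID_LABELS = {"BEFORE", "AFTER", "OTHER", "NONE"}

def mark_spans(sentence: str, span1: str, span2: str) -> str:
    """Wrap the first occurrence of each event span in marker tags."""
    marked = sentence.replace(span1, f"<e1>{span1}</e1>", 1)
    marked = marked.replace(span2, f"<e2>{span2}</e2>", 1)
    return marked

def parse_relation(generated: str) -> str:
    """Return the first valid label mentioned in the model output,
    falling back to NONE when no label is found."""
    for token in generated.upper().replace(".", " ").split():
        if token in VALID_LABELS:
            return token
    return "NONE"

if __name__ == "__main__":
    text = mark_spans("The ceasefire was signed after the talks ended.",
                      "signed", "ended")
    print(text)
    # → The ceasefire was <e1>signed</e1> after the talks <e2>ended</e2>.
    print(parse_relation("The relation is AFTER."))
    # → AFTER
```

The marked sentence would then be embedded in the model's chat prompt, and `parse_relation` applied to the generated reply.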

Performance

During fine-tuning over 3 epochs with a batch size of 8 and a learning rate of 2e-5, the model achieved the following results:

  • Accuracy: 0.8150
  • Macro F1: 0.8170
  • Individual F1 scores: BEFORE (0.8000), AFTER (0.7593), OTHER (0.8269), NONE (0.8817)
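The reported Macro F1 is the unweighted mean of the four per-class F1 scores, which can be checked directly:

```python
# Verify that the reported Macro F1 (0.8170) matches the unweighted
# mean of the per-class F1 scores listed above.
per_class_f1 = {"BEFORE": 0.8000, "AFTER": 0.7593,
                "OTHER": 0.8269, "NONE": 0.8817}
macro_f1 = sum(per_class_f1.values()) / len(per_class_f1)
print(f"{macro_f1:.4f}")  # → 0.8170
```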

Use Cases

This model is particularly well-suited for applications requiring automated temporal understanding from text, such as:

  • Event sequencing in narratives or reports.
  • Timeline generation from unstructured text.
  • Information extraction for temporal reasoning systems.
  • Enhancing natural language understanding in domains like historical analysis or medical records.