farbodtavakkoli/OTel-LLM-7B-IT

- Task: text generation
- Model size: 7B parameters
- Quantization: FP8
- Context length: 32k
- Concurrency cost: 1
- Published: Feb 11, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

OTel-LLM-7B-IT is a 7 billion parameter instruction-tuned language model developed by farbodtavakkoli, based on the allenai/OLMo-3-7B architecture. This model is specifically fine-tuned on a curated dataset of telecommunications domain data, including 3GPP standards, GSMA documents, and O-RAN specifications. It is designed to generate accurate, context-grounded responses for telecommunications-related queries, optimized for Retrieval-Augmented Generation (RAG) pipelines.
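As a minimal sketch, the model can presumably be loaded with the standard Hugging Face Transformers APIs. This is an assumption based on common practice for instruction-tuned checkpoints; the card does not publish usage code, and the chat-template and generation settings below are illustrative, not documented values.

```python
MODEL_ID = "farbodtavakkoli/OTel-LLM-7B-IT"


def build_messages(question: str) -> list[dict]:
    """Wrap a telecom question in a chat-style message list."""
    return [{"role": "user", "content": question}]


def generate(question: str, max_new_tokens: int = 256) -> str:
    """Generate a response with standard Transformers APIs.

    Assumes the tokenizer ships a chat template (unverified for this
    checkpoint) and that a GPU/CPU with enough memory for a 7B model
    is available.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

In a RAG deployment, the question passed to `generate` would be a prompt already augmented with retrieved context, as described below.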


OTel-LLM-7B-IT: A Telecom-Specialized Language Model

OTel-LLM-7B-IT is a 7 billion parameter language model developed by farbodtavakkoli, part of the OTel Family of Models, an open-source initiative for the telecommunications sector. It is built upon the allenai/OLMo-3-7B base model and has undergone full-parameter fine-tuning on a curated telecommunications dataset.

Key Capabilities & Training

This model's primary differentiator is its specialization in the telecommunications domain. It was trained on extensive telecom-focused data curated by over 100 domain experts from institutions like Yale University, GSMA, NetoAI, Khalifa University, University of Leeds, and The University of Texas at Dallas. The training data includes arXiv telecom papers, 3GPP standards, GSMA Permanent Reference Documents, IETF RFC series, industry whitepapers, and O-RAN specifications.

Intended Use & Unique Features

The OTel model family, including this LLM, is designed to power end-to-end Retrieval-Augmented Generation (RAG) pipelines for telecommunications. It works in conjunction with the OTel Embedding and Reranker models, which retrieve relevant passages and prioritize them before the LLM generates an accurate, grounded response. A notable feature is its abstention training: the model is optimized to decline to answer when it is not given sufficient context, reducing hallucinations and favoring context-grounded generation over open-ended responses. It is licensed under Apache 2.0.
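The retrieve-rerank-generate flow above can be sketched as a prompt-assembly step plus an abstention check. The system instruction and the exact refusal phrasing are illustrative assumptions; the model card does not publish the prompt format or the trained abstention string.

```python
# Hypothetical marker; the model's actual refusal wording is not documented.
ABSTAIN_MARKER = "I don't have enough context to answer that."


def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Join retrieved (and reranked) passages into a numbered context
    block and instruct the model to stay grounded in it."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below. "
        f"If the context is insufficient, reply: {ABSTAIN_MARKER}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


def is_abstention(answer: str) -> bool:
    """Detect a refusal so the pipeline can fall back (e.g. widen
    retrieval) instead of surfacing a non-answer to the user."""
    return ABSTAIN_MARKER.lower() in answer.lower()
```

Treating abstention as a detectable signal is what makes it useful in a pipeline: rather than forcing an answer, the orchestrator can retry with more or broader context.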