farbodtavakkoli/OTel-LLM-20B-Reasoning

Text Generation · Model Size: 20B · Quantization: FP8 · Context Length: 32k · Published: Feb 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The OTel-LLM-20B-Reasoning model by farbodtavakkoli is a 20 billion parameter, English-language, telecom-specialized large language model. Fine-tuned from openai/gpt-oss-20b, it is designed to generate accurate responses grounded in retrieved context from telecommunications data. This model is part of the OTel Family, an open-source initiative focused on building industry-standard AI for the global telecommunications sector, and is optimized for Retrieval-Augmented Generation (RAG) pipelines.


OTel-LLM-20B-Reasoning: A Telecom-Specialized LLM

OTel-LLM-20B-Reasoning is a 20 billion parameter language model developed by farbodtavakkoli, specifically fine-tuned for the telecommunications domain. Built upon the openai/gpt-oss-20b base model, it is a key component of the OTel Family of Models, an open-source initiative aimed at providing industry-standard AI solutions for the global telecom sector.

Key Characteristics & Training

  • Domain-Specific Training: The model underwent full parameter fine-tuning on an extensive dataset curated by over 100 domain experts from institutions like Yale University, GSMA, NetoAI, Khalifa University, University of Leeds, and The University of Texas at Dallas. This data includes arXiv telecom papers, 3GPP standards, GSMA documents, IETF RFCs, industry whitepapers, and O-RAN specifications.
  • RAG Optimization: It is designed to power end-to-end Retrieval-Augmented Generation (RAG) pipelines. The model is optimized for context-grounded generation and includes abstention training, so it declines to answer when the retrieved context is insufficient, reducing hallucination.
  • Open-Source Initiative: Part of a broader ecosystem including OTel-Embedding and OTel-Reranker models, all intended to work synergistically for comprehensive telecom information retrieval and generation.
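To make the context-grounded generation and abstention behavior concrete, here is a minimal sketch of how retrieved passages might be assembled into a prompt for the model. The prompt template and abstention wording are assumptions for illustration; the model card's actual chat template should be consulted for production use.

```python
# Hypothetical prompt template for context-grounded generation with
# abstention. The exact format expected by OTel-LLM-20B-Reasoning is an
# assumption, not taken from the model card.

def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Assemble a RAG prompt that instructs the model to answer only
    from the retrieved context, and to abstain otherwise."""
    # Number each retrieved passage so answers can cite them.
    context = "\n\n".join(
        f"[{i + 1}] {p.strip()}" for i, p in enumerate(passages)
    )
    return (
        "You are a telecommunications assistant. Answer ONLY from the "
        "context below. If the context is insufficient, reply: "
        '"I cannot answer from the provided context."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is the maximum channel bandwidth of a 5G NR carrier in FR1?",
    ["3GPP TS 38.104 specifies a maximum channel bandwidth of 100 MHz in FR1."],
)
```

In a full pipeline, the resulting `prompt` string would be passed to the model's tokenizer and generation call; the abstention instruction complements the model's abstention training rather than replacing it.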

Intended Use Cases

  • Context-Grounded Response Generation: Generating accurate answers based on retrieved telecommunications specifications, standards, and documentation.
  • Integration into RAG Pipelines: Serving as the generative component within a RAG system, complementing embedding and reranker models for enhanced information retrieval in telecom applications.
  • Specialized Telecom Applications: Ideal for tasks requiring deep understanding and generation within the telecommunications industry, leveraging its specialized training data.
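The pipeline described above can be sketched end to end. In this illustrative example, a toy bag-of-words scorer stands in for the OTel-Embedding and OTel-Reranker models (whose real interfaces are not shown here), and the top-ranked passages would then feed the prompt given to OTel-LLM-20B-Reasoning.

```python
# Minimal sketch of the retrieval side of a telecom RAG pipeline. The
# scoring function below is a toy placeholder for the OTel-Embedding and
# OTel-Reranker stages, used only to show how components fit together.

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query terms present in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by the toy score; a real pipeline
    would embed, retrieve, then rerank with the OTel models."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

corpus = [
    "O-RAN splits the RAN into O-CU, O-DU and O-RU components.",
    "IETF RFCs define core internet protocols such as TCP and HTTP.",
    "3GPP standards specify the 5G NR air interface.",
]
top = retrieve("What are the components of the O-RAN architecture?", corpus)
```

Swapping the toy scorer for dense embeddings plus a cross-encoder reranker is the step the OTel-Embedding and OTel-Reranker models are intended to fill, with the retrieved passages then grounding the generation step.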