OMNIGUARD: Unified Omni-Modal Guardrails with Deliberate Reasoning

Fudan University
University of California, Davis
Uniphore

Abstract

Omni-modal Large Language Models (OLLMs) that process text, images, videos, and audio introduce new challenges for safety and value guardrails in human-AI interaction. Prior guardrail research largely targets unimodal settings and typically frames safeguarding as binary classification, which limits robustness across diverse modalities and tasks. To address this gap, we propose OmniGuard, the first family of omni-modal guardrails that performs safeguarding across all modalities with deliberate reasoning ability. To support the training of OmniGuard, we curate a large, comprehensive omni-modal safety dataset comprising over 210K diverse samples, whose inputs span all modalities through both unimodal and cross-modal examples. Each sample is annotated with structured safety labels and carefully curated safety critiques distilled from expert models. Extensive experiments on 15 benchmarks show that OmniGuard achieves strong effectiveness and generalization across a wide range of multimodal safety scenarios. Importantly, OmniGuard provides a unified framework that enforces policies and mitigates risks across all modalities, paving the way toward building more robust and capable omni-modal safeguarding systems.

BibTeX

@misc{zhu2025omniguardunifiedomnimodalguardrails,
  title={OmniGuard: Unified Omni-Modal Guardrails with Deliberate Reasoning},
  author={Boyu Zhu and Xiaofei Wen and Wenjie Jacky Mo and Tinghui Zhu and Yanan Xie and Peng Qi and Muhao Chen},
  year={2025},
  eprint={2512.02306},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.02306},
}