Secure AI

AI and machine learning systems are increasingly deployed in cloud environments where they face threats to both the confidentiality of models and data, and the integrity of their computations. This project focuses on securing AI systems through a systems perspective — combining trusted execution environments (TEEs), hardware-based isolation, and principled security architectures.

Our work spans several complementary directions. We developed practical frameworks for privacy-preserving machine learning on Intel SGX, enabling unmodified PyTorch applications to run with encrypted models and data in untrusted clouds, and extended this approach to foundation models, demonstrating less than 10% overhead for full Llama 2 inference pipelines inside Intel SGX and TDX enclaves. We systematically analyzed the threat landscape of compound AI systems (multi-component pipelines that combine foundation models with retrieval, tool use, and agents), identifying how software-hardware attack gadgets can be composed for adversarial threat amplification. To address model supply-chain integrity, we developed techniques for verifying model integrity and accuracy within trusted execution environments, and proposed endorsement services that enable dynamic discovery and attestation of trusted AI services.
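The core pattern behind running encrypted models in an untrusted cloud is attestation-gated key release: the model owner encrypts the weights before upload, and a key-provisioning service releases the decryption key only to an enclave whose attested measurement matches an expected value. The sketch below illustrates that flow with toy primitives; all names (`release_key`, `enclave_load_model`, the SHA-256 keystream standing in for AES-GCM) are illustrative assumptions, not the project's actual API.

```python
# Toy sketch of attestation-gated key release for TEE-based ML.
# A real deployment would use SGX/TDX quotes and AES-GCM; here the
# "measurement" is a hash and the cipher is a SHA-256 keystream (toy only).
import hashlib
import hmac
import os

# Expected enclave measurement (stand-in for MRENCLAVE of a trusted build).
EXPECTED_MRENCLAVE = hashlib.sha256(b"trusted-enclave-build").digest()

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """SHA-256 counter-mode keystream: a placeholder for a real cipher."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes):
    """Model owner encrypts the weights before uploading to the cloud."""
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def release_key(quote_measurement: bytes, model_key: bytes) -> bytes:
    """Key-provisioning service: release the key only on a matching quote."""
    if not hmac.compare_digest(quote_measurement, EXPECTED_MRENCLAVE):
        raise PermissionError("attestation failed: unexpected enclave measurement")
    return model_key

def enclave_load_model(nonce: bytes, ct: bytes, tag: bytes, key: bytes) -> bytes:
    """Inside the enclave: verify integrity, then decrypt weights in memory."""
    expected_tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_tag, tag):
        raise ValueError("ciphertext integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Model owner side: encrypt weights; the cloud never sees model_key.
model_key = os.urandom(32)
weights = b"\x00\x01\x02 model weights blob"
nonce, ct, tag = encrypt(model_key, weights)

# Enclave side: attest, obtain the key, decrypt in enclave memory.
key = release_key(EXPECTED_MRENCLAVE, model_key)
decrypted = enclave_load_model(nonce, ct, tag, key)
```

A quote from an enclave with the wrong measurement never receives the key, so the cloud operator only ever handles ciphertext.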

Publications

Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems. arXiv, 2026.

PDF

SoK: A Systems Perspective on Compound AI Threats and Countermeasures. arXiv, 2024.

PDF

Fortify Your Foundations: Practical Privacy and Security for Foundation Model Deployments in the Cloud. arXiv, 2024.

PDF

Methods and Apparatus to Verify the Integrity of a Model. US Patent App. 18/676,413, 2024.

PDF

Artificial Intelligence Model Accuracy Validation. US Patent App. 18/665,188, 2024.

PDF

Privacy-Preserving Machine Learning in Untrusted Clouds Made Simple. arXiv, 2020.

PDF