Halima Bouzidi
GitHub Google Scholar X LinkedIn CV
Hi! I’m Halima 👋
🎯 I’m on the job market for R&D roles in 2026!
🔬 Postdoctoral Scholar at the Embedded & Cyber-Physical Systems (AICPS) Lab at UCI
🌟 Previously: Research Fellow in Trustworthy AI at Queen’s University Belfast, UK
🚀 Research Focus
I’m a Machine Learning and Security researcher focused on building AI systems that are secure, efficient, and aligned with real-world deployment constraints. I develop adversarial security tools and trustworthy AI methods for auditing and strengthening the robustness of ML systems so that they behave safely and reliably in deployment. My expertise spans Secure and Trustworthy AI, Adversarial Machine Learning, and Efficient AI Deployment.
My research lives at the exciting intersection of:
- 🛡️ Secure Machine Learning: Adversarial Attacks and Defenses, Robustness Evaluation, Red-teaming of ML Systems, AI for Security.
- 🤖 Efficient Machine Learning: Hardware-aware Neural Architecture Search (NAS), Deep Neural Networks, Edge AI, Energy-efficiency.
- 🎯 Hardware-Software Co-Design: Sensor-ML Co-Design for Security, Hardware-Software Co-Design for Efficiency, Multi-Objective Optimization.
News
| Date | News |
|---|---|
| Nov 24, 2025 | ✅🔐 Our paper “Adversarially Evasive Hardware Trojans via Approximate Designs” has been accepted to the Asian Hardware Oriented Security and Trust Symposium (AsianHOST 2025)! |
| Oct 10, 2025 | ✈️ Pleased to receive WiML 2025 travel funding to present my research at the WiML workshop @ NeurIPS! |
| Oct 08, 2025 | 🐭🔊 Tech press alert: The Register breaks down our latest Mic-E-Mouse research. Check out the full article here. |
| Sep 30, 2025 | 🔗 Excited to serve on the Program Committee for the ACM Conference on Computer and Communications Security (CCS 2026)! |
| Sep 29, 2025 | ✅🙈 Our paper “See No Evil: Adversarial Attacks Against Linguistic-Visual Association in Referring Multi-Object Tracking Systems” has been accepted to the Reliable ML from Unreliable Data workshop @ NeurIPS 2025 (arXiv). See you in San Diego! |