14+ years advancing safe, responsible, and intelligent AI systems.
Specializing in AI Safety, Agentic AI,
Conversational AI, and Multimodal AI
at Intel Labs. Ph.D., Georgia Institute of Technology.
About
I am a Principal Engineer and Research Engineering Manager at Intel's AI Innovation Group, where I lead research across two interconnected frontiers: Agentic AI systems — including cost-aware planning, routing, and orchestration for heterogeneous inference environments — and AI Safety, with published research on efficient guardrail systems that make safe, responsible LLM deployment practical at scale. I also contribute to Intel's Responsible AI policy development and internal compliance programs.
Beyond Intel, I actively contribute to the MLCommons AI Risk & Reliability (AIRR) working group, co-authoring benchmarks and methodologies covering model security, jailbreak robustness, and agentic AI safety evaluation. My research spans the full spectrum of modern AI: from foundational NLP and dialog systems to LLM safety, bias detection, and enterprise AI adaptation for industrial environments.
I hold a Ph.D. in Computer Science from Georgia Institute of Technology, where my dissertation explored Socio-Semantic Conversational Information Access. Before Intel, I built healthcare AI at Siemens, worked on IBM's Watson (DeepQA) project, and co-founded a venture-backed healthcare AI startup. I serve on the program committees of ICML, NeurIPS, ICLR, ACL, EMNLP, and COLM.
Core Expertise
Experience
Leading pathfinding research on Agentic AI systems: cost-aware planning, routing, guardrails, and orchestration for heterogeneous inference systems. Core contributor to Intel's Responsible AI policy development and internal compliance programs.
Active contributor to LLM evaluation, model security (jailbreaks), and agentic benchmark development. Co-author of the AI Safety Benchmark v0.5 and the jailbreak robustness methodology preprint.
Led a team on LLM applications spanning Responsible AI and Agentic AI: domain adaptation for enterprise data, agentic analytics for semiconductor manufacturing sensor data, and multimodal task guidance for industrial smart manufacturing.
Led a globally distributed team of scientists and contractors across the US, Mexico, Germany, and Taiwan. Delivered projects in education (multimodal dialog systems), manufacturing (vision-language systems), collaboration (multimodal meeting assistance), and assistive computing. Managed university-funded research on Few-Shot Learning and Dialog Systems. Core member of Intel's Responsible AI Council.
Multimodal emotion understanding and dialog systems. Extended NLU and dialog management algorithms for the open-source Rasa platform. Led researchers and interns in a tech-lead capacity.
Developed the Cognitive Linguistics Information Platform featuring keyterm extraction, intent recognition, colloquial text normalization, knowledge-based missing information fulfillment, topic discovery, and sentiment analysis.
Healthcare decision support, text analytics, semantic search, ontology-based reasoning, and data mining for patient-physician information systems.
Founded a healthcare AI startup based on dissertation research with venture funding from Georgia Tech's VentureLab. Developed and deployed the Cobot Intelligent Assistant widget on a third-party platform.
Contributed to the Watson (DeepQA) project with the medical team. Built biomedical semantic search and relation extraction systems. Customized the Slot Grammar Parser for medical ontologies and ontology-based semantic distances for improved answer-type detection.
Research
Recent and representative work — spanning AI Safety, Agentic AI, LLMs, and Conversational AI. Full list on Google Scholar.
Patents
Program Committee & Community Service
Technical Skills
Thought Leadership
On AI Safety, bias in language models, and responsible development of intelligent systems.
Education
Connect
Open to conversations about AI Safety, Responsible AI, Agentic systems, research collaborations, and speaking opportunities.