Overview

Selected funded projects grouped by agency. Work spans neuro-symbolic AI, probabilistic inference, explainability, and robust decision-making, with applications to vision and task guidance.

DARPA

Perceptually-enabled Task Guidance (PTG)

Summary: Neuro-symbolic dynamic probabilistic models for structured task representation and real-time guidance in complex physical procedures.

  • Institution: Center for Machine Learning, The University of Texas at Dallas (UTD)
  • Role: Research Assistant
  • Timeline: Aug 2021 – May 2025
  • Focus: Structured task representation and reasoning for real-time assistance in complex physical tasks.
  • Contributions:
    • Advanced neuro-symbolic dynamic models combining structured reasoning with deep learning
    • Improved user performance in guided procedures, reducing error rates while helping users extend their skillsets
    • Built robust pipelines for perception, inference, and feedback loops
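The perception–inference–feedback pipeline mentioned above can be illustrated at a high level. The sketch below is a toy stand-in, not the project's actual system: the component names (`perceive`, `infer`, `feedback`) and the fixed likelihood tables are hypothetical placeholders for the neuro-symbolic dynamic probabilistic models described in the summary.

```python
# Minimal sketch of a perception -> inference -> feedback loop for task
# guidance. All components here are hypothetical stand-ins: the real system
# uses neuro-symbolic dynamic probabilistic models, not these toy rules.
from dataclasses import dataclass, field


@dataclass
class TaskState:
    """Belief over which step of a procedure the user is currently in."""
    step_probs: dict = field(
        default_factory=lambda: {"prep": 0.5, "mix": 0.3, "done": 0.2}
    )


def perceive(frame: int) -> dict:
    """Stand-in for a vision model: maps a video frame to an observed action."""
    return {"action": "stir" if frame % 2 == 0 else "pour"}


def infer(state: TaskState, obs: dict) -> TaskState:
    """Stand-in for probabilistic inference: reweight the step beliefs by how
    well each step explains the observed action, then renormalize."""
    if obs["action"] == "stir":
        likelihood = {"prep": 0.2, "mix": 0.7, "done": 0.1}
    else:
        likelihood = {"prep": 0.6, "mix": 0.3, "done": 0.1}
    unnorm = {s: p * likelihood[s] for s, p in state.step_probs.items()}
    z = sum(unnorm.values())
    state.step_probs = {s: p / z for s, p in unnorm.items()}
    return state


def feedback(state: TaskState) -> str:
    """Guidance: surface the most likely current step to the user."""
    return max(state.step_probs, key=state.step_probs.get)


state = TaskState()
for frame in range(4):  # stand-in for frames from a video stream
    state = infer(state, perceive(frame))
print(feedback(state))  # most likely step after four observations
```

In the funded work, a loop of this shape runs in real time over streaming video, with the inference step carried out by learned dynamic probabilistic models rather than a fixed likelihood table.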

Explainable Artificial Intelligence (XAI)

Summary: Interpretable AI systems that preserve predictive performance while providing faithful, human-understandable rationales.

  • Institution: Center for Machine Learning, UTD
  • Role: Research Assistant
  • Timeline: Aug 2020 – Aug 2021
  • Focus: Interpretable modeling for decision support, balancing predictive accuracy with faithful explanations.
  • Contributions:
    • Delivered high-performance explainable models without sacrificing accuracy
    • Designed methods to increase transparency and user trust for decision support

Assured Neuro Symbolic Learning and Reasoning (ANSR)

Summary: Secure and reliable neuro-symbolic learning with a focus on robustness and assurance.

  • Institution: Center for Machine Learning, UTD
  • Role: Research Assistant
  • Timeline: Aug 2023 – May 2025
  • Focus: Secure, reliable neuro-symbolic learning with formal guarantees.
  • Contributions:
    • Engineered hybrid AI algorithms integrating symbolic reasoning with data-driven learning
    • Emphasized robustness, assurance, and trustworthy deployment

NSF – National Science Foundation

Summary: AI/ML methodology projects spanning probabilistic inference and interpretable modeling.

  • Institution: Center for Machine Learning, UTD
  • Role: Research Assistant
  • Focus: AI/ML methodology spanning probabilistic inference and interpretable modeling
  • Contributions:
    • Co-developed algorithms for scalable inference in graphical models
    • Supported publications recognized with best paper awards and spotlight/oral presentations at NeurIPS and AAAI

AFOSR – Air Force Office of Scientific Research

Summary: Robust, explainable AI for safety-critical applications.

  • Institution: Center for Machine Learning, UTD
  • Role: Research Assistant
  • Focus: Robust, explainable AI for safety-critical use cases
  • Contributions:
    • Built interpretable pipelines and validated performance against baselines
    • Emphasized reliability and deployment considerations

Recognition & Impact

  • Best Paper Awards; spotlight and oral presentations (NeurIPS, AAAI)
  • Real-time inference algorithms for probabilistic models, improving both accuracy and efficiency
  • Practical systems combining reasoning with perception for video understanding and task guidance