Ongoing

Evaluation of Personalization in Large Language Models
Historically, much attention has been given to accuracy in ML models, including large and small (neural) language models (LLMs/SLMs). In recent years, other aspects such as fluency, factuality, coherence, and consistency have also been explored. However, another important facet of “intelligence” is the ability to personalize: to handle situations where the expected response is inherently subjective, depending on the user’s profile and how that profile evolves over time. This project addresses the lack of proper evaluation measures and systematic probing techniques for assessing the degree of personalization in modern SOTA LLMs/SLMs.
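As a minimal sketch of what one such probe might look like, the Python snippet below scores personalization as the pairwise divergence between a model's responses to the same query under different user profiles. All names here (personalization_score, the prompt template, the Jaccard-based divergence) are illustrative assumptions, not the project's actual framework; any LLM/SLM wrapped as a text-in/text-out callable can be plugged in, and the lexical distance is a stand-in for a more careful semantic measure.

from typing import Callable, List

# Illustrative interfaces: a profile is free text, and `generate` is any
# callable wrapping an LLM/SLM (not a specific vendor API).
Profile = str
Generate = Callable[[str], str]  # prompt -> response

def jaccard_distance(a: str, b: str) -> float:
    """Token-level Jaccard distance between two responses (a crude
    lexical proxy for semantic divergence)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def personalization_score(generate: Generate,
                          query: str,
                          profiles: List[Profile]) -> float:
    """Mean pairwise divergence of responses to the same query when the
    prompt is conditioned on different user profiles. A score of 0.0
    means the model ignores the profile entirely; higher values mean
    the responses adapt more to the stated profile."""
    responses = [generate(f"User profile: {p}\nQuery: {query}")
                 for p in profiles]
    dists, n = [], len(responses)
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(jaccard_distance(responses[i], responses[j]))
    return sum(dists) / len(dists) if dists else 0.0

A real probe would of course need to separate desirable personalization from mere response variance, e.g. by comparing against a no-profile baseline across repeated samples.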

Threat Analysis in Streaming Fake News on Social Media
There have been ample studies on fake news detection and virality prediction. While that is an important line of research, little work exists on assessing the extent to which a piece of fake news is actually threatening, since clearly not all fake news poses a threat. Quantifying the degree of threat in content is crucial for relevant stakeholder agencies to take necessary action before the news goes viral. This project aims to: (i) formalize an automated, quantitative measurement framework for evaluating threat-prediction models, and (ii) design robust and reliable threat analyzers that work in tandem with fake news detectors.
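To make aim (ii) concrete, the sketch below shows one hypothetical way a threat analyzer could sit in tandem with a fake news detector over a stream of posts: the (cheaper) detector gates which posts reach the threat analyzer, and only high-threat, likely-fake posts are surfaced as alerts. The names (Alert, triage_stream) and the fixed thresholds are placeholder assumptions; either component can be any trained model exposed as a scoring callable.

from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

# Illustrative interfaces only: these stand in for any trained
# fake-news detector and threat analyzer, respectively.
FakeDetector = Callable[[str], float]   # post text -> P(fake)
ThreatScorer = Callable[[str], float]   # post text -> threat score in [0, 1]

@dataclass
class Alert:
    post: str
    fake_prob: float
    threat: float

def triage_stream(posts: Iterable[str],
                  fake_prob: FakeDetector,
                  threat_score: ThreatScorer,
                  fake_threshold: float = 0.5,
                  threat_threshold: float = 0.7) -> Iterator[Alert]:
    """Run the threat analyzer only on posts the detector flags as
    likely fake, and surface those whose threat score crosses a
    threshold -- encoding the premise that not all fake news is
    threatening."""
    for post in posts:
        p = fake_prob(post)
        if p < fake_threshold:
            continue  # likely genuine; skip threat analysis
        t = threat_score(post)
        if t >= threat_threshold:
            yield Alert(post=post, fake_prob=p, threat=t)

The staged design is one plausible choice for a streaming setting, where running the threat analyzer on every post would be wasteful; a joint model scoring both signals at once is an equally valid alternative.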

Automated Reasoning in Language Models

Design of Small Language Models