Mind the Value-Action Gap: Do LLMs Act in Alignment with Their Values?
arXiv, 2025
Humans often act in ways that are inconsistent with their values. We hypothesize that this is also true for LLMs.
arXiv, 2024
A novel uncertainty quantification method for LLMs.
AAAI Spring Symposium, 2024
Do social media users realize that they are interacting with bots? Short answer: probably not.
Computers & Security, 2024
Which web browser has the best default privacy settings?
RAISE Winter Exposition, 2025
Translating classical problems in epistemology to the setting of large language models.