hi! i'm Yiling. i'm a first-year phd student in the philosophy department at umass amherst, working with Hilary Kornblith. i'm also pursuing a graduate certificate in cognitive science.
at a high level, i'm broadly interested in both the human comprehension process and computational modeling of reasoning. i focus on the structural conditions that enable concept formation, reasoning, and belief in both humans and artificial systems. i aim to uncover the foundations that make understanding and reasoning possible, and ultimately to build a computational model capable of genuinely simulating human intelligence in its structural depth. to this end, i'm building a structural ontology of rational cognition, which seeks to identify the minimal internal conditions under which understanding, reasoning, and belief attribution become possible; specific descriptions of these questions appear in the slider below. along the way, i have studied philosophy, formal logic, linguistics, psychology, and machine learning extensively, and i work at their intersection.
previously, i was a graduate student in the philosophy department at the university of missouri–st. louis, working with Gualtiero Piccinini, where i studied philosophy and machine learning. before that, i studied linguistics and philosophy at washington university in st. louis.
i'm always happy to connect with like-minded people; feel free to reach out any time.
Structural Foundations of Concept Formation and Reasoning
this strand asks: (1) does the transition from perception to feature encoding presuppose a structural condition, and what kinds of internal organization make a representation cognitively usable? (2) does reasoning require a minimal cognitive unit with specific structural properties, something beyond associative labels, to support concept formation and inference? (3) how do different modes of representation acquisition shape the emergence of concepts, and what consequences do they carry for understanding and reasoning?
Structural Limits of Current AI
this strand asks: (1) why do current AI systems succeed across a wide range of tasks yet consistently fail at formal reasoning, particularly deductive inference, and does this point to a deeper structural bottleneck in their architecture? (2) why do AI systems built on Bayesian architectures struggle to integrate inductive and deductive reasoning, and does this limitation reflect a deeper incompatibility in their underlying representational assumptions? (3) what kinds of reasoning are we modeling when we train statistical systems, and how might our modeling assumptions constrain the very notion of intelligence we aim to build?
Reconstructing Rational Minds
this strand asks: (1) what distinguishes a system that simulates inference from one that understands what it is doing, and can such understanding be defined structurally rather than phenomenologically? (2) what structural features must be present for a system to hold beliefs rather than merely process data? (3) what distinguishes a genuinely rational mind from a statistical processor, and can rationality be defined by internal structure rather than behavioral output?
Testing and Computational Modeling
this part of my research serves two purposes: (1) to provide empirical support for theoretical claims by designing targeted behavioral tests, and (2) to explore whether we can extract the shared structural foundations across different types of inference (deductive, abductive, causal) and build a unified computational framework that models reasoning as a generative capacity.
Methods
the research methods I am currently using include: (1) philosophical analysis of LLMs, to better understand the structural conditions of human intelligence, particularly how artificial systems can reveal the nature of representation, reasoning, and conceptual understanding in the human mind; and (2) formal methods in logic and epistemology, to clarify the conditions under which representations support inference and to analyze the structural properties of belief, reasoning, and rational agency. In addition, I am interested in incorporating tools from: (1) psychological experimentation, to evaluate the structural capacities of LLMs through behavioral diagnostics (a rough sketch of what such a diagnostic might look like appears below); (2) computational modeling, especially building architectures that reflect the structural requirements of reasoning; and (3) reinforcement learning frameworks, as a way to explore the learning dynamics behind representation acquisition and inference stabilization. My long-term aim is to synthesize these approaches into a unified computational theory of reasoning and to build a computational model that can faithfully simulate the structural dynamics of human reasoning.
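as a rough illustration, and purely as a sketch rather than an actual experiment, a behavioral diagnostic for deductive inference could look like the following, where `query_model` and the toy syllogism set are hypothetical placeholders:

```python
# a minimal sketch of a behavioral diagnostic for deductive inference.
# `query_model` is a hypothetical placeholder: swap in any model interface.

SYLLOGISMS = [
    # (premise 1, premise 2, conclusion, is the inference valid?)
    ("All philosophers are mortal.", "Socrates is a philosopher.",
     "Socrates is mortal.", True),
    ("All cats are animals.", "Some animals are black.",
     "Some cats are black.", False),  # invalid: undistributed middle
]

def query_model(prompt: str) -> str:
    """hypothetical stand-in for a model call; here, a trivial baseline
    that always answers 'valid'."""
    return "valid"

def run_diagnostic() -> float:
    """score the model on whether it classifies each inference correctly."""
    correct = 0
    for p1, p2, conclusion, is_valid in SYLLOGISMS:
        prompt = (f"Premise 1: {p1}\nPremise 2: {p2}\n"
                  f"Conclusion: {conclusion}\n"
                  "Does the conclusion follow deductively? "
                  "Answer 'valid' or 'invalid'.")
        answer = query_model(prompt).strip().lower()
        correct += (answer == "valid") == is_valid
    return correct / len(SYLLOGISMS)

if __name__ == "__main__":
    print(f"accuracy: {run_diagnostic():.2f}")
```

the point of such a harness is not the baseline itself but the contrast it makes measurable: any model interface can be swapped into `query_model`, and systematic failure on invalid forms would bear directly on the structural-bottleneck questions above.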