Prior to that, she obtained her CS PhD at Johns Hopkins University, advised by Prof. Alan Yuille. She previously interned at Meta AI (FAIR Labs) and Google Brain, and was selected as a 2023 Apple Scholar and an MIT EECS Rising Star.
I aim to build intelligent systems from first principles: systems that do not merely fit patterns or follow instructions, but gradually develop structure, abstraction, and behavior through learning itself.
I’m interested in how intelligence emerges: not from handcrafted pipelines or task-specific heuristics, but from exposure to behaviorally rich, minimally structured environments, where models must learn what to attend to, how to reason, and how to improve. This requires designing learning systems that are not narrowly optimized for a single goal, but that can self-organize and grow increasingly competent through interaction, experience, and computation.
I see scale as a tool, not the whole solution. Larger models open up more capacity, but what fills that capacity, and how it forms, matters just as much. My research explores how we can use scale to amplify the right signals: not just data quantity, but the structural richness of behavior and the dynamics of learning itself.
To that end, I focus on:
Understanding what makes behavior intelligent—especially when it’s easy for humans but hard for machines;
Designing systems that learn internal structure from raw behavioral input, without task scaffolds or dense supervision;
Creating conditions where models discover abstraction and reasoning, not because they are explicitly told to, but because learning leads them there.
I believe intelligence is not something we can fully define or supervise in advance; it must emerge over time, shaped by data, computation, and inductive processes inside the model. My work is an attempt to understand and enable that emergence.
Publications