Davos World Economic Forum and the Question We’re Not Asking Enough

Just Horizons Executive Director Janet Kang spent last week in Davos in a series of conversations that all circled the same underlying tension: AI is moving faster than our ability to understand its consequences.

At the World Economic Forum, she moderated a panel at the Invest Philippines Pavilion on AI in youth education, part of the country’s first-ever presence at Davos. The discussion focused on how AI can expand access and opportunity for young people, while also confronting the realities of implementing these systems safely and responsibly.

Congressman Brian Poe Llamanzares spoke directly to that challenge, not at the level of principle but in the details: policy design, tradeoffs, and the practical difficulty of advancing AI in education without introducing new risks.

Outside the panel, conversations with Tristan Harris, Rebecca Winthrop, and Yann LeCun reinforced how fragmented our understanding still is.

Tristan Harris’s work has long focused on how digital systems shape human behavior at scale. His concern is not whether these systems work, but what they optimize for, and how quickly those incentives compound before we have mechanisms to evaluate their downstream effects.

Rebecca Winthrop’s research at the Brookings Institution looks more directly at learning. Her team’s recent work raises a quieter but more fundamental question: what happens when students begin to offload core cognitive tasks to AI? She described the risk as “cognitive stunting” – a shift in how reasoning develops when thinking is outsourced too early.

Yann LeCun, one of the “godfathers of AI,” framed the issue from a different angle; his current work pushes toward more autonomous, world-model-based AI systems. As capabilities advance, the gap between what systems can do and what we can measure becomes more pronounced. We are accelerating adoption without a shared framework for understanding consequences.

A few different domains, but the same pattern.

There is growing confidence in narrow AI tools designed for specific outcomes. But the broader, more generative systems – the ones being deployed at scale – are outpacing our ability to evaluate how they shape behavior, learning, and decision-making over time.

This is the question at the center of Just Horizons’ work. Not whether AI should move forward, but whether we understand what we are building, and what it will do once it is in the world.

The value of the week was not in the panels, but in the one-on-one conversations. The same question kept resurfacing, regardless of sector or perspective: how do we understand the consequences of AI as it is being adopted at this pace?
