Tristan Harris is widely regarded as one of the most important voices in technology ethics today. A former design ethicist at Google, he co-founded the Center for Humane Technology, where he leads efforts to realign technology with humanity’s best interests. Featured in the Netflix documentary The Social Dilemma, Harris has become a leading advocate for responsible digital innovation, warning of the unintended consequences of persuasive design and algorithm-driven platforms.
Named to TIME’s “100 Most Influential People in AI,” Harris works at the intersection of technology, society, and ethics. He advises governments, corporations, and international organizations on policies that encourage digital responsibility while ensuring that innovation continues to thrive. His work has influenced debates on regulation, platform accountability, and the social impact of artificial intelligence.
As a speaker, Harris is known for delivering engaging, urgent, and thought-provoking talks that resonate with audiences across industries. He not only diagnoses the challenges of the digital age but also offers frameworks for creating more humane technology ecosystems. His message empowers leaders to think critically about design, responsibility, and the role of technology in shaping the future of society.
Tristan Harris is a globally recognized thought leader and advocate for ethical technology design. As the co-founder and executive director of the Center for Humane Technology, he has dedicated his career to addressing the pervasive influence of technology on human behavior and society.
Frequently referred to as the “closest thing Silicon Valley has to a conscience,” Harris has captivated audiences worldwide with his compelling insights into how technology shapes our lives. Drawing on his tenure as a design ethicist at Google, he has redefined the global discourse on technology’s role in humanity’s future.
Tristan Harris’s expertise lies at the intersection of technology, ethics, and human behavior. With a background in computer science and design, he has spent years examining the ethical dilemmas posed by modern digital platforms. A graduate of Stanford University, where he studied computer science and human-computer interaction, Harris was mentored by behavior scientist BJ Fogg and concentrated on the persuasive power of technology.
Harris’s in-depth understanding of behavioral psychology and technology design makes him a uniquely qualified expert. His tenure at Google as a design ethicist allowed him to scrutinize the mechanisms driving user engagement and explore their societal impact. By identifying the manipulative techniques employed by digital platforms, Harris has become a leading voice advocating for responsible innovation and systemic reform in the tech industry.
Tristan Harris is a visionary leader in ethical technology design whose expertise and credibility make him a sought-after keynote speaker. With a background rooted in computer science and behavioral psychology, he brings unparalleled insight into the challenges and opportunities of the digital age. Harris’s advocacy for systemic reform has shaped global conversations, influencing policymakers, industry leaders, and individuals alike.
A Nobel Prize Summit keynote on why social media was humanity’s first contact with AI, how LLMs raise the stakes, and what it takes to bind godlike tech with wisdom.
Tristan Harris opens by zooming out from misinformation to the broader, cumulative effects of technology on human minds and institutions. Invoking E. O. Wilson’s line about Paleolithic emotions, medieval institutions, and godlike technology, he argues that our brains and governance are being outpaced by AI systems. He reframes social media as humanity’s first contact with AI: every swipe activates a supercomputer that predicts the next perfect stimulus from billions of data points. That first contact produced information overload, doom-scrolling, loneliness, polarization, deepfakes, and a breakdown of shared reality. While platforms promised connection and relevance, the engagement business model rewarded addiction to attention and instant virality. Design choices like pull-to-refresh, likes, beautification filters, and one-click resharing created a race to inflate status and amplify extremes.
With large language models, he warns of a looming second contact with AI before the harms of the first have been repaired. AI will compound existing risks, from misinformation to synthetic biology. Concrete examples include viral AI-generated hoaxes, such as the fake Pentagon-explosion image that briefly rattled markets, illustrating how cheaply trust can be hacked at scale. Solutions, he argues, require more than content moderation: we need a wiser information ecosystem and upgraded institutions that can handle diffuse, long-horizon harms, not just acute incidents.
His roadmap: embrace how human brains really work by designing for belonging and offline community, upgrade institutions to govern chronic harms and set liability that matches systemic risks, and bind race dynamics so companies do not win by externalizing harm. You cannot wield godlike powers without the wisdom, love, and prudence of gods, he concludes. The call is to realign technology, incentives, and governance so sense-making and choice-making keep pace with accelerating power.
00:00 Welcome and purpose: zooming out from misinformation to tech’s collective effects
00:38 E. O. Wilson’s frame: Paleolithic emotions, medieval institutions, godlike tech
01:45 Alignment problem beyond AI systems: aligning brains and institutions
03:05 Social Dilemma context and first contact with AI via recommender systems
05:10 Symptoms of misalignment: overload, doom-scrolling, loneliness, polarization, deepfakes
07:05 Why we lost to engagement: free services, attention sales, slot-machine UX
09:15 Addiction to attention and the race to inflate reach and status
11:05 Filters and instant reshare as accelerants; race to the bottom of the brain stem
13:10 Funhouse mirror: extreme voices overrepresented vs moderates
15:05 Case example of viral distortion vs low-traction corrections
16:20 Second contact with AI: LLMs will supercharge existing risks
18:05 Cheap synthesis at scale: fake crisis images and market impact
19:30 Complexity exceeds institutions; our response capacity lags
21:00 From fact-checking to wisdom: redesigning sense-making and choice-making
22:20 Embrace human needs: belonging, offline community, humane design
24:05 Upgrade institutions for chronic, cumulative harms
25:30 Bind race dynamics and bad games, not just bad actors
27:10 Kids and AI companions, incentive traps in product strategy
28:40 Governance metaphors: biosafety levels for AI capabilities
30:10 Principle: limit power to the level of wisdom and responsibility
31:40 Re-imagining platforms: ranking for local community and trust
33:00 Policy and product levers leaders can use now
35:20 Closing maxim: you cannot have the power of gods without the wisdom of gods
37:37 End of keynote.
A landmark keynote outlining how tech platforms exploit human frailties, why that’s an existential civilizational risk, and how humane design can restore agency.
Tristan Harris opens by naming the root of tech’s scandals and grievances: an extractive attention economy that hijacks human frailties. Using E. O. Wilson’s line about “Paleolithic emotions, medieval institutions, and godlike technology,” he locates the critical inflection point not where machines overwhelm human strengths, but where they overwhelm human weaknesses. Drawing on his background as a magician and Google design ethicist, Harris explains how persuasive design exploits attention, validation-seeking, and social proof. Examples include slot-machine mechanics in phones and apps, YouTube’s recommender system keeping users in trance states, and Facebook group recommendations fueling conspiracy ecosystems.
He introduces the concept of human downgrading: the systemic erosion of attention spans, free will, mental health, civility, and trust. Evidence includes teen depression trends, polarization metrics, and how recommender AIs tilt content toward extremity (“Crazytown”) to maximize engagement. Harris shows how platforms built predictive “voodoo dolls” of users, making persuasion inevitable without needing microphones or data theft. Deepfakes and synthetic media amplify the problem, overwhelming human judgment.
Band-aid fixes like grayscale screens, notification tweaks, blockchain, or ethics training are insufficient. The solution requires systemic change: shifting from artificial to humane social systems, from overwhelming AI to fiduciary AI aligned with human limits, and from extractive to regenerative incentives. He demonstrates how humane design can wrap around human physiology, cognition, and social needs, such as designing for trust, belonging, and small-group conversation rather than polarization.
Harris closes with urgency and hope. Just as shared language like “time well spent” spurred Apple, Google, and YouTube to adopt well-being features, a new shared agenda can spark a “race to the top” in humane tech. He urges leaders, workers, journalists, policymakers, investors, and entrepreneurs to unite behind this civilizational moment: to embrace our Paleolithic emotions, upgrade medieval institutions, and wield godlike technology with wisdom.
00:00 Welcome and purpose: unify understanding of tech’s harms
02:15 E. O. Wilson’s problem statement: emotions, institutions, godlike tech
03:45 From magician to ethicist: exploiting human frailties at scale
06:20 The overlooked inflection point: machines overwhelming weaknesses
09:10 Slot machines for attention: phones, Tinder, email, social proof
12:30 From attention capture to addiction to seeking attention
15:00 Human downgrading: attention, mental health, civility, truth
18:00 Evidence: teen depression, polarization, outrage culture
21:30 YouTube and Facebook algorithms steering users into extremity
26:00 Predictive voodoo dolls: AI out-predicting human nature
29:10 Free will colonized, beliefs downgraded, conspiracy loops
33:00 Global harms: Burma, languages engineers don’t speak
36:40 The insufficiency of band-aids: grayscale, blockchain, ethics class
40:20 Full-stack human design: physiology, attention, social ergonomics
45:15 Case studies: loneliness, polarization, trust, common ground
50:00 Three levers: humane social systems, humane AI, regenerative incentives
55:00 Shared language and the Time Well Spent movement as proof of change
58:20 A civilizational moment: the end of human agency or a race to the top
A concise, high-impact preview of the Netflix documentary that exposes how engagement-driven platforms track behavior, manipulate attention, and reshape beliefs at population scale.
The trailer opens by revealing that Google’s autocomplete and platform feeds differ by user, location, and inferred interests. It reframes this as a design choice, not an accident. Former leaders and designers from Facebook, Pinterest, Google, Twitter, and Instagram explain that product teams intentionally leverage psychology to maximize engagement. The montage links hearts, likes, and thumbs up to shallow validation and misinformation, while on-camera experts describe measurable harms: anxiety and depression, polarization, and loss of shared reality.
Key claims highlight that social platforms can influence offline behavior without user awareness and that falsehoods travel faster than truth. The voiceover warns that when everyone has “their own facts,” society fragments. The trailer escalates to a governance-level warning: if you wanted to control a population, a platform like Facebook is an unprecedented tool. It closes with the filmmakers’ thesis that creators have a responsibility to change course and that failing to rebalance incentives could be “checkmate on humanity.”
The official Netflix trailer for the 2020 feature documentary.
00:00 Search personalization hook. “Climate change is…” differs by user. Surveillance and tracking are by design.
00:10 Insiders appear. Ex-leaders and designers from major platforms introduce the core conflict.
00:25 “Using your psychology against you.” Like buttons and streak mechanics as engagement levers.
00:38 Validation montage. Hearts and thumbs up conflate attention with truth and self-worth.
00:50 Mental health line. Rising anxiety and depression, especially among youth, flagged as systemic.
01:00 Behavior manipulation. Platforms can shift emotions and actions without user awareness.
01:10 Virality and falsehood. “Fake news spreads six times faster than truth.” Consequences for shared reality.
01:23 Polarization beat. When facts fragment, common ground collapses.
01:33 Real-world stakes. “If you want to control a population…” platforms amplify influence at scale.
01:45 Responsibility turn. Creators acknowledge duty to fix what they built.
01:58 Call to action. Warns of chaos, loneliness, election hacking, and lost focus on real issues.
02:12 Final warning. “Checkmate on humanity.”
02:21 End of trailer.
TED Talk by design ethicist Tristan Harris, who argues that technology hijacks attention like slot machines and calls for a redesign of digital systems to prioritize “time well spent.”
Harris opens with a personal reflection on time slipping away through compulsive checking of email, feeds, and notifications. He compares phones and apps to slot machines, engineered with variable rewards that exploit human psychology. Even knowing this, he admits it’s hard to resist.
He highlights the cost of constant interruptions: research shows each distraction takes ~23 minutes to recover from, and frequent external pings condition us to self-interrupt. Harris demonstrates how design could restore choice—for example, chat systems that let someone “focus” while still allowing urgent messages through.
He urges designers to upgrade goals from surface-level metrics (e.g., ease of sending a message) to deeper human values (e.g., quality communication). Examples include Couchsurfing, which measured its success in the net positive hours of meaningful experience it created, a metric it called “net orchestrated conviviality.”
Harris imagines a broader system: social networks tracking meaningful connections, dating apps optimizing for fulfilling relationships, and professional networks focusing on job outcomes—not just engagement. He suggests labels like “organic” or LEED certification could inspire a new category of humane tech, prioritized in app stores and browsers.
He concludes with a call to action: leaders must adopt new metrics, designers should embrace a Hippocratic oath for design, and users must demand technology that contributes positively to human life. The shift would move us from a world optimized for time spent to one focused on time well spent.
00:00–01:10 Time slipping away; compulsive checking of emails/feeds.
01:11–02:40 Phones as slot machines; variable reward design keeps us hooked.
02:41–04:20 Constant interruptions; research on 23-minute recovery cost and habit of self-interruption.
04:21–06:10 Example redesign of chat: focus mode with controlled, conscious interruptions.
06:11–08:00 Goal shift: from making messaging easy to maximizing quality communication.
08:01–10:00 Story of Thich Nhat Hanh meeting with designers; idea of “compassion check.”
10:01–12:15 Couchsurfing case study; success measured as “net orchestrated conviviality.”
12:16–14:20 Imagining humane social networks, dating apps, and career platforms focused on real outcomes.
14:21–16:00 Proposal for a new certification system for humane tech (like “organic” food or LEED).
16:01–17:40 Call to action for leaders, designers, and users to prioritize humane metrics.
17:41–18:15 Closing: Shift from “time spent” to “time well spent.” Standing ovation.
Q: What topics does Tristan Harris cover?
A: AI governance, recommender systems, kids and screens, institutional upgrades, sense-making, business model reform.
Q: How actionable is the content?
A: Highly actionable; frameworks and immediate product steps are included.
Q: Who is the intended audience?
A: Product, design, policy, and executive stakeholders.
Dr. Kate Darling stands at the forefront of the rapidly evolving field of Robot Ethics, wielding her expertise as a research scientist at the Massachusetts Institute of Technology (MIT) Media Lab. With a keen focus on human-robot interaction, she delves into the intricate dynamics of social robotics, unraveling the complexities of our emotional connection with […]
With a pivotal role in launching IBM’s Watson in 2011, a groundbreaking application of Artificial Intelligence on a global scale, Sol Rashidi has consistently been a trailblazer in the ideation, conceptualization, design, and development of Data & AI applications. Boasting over three dozen large-scale implementations and a remarkable record, Sol holds eight granted patents, with […]
Dr. Vivienne Ming stands at the intersection of science, technology, and human potential. As a theoretical neuroscientist, delusional inventor, and demented author, she has dedicated her career to exploring how cutting-edge advancements can unlock and maximize human capacity. With a reputation for being one of the most innovative and thought-provoking speakers of our time, Dr. […]
Nina Schick is a globally recognized keynote speaker, author, and leading expert on disinformation, artificial intelligence, and the evolving landscape of global politics. With a keen focus on how emerging technologies are reshaping democracy and international relations, Nina has become a sought-after voice in the intersection of technology, media, and policy. Her insights provide invaluable […]
Dr. Jordan Nguyen is renowned globally as a leading Human Futurist and Innovation Speaker, pushing the boundaries of what is possible in technology and humanity’s future. With a profound passion for merging cutting-edge technology with the potential of the human spirit, Dr. Nguyen captivates audiences with his visionary insights and transformative ideas. Dr. Jordan Nguyen’s expertise […]
Jeremy Gutsche, MBA, CFA, is a renowned figure in the world of innovation and entrepreneurship, celebrated as a New York Times bestselling author and an award-winning innovation expert. With a reputation for delivering electrifying keynote presentations that inspire and ignite creativity, Jeremy has been dubbed by The Sun Newspaper as “one of the most sought-after […]
Déborah Berebichez is the first Mexican woman to obtain a Ph.D. in physics from Stanford University. Her passion is to empower young people to learn science and to improve the state of STEM education (science, technology, engineering and math) in the world. Her education and background help her to make science accessible to a wide range […]
Julia Collins is a trailblazing figure in the world of technology and entrepreneurship, making history as the first Black woman to achieve “unicorn” status with her tech company. As the Founder and CEO of Planet Forward, she has garnered international acclaim for her innovative approach to addressing climate change through technology. With a unique blend […]