Tristan Harris: Tech Ethicist

Technology ethicist and keynote speaker on humane AI, attention economy, and institutional resilience, co-founder of the Center for Humane Technology.

Quick Facts:

Highlights:
  • Frequently referred to as the “closest thing Silicon Valley has to a conscience.”
  • Co-founder and executive director of the Center for Humane Technology, driving systemic reform in the tech industry.
  • Former design ethicist at Google, author of the internal presentation “A Call to Minimize Distraction & Respect Users’ Attention.”

Formats: Keynote, Panel, Workshop, Executive Briefing, Virtual

Audiences: Product and design teams (25–200); Conferences (200–2,000)

Outcomes:
  1. Diagnose attention harms in your product and customer journey.
  2. Implement humane design changes that preserve engagement quality.
  3. Build governance for design ethics and measurement.
Reviews: (0 reviews)
  • Travels from: San Francisco Bay Area, California
  • Fee range: $100,000–$250,000

Talent Services

Keynote Topics:

  • How Wisdom Can Protect Humanity from Technology
  • The Battle for Our Attention: Technology, Mindfulness, and the Future of Humanity
  • Can We Close the Gap Between Humans and Technology?
    • Tristan outlines six principles for applying technology with greater wisdom, creating a future where technology protects human vulnerabilities and empowers communities to live in a values-aligned way.

Talent Short Bio

Tristan Harris is widely regarded as one of the most important voices in technology ethics today. A former design ethicist at Google, he co-founded the Center for Humane Technology, where he leads efforts to realign technology with humanity’s best interests. Featured in the Netflix documentary The Social Dilemma, Harris has become a leading advocate for responsible digital innovation, warning of the unintended consequences of persuasive design and algorithm-driven platforms.

Named to TIME’s “100 Most Influential People in AI,” Harris works at the intersection of technology, society, and ethics. He advises governments, corporations, and international organizations on policies that encourage digital responsibility while ensuring that innovation continues to thrive. His work has influenced debates on regulation, platform accountability, and the social impact of artificial intelligence.

As a speaker, Harris is known for delivering engaging, urgent, and thought-provoking talks that resonate with audiences across industries. He not only diagnoses the challenges of the digital age but also offers frameworks for creating more humane technology ecosystems. His message empowers leaders to think critically about design, responsibility, and the role of technology in shaping the future of society.

Tristan Harris is a globally recognized thought leader and advocate for ethical technology design. As the co-founder and executive director of the Center for Humane Technology, he has dedicated his career to addressing the pervasive influence of technology on human behavior and society.

Frequently referred to as the “closest thing Silicon Valley has to a conscience,” Harris has captivated audiences worldwide with his compelling insights into how technology shapes our lives. A former design ethicist at Google, his expertise and thought leadership have redefined the global discourse on technology’s role in humanity’s future.

Tristan Harris’s expertise lies at the intersection of technology, ethics, and human behavior. With a background in computer science and design, he has spent years examining the ethical dilemmas posed by modern digital platforms. A graduate of Stanford University, where he studied computer science and human-computer interaction, Harris was mentored by renowned psychologist BJ Fogg and focused on the persuasive power of technology.

Harris’s in-depth understanding of behavioral psychology and technology design makes him a uniquely qualified expert. His tenure at Google as a design ethicist allowed him to scrutinize the mechanisms driving user engagement and explore their societal impact. By identifying the manipulative techniques employed by digital platforms, Harris has become a leading voice advocating for responsible innovation and systemic reform in the tech industry.

Tristan Harris is a visionary leader in ethical technology design whose expertise and authority make him a sought-after keynote speaker. With a background rooted in computer science and behavioral psychology, he brings rare insight into the challenges and opportunities of the digital age. Harris’s advocacy for systemic reform has shaped global conversations, influencing policymakers, industry leaders, and individuals alike.


Talent Videos

I’m Not Afraid. You’re Afraid: Tristan Harris on Humane Technology and AI Governance

A Nobel Prize Summit keynote on why social media was humanity’s first contact with AI, how LLMs raise the stakes, and what it takes to bind godlike tech with wisdom.

Tristan Harris opens by zooming out from misinformation to the broader, cumulative effects of technology on human minds and institutions. Framing with E. O. Wilson’s line about Paleolithic emotions, medieval institutions, and godlike technology, he argues our brains and governance are outpaced by AI systems. He reframes social media as humanity’s first contact with AI: every swipe activates a supercomputer that predicts the next perfect stimulus from billions of data points. That first contact produced information overload, doom-scrolling, loneliness, polarization, deepfakes, and a breakdown of shared reality. While platforms promised connection and relevance, the engagement business model rewarded addiction to attention and instant virality. Design choices like pull-to-refresh, likes, beautification filters, and one-click resharing created a race to inflate status and amplify extremes.

With large language models, he warns of a looming second contact before the first is fixed. AI will steepen the complexity of existing risks, from misinformation to synthetic biology. Concrete examples include viral AI-generated hoaxes, like the fake Pentagon explosion image that briefly rattled markets, illustrating how cheaply trust can be hacked at scale. Solutions, he argues, require more than content moderation. We need a wiser information ecosystem and upgraded institutions that handle diffuse, long-horizon harms, not just acute incidents.

His roadmap: embrace how human brains really work by designing for belonging and offline community, upgrade institutions to govern chronic harms and set liability that matches systemic risks, and bind race dynamics so companies do not win by externalizing harm. You cannot wield godlike powers without the wisdom, love, and prudence of gods, he concludes. The call is to realign technology, incentives, and governance so sense-making and choice-making keep pace with accelerating power.

Key Moments

00:00 Welcome and purpose: zooming out from misinformation to tech’s collective effects
00:38 E. O. Wilson’s frame: Paleolithic emotions, medieval institutions, godlike tech
01:45 Alignment problem beyond AI systems: aligning brains and institutions
03:05 Social Dilemma context and first contact with AI via recommender systems
05:10 Symptoms of misalignment: overload, doom-scrolling, loneliness, polarization, deepfakes
07:05 Why we lost to engagement: free services, attention sales, slot-machine UX
09:15 Addiction to attention and the race to inflate reach and status
11:05 Filters and instant reshare as accelerants; race to the bottom of the brain stem
13:10 Funhouse mirror: extreme voices overrepresented vs moderates
15:05 Case example of viral distortion vs low-traction corrections
16:20 Second contact with AI: LLMs will supercharge existing risks
18:05 Cheap synthesis at scale: fake crisis images and market impact
19:30 Complexity exceeds institutions; our response capacity lags
21:00 From fact-checking to wisdom: redesigning sense-making and choice-making
22:20 Embrace human needs: belonging, offline community, humane design
24:05 Upgrade institutions for chronic, cumulative harms
25:30 Bind race dynamics and bad games, not just bad actors
27:10 Kids and AI companions, incentive traps in product strategy
28:40 Governance metaphors: biosafety levels for AI capabilities
30:10 Principle: limit power to the level of wisdom and responsibility
31:40 Re-imagining platforms: ranking for local community and trust
33:00 Policy and product levers leaders can use now
35:20 Closing maxim: you cannot have the power of gods without the wisdom of gods
37:37 End. Verified runtime 37:37.

A landmark keynote outlining how tech platforms exploit human frailties, why that’s an existential civilizational risk, and how humane design can restore agency.

Tristan Harris opens by naming the root of scandals and grievances in tech: an extractive attention economy that hijacks human frailties. Using E. O. Wilson’s line about “Paleolithic emotions, medieval institutions, and godlike technology,” he frames the problem as not machines overwhelming our strengths, but our weaknesses. Drawing on his background as a magician and Google design ethicist, Harris explains how persuasive design exploits attention, validation-seeking, and social proof. Examples include slot-machine mechanics in phones and apps, YouTube’s recommender system keeping users in trance states, and Facebook group recommendations fueling conspiracy ecosystems.

He introduces the concept of human downgrading: the systemic erosion of attention spans, free will, mental health, civility, and trust. Evidence includes teen depression trends, polarization metrics, and how recommender AIs tilt content toward extremity (“Crazytown”) to maximize engagement. Harris shows how platforms built predictive “voodoo dolls” of users, making persuasion inevitable without needing microphones or data theft. Deepfakes and synthetic media amplify the problem, overwhelming human judgment.

Band-aid fixes like grayscale, notifications, blockchain, or ethics training are insufficient. The solution requires systemic change: shifting from artificial to humane social systems, from overwhelming AI to fiduciary AI aligned with human limits, and from extractive to regenerative incentives. He demonstrates how humane design can wrap around human physiology, cognition, and social needs—such as designing for trust, belonging, and small-group conversation rather than polarization.

Harris closes with urgency and hope. Just as shared language like “time well spent” spurred Apple, Google, and YouTube to adopt well-being features, a new shared agenda can spark a “race to the top” in humane tech. He urges leaders, workers, journalists, policymakers, investors, and entrepreneurs to unite behind this civilizational moment: to embrace our Paleolithic emotions, upgrade medieval institutions, and wield godlike technology with wisdom.

Key Moments

00:00 Welcome and purpose: unify understanding of tech’s harms
02:15 E. O. Wilson’s problem statement: emotions, institutions, godlike tech
03:45 From magician to ethicist: exploiting human frailties at scale
06:20 The overlooked inflection point: machines overwhelming weaknesses
09:10 Slot machines for attention: phones, Tinder, email, social proof
12:30 From attention capture to addiction to seeking attention
15:00 Human downgrading: attention, mental health, civility, truth
18:00 Evidence: teen depression, polarization, outrage culture
21:30 YouTube and Facebook algorithms steering users into extremity
26:00 Predictive voodoo dolls: AI out-predicting human nature
29:10 Free will colonized, beliefs downgraded, conspiracy loops
33:00 Global harms: Burma, languages engineers don’t speak
36:40 The insufficiency of band-aids: grayscale, blockchain, ethics class
40:20 Full-stack human design: physiology, attention, social ergonomics
45:15 Case studies: loneliness, polarization, trust, common ground
50:00 Three levers: humane social systems, humane AI, regenerative incentives
55:00 Shared language and the Time Well Spent movement as proof of change
58:20 A civilizational moment: the end of human agency or a race to the top

A concise, high-impact preview of the Netflix documentary that exposes how engagement-driven platforms track behavior, manipulate attention, and reshape beliefs at population scale.

The trailer opens by revealing that Google’s autocomplete and platform feeds differ by user, location, and inferred interests. It reframes this as a design choice, not an accident. Former leaders and designers from Facebook, Pinterest, Google, Twitter, and Instagram explain that product teams intentionally leverage psychology to maximize engagement. The montage links hearts, likes, and thumbs up to shallow validation and misinformation, while on-camera experts describe measurable harms: anxiety and depression, polarization, and loss of shared reality.

Key claims highlight that social platforms can influence offline behavior without user awareness and that falsehoods travel faster than truth. The voiceover warns that when everyone has “their own facts,” society fragments. The trailer escalates to a governance-level warning: if you wanted to control a population, a platform like Facebook is an unprecedented tool. It closes with the filmmakers’ thesis that creators have a responsibility to change course and that failing to rebalance incentives could be “checkmate on humanity.”

Trailer promoted by Netflix as the official preview for the 2020 feature documentary.

Key Moments

00:00 Search personalization hook. “Climate change is…” differs by user. Surveillance and tracking are by design.
00:10 Insiders appear. Ex-leaders and designers from major platforms introduce the core conflict.
00:25 “Using your psychology against you.” Like buttons and streak mechanics as engagement levers.
00:38 Validation montage. Hearts and thumbs up conflate attention with truth and self-worth.
00:50 Mental health line. Rising anxiety and depression, especially among youth, flagged as systemic.
01:00 Behavior manipulation. Platforms can shift emotions and actions without user awareness.
01:10 Virality and falsehood. “Fake news spreads six times faster than truth.” Consequences for shared reality.
01:23 Polarization beat. When facts fragment, common ground collapses.
01:33 Real-world stakes. “If you want to control a population…” platforms amplify influence at scale.
01:45 Responsibility turn. Creators acknowledge duty to fix what they built.
01:58 Call to action. Warns of chaos, loneliness, election hacking, and lost focus on real issues.
02:12 Final warning. “Checkmate on humanity.” 
02:21 End. Verified trailer length 2:21 on Netflix trailer page.

TED Talk by design ethicist Tristan Harris, who argues that technology hijacks attention like slot machines and calls for a redesign of digital systems to prioritize “time well spent.”

Harris opens with a personal reflection on time slipping away through compulsive checking of email, feeds, and notifications. He compares phones and apps to slot machines, engineered with variable rewards that exploit human psychology. Even knowing this, he admits it’s hard to resist.

He highlights the cost of constant interruptions: research shows each distraction takes ~23 minutes to recover from, and frequent external pings condition us to self-interrupt. Harris demonstrates how design could restore choice—for example, chat systems that let someone “focus” while still allowing urgent messages through.

He urges designers to upgrade goals from surface-level metrics (e.g., ease of sending a message) to deeper human values (e.g., quality communication). Examples include Couchsurfing, which measured its success in “net orchestrated conviviality”: the net positive hours of meaningful experience it created.

Harris imagines a broader system: social networks tracking meaningful connections, dating apps optimizing for fulfilling relationships, and professional networks focusing on job outcomes—not just engagement. He suggests labels like “organic” or LEED certification could inspire a new category of humane tech, prioritized in app stores and browsers.

He concludes with a call to action: leaders must adopt new metrics, designers should embrace a Hippocratic oath for design, and users must demand technology that contributes positively to human life. The shift would move us from a world optimized for time spent to one focused on time well spent.

Key Moments

00:00–01:10 Time slipping away; compulsive checking of emails/feeds.
01:11–02:40 Phones as slot machines; variable reward design keeps us hooked.
02:41–04:20 Constant interruptions; research on 23-minute recovery cost and habit of self-interruption.
04:21–06:10 Example redesign of chat: focus mode with controlled, conscious interruptions.
06:11–08:00 Goal shift: from making messaging easy to maximizing quality communication.
08:01–10:00 Story of Thich Nhat Hanh meeting with designers; idea of “compassion check.”
10:01–12:15 Couchsurfing case study; success measured as “net orchestrated conviviality.”
12:16–14:20 Imagining humane social networks, dating apps, and career platforms focused on real outcomes.
14:21–16:00 Proposal for a new certification system for humane tech (like “organic” food or LEED).
16:01–17:40 Call to action for leaders, designers, and users to prioritize humane metrics.
17:41–18:15 Closing: Shift from “time spent” to “time well spent.” Standing ovation.

Interested in booking? Send us your enquiry

Speaker Contact Form


Talent FAQs

Q: What topics does Tristan Harris cover?
A: AI governance, recommender systems, kids and screens, institutional upgrades, sense-making, business model reform.

Q: How actionable are his talks?
A: Highly actionable — frameworks and immediate product steps are included.

Q: Who are his talks best suited for?
A: Product, design, policy, and executive stakeholders.

Talent Resources

You must be logged in to view the resources below.

  • Powerpoint Slide – For meeting and event planners
  • Gallery Images
  • High Resolution Images
  • PDF Files
  • Books and Magazine Links
  • Music List
  • Podcast Shows
  • Courses
  • + more

You may be interested in...

Dr. Fei-Fei Li

Dr. Fei-Fei Li is one of the most influential figures shaping the future of technology and artificial intelligence. Often referred to as the “Godmother of AI,” she is the visionary behind ImageNet—the groundbreaking dataset that gave machines the ability to “see,” launching the deep learning revolution that fuels today’s AI applications. From powering tools like […]

  • Travels from: Stanford, CA
  • Artificial Intelligence
Jess Todtfeld

Electrifying speaker Jess Todtfeld has an incredible talent for inspiring audiences: his magnetic speeches are entertaining and teach audiences how to achieve better results in a variety of fields. With more than 20 years of experience in the media industry, Todtfeld helps CEOs, spokespeople, authors, experts and public relations representatives achieve real results. From the beginning of his career, […]

  • Travels from: New York, NY
  • Business Speakers
Alec Ross

Alec Ross stands as a dynamic force in the world of technology, innovation, and global affairs, renowned as a New York Times best-selling author and an esteemed Distinguished Adjunct Professor at the University of Bologna’s business school. With a fervent dedication to deciphering the complexities of our rapidly evolving world, Ross illuminates the intersections of […]

  • Travels from: Baltimore, MD
  • Business Speakers
Clara Durodié

Clara Durodié is a technology strategist specializing in the business and governance of artificial intelligence (AI) in financial services. She is internationally recognized for her expertise, advising Boards of leading financial institutions, think-tanks, governments, and academia. She also mentors AI startups on governance, funding and growth strategy. She is consistently named on numerous global lists […]

  • Travels from: London, UK
  • Artificial Intelligence
Aniyia Williams

Aniyia Williams is a trailblazing entrepreneur, thought leader, and advocate for diversity and inclusion in the tech industry. She is a systempreneur, creator, inventor, tech changemaker, and investor. She is a principal on the Responsible Technology team at Omidyar Network, and works to help the tech world live up to its promise of changing lives […]

  • Travels from: San Francisco, CA
  • Digital Speakers
Stafford Masie

Stafford Masie speaks on Digital Innovation and Disruption, topics applicable to any audience because technology is a transversal matter, irrespective of industry or theme. Stafford has been in the ICT industry for more than 25 years. His career foundations were laid in the mid-90s, working for Telkom, Dimension Data and then Novell as a software engineer. […]

  • Travels from: Miami, FL
  • Digital Speakers
Dr. Mark van Rijmenam

Dr. Mark van Rijmenam is a globally recognized strategic futurist, keynote speaker, and thought leader on artificial intelligence, blockchain, and the future of work. As an in-demand speaker, he equips organizations with the knowledge and insights needed to navigate digital disruption, drive innovation, and prepare for the rapidly changing technological landscape. His ability to translate […]

  • Travels from: Sydney, NSW
  • Futurists Speakers
Diana Kander

Diana Kander stands at the forefront of innovation, challenging conventional wisdom and sparking curiosity in the hearts of business leaders worldwide. As a New York Times best-selling author, innovation consultant, and captivating keynote speaker, Diana doesn’t just ask big questions; she ignites the imagination of her audiences, encouraging them to rethink the very essence of […]

  • Travels from: Kansas City, MO
  • Innovation Speakers
Disclaimer
The profiles and images embedded on these pages are from various conference entertainers and other talent.

These remain the property of their owners and are not affiliated with or endorsed by Speakers Inc.

Fee ranges where listed on this website are intended to serve as a guideline only.

All talent fees exclude VAT, travel and accommodation where required.

© All rights reserved 2025.  Designed using Voxel
