Tristan Harris: Tech Ethicist

    Technology ethicist and keynote speaker on humane AI, the attention economy, and institutional resilience; co-founder of the Center for Humane Technology.

    Quick Facts:

    • Highlights:
      • Frequently referred to as the "closest thing Silicon Valley has to a conscience."
      • Co-founder and executive director of the Center for Humane Technology, driving systemic reform in the tech industry.
      • Former design ethicist at Google, author of the internal presentation "A Call to Minimize Distraction & Respect Users' Attention."
    • Formats: Keynote, Panel, Workshop, Executive Briefing, Virtual
    • Audiences: Product and design teams (25–200); Conferences (200–2,000)
    • Outcomes:
      1. Diagnose attention harms in your product and customer journey.
      2. Implement humane design changes that preserve engagement quality.
      3. Build governance for design ethics and measurement.
    • Travels from: San Francisco, CA
    • Fee range: $100,000 - $250,000

    Talent Services

    Keynote Topics:

    • How Wisdom Can Protect Humanity from Technology
    • The Battle for Our Attention: Technology, Mindfulness, and the Future of Humanity
    • Can We Close the Gap Between Humans and Technology?
      • Tristan outlines six principles for wielding technology with greater wisdom, pointing toward a future where technology protects human vulnerabilities and empowers communities to live in values-aligned ways.

    Talent Short Bio

    Tristan Harris is widely regarded as one of the most important voices in technology ethics today. A former design ethicist at Google, he co-founded the Center for Humane Technology, where he leads efforts to realign technology with humanity’s best interests. Featured in the Netflix documentary The Social Dilemma, Harris has become a leading advocate for responsible digital innovation, warning of the unintended consequences of persuasive design and algorithm-driven platforms.

    Named to TIME’s “100 Most Influential People in AI,” Harris works at the intersection of technology, society, and ethics. He advises governments, corporations, and international organizations on policies that encourage digital responsibility while ensuring that innovation continues to thrive. His work has influenced debates on regulation, platform accountability, and the social impact of artificial intelligence.

    As a speaker, Harris is known for delivering engaging, urgent, and thought-provoking talks that resonate with audiences across industries. He not only diagnoses the challenges of the digital age but also offers frameworks for creating more humane technology ecosystems. His message empowers leaders to think critically about design, responsibility, and the role of technology in shaping the future of society.

    Tristan Harris is a globally recognized thought leader and advocate for ethical technology design. As the co-founder and executive director of the Center for Humane Technology, he has dedicated his career to addressing the pervasive influence of technology on human behavior and society.

    Frequently referred to as the “closest thing Silicon Valley has to a conscience,” Harris has captivated audiences worldwide with his compelling insights into how technology shapes our lives. A former design ethicist at Google, his expertise and thought leadership have redefined the global discourse on technology’s role in humanity’s future.

    Tristan Harris’s expertise lies in the intersection of technology, ethics, and human behavior. With a background in computer science and design, he has spent years examining the ethical dilemmas posed by modern digital platforms. As a graduate of Stanford University, where he studied computer science and human-computer interaction, Harris was mentored by renowned psychologist BJ Fogg and focused on the persuasive power of technology.

    Harris’s in-depth understanding of behavioral psychology and technology design makes him a uniquely qualified expert. His tenure at Google as a design ethicist allowed him to scrutinize the mechanisms driving user engagement and explore their societal impact. By identifying the manipulative techniques employed by digital platforms, Harris has become a leading voice advocating for responsible innovation and systemic reform in the tech industry.

    Tristan Harris is a visionary leader in ethical technology design, whose expertise, experience, authority, and trustworthiness make him a sought-after keynote speaker. With a background rooted in computer science and behavioral psychology, he brings unparalleled insights into the challenges and opportunities of the digital age. Harris’s advocacy for systemic reform has shaped global conversations, influencing policymakers, industry leaders, and individuals alike.


    Talent Videos

    I’m Not Afraid. You’re Afraid: Tristan Harris on Humane Technology and AI Governance

    A Nobel Prize Summit keynote on why social media was humanity’s first contact with AI, how LLMs raise the stakes, and what it takes to bind godlike tech with wisdom.

    Tristan Harris opens by zooming out from misinformation to the broader, cumulative effects of technology on human minds and institutions. Framing with E. O. Wilson’s line about Paleolithic emotions, medieval institutions, and godlike technology, he argues our brains and governance are outpaced by AI systems. He reframes social media as humanity’s first contact with AI: every swipe activates a supercomputer that predicts the next perfect stimulus from billions of data points. That first contact produced information overload, doom-scrolling, loneliness, polarization, deepfakes, and a breakdown of shared reality. While platforms promised connection and relevance, the engagement business model rewarded addiction to attention and instant virality. Design choices like pull-to-refresh, likes, beautification filters, and one-click resharing created a race to inflate status and amplify extremes.

    With large language models, he warns of a looming second contact before the first is fixed. AI will steepen the complexity of existing risks, from misinformation to synthetic biology. Concrete examples include viral AI-generated hoaxes, like the fake Pentagon explosion image that briefly rattled markets, illustrating how cheaply trust can be hacked at scale. Solutions, he argues, require more than content moderation. We need a wiser information ecosystem and upgraded institutions that handle diffuse, long-horizon harms, not just acute incidents.

    His roadmap: embrace how human brains really work by designing for belonging and offline community, upgrade institutions to govern chronic harms and set liability that matches systemic risks, and bind race dynamics so companies do not win by externalizing harm. You cannot wield godlike powers without the wisdom, love, and prudence of gods, he concludes. The call is to realign technology, incentives, and governance so sense-making and choice-making keep pace with accelerating power.

    Key Moments

    00:00 Welcome and purpose: zooming out from misinformation to tech’s collective effects
    00:38 E. O. Wilson’s frame: Paleolithic emotions, medieval institutions, godlike tech
    01:45 Alignment problem beyond AI systems: aligning brains and institutions
    03:05 Social Dilemma context and first contact with AI via recommender systems
    05:10 Symptoms of misalignment: overload, doom-scrolling, loneliness, polarization, deepfakes
    07:05 Why we lost to engagement: free services, attention sales, slot-machine UX
    09:15 Addiction to attention and the race to inflate reach and status
    11:05 Filters and instant reshare as accelerants; race to the bottom of the brain stem
    13:10 Funhouse mirror: extreme voices overrepresented vs moderates
    15:05 Case example of viral distortion vs low-traction corrections
    16:20 Second contact with AI: LLMs will supercharge existing risks
    18:05 Cheap synthesis at scale: fake crisis images and market impact
    19:30 Complexity exceeds institutions; our response capacity lags
    21:00 From fact-checking to wisdom: redesigning sense-making and choice-making
    22:20 Embrace human needs: belonging, offline community, humane design
    24:05 Upgrade institutions for chronic, cumulative harms
    25:30 Bind race dynamics and bad games, not just bad actors
    27:10 Kids and AI companions, incentive traps in product strategy
    28:40 Governance metaphors: biosafety levels for AI capabilities
    30:10 Principle: limit power to the level of wisdom and responsibility
    31:40 Re-imagining platforms: ranking for local community and trust
    33:00 Policy and product levers leaders can use now
    35:20 Closing maxim: you cannot have the power of gods without the wisdom of gods
    37:37 End. Verified runtime 37:37.

    A landmark keynote outlining how tech platforms exploit human frailties, why that’s an existential civilizational risk, and how humane design can restore agency.

    Tristan Harris opens by naming the root of scandals and grievances in tech: an extractive attention economy that hijacks human frailties. Using E. O. Wilson’s line about “Paleolithic emotions, medieval institutions, and godlike technology,” he frames the problem as not machines overwhelming our strengths, but our weaknesses. Drawing on his background as a magician and Google design ethicist, Harris explains how persuasive design exploits attention, validation-seeking, and social proof. Examples include slot-machine mechanics in phones and apps, YouTube’s recommender system keeping users in trance states, and Facebook group recommendations fueling conspiracy ecosystems.

    He introduces the concept of human downgrading: the systemic erosion of attention spans, free will, mental health, civility, and trust. Evidence includes teen depression trends, polarization metrics, and how recommender AIs tilt content toward extremity (“Crazytown”) to maximize engagement. Harris shows how platforms built predictive “voodoo dolls” of users, making persuasion inevitable without needing microphones or data theft. Deepfakes and synthetic media amplify the problem, overwhelming human judgment.

    Band-aid fixes like grayscale, notifications, blockchain, or ethics training are insufficient. The solution requires systemic change: shifting from artificial to humane social systems, from overwhelming AI to fiduciary AI aligned with human limits, and from extractive to regenerative incentives. He demonstrates how humane design can wrap around human physiology, cognition, and social needs—such as designing for trust, belonging, and small-group conversation rather than polarization.

    Harris closes with urgency and hope. Just as shared language like “time well spent” spurred Apple, Google, and YouTube to adopt well-being features, a new shared agenda can spark a “race to the top” in humane tech. He urges leaders, workers, journalists, policymakers, investors, and entrepreneurs to unite behind this civilizational moment: to embrace our Paleolithic emotions, upgrade medieval institutions, and wield godlike technology with wisdom.

    Key Moments

    00:00 Welcome and purpose: unify understanding of tech’s harms
    02:15 E. O. Wilson’s problem statement: emotions, institutions, godlike tech
    03:45 From magician to ethicist: exploiting human frailties at scale
    06:20 The overlooked inflection point: machines overwhelming weaknesses
    09:10 Slot machines for attention: phones, Tinder, email, social proof
    12:30 From attention capture to addiction to seeking attention
    15:00 Human downgrading: attention, mental health, civility, truth
    18:00 Evidence: teen depression, polarization, outrage culture
    21:30 YouTube and Facebook algorithms steering users into extremity
    26:00 Predictive voodoo dolls: AI out-predicting human nature
    29:10 Free will colonized, beliefs downgraded, conspiracy loops
    33:00 Global harms: Burma, languages engineers don’t speak
    36:40 The insufficiency of band-aids: grayscale, blockchain, ethics class
    40:20 Full-stack human design: physiology, attention, social ergonomics
    45:15 Case studies: loneliness, polarization, trust, common ground
    50:00 Three levers: humane social systems, humane AI, regenerative incentives
    55:00 Shared language and the Time Well Spent movement as proof of change
    58:20 A civilizational moment: the end of human agency or a race to the top

    A concise, high-impact preview of the Netflix documentary that exposes how engagement-driven platforms track behavior, manipulate attention, and reshape beliefs at population scale.

    The trailer opens by revealing that Google’s autocomplete and platform feeds differ by user, location, and inferred interests. It reframes this as a design choice, not an accident. Former leaders and designers from Facebook, Pinterest, Google, Twitter, and Instagram explain that product teams intentionally leverage psychology to maximize engagement. The montage links hearts, likes, and thumbs up to shallow validation and misinformation, while on-camera experts describe measurable harms: anxiety and depression, polarization, and loss of shared reality.

    Key claims highlight that social platforms can influence offline behavior without user awareness and that falsehoods travel faster than truth. The voiceover warns that when everyone has “their own facts,” society fragments. The trailer escalates to a governance-level warning: if you wanted to control a population, a platform like Facebook is an unprecedented tool. It closes with the filmmakers’ thesis that creators have a responsibility to change course and that failing to rebalance incentives could be “checkmate on humanity.”

    Trailer promoted by Netflix as the official preview for the 2020 feature documentary.

    Key Moments

    00:00 Search personalization hook. “Climate change is…” differs by user. Surveillance and tracking are by design.
    00:10 Insiders appear. Ex-leaders and designers from major platforms introduce the core conflict.
    00:25 “Using your psychology against you.” Like buttons and streak mechanics as engagement levers.
    00:38 Validation montage. Hearts and thumbs up conflate attention with truth and self-worth.
    00:50 Mental health line. Rising anxiety and depression, especially among youth, flagged as systemic.
    01:00 Behavior manipulation. Platforms can shift emotions and actions without user awareness.
    01:10 Virality and falsehood. “Fake news spreads six times faster than truth.” Consequences for shared reality.
    01:23 Polarization beat. When facts fragment, common ground collapses.
    01:33 Real-world stakes. “If you want to control a population…” platforms amplify influence at scale.
    01:45 Responsibility turn. Creators acknowledge duty to fix what they built.
    01:58 Call to action. Warns of chaos, loneliness, election hacking, and lost focus on real issues.
    02:12 Final warning. “Checkmate on humanity.” 
    02:21 End. Verified trailer length 2:21 on Netflix trailer page.

    TED Talk by design ethicist Tristan Harris, who argues that technology hijacks attention like slot machines and calls for a redesign of digital systems to prioritize “time well spent.”

    Harris opens with a personal reflection on time slipping away through compulsive checking of email, feeds, and notifications. He compares phones and apps to slot machines, engineered with variable rewards that exploit human psychology. Even knowing this, he admits it’s hard to resist.

    He highlights the cost of constant interruptions: research shows each distraction takes ~23 minutes to recover from, and frequent external pings condition us to self-interrupt. Harris demonstrates how design could restore choice—for example, chat systems that let someone “focus” while still allowing urgent messages through.

    He urges designers to upgrade goals from surface-level metrics (e.g., ease of sending a message) to deeper human values (e.g., quality communication). Examples include Couchsurfing, which measured its success as “net orchestrated conviviality”: the net positive hours of meaningful experience it created.

    Harris imagines a broader system: social networks tracking meaningful connections, dating apps optimizing for fulfilling relationships, and professional networks focusing on job outcomes—not just engagement. He suggests labels like “organic” or LEED certification could inspire a new category of humane tech, prioritized in app stores and browsers.

    He concludes with a call to action: leaders must adopt new metrics, designers should embrace a Hippocratic oath for design, and users must demand technology that contributes positively to human life. The shift would move us from a world optimized for time spent to one focused on time well spent.

    Key Moments

    00:00–01:10 Time slipping away; compulsive checking of emails/feeds.
    01:11–02:40 Phones as slot machines; variable reward design keeps us hooked.
    02:41–04:20 Constant interruptions; research on 23-minute recovery cost and habit of self-interruption.
    04:21–06:10 Example redesign of chat: focus mode with controlled, conscious interruptions.
    06:11–08:00 Goal shift: from making messaging easy to maximizing quality communication.
    08:01–10:00 Story of Thich Nhat Hanh meeting with designers; idea of “compassion check.”
    10:01–12:15 Couchsurfing case study; success measured as “net orchestrated conviviality.”
    12:16–14:20 Imagining humane social networks, dating apps, and career platforms focused on real outcomes.
    14:21–16:00 Proposal for a new certification system for humane tech (like “organic” food or LEED).
    16:01–17:40 Call to action for leaders, designers, and users to prioritize humane metrics.
    17:41–18:15 Closing: Shift from “time spent” to “time well spent.” Standing ovation.

    Interested in booking? Send us your enquiry.


    Talent FAQs

    Speaker fees can vary depending on factors such as expertise, demand, and event specifics. While some speakers may charge a flat fee for their services, others may have hourly rates. It’s best to discuss fee structures directly with the speaker or their representative to understand the pricing model.

    Keynote speeches typically range from 30 to 90 minutes, with the duration determined by the speaker’s expertise, the event’s agenda, and audience preferences. Keynote speeches often include a combination of inspirational stories, practical insights, and actionable advice tailored to the event’s theme or objectives.

    The scale of the event and audience size can indeed impact a speaker’s fee. Larger events with a broader reach or higher attendance may command higher fees due to increased exposure and demand. Conversely, smaller events or niche audiences may offer opportunities for more flexible pricing arrangements.

    Travel expenses such as transportation, accommodation, and meals are typically negotiated separately from the speaker’s fee. These costs vary depending on the speaker’s location, travel distance, and event duration. It’s important to clarify travel arrangements and expenses during the booking process to avoid misunderstandings.

    Many speakers require a deposit to secure a booking, with the remaining balance due closer to the event date. Deposits are often non-refundable and serve as a commitment from both parties. It’s advisable to discuss deposit requirements and payment terms with the speaker or their representative when finalizing the booking.

    • Virtual speaking appearances are a cost-effective alternative to high speaking fees, often 10-50% cheaper than in-person rates.
    • There is typically no difference in fee for a 15-minute speech versus a 60-minute speech.
    • Some motivational speakers are open to discounting their fee if hired for more than one event by the same organization.
    • When browsing speakers, consider their location to keep travel costs down.
    • Personal connections to your organization or event location can increase your chances of securing a renowned speaker.
    • Be upfront about your needs and expectations to ensure a successful partnership.
    • Speaker bureaus like Speakers Inc can guide you through the booking process and help negotiate the best deal with favorable terms.

    Ready to find the perfect speaker for your event? Use our advanced search feature or contact us to get started today!

    You may be interested in...

    Joy Buolamwini

    Dr. Joy Buolamwini founded the Algorithmic Justice League to create a world with more equitable and accountable technology. Her TED Featured Talk on algorithmic bias has over 1.4 million views and her MIT thesis methodology uncovered large racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon. Dr. Joy’s journey is […]

    • Travels from: Boston, MA
    • Artificial Intelligence
    Tan Le

    One of the science world’s most successful and influential CEOs, Tan Le began her incredible life journey as a refugee facing a high probability of death on a treacherous trip across the pirate-infested South China Sea. Today, Tan Le is the founder and CEO of the bioinformatics firm EMOTIV, where her groundbreaking work in […]

    • Travels from: San Francisco, CA
    • Digital Speakers
    Deborah Perry Piscione

    Deborah Perry Piscione, a renowned authority in innovation, entrepreneurship, and the future of work in the metaverse. With a distinguished career spanning decades, Deborah has garnered international recognition for her groundbreaking insights into the rapidly evolving landscape of technology and its impact on business and society. As a sought-after keynote speaker, author, and thought leader, […]

    • Travels from: San Francisco, CA
    • Innovation Speakers
    Ben Hammersley

    Ben Hammersley stands as one of the world’s most renowned futurists, celebrated for his unparalleled ability to navigate the complexities of tomorrow. As the founder and principal of Hammersley Futures, an international strategic forecasting consultancy, he has redefined the art of guiding corporations and governmental organizations toward clarity amidst the uncertainties of the future. Known […]

    • Travels from: New York, NY
    • Futurists Speakers
    Vivek Wadhwa

    Called “Silicon Valley’s most provocative voice” for his ideas on technology trends, globalization, US competitiveness, and the future, Vivek Wadhwa’s work puts him at the heart of innovation. From tech entrepreneur and business owner to accomplished academic and widely published author, he served as a Stanford University research fellow and the Director of Research at […]

    • Travels from: Berkeley, CA
    • Artificial Intelligence
    Winston Ma

    Winston Ma, CFA & Esq., is an esteemed figure in the global digital economy, whose multi-faceted career has made him a sought-after speaker, advisor, and thought leader in Chinese economics and global tech investment. His journey spans roles as an investor, attorney, author, and educator, giving him a broad, yet incisive, understanding of the dynamic […]

    • Travels from: New York, NY
    • AI Technology Speakers
    Tonya M. Evans

    Tonya M Evans is a global leader on cryptocurrencies, a technology keynote speaker, an accomplished academic and administrator, and a world-renowned speaker with over twenty years of experience in law, innovation, academia, and entrepreneurship. Dr. Evans currently serves as tenured Professor of Law at Penn State Dickinson Law School. Beginning July 1, 2022, she will […]

    • Travels from: Carlisle, PA
    • Artificial Intelligence
    Tom Goodwin

    Tom Goodwin is a dynamic and sought-after speaker who is at the forefront of innovation in marketing and artificial intelligence. With a career spanning over two decades, Tom has earned a reputation as a thought leader and visionary, constantly challenging the status quo and inspiring audiences around the world. His insights into the digital landscape […]

    • Travels from: Miami, FL
    • Artificial Intelligence
    Disclaimer
    The profiles and images embedded on these pages are from various conference entertainers and other talent.

    These remain the property of their owners and are not affiliated with or endorsed by Speakers Inc.

    Fee ranges where listed on this website are intended to serve as a guideline only.

    All talent fees exclude VAT, travel and accommodation where required.

    Subscribe to our Newsletter and get connected:


    Newsletter

    Subscribe to our newsletter and stay updated.

    We use Brevo as our marketing platform. By submitting this form you agree that the personal data you provided will be transferred to Brevo for processing in accordance with Brevo's Privacy Policy.

    Our Mission:

    We are your partner in creating memorable and engaging experiences that go beyond the event itself.

    © All rights reserved 2026.  Designed using Voxel
