An illustrated scene of a panel discussion featuring four professionals seated at a table in a modern office. One speaker, identified as Mike Cottone from Clearly Intelligent, holds a microphone and discusses the topic 'Where is AI Taking Us?' with a view of the Empire State Building in the background.

    I think about AI every day. It’s often difficult to organize the sheer volume of thinking I do about AI in a structured way. When I do, those thoughts come out in the form of a blog post here at Clearly Intelligent. The New York Times recently convened a panel of eight AI leaders to grill them about their thoughts on AI. So, in today’s post, my structure has been determined for me. I’m going to wishcast a little bit and put myself on the panel. Maybe I’ll get there by the 2030 iteration of this event. Thanks to the New York Times for doing this, and please don’t copyright strike me here – I’m a paying subscriber using this content fairly, and my small but fervent group of loyal readers will be quite upset if they cannot get my takes.

    Roster

    Bio snippets are straight from the NYT. The participants for this panel were:

    Melanie Mitchell – Computer scientist and professor at the Santa Fe Institute

    Yuval Noah Harari – Historian, philosopher, and author

    Carl Benedikt Frey – Professor of AI and work at the University of Oxford

    Gary Marcus – Founder of Geometric Intelligence (acquired by Uber) and author of “Taming Silicon Valley”

    Nick Frosst – Co-founder of Cohere, an AI startup

    Ajeya Cotra – AI risk assessor at METR, a research nonprofit

    Aravind Srinivas – Co-founder and CEO of Perplexity, a chatbot search engine

    Helen Toner – Interim executive director of Georgetown University’s Center for Security and Emerging Technology

    I’m going to present the questions and answers from the piece and give my thoughts on the responses of the panelists, as well as my own answers to the questions.

    AI 5 years in the future – Panelists give their biggest bet about the future of AI in 5 years
    Quote by Yuval Noah Harari discussing the potential for A.I. agents to become legal persons in some countries within five years.

    Less than a 1% chance of happening. Probably more like 0.01% (that’s one in 10,000 simulations of the next five years). There are so many things that need to happen in order for countries to consider AIs legal persons, and I am very confident that the necessary conditions will not all be met in five years. In my opinion, for legal personhood to even be considered for AI, characteristics like consciousness and sentience will have to emerge and also be proven. Scientists and philosophers have neither an agreed-upon definition of these characteristics nor agreed methodologies for testing for their presence. I highly doubt that consciousness and sentience will be both rigorously defined and agreed upon in five years, and I doubt even more that they will emerge in AI systems within this short timeframe. Maybe in five years there is a chance that AI gets personhood in the same way that corporations are “people” because of Citizens United, but I don’t think that this is what Harari means.

    Quote from Melanie Mitchell, a computer scientist, discussing the limitations of AI in addressing complex problems and the criteria for assessing intelligence.

    Agreed. AI won’t have cured cancer or solved physics. But can all cancers be cured? Does making most cancers much more treatable count as a cure? Can all physics be solved? AI will help push the limits of scientific exploration, and I think will have made meaningful contributions to the fields of medicine and physics by 2031. But these absolute milestones will not be hit.

    Quote from Helen Toner, A.I. policy researcher, discussing the role of A.I. in scientific fields and its limitations in personal tasks.

    Yes – as stated previously, AI for science will be a big deal – more on that later in the post. I actually used AI to help me do some research for summer camps for my kid this year and it was helpful in replacing a lot of Googling and other menial tasks. In 2031 I do think that you’ll be able to say to Gemini – find and reserve a spot for my kid at a summer camp. Here are her interests. Give me a couple of options, let’s have a back and forth, then do it. Now will a parent have to call to confirm, or sign some paperwork – probably. But I hope (and think) that we will have the technical capability to do this in five years.

    Quote by Nick Frosst, Co-founder of Cohere, discussing the future significance of A.I. in everyday applications.

    Yes, AI will be used to describe so many different product features that it will cease to be very distinct from other aspects of digital technology. Under the hood, or in the interface, AI will play a huge role. I’m hopeful that we see some new and interesting design and interaction pathways that take advantage of AI technologies to reimagine how we use digital tools. I don’t think I agree with the characterization that “the most banal use cases [will] have the most transformative impact.” It’s very possible that as AI capability and adoption both increase, and the technologies diffuse through the economy, some root-node problems are addressed that have meaningful impact, enabling new ways of solving problems that have previously been intractable. If you look at the area under the curve, yes, it might be that bundling up the banal use cases like automating insurance claims processing or personalizing digital ad copy has the largest impact, but not the most transformative.

    Quote by economist Carl Benedikt Frey discussing the impact of A.I. on productivity and prosperity.

    Fully agree. The promise of AI should be to deliver unto the human race an abundant future, free of material scarcity, where we can use the technological, societal, and economic scaffolding of society to increase human flourishing. Cheaper spreadsheets will not do this. Hopefully, cheaper spreadsheets allow more work to be done with less human labor, and we can direct that labor up the value chain of human flourishing.

    Quote by cognitive scientist Gary Marcus about the timeline for the arrival of artificial general intelligence.

    Here we have it – our first mention of the mythical “artificial general intelligence” in the article. As AI has become more mainstream in the first half of the 2020s, AGI as a concept has evolved from a distant sci-fi consideration to a distinct near-term possibility. Ask 10 AI researchers for their definition of AGI and you’ll probably get 11 different answers. Because of the lack of a rigorous definition of and consensus on what AGI means, I think it’s probably time to move away from AGI as an actual milestone. As far as what Marcus says here, he’s probably right about 2027. There are still lots of things missing from a system that would represent what I’ll call “digital AGI.” I define this version of AGI as a system you could drop into most remote-work-friendly occupations that would perform as well as the median human at the work tasks within that occupation. I think that probably will happen by 2032, at least in some fields and occupations.

    Quote from Ajeya Cotra, an A.I. risk researcher, discussing the potential automation of A.I. companies and its effect on future A.I. progress.

    Agree with this fully – it is a goal within many frontier AI research labs to automate the process of AI research, the first step necessary to kick off the recursive self-improvement loop. Within this construct, AIs would be in charge of planning, executing, and evaluating AI research, and implementing these changes into subsequent versions. In five years, it’s very likely that humans are still in the loop in this process, both from an operational and safety perspective. I’m not sure that an increase in research speed will have as many consumer-facing benefits as internal benefits. Put another way, I’m not sure that speed of AI progress will be the only bottleneck to new capabilities that make the technology more impactful.

    Quote from Aravind Srinivas, Chief Executive of Perplexity, discussing the desire for personal A.I. assistants and advocating for user privacy and security.

    Strongly agree with this, provided that AI’s capability to actually do useful things in the world improves. Right now, agentic AI is still in its relative infancy. However, I’ve started using Claude Code recently and have seen the light. Agentic AI will be extremely powerful when either its capabilities improve in areas like memory, statefulness, and continual learning, or the systems that AI interacts with evolve and start to be designed with AI agents, not just humans, as the intended users. I think we will see Apple, Google, and OpenAI take big steps this year in the realm of truly personal AI assistants, as I predicted in my 2026 AI predictions post.

    My biggest bet about the future of AI in five years

    The most meaningful disruptions in the business landscape will result from enterprises that are designed from the ground up with AI in mind. Because organizational change and digital transformation take a long time in large companies, and the people who are currently employed within those companies have a vested interest in their labor not being replaced by AI, adoption is probably going to be slower than most AI boosters think. But as an example, I think an insurance company created with 5% of the headcount of a comparable insurer might disrupt that industry. Sure, there will probably still have to be humans in the loop for legal or physical reasons, like going out into the world and verifying claims, but I can foresee a company that subcontracts a lot of that work and is run mostly by AI. A company like that might be able to offer similar services and coverage at a much lower price because it is paying for 5% of the human labor its competitors are paying for. This is more verbose than the responses given in the NYT piece, but that’s the benefit of inviting myself to the panel.
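    To make the 5% headcount intuition concrete, here is a toy cost comparison. Every number in it is an invented assumption for illustration, not industry data:

```python
# Toy cost model comparing a traditional insurer with a hypothetical
# AI-native insurer running on 5% of the headcount. All figures are
# invented assumptions for illustration only.
TRADITIONAL_HEADCOUNT = 10_000
AI_NATIVE_HEADCOUNT = int(TRADITIONAL_HEADCOUNT * 0.05)  # 500 employees

COST_PER_EMPLOYEE = 120_000    # assumed fully loaded annual cost, USD
NON_LABOR_COSTS = 800_000_000  # assumed claims, reinsurance, compute, etc.

def annual_operating_cost(headcount: int) -> int:
    """Total yearly cost: labor plus everything else."""
    return headcount * COST_PER_EMPLOYEE + NON_LABOR_COSTS

traditional = annual_operating_cost(TRADITIONAL_HEADCOUNT)  # 2.00 billion
ai_native = annual_operating_cost(AI_NATIVE_HEADCOUNT)      # 0.86 billion
reduction = 1 - ai_native / traditional

print(f"Total cost reduction: {reduction:.1%}")  # prints "Total cost reduction: 57.0%"
```

    Under these made-up numbers, total costs fall by more than half even though only the labor line shrinks, which is the kind of margin that lets a challenger undercut incumbents on price while offering comparable coverage.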

    Will The AI Doctor See You Now?
    Image depicting a discussion on A.I.'s impact on medicine, featuring various contributors' opinions. The contributors include Gary Marcus and Nick Frosst, along with illustrations showing different levels of anticipated impact: Small, Moderate, and Large.

    I would probably put myself in the “Moderate” category here, with the hope that the actual result is “Large.” I think Marcus is focusing a little too much on LLMs alone, not considering things like AlphaFold from Google DeepMind. There is so much potential for AI in medicine that I couldn’t do justice, in a short paragraph, to all the ways I believe it could be useful. In the clinical setting, I think LLM-based tools could definitely aid in the way that Frosst mentions and thereby increase the amount of time doctors spend focused on patients, not paperwork. Ideally, agentic AI systems could reduce friction within notoriously arcane and byzantine healthcare workflows. Here’s a humorous tweet that might not be that far-fetched in the coming years.

    On the research and development side of the house, I believe AI could be extremely useful in helping explore the very large problem space of diseases and potential treatments. I’m not quite sure that our medical system is ready to take advantage of the speed that AI can deliver though – if a new drug candidate can be proposed with the help of AI but it still takes 5-7 years to get to market, we’ve only sped up a portion of the process. This would still be a good outcome, and hopefully AI would increase the hit-rate for clinical trial success, but we should be looking to apply AI to every step in the drug development, testing, and approval process to get treatments into the hands of doctors and patients sooner. We should also be focused on reviewing the current regulatory processes that govern getting these therapies to market and ensure they are updated appropriately to take advantage of a potentially vastly increased volume of new treatment candidates.

    An Army of Superhuman Coders in a Data Center
    Image of a discussion about A.I.'s impact on programming featuring several speakers, including historians and economists. Key points discuss coding as a manipulation of information and the use of A.I. tools by developers.

    “Large” – obviously. In the extremely polarized world of AI opinions, it’s nice to stumble upon a point of unity. All but the most ardent AI skeptics admit that current AI systems provide a substantial power-up to human programmers. As Harari notes, coding is an ideal arena for AI to showcase its skills. Programming tasks are easily verifiable (does the code do what you wanted it to do?) and there’s an enormous amount of training data in existence for models to learn from. As I mentioned previously, I have only recently gotten around to experimenting with Claude Code and have been overjoyed with the results. If you haven’t vibe-coded anything yourself, please jump in – it’s a real joy to see an idea come to life on screen in a few minutes. Frey’s point about human review is important, but I don’t know for how long.

    An Eager Collaborator
    Image depicting a panel discussion about the impact of A.I. on scientific research, featuring various experts including Melanie Mitchell and Aravind Srinivas, along with their insights on A.I.'s role in science.

    I would place myself in the “Large” camp here. I don’t fully agree with Mitchell’s assertions. In December, OpenAI published research showing that GPT-5 created novel wet-lab protocols that increased the efficiency of a molecular cloning protocol by 79x. AI is solving Erdős problems now. AlphaFold famously won its creators a Nobel Prize for its advancement in the realm of protein folding. The frequency and impact of these accomplishments is much more likely to accelerate in the future than to slow down or stop.

    I watched a great episode recently of the Cool Worlds podcast by Professor David Kipping of Columbia University. In the episode, he details a recent meeting he attended where the consensus was that AI is absolutely vital to the practice of science moving forward. He thinks, as do I, that AI will soon be viewed as indispensable a tool in science as the calculator or the computer. He also wrestled with the possibility of an identity crisis among scientists who may feel rendered obsolete by the technology. This is a valid concern, but I find solace in Srinivas’s response to this question – the role of humans will be even more concentrated on asking the right questions and choosing where to focus their research efforts. If we can use AI to speed up the process of science, and also increase the volume of science we do as a species, the ceiling for progress is raised significantly.

    Improving the way we move through the world
    A group of headshot images of individuals with varying levels of enthusiasm about AI's impact on transportation, accompanied by quotes discussing its benefits, such as efficiency, safety, and self-driving technology.

    Frosst’s point is spot on. I think it’s very likely that the short-term benefits of AI in transportation will be less flashy than self-driving cars, but only if the entities in charge of maintaining and building these systems embrace the technology. If we can use AI to approach problems in transportation more proactively, or tackle challenges that would previously have been cost-prohibitive, we could start to realize some great benefits. An example of this is a company I encountered last year at O3’s 1682 Summer Session conference on Innovation and AI – Scout Robotics. They are focused on using AI and robotics to reimagine inspection and maintenance for transportation infrastructure. There are thousands of miles of railroad in this country, the inspection of which is currently constrained by the cost of the labor to carry out those inspection operations. If we can use AI to alleviate some of these constraints, we can make grand problems, in the realm of transportation and elsewhere, more tractable.

    On the self-driving cars front, I’ve written previously about why I think this is a tricky problem worth solving. I’ll point you to that post for a relatively recent vision related to autonomous cars.

    Education – What, how, and why
    Discussion on the impact of AI on education featuring insights from experts.

    I would probably put myself off the charts here, somewhere in the realm of “Monumental”. I think AI has the potential to not only disrupt the way education is delivered, but at an even more important level, AI’s effects on the world may call into question what the true goals of education are. If we find ourselves in a future scenario, let’s say in 2040 (about the time my son will be going through his first year or two of college), where AI labor has proven to be a sufficient substitute for human labor in a wide variety of fields, what exactly will we be preparing our children for? This problem is so large, so consequential, and would require such coordination to re-think current systems, that I believe it will probably just be kicked down the road, where the collective hope is that “things will just work out.” I am very averse to this approach and would agree with my good friend Henry Cavill here – hope is not a strategy.

    On the positive side of things, as Frey and Toner indicate here, I think AI can help rethink and remake education for the modern era. By providing personalized tutoring and tailoring educational content to specific pupils’ needs and aptitudes, we may be able to better serve individual students and allow them to progress at the pace that fits them, not the pace at which the entire class progresses. One of the most interesting podcasts about AI that I listened to in 2025 was from one of my favorite AI content creators – Nathan Labenz at The Cognitive Revolution podcast. This episode from June of 2025 is an interview with Mackenzie Price, founder of Alpha School, a school that uses personalized AI instruction to accomplish the learning of a traditional school day in two hours, with the rest of the time focused on passion projects, sports, arts, and other activities where “guides” (not teachers) work with the students.

    Education is a third rail on most parts of the political spectrum. There is widespread agreement that our education system is outdated and not fit for purpose in many aspects, but almost no one can agree on the ways to make it better. A theme that underlies lots of my thinking about AI is that AI technologies may help us reimagine better ways of doing things that would have been unthinkable in the past. What if we really could enable every child on earth to access a free personal tutor? What if we could reorient the goal of education to not simply prepare workers, but to build a population that learns, grows, and thrives to bring about positive changes in the world? These may seem like pipe dreams for now, and Marcus’s observation that AI is used for rampant “cheating” and assignment completion is inarguably correct. The negative effects/uses of these tools today should not preclude us from realizing a better future by the smart integration of the tools.

    An alien entity traverses our minds and our world
    A panel discussion on the impact of AI on mental health, featuring quotes from experts Melanie Mitchell, Yuval Noah Harari, and Nick Frosst, highlighting concerns and benefits of AI chatbots in therapy.

    I’m in the “Moderate” camp here in the short term and the “Large” camp in the long term. We have already seen documented cases of horrific outcomes related to LLM psychosis, and I expect that to continue to be a problem. Any technology in the hands of hundreds of millions of users is going to generate adverse outcomes. I don’t want readers to view this as insensitive or as downplaying the negative impact in these cases, which include suicide and murder. These are seriously challenging ethical situations, and they have devastating consequences in the most extreme cases. Add to that the fact that these systems are non-deterministic – each interaction is generated anew rather than produced by software that follows traditional IF-THEN rules – and it becomes clear that they will never fully account for every edge case. I want to see AI companies put real effort into explaining how they monitor for conversations and interactions that look like they may result in real mental or physical harm to people. I don’t know how you solve for this legislatively.
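    To illustrate the contrast, here is a minimal sketch (all function names and replies are invented) of why rule-based software can enumerate its edge cases while sampled generation cannot:

```python
import random

# Traditional IF-THEN software: the same input always takes the same
# branch, so every case the authors anticipated is handled identically.
def rule_based_reply(message: str) -> str:
    if "crisis" in message.lower():
        return "Please contact a crisis hotline."
    return "How can I help?"

# Toy stand-in for an LLM: the reply is *sampled*, so two identical
# inputs can produce different outputs, and no finite list of rules
# enumerates everything the system might say.
def sampled_reply(message: str) -> str:
    candidates = [
        "Here's one way to think about it...",
        "Have you considered talking to someone?",
        "I'd suggest taking a break.",
    ]
    return random.choice(candidates)

# The rule-based path is fully reproducible; the sampled path is not.
assert rule_based_reply("I'm in crisis") == rule_based_reply("I'm in crisis")
```

    Real models are vastly more complex than this sketch, but the core point stands: monitoring has to happen on the outputs, because the space of possible outputs can’t be enumerated in advance.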

    Alternatively, and similarly to the education point above, it might be hugely beneficial to put a therapist in the pocket of billions of people around the world at very low cost, in the same way it would be to put a tutor or personal assistant in their pocket.

    I do love Harari’s point here, and it’s something I think about consistently as it relates to AI. We are all guinea pigs in the AI revolution. You may still be able to insulate yourself by not using the tools right now, but my guess is that in five years, AI will saturate nearly every part of society, and to remove yourself fully you might have to adopt a lifestyle similar to removing yourself from the influence of electricity. Add in the potential existential crises that arise from the lack of need for a huge swath of human labor at some point in the future, and you get a huge impact on human mental health in the long term.

    Creation and worth – slop or transcendence?
    A panel discussion featuring various experts discussing the impact of A.I. on art and creativity, with portraits of the participants including a computer scientist, historian, and co-founder of a tech company. Quotes highlight differing perspectives on A.I.'s role in creative fields.

    “Large” – positively large in the most optimistic scenarios and negatively large in the most pessimistic scenarios. I find the proclamations about AI as it relates to art and entertainment very interesting, and could probably talk at length about it. If you’ve gotten this far, I don’t want to lose you, so I’ll keep it succinct. AI is probably going to be able to enable lots of people to create material they would not otherwise be able to create. AI itself cannot generate “art” but it can generate “content.” Art, in my opinion, has to have an intentional component, and intentionality (also in my opinion) requires sentience. Hollywood is not “dead”, the video game industry is not “dead”, influencers are not “dead.” These landscapes will have new entrants to compete with that rely heavily (short term) or fully (medium term) on AI, but there will be a continued preference for human-involved material for the foreseeable future.

    Update Your Priors – AI misconceptions worth dispelling
    Panel question: What’s one misconception about AI that you think is worth dispelling?
    Text excerpt discussing misconceptions about A.I. by Melanie Mitchell, a computer scientist.

    I don’t necessarily see this as a misconception. Disregarding “magic” and focusing on “emergent” I think is important here, as semi-conventional wisdom holds that any sufficiently advanced technology is indistinguishable from magic. I think as AI tools become more advanced, and especially more agentic, we will see emergent capabilities that were not predictable or part of the training process. Semantics are important but sometimes frustrating, so I’ll leave you with this thought – we probably will not always be able to predict the capability of AI tools, and the result of that will be both incredible and frightening.

    Quote by Yuval Noah Harari discussing the control and awareness of A.I.

    For right now, AI is, for all intents and purposes, under human control when it comes to the motivation to act. You don’t birth an AI into the world and, without any instruction, watch it go off and start accomplishing things. There is no root motivation or direction. Now, this could change in the future if sentience develops. I’m going to put it on my vision board to one day talk to Harari about sentience and consciousness, as these aspects of AI development probably have the most impact on the extreme long term (100, 1,000, 10,000 years into the future). I do agree with the second part, that models will be able to pass a consciousness Turing test within a decade. Agentic AIs are becoming more commonplace, and they aren’t asking permission to carry out tasks, so I guess you could in some sense say they are “deciding” which actions to take. Currently, though, I would think of that more like my heart “deciding” to continue beating.

    A quote from economist Carl Benedikt Frey discussing how AI is transforming DIY repairs by providing instructions and parts lists based on photographs.

    Agreed. I’ve already used AI tools to help troubleshoot issues around my home or work on other handyman-style projects. I wouldn’t trust it in areas where mistakes are hazardous or costly (e.g., electricity or plumbing), but it serves as a very useful first pass in this realm. I do think, however, that manual work will probably be “safer” from labor displacement for a decent amount of time. The world is very tricky. I think it’ll probably be about eight years until a humanoid robot can assemble a never-before-seen Lego set (I’m working on a future post about lots of different physical benchmarks I’d like to see, and predictions related to them), so if that’s the case, it will be more than eight years before a humanoid plumber is showing up to fix my leaky sink or install a new water heater.

    Quote by Gary Marcus, cognitive scientist, discussing the confusion surrounding large language models and their limitations in mimicking human intelligence.

    This one is hard for me – people are greatly confused about large language models, and do attribute humanlike intelligence to an intelligence that is structured and trained very differently from our own. There are many philosophical thought experiments that are useful to consider when thinking about AI, and one of the most famous is John Searle’s Chinese Room. In short, the thought experiment imagines a computer (in Searle’s original telling, a person locked in a room) that can carry on a perfect conversation in Chinese by passing pieces of paper back and forth with a human participant. The system uses a rulebook to accomplish this task, painstakingly looking up symbols and transcribing responses before handing them back. From the human’s perspective, they seem to be conversing with a fluent speaker. The system doesn’t actually understand anything about Chinese, let alone how to speak it.
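    The room’s mechanics can be sketched in a few lines; the rulebook entries here are invented stand-ins:

```python
# Minimal Chinese Room: a mechanical lookup maps incoming symbol strings
# to canned replies. The program produces fluent-seeming output while
# understanding nothing - it only matches symbols against a rulebook.
RULEBOOK = {
    "你好": "你好！很高兴认识你。",            # "Hello" -> "Hello! Nice to meet you."
    "你会说中文吗": "会的，我说得很流利。",    # "Do you speak Chinese?" -> "Yes, fluently."
}

def room_reply(symbols: str) -> str:
    # Follow the rulebook mechanically; no meaning is ever involved.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."
```

    From the outside, the replies look fluent; on the inside, there is only symbol matching, which is exactly the intuition Searle wanted to pump.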

    I think the real question here is: if the output of a “mimicry machine” is indistinguishable from humans, does it matter that it is simply a mimicry machine? I also think Marcus’s second point about reasoning flexibly in the face of the unknown is a test where LLM-based tools have made continuous progress; they don’t have nearly as hard a time with new idea generation as they once did. If you type into an LLM right now and say “think of a new idea,” a new idea will be provided to you. How “new” the idea is will probably be defined by personal preference, and unfortunately, by your priors related to how you think about AI. You might say “oh, it’s just a combination of existing ideas,” and if that’s the case I’d point you to tabula rasa and let you wrangle with that for a few minutes/the rest of your life.

    Quote by Nick Frosst, Co-founder of Cohere, discussing misconceptions about AI's capabilities.

    Agreed. Again, sentience and consciousness are key areas to consider here when you start using the word “thinking.” However, it’s worth thinking about how much of human cognition is comprised of “sophisticated pattern matching.” My guess is, based on the ability of AI tools today to produce meaningful outputs, that this portion is huge.

    Quote by Ajeya Cotra, an A.I. risk researcher, discussing misconceptions around A.I.'s immediate impact and the associated catastrophic risks.

    I am standing, hooting, and hollering for this one. The real world is tricky, annoying, rough, complex, and chaotic. AI capability in a perfectly organized and receptive environment is likely far greater than AI utility in the real world full of humans and other unpredictable entities. Every technology has skeptics, bubbles, hucksters, vanguards, visionaries, and Cassandras. In the age of AI, all of these people have social media megaphones and captured audiences. The future is really, really unpredictable. My advice – don’t be definitive, update constantly, try to balance emotion and objectivity. I also don’t think this should be viewed only from a catastrophic standpoint – a new capability or new model might be the butterfly wing flap to a world of abundance, even if it simply looks like a dot release from a frontier AI lab today.

    Portrait of Aravind Srinivas, Chief Executive of Perplexity, with a quote discussing misconceptions about A.I. and employment.

    This is far too confident a take for me. Like I said above, we have no idea what the employment effects of AI will be. New technologies almost always shift the nature of work, and in some cases they remove whole types of work from society. How many horse-drawn carriage drivers or telephone switchboard operators have you seen recently? I think AI is different from these previous waves of creative destruction because it has the potential to substitute not simply for the output of human economic endeavor, but for the substrate of labor itself. We have had tools in the past that accomplished this for physical labor, but we have never had it for cognitive labor at the scale we are on the precipice of. The first computers were people – they were in charge of actually computing mathematical operations. We began automating this cognitive labor in the mid-20th century, and it unleashed an unprecedented wave of economic growth and invention. If the previous 80 years of technological progress grew from that seed, what could the next 80 years look like if we can conquer all cognitive domains?

    Quote by Helen Toner, A.I. policy researcher, discussing the environmental impacts of the A.I. industry compared to other industries.

    Absolutely agree, especially in the short term. Now, if we have the entire planet tiled with data centers that use gas turbines for power, that would be extremely bad. And I think the race dynamics at play today, combined with the United States’ notoriously slow pace of building new infrastructure, might lead to some short-term deleterious environmental impacts. But overall, in the end, environmental worries will be low on the list of concerns from AI. A particular shout-out here to one of my favorite voices in the AI space – Andy Masley, who has tried tirelessly to accurately quantify the water usage in the realm of AI. My hope and prediction is that AI will speed up our ability to make advances in electricity generation, storage, and transmission, hastening our departure from fuel that dumps carbon into the atmosphere.

    Fast Forward – Life in 2030
    Question: Which of the following statements do you think will be true by 2030?
    "Unemployment in the United States will have increased significantly as a result of A.I." True or False responses from eight individuals: Marcus, Mitchell, Harari, Frey, Frosst, Cotra, Srinivas, and Toner.

    I say “False.” Many factors – including, at a minimum, organizational friction in adopting these tools in ways that displace labor, and AI’s lack of physical capability in the real world – mean that the employment impact of AI won’t be that large by 2030. I’m fairly confident in that prediction, and I also predict the answer will be different in 2035, 2040, 2050, 2100, and so on. Looking forward to continuing to elaborate on this over the next 80+ years.

    A graphic showing statements about AI leading to a breakthrough treatment or cure for a major disease, with a group of people categorized as 'True' or 'False'.

    “True” as long as you don’t specify that AI accomplished this feat “itself.” Scientists will be supercharged by AI, and humanity will benefit greatly from this fact.

    A group of eight individuals, with four categorized under 'True' and four under 'False,' discussing the statement about AI's role in a major global security event.

    “True” and terrifyingly so. With the emergence of agentic AI, and the desire amongst optimists to dangerously skip permissions, it’s very likely we will see a major global warning shot, intentional or unintentional, by 2030. I hope that it will be in a relatively immaterial domain, and that we can recover from it and learn from it. I’m worried that the inverse scenario (a materially bad impact or loss of life, from which we learn nothing) is also fairly likely.

    "Most Americans will be using A.I. chatbots at least once a day." chart with true and false opinions represented by a group of diverse individuals' headshots.

    “True” but it might not be in chatbot form. I think it will be in ambient assistant form, like a Siri or a Gemini. This will depend on AI’s ability to actually “do stuff”, but I think it will be there and will be as omnipresent as the smartphone itself.

    The Arrival of AGI
    A group of experts discussing the likelihood of achieving artificial general intelligence (A.G.I.) comparable to human intelligence within the next decade, with individual opinions presented.

    I would put myself in a new category here – “Likely.” Harari is correct that defining AGI as comparable to human intelligence is meaningless these days. I think we will definitely have something that one could reasonably consider “Digital AGI” by 2036, previously defined in this post as a drop-in remote worker. I don’t think we will have “Physical AGI”, meaning an embodied AI that can accomplish most physical tasks in the real world that a human can by 2036.

    Historical Analogs – Have we been here before?
    A discussion on technologies that have had a transformational impact similar to A.I., featuring insights from various experts including a computer scientist, historian, economist, cognitive scientist, and co-founder of a tech company.
    Quote from Ajeya Cotra discussing the transformative potential of A.I. compared to agriculture and human evolution.

    I love this question. It’s so hard to answer definitively, but luckily, since this is my blog, I’ll just tell you what I think right at this moment in February of 2026. I’m going to go out on a limb here and say that in the short term, let’s say between now and 2100, it will be as impactful as the steam engine. In the long term, its impact will fall somewhere between the emergence of Homo sapiens as a species and the invention of agriculture.

    Wisdom for the age of AI
    An article featuring a panel of experts sharing advice with high school students on understanding artificial intelligence and preparing for the future.
    A discussion featuring insights from Aravind Srinivas, Helen Toner, and Yuval Noah Harari about adapting to the age of AI, focusing on questioning, personal development, and balancing intellectual, social, and motor skills.

    I think these are all great pieces of advice honestly. And I really credit the Times on getting a diversity of opinion and experience for this piece. I’ll be happy to add my own if I can make my own luck and get an invite to a future version of this panel. My favorite piece of advice here is probably Srinivas’s about learning to ask the right questions. This is probably due to my deep appreciation of design thinking, which is all about asking the right questions to understand the problem you’re truly solving. In the age of AI, we will have more intelligence than we ever have before to solve any challenge we want. It will be crucial to point this intelligence at the right problems to create a better future where we can all thrive and benefit from this miraculous and scary time to be alive.

    As far as my own advice to a high school student, I would say the most important characteristics to have are flexibility, creativity, and adaptability. The world is changing quickly, and having the ability to adapt to the new realities we encounter each day is going to be key to being successful and to avoiding being overwhelmed by the pace of the present.

    A note on AI use: Here at Clearly Intelligent I’ll be adopting a scale suggested by Seb Krier that explains how I use AI in generating my posts. I’ll file this under 1.2 on this scale. I used AI to refresh my memory on the Chinese Room and tabula rasa, as well as to search and find some links used in the article that I’ve collected over the past few years thinking about AI. I also used Gemini to generate the cartoon image at the beginning of the post.

  • This post is not going to focus on AI. It’s probably going to be more political than I would have previously thought my writing here would be. But I don’t have a problem with that, because in this post, I’ll be arguing for a political philosophy that every American should cherish and work hard to keep extant in our political landscape: Liberalism.

    What is Liberalism?

    Liberalism is a political philosophy that emerged during the Enlightenment, when John Locke published his Two Treatises of Government in 1689. In this work, Locke argues against the divine right of kings to rule, and against the power of feudal lords whose fiefdoms dotted the pre-Industrial Revolution world. He instead argues for things like consent of the governed (making sure people have a voice in the way they are ruled) and for Natural Rights: rights to life, liberty, and property that exist before any political consideration. Liberalism laid the foundation for liberal democracy, inspired the American Revolution, and set the United States on a path to becoming the richest, most prosperous country in the world. It also inspired our collective journey to be the freest country in the world.

    We’ve made varying amounts of progress on each of these axes, depending on how they are measured. We have the highest GDP in the world, and rank 10th in GDP per capita. We rank 61st in life expectancy and have far worse income inequality than many other developed countries, with that inequality worsening greatly over the past three and a half decades. What I hope to show with these statistics is that we are indeed on a journey in the United States. It hasn’t always been pretty (open any history book) and progress can seem haltingly slow (look around you in 2010, then in 2025 – you’ll notice not much has changed), but a commitment to Liberalism means that we can continue that journey. I fear we are in danger of losing that commitment, and I aim to defend it.

    The Evolution of Liberalism in the United States

    There are many flavors of liberalism, too many for me to fully dissect here. Based on some rudimentary research that I’ve done to orient myself for this post, the main point of debate within liberalism is whether the state primarily acts to suppress these Natural Rights (Classical Liberalism) or to enable them to exist (Social Liberalism).

    If I had the desire, and the actual expertise, I could probably write an entire book on this, but I’m sure others have done it better than I could. Instead of doing that, I’m going to show you a graph of how I think the Democratic Party has treated its commitment to liberalism over the years. Special thanks to my co-author of this chart, Claude Opus 4.5.

    Evolution of American Liberalism

    Tracing the Democratic Party’s ideological journey from classical liberalism through social democracy to neoliberalism

    [Chart: eras plotted along two axes, State Intervention in Social Welfare (low to high) and Commitment to Individual Liberty (low to high), with quadrants labeled Classical Liberalism, Social Democracy, Authoritarianism, and Paternalism. The plotted eras are the Founding Era (1776–1860s), Gilded Age (1870s–1900s), Progressive Era (1900s–1920s), New Deal (1933–1945), Great Society (1964–1968), Neoliberal Turn (1980s–1990s), Obama Era (2009–2016), and the Current Moment (2020s).]

    Key observation: Note the leftward shift on the X-axis (state intervention) from the Great Society peak (1960s) through the Neoliberal Turn (1980s-90s). The Democratic Party maintained high commitment to social liberties while retreating from robust welfare state expansion toward market-oriented policy solutions.

    Liberalism vs. Authoritarianism

    My main motivation for writing this post is the horrific killings of both Renee Good and Alex Pretti. There have been many things that I’ve been disgusted by during the Trump era, but these incidents were particularly horrific to me because they showed that tribalism and the politics of retribution and punishment have reached levels that typically precede extremely bad times. I’m still working my way through Slouching Towards Utopia, and the parallels between what we are seeing today and what Germany saw in the 1920s and 1930s are not lost on me: the emphasis on punishment of the “other,” the questionable adherence to the rule of law, and the consolidation of power across aspects of government. These are not ingredients for a stable, liberal democracy. We can decide whether the Trump years are a decade-ish departure on our rocky journey as a polity, or whether they represent a fork in the road that leads us into an authoritarian future.

    I don’t want to be the one who continually blows the “descent into fascism” whistle, because I don’t think it’s particularly useful to jump to those sorts of extremes, and doing so is easily cast as histrionic by people who do not agree. But I also think we have been frogs in a very large cauldron, and the temperature has been turned up every day for a dozen years. It’s kind of hard to beat back the authoritarian-curious accusations when you are, six years later, still trying to come up with a fake story about how you actually won an election.

    The killings of both Good and Pretti have also shown that there are cracks starting to form in the Trump coalition, which gives me hope that there are swathes of voters who are open to examining their own loyalty/fealty to Trump. It doesn’t give me hope though that many elected Republican officials continue to be yes men and foot-soldiers. My hope for these officials is that they realize that Trump will not be in power forever, and they will no longer be solely judged by their obedience to him.

    Liberalism and the Fourth Industrial Revolution

    So what relevance does a commitment to liberalism have in the age of AI? Well, for starters, evolutions of liberalism have often coincided with, or been forced by, changes in the economic landscape that necessitated new thinking around the roles that different entities play in society. The Progressive Era was liberalism’s response to the fact that the Second Industrial Revolution, beginning around 1870, created lots of new problems that the state was best positioned to address. Workers’ rights, anti-trust regulation, and the beginnings of the social welfare state in countries outside the US were all born out of the rapid industrialization of that era. Surely child labor laws reduced the amount of factory output that was possible, but that seems like a pretty good trade to me.

    I think the US got fairly lucky with the Third Industrial Revolution, which isn’t as widely recognized as the first two, but which I equate with the digitization and miniaturization of the economy, starting in about the 1950s. At this point, much of the world was still rebuilding from WWII and the US was a major manufacturing hub. China, a global superpower today, was very much a developing nation. As such, there was not as significant an effort towards, or need for, large-scale economic reform – the US was doing pretty well economically, the burst of technology was making lots of cool products (TVs, automobiles, personal computers) widely accessible, and these products made life materially better for the population. The Civil Rights movement and the Great Society of the 1960s expanded both civil liberties and the welfare state in ways that have paid enormous dividends over the last 60 years.

    However, the relative stagnation and economic uncertainty of the 1970s led to a new kind of liberalism: neoliberalism. Neoliberalism is akin to a modern reimagining of classical liberalism, promoting free trade, free markets, and a more limited governmental role in the economy. By many measures, it was successful – our GDP grew enormously, the world at large became a global marketplace, and the proliferation of consumer products and technologies that we all enjoy today was accelerated, if not enabled, by this turn. It also resulted in a decoupling of wages from productivity gains and severely exacerbated income inequality.

    Currently, we find ourselves in a situation that I believe, at least in part, is due to a failure of neoliberalism to effectively bring the entire populace along for the ride. I think Trump 2016 was an early mainstream indication that there was trouble brewing – people were dissatisfied with the status quo and they showed it at the ballot box. I still think most people are dissatisfied with the status quo, which might lead to a pendulum swinging back and forth electorally. I think that the Democratic Party has fallen into the trap of corporate-pleasing-GDP-maxxing-neoliberalism. I don’t necessarily begrudge the thought process behind this – for most of human history, if you made a society richer, you made a society greater. I think that trend has broken and needs to be cast aside as the main motivation in governing.

    What I want to see is a Democratic Party that is focused on making society greater. Focusing on human thriving, not GDP growth. I also want to see a Republican Party that is focused on making society greater. There might be very different ideas on how to get there, but I hope that this common goal, of a democratic, liberal society, is what follows this period of upheaval in American politics.

    On the eve of the fourth industrial revolution, this is uniquely and specifically important. I came across a really timely tweet today that eloquently summarizes why.

    Ethan Mollick (a professor focused on AI at the University of Pennsylvania) is correct that these previous periods of history have worked out pretty well in the end. There are lots of factors about the AI industrial revolution, though, that will make it even more challenging. The speed and breadth of AI’s diffusion will be faster than that of the physical goods that needed to be shipped port to port in previous revolutions. The scope of the economy the technology will impact will be broader than any technology we’ve seen previously (knowledge work first, then physical work). And the fact that AI is positioned to possibly substitute for labor itself means it threatens one of the only two bargaining chips a polity really has to make an impact in society.

    If we lose both our ability to use our labor and our votes as bargaining chips, by slowly sliding into anti-democratic/authoritarian-curious governance, then We The People will be at the mercy of whomever is in power when that comes to pass.

    The Third New Deal

    Americans yearn for a political entity that will fight for them. Right now, the two major parties, at least where the concentration of actual political power lies, are both corporatist parties. Trump won on the promise that he would fight for the average American, then proceeded to plunge the country, and maybe the world, into a prolonged period marked by division, incompetence, and malice. Democrats largely, either by being true believers or by being too afraid to upset the lucrative ecosystem of the affluent political class, have remained mostly aligned to a GDP-maxxing viewpoint.

    A new strategy is needed, one that will break the spell of both Trumpism and Corporatism. As we near the centennial of its predecessors, a successful candidate in 2028 will propose a Third New Deal to the American people, one focused on maximizing the flourishing of the citizenry of this country. I’m not sure what the exact tenets of this would be. It’s going to be hard to strike appropriate state action that protects and improves the wellbeing of the people of the United States without hampering innovation in the age of AI. But meaningful pursuits are usually difficult. If we don’t try to address the societal frustration and malaise that’s manifested in recent memory, we will simply be kicking the can down the road. This might lead to further erosion of the social contract, further fracturing in society, and open the door for even more extreme politicians to take power in the future.

    When you pair this corporatist attitude that’s broadly applicable across the political spectrum today with the idea that we are basically betting the success of the global economy on the success of AI, you get a very volatile mix with a wide range of potential outcomes. I don’t want the future of the human species to depend on the benevolence of a yet-to-be-determined CEO of THE AGI COMPANY. You shouldn’t want that either.

    Worthy Opponents

    I’m not trying to hide the ball on where I stand politically. I’m not even trying to convince classical liberals to adopt my viewpoints. What I am trying to do with this post is to defend and preserve liberalism, to encourage everyone to weed out the roots of authoritarianism, even if they occur on “your side” of the aisle. There are few things more worth fighting for than our ability to live in a liberal democracy. It’s not perfect, but as we approach our 250th birthday, we must remember that we’ve only got a republic if we can keep it.

    A note on AI use: Here at Clearly Intelligent I’ll be adopting a scale suggested by Seb Krier that explains how I use AI in generating my posts. I’ll file this under about a 2 on this scale. I used AI to do research on liberalism and create the chart within the post. The opinions herein are mine, and mine alone.

  • In 2026, AI will undoubtedly continue to be a huge part of our social, economic, technological, and political progression as a species. Here are a few things I think we’ll see in 2026 in the world of AI and elsewhere.

    Autonomous Driving Accelerates and Runs Into the Real World

    Waymo is planning to aggressively expand its service areas throughout 2026 and beyond. Buoyed by impressive safety data and growing (albeit slowly) acceptance of autonomous driving, it will continue to increase its year-over-year miles logged and cover more geographic areas than it previously has. I’m very bullish on autonomous driving and think well-designed and well-deployed autonomous driving systems will be widely available to riders in the coming decades and will dramatically reduce (>95% reduction) road fatalities once we hit the point at which a human is no longer necessary to drive the vast majority of miles logged in this country.

    However, my prediction for 2026 is that Waymo will see its first rider fatality. My hunch is that it will actually be the fault of the human driver in the other vehicle in the accident (e.g. a blind merge or lane change on a freeway), but it will happen nonetheless. At some point, the law of averages dictates that it must. I hope that I’m wrong, but we know that Waymo has been preparing for that moment, whenever that first fatality does happen.

    What will the public think when this accident happens? Judging by other cases where fatalities (both human and feline) have involved autonomous technology, it will receive outsized media attention compared to a run-of-the-mill road fatality. Safety stats make for a boring story – a first-of-its-flavor tragedy does not. Autonomous driving, like many other new technologies, will have skeptics who point at edge cases encountered in the real world, like when Waymos were more or less rendered inert during a PG&E outage in San Francisco in late 2025. These considerations are important – the world is messy! It’s full of edge cases. But they should be weighed appropriately against the preponderance of evidence from the current road experience (excessive speeding, impaired driving, texting and driving, sexual assault by Uber drivers) to adequately assess the pros and cons of bringing a new technology into the real world.

    Chatbot Wars Intensify

    OpenAI’s ChatGPT is still by far the dominant market leader in consumer AI applications. Due to many factors, including explosive early adoption, rapid early iteration, and the very public personas of OpenAI’s leadership, ChatGPT is the “Kleenex” of AI chatbot applications today. Gemini will continue to increase its market share, as Google’s renewed and intense focus on consumer-facing products and AI technology was an important 2025 trend. Expect lots of Super Bowl ads for AI again, really leaning in on the chatbot experience and form factor. It is still relatively early in the chatbot game, and I think that, other than for superusers, the switching costs are still low. If Google can show that Gemini’s capabilities are on par with ChatGPT’s and thread the needle of more seamlessly integrating into G-suite and Android phones (something I’m admittedly a complete novice about, given my pseudo-religious devotion to the Apple ecosystem), I think it has a real shot at taking a bite out of ChatGPT’s market share.

    Apple Kills Siri – But “Phoenix” Rises from the Ashes

    Apple has been one of the weirdest players in the AI space. Over the past several years, they have overpromised on features that never shipped, or made AI a footnote in keynote addresses. I don’t necessarily blame them for it – they have continued to perform well in both the consumer marketplace and the stock market, and probably made a wise decision in letting the other companies duke it out in the capabilities war. Apple as a company has long had a corporate philosophy of not necessarily doing it first, but doing it best. As someone who thinks a lot about the creative process of product design, the thing most important to me when creating something new is to truly understand the problems your product is going to solve. Right now, I think it’s the lack of well-defined and well-specified problems that is keeping AI adoption relatively low when you consider the actual capabilities of these tools. My bet here is that Apple retires Siri, sending it respectfully afloat on a barge, down the digital river to join other storied relics of the past. They will use their fall keynote to launch Phoenix, Apple’s Ambient Intelligence.

    Ambient Intelligence is a Key Theme in Consumer AI

    An ambient intelligence, one that lives with you, hears what you hear, sees what you see, and knows what you experience in the world, would be capable of solving the problems that you encounter on a daily basis. Imagine a personal Clippy that rides around on your shoulder, adding to your to-do lists, ordering Ubers when you have a reservation, surfacing that ticket in your email as you arrive at a concert. This kind of ambient intelligence will make your life more frictionless. This is my bet for the consumer product category in 2026. Whether it be pens, pendants, pucks, or phones – ambient intelligence is coming. A truly personal assistant that takes the annoying things off of your plate and positively adds to your life.

    This vision is probably going to have a hard time being accepted outside of techno-optimist circles. There’s going to be a very thorny set of social norms to navigate in a world where omnipresent recording devices are not just within phones, but in peripheral accessories as well. What does obtaining consent for recording look like? How is that data stored? What about separating personal and work data, much of which comes with far stricter rules and regulations? I think there’s going to be significant cultural and social pushback on further fortifying the distributed panopticon that our modern world already represents. Add to that the general public’s distaste for AI, and it’s a recipe for backlash and ridicule.

    Utility over Capability – Benchmarks Fall Out of Favor

    LLMs are going to saturate most benchmarks by the end of 2026, and I think benchmarks will no longer be a great measure of an LLM-based AI tool. LLMs have made remarkable progress on human-created benchmarks to date, and I think that progress will continue. We will see intelligence per dollar across these benchmarks continue to rise as smaller models improve and better computing becomes available, but the world will start to shift in favor of evaluating “utility” over “capability.” This means a lot, and at some point will probably warrant its own blog post, but mark it down here: an increased focus on “what have you done for me lately,” not “what could you do in a theoretically perfect scenario.”

    White Collar Job Growth Will Be Non-Existent

    Slightly related to the capability vs. utility distinction in the preceding prediction, I think we see a continued holding pattern in the white collar world. C-suites all over are ecstatic about the promise of AI tools, wooed by AI industry players showcasing capability and promising utility. Huge, multinational organizations are going to be very wary of bringing in new workers to onboard and train if there’s a potential to automate their tasks in the next few years. While many companies may be using AI as cover for layoffs that occur for other, more traditional business reasons, I think it’s likely that organizations generally will take the stance of keeping teams relatively lean as they wait for the AI aha moment in their specific industry or use case.

    AI Will Be a Major Issue in the 2026 US Midterms

    At a time when a historically unpopular president already appears to be in his lame duck era, and has gone all in to appease the AI industry, it’s very likely that Democrats take a decisively anti-AI stance. It’s going to be the politically popular thing to do. I want Democrats to win back power and position themselves well for 2028, so I’m not going to necessarily begrudge them for harnessing popular sentiment and pointing it at the current punching bag.

    Good politics doesn’t always mean good policy. What I would like to see is this issue framed around building technology that works for us and increases our ability as humans to thrive, not technology that simply increases the valuations of a handful of trillion-dollar companies. That’s a paradigm shift, though, that’s complicated and requires a lot of focus and effort – not something we are going to want to tackle in the 11 short months before these midterm elections.

    There will be lots of angles from which AI enters the political debate this year – nonconsensual AI image generation, data center construction, electricity prices, and job displacement are just a few. I don’t think the population is in a receptive mood for the idea that “if we just go all-in on AI, the future will be better.” Technology has done an incredible amount of good for human civilization, but we currently find ourselves in a precarious world, where we are chipping away at the last few percentage points of tangible improvements to our lives that entities are financially incentivized to pursue, while major problems like hunger, disease, violence, and inequality remain.

    I want us to choose to use AI to help us maximize human thriving. We should be skeptical that a technology that has already provided outsized benefits to such a small percentage of people and corporations will magically enable us to solve all the issues we face on this planet. The upcoming midterms will be a preview of the 2028 election, where I believe AI will be the single biggest issue.

    AI Achieves a Scientific Breakthrough

    And for now, my most positive prediction: 2026 will be the year where AI for science goes mainstream within the scientific community and aids in/discovers a scientific breakthrough.

    We are approaching the time when the capability level of these tools, combined with greater acceptance within the scientific community of integrating them into its work, may produce something genuinely novel. Perhaps it will be a novel drug discovery, a math conjecture, or a materials science problem – all areas of knowledge with huge search spaces that are better approached with the intellectual horsepower of AI systems.

    I don’t think that the Nobel Prize will be awarded (at least in part) to an AI system quite yet (I think that will happen around 2028-2029), but scientists are really starting to understand which parts of the scientific process are amenable to AI tool inclusion, and they are going to be better able to deploy that familiarity with the tools in an experimental setting.

    Predicting, Updating, Iterating

    I am really excited to put out my first formal predictions post. I want to hear what you think about these predictions, as well as some of your own. Putting a marker down at a specific point in time feels a bit treacherous, but I’ll be happily updating and iterating on these predictions over the years. I’ll also revisit this in December 2026 for my first prediction self-assessment. Subscribe if you want to make sure you track these through to the end.

    Updated January 10th, 2026 – Added in an article that posits that AI is being used as cover for layoffs rather than AI actually replacing human labor. Kudos to my good friend Dr. Paul Hook for sending along this piece.

  • AI Attitudes – What do people think about AI?

    It’s been a while (~2 months) since my last post, and a lot has happened in the world of AI. The White House unveiled America’s AI Action Plan, which has generally been received warmly by parties across the political spectrum. Waymo and Tesla have expanded their self-driving offerings and promised additional cities later this year and into 2026. The long-anticipated GPT-5 was rolled out – and it was not the smashing success that OpenAI had promised and hoped for.

    Within the nonstop barrage of technical reports, hype-baiting, and doomerism that you can find on X and other places that aggregate AI news, there was an absolutely fascinating report published by Seismic Foundation entitled “On The Razor’s Edge. Seismic Report 2025. AI vs. Everything we Care About”. This report, more than anything else that’s come out since my last installment, has taken up most of my mindshare. This post will attempt to highlight what I think are some of the most interesting, surprising, and important findings that this report uncovered, and what I think it all means for the future of AI and its impact on society.

    AI vs. Everything We Care About

    I’ll start by noting that the framing of “AI vs. Everything We Care About” is a bit confrontational, but I don’t think unwarranted by what was found during the course of the study. I’ll be pulling charts from the full PDF of the study, and encourage you to download it and read through it.

    Let’s start with the key findings.

    Key findings from Seismic Foundation 2025 Report. (Page 5 of full report)

    01 – Less than 1 in 3 see AI as a hopeful development for humanity.

    This finding, especially for a skeptical but optimistic AI acolyte like myself, surprised me. When considering the rate of progress in the AI space, the promise it has for accelerating drug discovery and streamlining healthcare, and my individual use of AI tools, I see lots of ways for AI to enhance and contribute to humanity. This finding makes much more sense when paired with the chart below, which plots the salience/importance of certain issues (how big of a problem people think these issues are) and if they think AI will make a positive or negative contribution in these areas.

    Issue salience vs. AI’s ability to help – AKA how big of a problem is this, and will AI make it better or worse? (Page 9 of full report)

When this chart accompanies the viewpoint that AI is not a hopeful development for humanity, it makes the key finding much more coherent and grounded. Out of the 18 issues specified to the survey respondents, only three – Healthcare, Climate Change, and Biosecurity and Pandemic Prevention – were seen as areas the use of AI would improve. The other 15 issues, which carry substantially more collective importance than the positive side of the ledger, are identified as areas where AI will exacerbate problems rather than help solve them. That less than a third of people are “AI hopeful” makes a lot of sense when you see the considerations of the respondents quantified in this manner.

    02 – 1 in 2 see AI as a growing problem

This finding was not very surprising to me. In fact – I think it should probably be higher! I think we will see this number continue to tick up in subsequent surveys around attitudes toward AI. There are a few interesting nuggets in the report that shed light on the challenges of truly understanding and summarizing opinions when using broad, unspecific language such as “Do you think AI is a problem?”

    The following two charts illustrate exactly why framing attitudes towards AI is so challenging.

    Understanding how big of a problem the use of AI is today across different countries. (Page 11 of full report)

    A majority of respondents across each country surveyed think that the use of AI is a big problem, either moderately so, or very much so. This might indicate that there is a lot of collective mindshare being dedicated to thinking about AI. But when you compare AI to a list of other problems, that doesn’t seem to be the case.

    “Big Problem” is quite relative. (Page 8 of full report)

    As you can see by this graphic, even though more than half of respondents identified the use of AI as a big problem, it ranks lowest on this list of other problems. I expect over time that similar surveys will show the use of AI growing as an area of concern, and making its way up the list. It’s also important to note that the pervasiveness of AI will likely grow in the future and intertwine with all of these other issues. Time will tell if it will have a positive or negative contribution to the remainder of the issues on the list, but the chart previously covered shows where the public’s attitude is regarding this.

    03 – 3 out of 5 people are worried about AI replacing human relationships

I’m very unsurprised by this finding. I think we’ve got some really interesting evidence that this is already starting to take root. The rollout of GPT-5 earlier this month was less than stellar for many reasons, but one of the most surprising reasons had to do with the removal of the model picker. Until GPT-5, a user was able to select a model from a dropdown menu to conduct a chat conversation with. GPT-5 introduced automatic model routing, wherein the application itself selected the model best fit for the task (and, some have speculated, to cut down on inference costs). This change created a maelstrom of backlash – users begged for the return of the 4o and 4.5 models, and OpenAI complied swiftly. Why would users want a “less capable” model? In many instances, because they had developed a rapport or relationship with the model, and losing that model felt like losing a friend.

    I’m not making an ethical or moral judgment on the users who felt this way. I think it’s important to understand that people are already developing relationships with AIs. Mark Zuckerberg has explicitly said he believes there is a market for artificial companions because the average person desires more friends than they actually have. The speed with which people are developing these relationships, and the fact that they are developing them with primarily text-based interfaces, is pretty surprising to me. I think this issue gets a whole lot thornier and more entrenched when there are avatars or photorealistic characters that attach themselves to the users. You can count me firmly in this 3 out of 5.

    04 – 7 out of 10 of the public agree that AI should never make decisions without human oversight and that humans should keep control

    This one is a bit trickier for me to nail down my perspective on. I’m probably a “no” on the first part of the finding and a “yes” for the second part. I’m reading this finding very literally and I don’t think there are many scenarios where never is the appropriate descriptor. Take for example AI systems’ use in healthcare. A February 2025 op-ed in the New York Times by Drs. Eric J. Topol and Pranav Rajpurkar discusses a research review that found in some instances AI tools outperformed not only doctors, but also doctors using AI. This was a surprising and somewhat counterintuitive finding. As AI capabilities continue to improve and broaden their domain expertise, I would assume this gap will widen. But just because AI tools might perform better does not mean that patients will want the human removed from the loop. Attitudes and opinions about AI (and everything else for that matter) are about much more than factual information and often rely heavily on emotions and past personal experience. This might seem obvious to many people, but I don’t think it’s as obvious to builders within the AI community as it is to the general population, which drives a lot of the divide.

    05 – More than 1 in 2 of the public are deeply worried about AI risks across all markets

    “AI risks” is another one of those terms so overly broad that it’s difficult to nail down what people actually think when using that language specifically. You could easily make the case that the chart shown previously – detailing how the public thinks AI will likely make most issues worse – is itself a catalog of “AI risks.” The survey also dug into more standard “AI risk” territory when it asked respondents to describe how worried they were about specific uses of AI.

    Attitudes towards AI risks. (Page 22 of full report)

    These areas, which include bioweapon development and AI developing agency and goals that conflict with human values, are more in line with the traditional modes of thinking about AI risk. I think it’s encouraging that people are worried about these risks, and it’s nice to see them concretized further than a general “AI Risk” bucket.

    In September, Eliezer Yudkowsky and Nate Soares are publishing a book entitled If Anyone Builds It, Everyone Dies – a tome dedicated to the existential risk posed by artificial superintelligence (ASI). I’m not sure what their media and PR strategy is going to be – I wouldn’t be surprised to see them on the Today Show or CNN trying to convince people to read the book and take their viewpoint seriously. I think the reception of this book, and the presentation of its authors in the broader, non-AI focused media, will be a good indicator for how ready the public is to actually consider the idea of AI existential risk, which I equate to the most extreme scenarios which today only 36%-40% of people believe are even in the realm of possibility. Much more to come on existential risk in the future, so I won’t belabor the point now.

    06 – 2.2x more pessimism about the impacts of AI among women compared to male respondents.

    A gender divide on AI attitudes (Page 14 of full report)

    This finding was really interesting and unexpected to me. The report suggests this is likely because women have “an innate appreciation that systemic issues already in place could be exacerbated by AI.” When considered with this in mind, and combined with the chart below showing how attitudes differ by income, the finding makes much more sense.

    A similar divide surfaces when stratified by income. (Page 15 of full report)

    After seeing these charts, I think it’s difficult to come to any other conclusion than the general attitude is that people are worried that AI will exacerbate societal issues and divides, rather than solve them. I think that’s my general viewpoint as well – that the default, inactive path of AI progress will not bring about a world of material abundance, peace, and prosperity for all. Surely that is a potential outcome, but it won’t be the one we end up at naturally or by accident. It will take willful effort on behalf of citizenry, governments, and companies to arrive at this future.

    07 – 1 in 2 students feel daunted by what the future of work looks like to them

    Students, especially those entering college now, or just graduating and entering into the workforce, face an uncertain future – one where the “need” for them isn’t particularly obvious. Take for example a student who just graduated in June of 2025 – they entered high school in Fall 2021, a full year before ChatGPT was even released. Looking at the insane rate of progress through their high school years, it doesn’t take much to imagine the feeling of uneasiness they may have about the rate of AI progress over the next four years, and where that may leave them when looking for employment after they graduate.

    A very mixed bag of emotions and attitudes. (Page 28 of full report)

    This graphic is really striking to me – it alternates between negative and positive impacts that students have felt and foresee about AI. I love the use of the word “daunted” in describing the attitudes of the student cohort. Feeling uneasy about AI and the future really makes sense for people who fall into this category, and I think it’s a very good summation of how the promise and perils of the technology are weighing on people’s minds. There’s also somewhat of a contradiction inherent in this line of questioning – AI may help me in the workplace, but I’m not confident there will be more jobs for me when I graduate. I don’t envy students’ and entry level workers’ positions right now, and am glad for the decade and a half between now and when my son will be considering his place in the workforce for the kinks to be worked out.

    08 – 1 in 2 believe AI development is moving too fast to evolve safely

    AI development is moving extremely fast. And based on levels of capital expenditure and the reliance of global financial markets on a handful of companies that are betting on the promise of the technology, I don’t see any indicators that this progress will slow down. So what do people think should be done about it to ensure it happens safely?

    What do people want to do to ensure AI development happens safely? (Page 36 of full report)

    There are lots of ideas that gained traction in this part of the survey. Interestingly, an initial glance at these proposals indicates to me that the lowest-desirability regulations are likely the most politically, economically, or technically implementable. For example – the US already has chip export controls to China, but only 26% of respondents favor that measure. Experts in AI policy would likely say that this is an extremely important step because it allows the US to maintain its lead in AI development. Conversely, the top suggestion – requiring companies to have a “kill switch” to turn off AI models in an emergency – is practically impossible when you consider open-source AI models. This chart is pretty indicative of the gulf that exists between experts and the general public, a problem that I think leads to lots of challenges in communication, policy, and technical understanding.

    The AI Publics

    In addition to the key findings of the report, another deliverable that’s worth discussing is Seismic Foundation’s segmentation of the general public into 5 groups.

    Respondents were segmented into 5 groups representing collective attitudes towards AI. (Page 40 of full report)

    Tech-Positive Urbanites

    Page 42 of full report

    The first group, of which I count myself a proud member, is the Tech-Positive Urbanites. There is an inherent contradiction within this group – they are much more likely than the rest of the respondents to outsource aspects of their life to AI, but they are also much more likely than the rest of the sample to be worried about AI replacing their job, and to perceive AI as already having created lasting harm to society. How can you hold those things in your head at once? Well, I think a lot of it has to do with the idea that things have generally “worked out” for this cohort of society. As technology has proliferated and extended its influence on our daily lives, this cohort has likely gotten richer, more comfortable, and more powerful. I think there is definitely a status quo bias at work here – things have gone well for me in the past, and they will probably continue to do so in the future. I’m not sure betting on the continuation of the past is a good strategy with a technology like AI on the table, but time will tell if that’s correct.

    Globalist Guardians

    Page 46 of full report

    The Globalist Guardians are very worried about the future and are overall resistant to using AI in their day-to-day lives. They believe in strong, multilateral regulations that emphasize cooperation, information sharing, and safety to avoid risks (both specific and existential) from AI. They are concerned about the current state of the world and AI, and how further development might increase risks and challenges. I empathize with their viewpoints, especially in the current state of AI where the “benefits” are not immediately recognizable or evenly distributed.

    Anxious Alarmists

    Page 50 of full report

    The entirety of the Anxious Alarmist cohort believes the next generation will have a harder, worse life than ours. They are not only resistant to using AI, but believe it’s nearly guaranteed to make all facets of life worse. I can’t dismiss this viewpoint completely – in fact, depending on your personal lot in life and your media consumption diet, it’s very easy for me to see how people can slot themselves into this category. When stories abound about people being duped into meeting an AI chatbot in person, the emissions that AI produces, and the potential for job replacement, it’s hard not to fall into a pessimistic mindset that feels like plain realism.

    Diverse Dreamers

    Page 54 of full report

    The Diverse Dreamers are a complicated cohort – they are worried about the risks of AI in society, but seem to be more malleable in exactly what should be done about them, and also leave the door open to positive uses of AI in their daily lives. They also have a bit of a contradictory viewpoint in that they strongly agree AI labs act in the best interest of society (21%) at a rate of nearly double the full sample (11%), but they still are pessimistic about the future and worried for their children and future generations.

    Stressed Strivers

    Page 58 of full report

    Stressed Strivers are the most neutral of the groups and are likely the most easily influenced. They have a much higher rate of “don’t know how worried to feel about X use of AI” than the general respondent pool, and they are much more open to AI use in their daily lives than the general respondent pool as well. The perception that AI could automate their job away is much higher in this group. They probably are the most representative of an unstable equilibrium – they don’t have strongly held opinions about AI, but one positive or negative experience could easily sway them in one direction or the other.

    Attitudes, Assessments, and Attention

    So – what do we do with all of this information? The first thing to do is to read it, think about it, and maybe try to do a self-assessment. Are you a Tech-Positive Urbanite like myself? Do you think AI will bring about the downfall of society like the Anxious Alarmists? Have you not put that much thought into it and consider yourself a Stressed Striver? I think some self-reflection is a good start after digesting this post.

    Then, what I would do, is try to seek out viewpoints from people, publications, or groups that do not fit into the same category as you. I think more Tech-Positive Urbanites should listen to what the Globalist Guardians and Anxious Alarmists have to say. That’s not something that comes naturally in our world of hyper-optimized media consumption – you have to seek it out. It’s more natural, and more comfortable, to align yourself with a viewpoint and stick to it, using your own media diet as a reinforcing mechanism for your belief system. I think this is a bad idea, as it really limits the scope of your experience and understanding of the world, and you can very easily come to confident opinions and viewpoints that have been reinforced by curated echo chambers you’ve built and/or have been algorithmically fed.

    So that’s what I think would be helpful for people to do on a personal level. Figure out where you stand and why you think what you do, but don’t be averse to consuming information that may change your opinions.

    Now what do I think AI companies can do about this? The most important thing to do is to understand how these attitudes stack up today. Below is a chart showing the estimated population distribution across the five groups in the countries surveyed.

    It’s very clear from this report that there are more people today who are worried about AI than who are excited about it. The strategy of continuing to integrate and infuse AI into every facet of life is going to be met with resistance if the communication strategy is “It’s going to be great – just trust us!”

    If AI companies want people to believe in the positive potential of AI, then they really need to focus on maximizing the positive and minimizing the negative. This seems obvious, but isn’t straightforward, especially when the areas in which the public thinks AI will be beneficial like healthcare take much longer to materialize than areas in which there is a negative perception, like in the erosion of human relationships.

    I think AI and technology companies would do well to not just focus on what technology can do for us, but what it can do to us. The smartphone is a lesson in getting this balance wrong, and AI has the potential to tip even further into a negative direction. I’m as techno-optimistic as they come, but have undergone an evolution wherein I no longer believe it’s as simple as “MORE TECHNOLOGY = BETTER WORLD.” It probably seems obvious to people that that was a naive belief to begin with, but I think there are probably a lot of people in tech who still do believe this, and build accordingly.

    To paraphrase Casablanca – AI is just like any other technology, but much more so. Getting it right will take a good understanding of the world and society, not just evaluating AI in a vacuum. It will also take lots of empathy and communication for and with people who don’t agree with you, and don’t believe the same things you do. We should try to encourage a world where the appropriate amount of attention is paid to the development of AI and its integration into society, such that average people have a say in the future and are not passive participants in building it. It’s not going to be easy, but it’ll be worth it to try and get us to the best outcome possible.

  • Self Driving Salvation – A Worthy, Thorny Pursuit

    Every day in the United States, commuters, parents, children, and workers enter their vehicles and travel an astonishing 8,750,000,000 miles per day (for those counting, that’s 8.75 billion miles per day). Most of those trips are routine — going to the grocery store, a doctor’s appointment, or to the office. But for roughly 40,000 people per year, that will be the last trip they ever take.

    Road fatalities in the US reversed their gradual decade-over-decade decline starting in the early 2010s (texting and driving, anyone?) and have settled around that 40,000 number for the past several years. That’s about 110 people losing their lives on the roads every day. I believe that in the 21st century, we can make road fatalities as rare as getting struck by lightning (300 people per year), but doing so will require a massive amount of coordination, safety testing, and societal adjustment. The dream of autonomous, perfectly safe vehicles that do not crash is attainable, and a worthy goal we should strive for.
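    As a back-of-envelope check on those figures (assuming only the ~8.75 billion miles per day and ~40,000 deaths per year cited above), the arithmetic works out to the roughly 110 deaths per day mentioned, and a per-mile rate of about 1.25 deaths per 100 million vehicle miles traveled:

```python
# Back-of-envelope US road fatality arithmetic, using the figures cited above.
miles_per_day = 8.75e9      # vehicle miles traveled per day
deaths_per_year = 40_000    # annual road fatalities

miles_per_year = miles_per_day * 365
deaths_per_day = deaths_per_year / 365
rate_per_100m_miles = deaths_per_year / (miles_per_year / 1e8)

print(f"{deaths_per_day:.0f} deaths per day")                      # -> 110
print(f"{rate_per_100m_miles:.2f} deaths per 100M vehicle miles")  # -> 1.25
```

    That ~1.25-per-100-million-miles rate is the denominator-heavy framing that makes human driving look safe per trip while still producing a staggering annual death toll.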

    A Brief History of Autonomous Driving

    The first inklings of desire for a driverless future arose in 1925. An electrical engineer named Francis P. Houdina rigged a vehicle with motors and a radio antenna that allowed him to control the speed and direction of the car remotely. General Motors developed Futurama for the 1939 World’s Fair. This exhibit and ride correctly predicted a vast, interconnected highway system that came to fruition through Eisenhower’s commitment to federal highway construction. Unfortunately, radio-controlled automatic highways did not. Pop culture kept the dream alive, though: George Jetson hopped in his flying car, punched in his destination, and was autonomously whisked away to work.

    If GM had access to AI image generation tools, maybe this is what “Futurama” may have looked like

    It wasn’t until about the 1980s that the dream of a self-driving car inched its way forward on the spectrum of possibility. Teams from Carnegie Mellon and Mercedes-Benz created vehicles that could self-drive under certain conditions. In 1995, another car built at Carnegie Mellon completed 98% of a cross-country road trip without human intervention. In 2004, DARPA created a Grand Challenge competition that invited participants to build autonomous vehicles to navigate a 150-mile course. The best-performing vehicle completed just 7.32 miles in the inaugural edition of the race. The next year, a team from Stanford University unleashed Stanley (pictured below) on the course and claimed victory, finishing in 6 hours and 54 minutes.

    Stanley was a diesel Volkswagen Touareg equipped with rooftop LIDAR units, an electric motor to control the steering wheel, and a hydraulic piston to shift gears.

    Excited by the promise of self-driving, and inspired by the successes in the DARPA Grand Challenges, Google launched its own self-driving car project in 2009. Tesla introduced “Autopilot” in 2014, which enabled lane-centering and speed control without driver intervention. As competition in the sector ramped up, the first fatality involving a self-driving car occurred. In 2018, a pedestrian named Elaine Herzberg was struck and killed when a self-driving Uber failed to detect her walking a bicycle across a highway. A five-year legal battle ensued, with Uber ultimately being cleared of criminal wrongdoing and the supervising human driver pleading guilty to endangerment. The case made national headlines due to the uniqueness of the event and the ethical concerns surrounding it. More on that later.

    Fast forward to today, and the state of autonomous driving has continued to advance. Driverless Waymos inhabit the streets of Los Angeles, Phoenix, San Francisco, and Austin. Just last week, Tesla rolled out its long-hyped (in 2019, Elon Musk predicted a million robotaxis on the road by 2020) robotaxi service in Austin as well. The autonomous driving future isn’t here, but the seeds are planted, and cultivation is ongoing.

    How an autonomous future comes to pass

    Now that we’ve got a brief history out of the way, it’s important to explore how the technology is categorized and how it works today, what the differing approaches amongst competitors are, and how these approaches and strategies might evolve in the future to deliver on the lofty goal of fully autonomous driving.

    In 2014, SAE International, an automotive standardization body, published an initial classification system that aimed to codify a spectrum of autonomous vehicle capability. The latest version, updated in 2021, is pictured below.

    SAE J3016 Levels of Driving Automation

    It’s useful to have this reference available when thinking about the progress made on self-driving so far, and where it is headed in the future. Many new cars today come with features that would classify as SAE Level 2, such as lane assist, adaptive cruise control, and brake assist. So you might even have experience with autonomous driving today – you just didn’t know it was classified as that. Currently, companies like Waymo and Tesla are focused on developing Level 4 autonomous driving. Some characteristics of these self-driving vehicles are operation within a specific, pre-defined geofenced area and the lack of a human driver behind the wheel.

    When it comes to the technology stack that companies are using to pursue autonomous driving, there are two basic approaches – Tesla (Camera Only + AI) vs. Waymo (3D mapping + Camera + Lidar + Radar + AI), outlined in the graphic below from Bloomberg.

    As you can see, and can probably surmise, Tesla’s approach is far more scalable and cost-effective. The sticker price for a Waymo vehicle is around $180,000, with the high cost of Lidar and Radar units contributing significantly to that amount. Additionally, Waymo relies on highly detailed 3D mapping of the geofenced area in which it operates. So Tesla has an advantage when it comes to cost and scalability, but will losing the additional sensor information from Lidar and Radar, and operating without a 3D map for reference, reduce the overall safety of autonomous Tesla Robotaxis? I think it’s definitely too soon to tell definitively and to what degree, and I also think there’s more to the safety story than just statistics.

    Safety Statistics, Failure Modes, and Human Factors

    Beyond the enormous total addressable market for taking over the role of the human driver, and the astounding economic value that could potentially be captured, there is one outcome of a driverless future that is unassailably “good” — reducing road fatalities to zero. Autonomous driving, when rolled out in a responsible way and operated under conditions appropriately constrained to the technological ability of the system, is already safer than human driving. Waymo recently released a detailed report that provides a comprehensive overview of the safety benefits achieved by its automated fleets.

    Waymos are already safer than human drivers when comparing accident rates over mileage travelled

    These results are very promising. Who wouldn’t want to live in a world where we could reduce crashes by 90%? Impressive as they are, these statistics come from an extremely small “sample size” when compared to the total vehicle miles travelled each day, and rely on an expensive, gold standard technology stack that incorporates data from multimodal sensor arrays and detailed 3D mapping of the areas in which they operate. It will be very interesting to see the safety reports around Tesla’s Robotaxi offering, as that system relies solely on camera input and AI systems to pilot the cars.
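    The comparison underlying claims like “90% fewer crashes” is simple rate arithmetic: normalize crash counts by miles driven before comparing fleets. Here’s a minimal sketch using entirely hypothetical numbers for illustration — these are not figures from the Waymo report:

```python
# Sketch of a per-mile crash-rate comparison.
# The crash counts and mileages below are HYPOTHETICAL, chosen only to
# illustrate the normalization -- not figures from the Waymo report.
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize a raw crash count by miles driven."""
    return crashes / (miles / 1e6)

human_rate = crashes_per_million_miles(crashes=4_000, miles=1e9)  # 4.0 per M miles
av_rate = crashes_per_million_miles(crashes=40, miles=1e8)        # 0.4 per M miles

reduction = 1 - av_rate / human_rate
print(f"{reduction:.0%} fewer crashes per mile")  # -> 90% fewer crashes per mile
```

    Note the asymmetry the sketch glosses over: the human baseline rests on trillions of miles, while the autonomous fleet’s rate comes from a tiny fraction of that, so its estimate carries far more statistical uncertainty.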

    Statistics, however, are not the only piece of the puzzle when we think about how these autonomous vehicles are going to be integrated into our lives. An interesting phenomenon I’ve observed, one that’s going to have an outsized impact on the general public’s appetite for accepting self-driving cars, is that the “failure modes” for these autonomous vehicles are sometimes nonsensical. Because these systems operate completely differently from a human driver, when they make a mistake, it’s sometimes a mistake that a human absolutely would not make. I’ve collected a few examples below.

    A Waymo speeds through a flooded sinkhole, completely ignoring a public works crew that was attempting to block off the scene and redirect traffic

    A Tesla Robotaxi fails to stop when a UPS truck begins to back up, prompting the safety monitor to stop the car.

    Another Tesla Robotaxi slams on the brakes twice when it notices police cars on the side of the road

    These are three examples of behavior that depart completely from the way an attentive human driver would handle these situations. Even novice drivers would know to stop or adjust their course when confronted with a public works crew guarding a flooded sinkhole, apply the brakes when a vehicle begins slowing down and then backing up in front of them, and realize that stationary police cars on the side of the road not impeding traffic is not cause for slamming on your brakes.

    There is already a cottage industry popping up around collecting and sharing these autonomous driving fails. The Verge compiled a list of these events, and even the relatively pro-autonomy Self Driving Cars subreddit is keeping track. In an effort to stay as neutral as possible, I won’t condemn this behavior — I actually think it’s really important to collect data on these failure modes and to spread awareness of them, to prevent the technology from rolling out before it’s ready for primetime. Despite this, I also think it’s going to be a very difficult challenge to appropriately frame these failures against all of the safe, successful miles that these cars drive, as evidenced by Waymo’s safety report. In keeping with the old adage of “If it bleeds, it leads,” depictions of these failures are much more likely to be “newsworthy” than a boring summary of safety statistics. Add to this the fact that there is a lot of social media clout to be gathered by dunking on AI, and we’re far likelier to see and be moved by videos of these failures than by overall safety metrics.

    A final point that’s important to explore, one that’s slightly related to the proliferation of and appetite for these failure videos, is the fact that the way humans perceive and understand information is going to significantly impact the acceptance of self-driving. People are not naturally good at effectively and dispassionately assessing risk, disconnecting their personal feelings and beliefs from the reality of the situation. A prime example is the fear of flying. You are far, far, far more likely to die in a car accident than you are in a plane crash, but have you ever heard of someone who is afraid of riding in a car? Probably not.

    There are a lot of reasons that the fear of flying exists. Every commercial aviation accident drives worldwide headlines, plane crashes are more likely to be fatal than car accidents, and when you’re flying commercially, you have no control over the situation. These facts drive emotions and perceptions about the safety of driving vs. flying, and no matter how many statistics you cite, like deaths per passenger mile, people are still going to be afraid of flying. I don’t think self-driving advocates are going to effectively convince people about the benefits of this technology by statistics-spamming.

    The Good Future and How to Get There

    I’ve spent much of this post discussing some of the pitfalls and challenges that face self-driving cars today. I think it’s really important to be intellectually honest, and handwaving the current state of the technology, warts and all, is intellectually dishonest. I want to conclude this post though by talking about how I think we get to the best version of the future and what that good version of the future might mean.

    First, I think it’s absolutely imperative that the federal government embarks on building a regulatory framework that’s based on the SAE levels of automated driving. I don’t think it should preempt the local experiments that Waymo and Tesla are doing in various cities around the country, but I think it’s going to be important to lay the groundwork for federal regulations around self-driving cars. If I could wave a magic wand, I’d want a huge portion of the Nevada desert to act as a self-driving proving ground, consistently incorporating new edge cases and learnings from self-driving fails to iteratively improve the technology.

    Additionally, speaking from a magic wand standpoint, I’d want to start collecting video data from the millions of cars on the road today. There are billions of miles of data produced every day that would be hugely beneficial in training vision models, identifying edge cases and behavioral preferences in the vast problem space of driving. This would possibly be a privacy nightmare, and I don’t know how you could implement it effectively or ethically. Perhaps something like an insurance company offering reduced rates if the vehicle collects this data. Again, keep the magic wand, rather than the panopticon, in mind.

    Finally, I’d love it if everyone would just become a bit more neutral about this topic. That’s kind of the point of Clearly Intelligent, and I know it’s going to be a hard-fought battle. But if autonomous driving companies could spend less time talking about eliminating all human drivers in the next 10 years, and the public could break the status quo bias of accepting 40,000 road fatalities a year in the name of “keeping humans in charge” of driving, we might actually be able to chart a path forward.

    The good future I envision not only involves rare-as-lightning-strike road fatalities, but also redesigned cities, with more plentiful housing and denser, richer communities. A future where car ownership isn’t a necessity to simply exist in many parts of the country. A future where clean, safe, reliable transportation options change the way we move around.

    There’s much more to say on this, and it’s a topic I’ll be coming back to regularly. Advanced technology helps us reframe what’s possible and helps solve major problems that exist in the world. Autonomous driving is a perfect example of this, and even if the road we’ll travel is bumpy, there’s a good future we can achieve, as long as we are intentional and measured in pursuit of it.

  • Alphafold, Isomorphic Labs, and the potential of AI for science

    Science is incredible. The practice of observing the world around us, stating hypotheses, designing experiments, collecting data, analyzing results, and sharing the work with peers has propelled our civilization forward in countless ways. Ancient Babylonian astronomers tracked the movement of celestial bodies and used this information to refine their calendar’s accuracy and generate the first planetary theory in human civilization. During the Islamic Golden Age, which spanned the 8th century to the 13th century, scholars devised experiments to understand the characteristics of light and vision and fathered algebra. Newton, Galileo, Da Vinci, and countless other scientists, propelled by the deluge of shareable information made possible by the printing press and practicing within a newly formalized framework called the scientific method, discovered and created even more scientific advances. The Enlightenment, Industrial Revolution, and Information Age have all served as additional catalysts, amplifying the speed and magnitude of our collective scientific understanding. AI has the potential to equal or surpass these force multipliers of the past and vastly expand our ability to observe, understand, and engineer our world.

    Alphafold – Predicting the shape of life

    I was inspired to write this post after watching a great interview with Max Jaderberg and Rebecca Paul of Isomorphic Labs, a drug discovery company spun off from Google DeepMind and now part of Google’s parent company, Alphabet, Inc. In the interview, host Professor Hannah Fry and her guests discuss the potential for AI in drug discovery, the way humans and AI collaborate in the drug discovery process today, and what future AI capabilities might unlock for scientific understanding.

    A foundational technology to Isomorphic’s founding and approach to drug discovery is AlphaFold – an AI program that was built by Google DeepMind with the goal of being able to predict a protein’s 3D structure from its amino acid sequence. Proteins are fundamental biological molecules that are responsible for a vast amount of activity that occurs in living beings, from transporting molecules to carrying out the chemical reactions that take place in cells. It’s fairly rudimentary to determine a protein’s amino acid sequence, but until AlphaFold, it was extremely difficult and labor intensive to determine the 3D structure of a protein. Determining a protein’s structure in a lab using techniques like X-ray crystallography, where scientists crystallize a protein, blast it with X-rays, then analyze the diffraction pattern of those X-rays, can take months or years and cost several hundred thousand dollars.

    AlphaFold enabled highly accurate estimation of a protein’s 3D structure, placing first in CASP, a biennial competition designed to assess this exact capability. AlphaFold 2 scored even higher, and AlphaFold 3 extended the scope of the system to complexes created by proteins with DNA, RNA, ligands, and ions. In recognition of this incredible work, Sir Demis Hassabis and John Jumper of Google DeepMind shared half of the 2024 Nobel Prize in Chemistry for their work on AlphaFold.

    From structures to molecular candidates

    Why is determining a protein’s structure so crucial for drug discovery? Because drugs work by fitting into a protein’s 3D structure like a key fitting into a lock. As explained in the interview, this geometric fit has historically been established through painstaking experimentation, as medicinal chemists create candidate molecules and test their ability to interact with the target protein in a way that mitigates a disease mechanism.

    Thanks to AlphaFold 3, this can now be done in-silico — on a computer.

    Screenshot of AlphaFold 3 platform showing a candidate molecule’s 3D structure interacting with the grayed out 3D protein structure (left) and the molecule’s 2D chemical structure (right)

    By previewing the molecule’s predicted interactions with a protein, and being able to make changes within the platform, scientists can understand the candidate molecule’s likelihood of success and tweak it in seconds, instantly viewing the new result.

    It’s vitally important to understand that these tools do not replace the need for experimentation in the real world – the “wet lab.” What they do, however, is allow for much more expansive and time-efficient experimentation virtually, making precious experimental effort in the real world more valuable and efficacious. If you’re going to spend time in a lab testing molecules, systems like AlphaFold 3 give you additional confidence that the molecule you’re working on has a higher probability of success than one chosen without a virtual pre-validation step. Does that mean that specific molecule will surely work out, execute the exact mechanism needed to treat a disease, and go on to be successful in clinical trials? No – there’s no guarantee of success. But if scientists can use AI tools to make each “shot on goal” more likely to succeed, it follows logically that they could drastically shorten the time needed to arrive at a successful outcome.
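
    The “shots on goal” intuition can be made concrete with a little probability: if each candidate molecule independently succeeds with some probability, the average number of candidates you have to test before a hit is the mean of a geometric distribution. A quick back-of-the-envelope sketch (the hit rates below are made-up numbers for illustration, not real drug-discovery statistics):

```python
# Expected number of "shots on goal" before a success, assuming each
# candidate molecule independently succeeds with probability p.
# The hit rates used here are illustrative assumptions only.

def expected_attempts(p: float) -> float:
    """Mean of a geometric distribution: average trials until first success."""
    return 1.0 / p

baseline = expected_attempts(0.01)  # hypothetical 1% hit rate, no pre-screening
boosted = expected_attempts(0.05)   # hypothetical 5% hit rate with in-silico triage

print(baseline)  # 100.0 candidates on average
print(boosted)   # 20.0 candidates on average
```

    Even a modest boost to the per-candidate success rate cuts the expected number of wet-lab experiments dramatically, which is the whole economic argument for virtual pre-validation.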

    AI-Human Collaboration

    In five years time, doing drug discovery without AI will be like doing any sort of science without math.

    Max Jaderberg, Chief AI Officer, Isomorphic Labs

    There are lots of exciting takeaways from this interview – who wouldn’t be excited about the hyperbolic prospect of curing all diseases in 10 years – but the most exciting part to me is thinking about using AI to accelerate the speed of scientific discovery and making intractable problems tractable.

    Chemical space, basically the number of theoretically possible molecular structures, is frequently cited to be around 10^60 structures – that’s a 1 with 60 zeroes after it. To put that in perspective, if each of the 10^20 grains of sand on Earth was in fact its own Earth with 10^20 grains of sand on it, then each of those “grainchildren” would also have to be their own Earth, with their own 10^20 grains of sand, for us to equal the vast size of chemical space. When Isomorphic uses groundbreaking technology like AlphaFold 3 as a wayfinder in that vast space, they have the potential to massively speed up the process of bringing life-changing treatments to market.
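
    If you want to check the grains-of-sand analogy yourself, the arithmetic is just three nested factors of 10^20 multiplying out to 10^60 (using the commonly cited estimates from above):

```python
# Sanity check on the grains-of-sand analogy: three nested factors of
# 10^20 (grains -> Earths -> grains) equal the 10^60 size of chemical space.
GRAINS_PER_EARTH = 10**20  # rough estimate of grains of sand on Earth
CHEMICAL_SPACE = 10**60    # commonly cited size of chemical space

nested = GRAINS_PER_EARTH * GRAINS_PER_EARTH * GRAINS_PER_EARTH
print(nested == CHEMICAL_SPACE)  # True
```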

    AI for Science = AI for Good

    Strong opinions about AI are forming rapidly as it works its way into the social, economic, and technological facets of our society. Pew Research recently surveyed AI experts and the general public about their views on AI. There are lots of interesting datapoints in the survey, and I plan on doing a full post on it soon. But I want to draw attention to a specific line of questioning related to AI having a positive or very positive impact in certain areas.

    There is one standout category from this line of questioning – Medical Care. A huge majority of AI experts believe that AI’s impact will be positive in the domain of medical care, and a whopping 44% of the general public does as well. In a sea of discontent, hype, doom, and fear, using AI to increase our ability to lead healthier, longer lives less affected by disease sounds like a pretty good future for us all to rally around. Companies like Isomorphic are leading the charge, and open source breakthroughs, like the recently announced Boltz-2 AI model that predicts drug-binding affinity (another key consideration for drug design) will help accelerate progress. I’m confident that this slice of the future is bright, but we’ll have to navigate choppy waters during the journey there.

    Timing is everything

    Accelerating the drug discovery process is a worthwhile endeavor, and I’ll be following and rooting for the companies looking to do so. But the results from these efforts will still take time – very likely in the 5 to 10 year time frame before compounds come to market from Isomorphic or other players in the space. In that time, I really worry about the negative impacts that AI could have, from job displacement, to personalized election misinformation, to enabling a further retreat into socially isolated lives powered by hyper-optimized generated content. These negative potential outcomes could further entrench views about AI, and even erode goodwill that has built up for more generally accepted altruistic applications of AI, like using it to supercharge scientific progress.

    In a world where negative and polarizing news drives the most engagement, and the networks we use to consume that news prioritize engagement above all else, it’s an uphill battle to get people to pay attention to the potential that AI has to accelerate our understanding of the world and our ability to engineer a better future. It’s also difficult to expect people to have a nuanced view of AI technologies when they are frequently lumped together as a monolith rather than viewed as separable efforts that have both obviously good and obviously bad use cases. That’s part of the mission here at Clearly Intelligent – to enable my audience to understand and form coherent and nuanced views on the promise and perils of AI.

    I’ll end this post with a great graphic I came across recently that charts the pace of scientific progress throughout human history. It’s truly incredible to look at how far we’ve come from our earliest days as a species, and how rapidly we have been able to advance in recent history. As we enter the age of abundant intelligence, we have the opportunity to point it at the most pressing problems we still face, and I hope we use that power as a force for net good.

  • People have been concerned with new technology’s impact on labor and work for as long as those concepts have existed. Take, for example, the humble plow, a tool that shifted the measure of a person’s ability to extract value from the earth from the amount of work they could do with their bare hands to the amount of land they owned and harvested. Had newspapers existed in the early agricultural age, proto-reporters surely would have pulled quotes from concerned furrow-diggers about what the future held in store for them.

    Last week, there seemed to be a noticeable uptick in media coverage about AI and the impact that it will have on jobs in the future. Time Magazine, Axios, and The New York Times each had articles worth reading. Time laid out potential ways to address the upheaval many predict will occur while Axios spoke at length with Dario Amodei, CEO of Anthropic, about his predictions for a future full of broadly capable AGI systems and how there could be significant impact on jobs in just the next few years. The New York Times focused a bit more narrowly on the impact AI is already having on the job market for recent college grads.

    In addition to reading these articles, I saw the following 11-minute clip that stitched together lots of predictions.

    In sharing this compilation, I’m not endorsing the viewpoints or predictions therein. Nor am I compelled by the broad and sensational headline “The Great AI Job Displacement Is Closer Than You Think.” What’s important to me is showcasing the viewpoints of individuals who run major AI labs, have vast experience in the AI space, and have paid close attention to recent progress.

    Here are a few of the most staggering quotes from the video:

    “I’m actually afraid of the world where 30% of human labor becomes fully automated by AI and the other 70%…that’s going to cause this incredible class war between the groups that have been and the groups that haven’t been” – Dario Amodei, CEO of Anthropic

    “That doesn’t mean the transition isn’t going to be messy, in fact, I expect it in some ways to be pretty painful” – Sam Altman, CEO of OpenAI

    “Psychosocially, it’s very disturbing that you can no longer tell people what kind of world they should prepare their kids or grandkids for.” – Tyler Cowen, Marginal Revolution

    Each of these utterances portends a future with serious challenges. Potential for class war, an economic transition of indeterminate length and unpredictable levels of strife, and most of all a range of potential outcomes so vast that it becomes impossible to predict, let alone prepare for. Imagine if the messages in that clip didn’t come from the stages of conferences or the interiors of well-equipped podcast studios, and instead, you encountered them walking down a city street.

    If I saw this man, I’d hastily move to the other side of the street, not letting his proclamations take up space in my mind. It’s easy to ignore sweeping statements about a vastly different and uncertain future, especially in a world that has so many ongoing, more tangible challenges. It’s also understandable to chalk these statements up as hype — chum for investors looking to secure a slice of the trillion dollar labor market these predicted drop-in remote workers of the future could dominate. Realistically, the likeliest outcome lies somewhere in between automating all white-collar work by 2030 and producing systems that are economically useless and therefore inconsequential.

    Impactful Assumptions

    This post isn’t meant to focus on my own personal predictions, or to dissect individual predictions made in the articles or video I’ve discussed so far. There are thousands of voices out there who have done that, and they’ll continue to make predictions as we integrate AI into our lives. Something I will do though is list out a few assumptions I think the people expressing the viewpoints covered in this entry are making that hugely impact both the timeline for and magnitude of AI’s impact on the job market.

    Assumption #1 – Jobs can be completely decomposed into a discrete and finite set of tasks that have clear indicators of task success and task failure.

    For a job to be completely replaced by AI, all of the tasks associated with that job must be identified and either completed by that AI, delegated to another job function, or eliminated. The process of compiling an exhaustive list of everything you’re responsible for as part of your job is probably going to be pretty difficult and time consuming. Now multiply that effort across every organization, every job function, and every position. That’s a lot of work – not something done at the snap of a CEO’s finger.

    In addition to the herculean effort of enumerating all of these tasks, I think there’s probably a bit of a cognitive bias at play when leaders in the AI field talk about the ease of automating work. An example that’s often cited as evidence that we are clearly headed for an automated future is the position of Software Engineer – a computer programmer. Today’s most advanced models already perform extremely well in this domain, and empowering them with agentic abilities will increase their utility in a commercially important way. But there’s something particularly nifty about computer code that separates it from the work product produced in many other tasks – it’s easily verifiable. Either the program compiles, runs, and passes testing, or it doesn’t. This sort of pass/fail task lends itself remarkably well to automation because modern AI systems are increasingly trained using reinforcement learning. Very simply, they see lots of good examples and lots of bad examples, and learn to do more of the things associated with the good examples than the bad. Repeat this several thousand or million times and you get a system that’s performant.
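
    The pass/fail nature of code can be sketched as a simple reward function: run a candidate program against its tests and return 1 for pass, 0 for fail. This is a toy illustration of the “verifiable reward” idea, not a depiction of any lab’s actual training pipeline:

```python
# Toy illustration of a verifiable reward for code: a candidate solution
# either passes its tests (reward 1.0) or it doesn't (reward 0.0).
# A sketch of the concept only, not any lab's actual training setup.

def reward(candidate_code: str, test_code: str) -> float:
    """Execute candidate and tests in a scratch namespace; binary reward."""
    namespace = {}
    try:
        exec(candidate_code, namespace)  # define the candidate function
        exec(test_code, namespace)       # assertions raise on failure
        return 1.0
    except Exception:
        return 0.0

good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5"

print(reward(good, tests))  # 1.0
print(reward(bad, tests))   # 0.0
```

    Compare this with, say, grading a quarterly strategy memo: there is no equivalent one-line check, which is exactly why pass/fail domains like code are the first to be automated.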

    Where are there a lot of software engineers? AI companies. If you’re surrounded by people who are doing easily demarcated tasks that have verifiably good or verifiably bad results, it might be natural to over-index on this first-person experience and believe that much of the work that goes on in the economy shares those characteristics. I don’t think that’s necessarily the case, and bumping up against this fact when broadening the scope of the problems that these systems tackle may result in slower timelines than anticipated.

    Assumption #2 – User, customer, and market preferences are multifaceted, and include a degree of willingness to accept AI involvement. Eventually, the benefits of including AI outweigh the factors that may make someone reluctant to engage with an AI offering.

    If I were to pitch you a new software solution that promises to replace your entire accounting department with an army of AI agents at a tenth of the cost, that deal might sound pretty good. There’s a catch, though: it can’t present at your quarterly board meetings like your current head of accounting does. Oh, and it’s not very persuasive chasing down past-due accounts. Those tradeoffs might be worth it, so you let your team go and install your agents. For a 90% discount, you’ll stomach some discomfort and shift those tasks elsewhere.

    These tradeoffs might not always be worth it, and sometimes will downright backfire. In a recent example, Swedish fintech firm Klarna hired back some human employees after its AI customer service bots frustrated customers and reduced service quality.

    Just because an AI replacement is capable of performing some economically valuable activity doesn’t mean that whoever is on the receiving end is going to want AI to do that work. Sometimes, they will swallow the tradeoff. Sometimes, they’ll vote with their feet and change their consumer behavior. And if a manager cannot reliably understand how a given conclusion was arrived at, or cannot wholly replace an entire job function, it may be easier to stick with the status quo of human labor.

    Assumption #3 – The problems that face current day AI systems, like hallucinations narrowly, and general inscrutability broadly, will be solved well enough to make their functional deployment tenable.

    Hallucinations (when a model fabricates information, answers, hyperlinks, or court cases) still plague large language model-based AI systems today. While hallucination rates have trended down over time, and savvy users of these tools know to look out for these errors, there’s not yet a silver bullet to address them. One mistake in one response to one prompt is bad – but what happens if that hallucination occurs in the initial step of a many-step workflow undertaken by an autonomous agent? That butterfly wing flap might substantially throw off the end work product, making the entire endeavor worthless. Add to this the fact that despite advances in mechanistic interpretability research — the study of how AI systems “think” — we still don’t have much of an idea of how exactly these systems work, so it’s not like you can reliably interrogate the process used to arrive at the final result. Some systems have started to show users the reasoning used to arrive at an output, but this feature currently lacks the depth and completeness necessary to provide an exact and thorough accounting in the way that a human employee could.
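
    The compounding worry is easy to quantify: if each step of an agent’s workflow succeeds independently with probability p, the whole n-step chain succeeds with probability p^n. A small sketch with illustrative numbers (the 98% per-step figure is an assumption for the example, not a measured rate):

```python
# If each step of an agentic workflow succeeds independently with
# probability p, an n-step chain succeeds with probability p**n.
# The 98% per-step reliability is an illustrative assumption.

def chain_success(p_step: float, n_steps: int) -> float:
    """Probability that every step in an n-step chain succeeds."""
    return p_step ** n_steps

# Even a seemingly reliable 98%-per-step agent fails a 20-step
# workflow about a third of the time.
print(round(chain_success(0.98, 20), 3))  # 0.668
```

    This is why a small per-step hallucination rate that would be tolerable in a single chat response becomes a serious liability in long autonomous workflows.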

    Compounding and Additive Effects

    The extent to which these assumptions hold will largely determine AI’s real impact on the economy. IF jobs can be easily and completely broken down into discrete task lists AND IF consumers of AI work output accept any tradeoffs or shortcomings AND IF problems like hallucinations and interpretability get sufficiently addressed – we get one version of the future. This version of the future results in drop-in remote workers, a country of geniuses in a data center, and the fundamental restructuring of white-collar work.

    If however, it’s hard to fully account for and replace every action completed in a job role, consumers and organizations resist AI integration due to lack of efficacy or completeness, and the challenges of current day AI systems persist, that’s a very different future. This future looks more like a patchwork implementation of agents with varying levels of autonomy that work in more narrowly and well-defined domains. Still economically impactful, but not as capable of swiftly wiping out entire job categories.

    I think the next 12-30 months (from mid-2025 to the end of 2027) will give us a good indication of the true level of job displacement we might expect to see before the end of the decade. As AI model providers turn more of their attention (and compute) to reinforcement learning paradigms, it will become clear whether we have, or can collect, sufficient data to train models on the tasks that make up these white-collar jobs, or whether the models become smart enough to generalize to many domains of computer-focused work without explicitly training on them. AI agents that can use computers like people do – i.e. navigating a browser, opening a webpage, filling out fields – are still in their infancy. Currently, they work too slowly or too unreliably to be an effective substitute for white-collar workers.

    Not Everything, Not Everywhere, Not All At Once

    I expect AI to have a huge impact on the way we work. A recent survey indicated that 42% of workers are using Generative AI tools in a professional capacity, and that number shows no sign of leveling off. I don’t think we will have a drop-in, generally capable remote worker by 2027. But by 2030? That’s much more plausible. And the likelihood of this scenario, and its penetration into the real economy, will increase year after year. The scenario Dario Amodei is worried about, where 30% of jobs are capable of being replaced by AI and 70% aren’t, seems a lot more likely to me than 0% replaceability or 100% replaceability. To his credit, he has made the media rounds himself trying to raise the alarm about these economic possibilities.

    How long does it take to get to 30% of jobs being replaced? Then from 30-31%, and 31-50%? If it’s over the course of decades, I think we’re better equipped to absorb that societally than if it takes three years.

    I don’t want you to brush off the proclamations in that video clip as all hype and bluster. I also don’t want you to leave this post with an overwhelming sense of dread about your job being replaced in the next several years. My hope is that you stay tuned, here and anywhere else you see fit, to understand the rapidly changing AI landscape and seek out nuance in a sea of embellishment, bravado, and naysaying.

  • Last week, Google hosted their annual I/O conference at the Shoreline Amphitheatre in Mountain View, CA. They announced a slew of new ideas and products that range from AI mode for search to a tool that allows users to virtually try on outfits to a prototype homework tutor that sees what a student sees and helps them out. AI was mentioned 92 times during the keynote, which isn’t surprising if you’ve paid attention to events like these over the last several years. What is surprising is that one of these announcements has already broken out of keynote land and into our social media feeds.

    Veo 3

    Google kicked off the conference with a video entirely generated by Veo 3 – their latest video generation model. It’s a short, whimsical vignette of an Old West town populated by a menagerie of animals, complete with squishy gummy bears and convincingly falling confetti.

    There are a few things that really stuck out to me watching this video that I think set Veo 3 apart from other video generation tools to date.

    • Realistic Physics – the way the animals walk, feathers fly, and objects interact with each other represents some of the best imitation of our physical reality I’ve seen so far. Getting this right is crucial to making a realistic video, as the human eye can easily pick up on inconsistencies with our real-world physical experiences.
    • Fidelity and Realism – the rider’s skin is still a little too perfect, the light on the chocolate bar is too uniform, and the chicken clap action is a little jerky, but these are three nits in a video with thousands of good-enough-to-pass features. More on this in a bit.
    • Sound – this is Veo 3’s real breakthrough. The ability to pass in text as a prompt and generate convincing speech that’s matched to the subject’s lips is something that’s new to the video generation paradigm.

    Blurring the Lines between Real and Generated video

    Access to Veo 3 is available now to anyone willing to part with $249.99 a month for Google’s AI Ultra plan (initially announced with a 50% discount for the first three months). Because this tool immediately got into the hands of creators, thousands of examples have already started to populate the internet.

    An early video that appeared tested an unofficial but well-known AI video benchmark – Will Smith eating spaghetti. Here’s a comparison of an AI Fresh Prince chowing down from 2023, 2024, and 2025, the last generated by Veo 3.

    It’s easy to see the improvement in these results over two years’ time, evolving from a strange, quite off-putting mimicry of the general concept of eating spaghetti to a convincing video – save for the audible crunchiness of the soft noodles. Tools like Veo 3 are going to make it easier and easier for anyone to create videos that don’t immediately betray themselves as AI-generated. The next example is the one that spurred me to choose this topic to cover for an early installment here at Clearly Intelligent.

    Emotional Support Kangaroo

    I’ve seen hundreds, if not thousands, of AI-generated videos. It’s always been relatively easy to spot imperfections, inconsistencies, and downright impossibilities in these videos that indicate their provenance. I genuinely think the following video is the first that I consumed and scrolled right by, with no idea that it was AI.

    To be fair to myself, the version that I saw had no “AI” indicator or community note like the tweet above. And since the initial appearance, many instances of the video have disclaimers or community notes attached to them indicating AI-generation. But how many of the millions of people who viewed this video across dozens of platforms saw an AI disclaimer, committed it to memory, and have gone back to whomever they shared the video with to tell them they didn’t in fact witness an emotional support kangaroo innocently holding his boarding pass while his human argued with the gate agent?

    Critical Consumption

    I tend to think of myself as a relatively savvy consumer of information. That’s why this specific example struck a chord with me. Everything about it was just believable enough – why couldn’t someone have an emotional support kangaroo – that nothing in the video sent a strong enough signal to motivate a more critical viewing. Maybe if it had, I would have noticed that the speech sounded like gibberish, and that the audio didn’t perfectly sync up with the lip movements. But it didn’t, and I went on believing in make-believe for an entire day before seeing the truth come to light. And I was far from the only person fooled.

    Why it matters

    Had I gone on believing that the video was in fact real, my life probably wouldn’t have been that different. Maybe I confidently put down “kangaroo” in a service animal related trivia question one day and lose the round for my team. The specific impact of this AI-generated video is tiny, forgotten in a week by most among the onslaught of new viral moments. What I’m more interested in is the general impact of AI-generated videos that pass for real and that don’t inspire the kind of scrutiny that might cause viewers to question them.

    What happens when the subject of one of these videos isn’t a meek marsupial, but a politician advocating for a policy position they don’t in fact support? Or a violent crime that hasn’t actually happened? I could see a future in which an authentic video that’s embarrassing or damaging to a person, cause, or organization is labeled as AI-generated by supporters to obscure the true nature of the video and avoid the fallout from it.

    Our shared view of reality, and general agreement on basic facts that we once took for granted has already disintegrated, influenced by social media algorithms and real-life filter bubbles. Video generation tools that create AI-generated videos that are indistinguishable from real life could enable bad actors to deepen those divisions, cement tribalistic viewpoints, and create controversies from whole cloth.

    Veo 3 is an incredible technical achievement, and as a closed-source tool provided by one of the most influential companies in the world, there’s a vested interest in creating and maintaining the right guardrails to discourage obviously nefarious use. Additionally, the fact that Google likely trained Veo 3 on an immense corpus of YouTube data and has plentiful compute resources makes it unlikely that an open-source alternative matching Veo 3’s capability appears in the immediate future. But not immediately doesn’t mean never.

    Norms and Incentives

    We’re still in the early days of AI-generated videos, and we haven’t yet collectively developed a set of established norms around them. Different platforms have different rules about labelling content as AI-generated, and different ways of implementing those labels.

    Generally, platforms are incentivized to maximize engagement – so what happens if those AI labels drive engagement down? Do creators stop creating AI-generated videos, or do the platforms relax or change their rules around them? Will users flock to services that clearly delineate the real vs. the artificial? Will a platform come out with a hardline “no AI allowed” stance? How does that get enforced?

    What lies ahead

    If you peruse posts on X/Twitter, you’ll encounter countless declarations that actors will be out of a job soon and that Hollywood as we know it will never be the same because of Veo 3. I don’t think either of those predictions is likely to come true, because they conflate technical capability and visual fidelity with a subjective quality of the finished product – that it’s good.

    A theme you’ll see frequently here at Clearly Intelligent is that I’m hesitant to characterize things as “good” or “bad”. AI is an excellent example of a dual-use technology – one that can be used for both beneficial and sinister purposes. So, I don’t think generative video tools are “good” or “bad” – and I don’t think you should think of them like that either. Instead, evaluate their current capabilities, consume content critically, and be expansive in your consideration about their promise and perils.

  • Welcome to Clearly Intelligent. I’m Mike Cottone, a consultant by day and an artificial intelligence commentator and communicator by night. I’m embarking on this project because I believe that we are in the early days of a period of profound technological change. I have two main goals in mind with this endeavor.

    Goal #1

    Expand understanding of AI technologies and their impact on our world.

    Goal #2

    Steer the world toward a better future.

    Artificial intelligence has the potential to change nearly every aspect of our lives, but the path before us is unpredictable and complex. Join me as we navigate it together.