
I think about AI every day. It’s often difficult to organize the sheer volume of thinking I do about AI in a structured way. When I do, the results come out in the form of a blog post here at Clearly Intelligent. The New York Times recently convened a panel of eight AI leaders to grill them about their thoughts on AI. So, in today’s post, I’ve got my structure determined for me. I’m going to wishcast a little bit and put myself on the panel. Maybe I’ll get there by the 2030 iteration of this event. Thanks to the New York Times for doing this, and please don’t copyright strike me here – I’m a paying subscriber using this content fairly, and my small but fervent group of loyal readers will be quite upset if they cannot get my takes.
Roster
Bio snippets are straight from the NYT. The participants for this panel were:
Melanie Mitchell – Computer scientist and professor at the Santa Fe Institute
Yuval Noah Harari – Historian, Philosopher and author
Carl Benedikt Frey – Professor of AI and work at the University of Oxford
Gary Marcus – Founder of Geometric.AI (acquired by Uber) and author of “Taming Silicon Valley”
Nick Frosst – Co-founder of Cohere, an AI startup
Ajeya Cotra – AI risk assessor at METR, a research nonprofit
Aravind Srinivas – Co-founder and CEO of Perplexity, a chatbot search engine
Helen Toner – Interim executive director of Georgetown University’s Center for Security and Emerging Technology
I’m going to present the questions and answers from the piece and give my thoughts on the responses of the panelists, as well as my own answers to the questions.
AI 5 Years in the Future – Panelists give their biggest bet about the future of AI in 5 years

Less than a 1% chance of happening. Probably more like 0.01% (that’s one in 10,000 simulations of the next five years). There are so many things that would need to happen for countries to consider AIs legal persons, and I am very confident that the necessary conditions will not all be met in five years. In my opinion, for legal personhood to even be considered for AI, characteristics like consciousness and sentience will have to emerge and also be proven. Scientists and philosophers do not have an agreed-upon definition of these characteristics, nor do they have accepted methodologies for testing for their presence. I highly doubt that consciousness and sentience will be both rigorously defined and agreed upon in five years, and I highly doubt they will emerge in AI systems within this short timeframe. Maybe in five years there is a chance that AI gets personhood in the same way that corporations are “people” because of Citizens United, but I don’t think that this is what Harari means.

Agreed. AI won’t have cured cancer or solved physics. But can all cancers be cured? Does making most cancers much more treatable count as a cure? Can all physics be solved? AI will help push the limits of scientific exploration, and I think will have made meaningful contributions to the fields of medicine and physics by 2031. But these absolute milestones will not be hit.

Yes – as stated previously, AI for science will be a big deal – more on that later in the post. I actually used AI to help me research summer camps for my kid this year, and it was helpful in replacing a lot of Googling and other menial tasks. In 2031, I do think you’ll be able to say to Gemini, “Find and reserve a spot for my kid at a summer camp. Here are her interests. Give me a couple of options, let’s have a back and forth, then do it.” Will a parent still have to call to confirm, or sign some paperwork? Probably. But I hope (and think) that we will have the technical capability to do this in five years.

Yes, AI will be used to describe so many different product features that it will cease to be very distinct from other aspects of digital technology. Under the hood, or in the interface, AI will play a huge role. I’m hopeful that we see some new and interesting design and interaction pathways that take advantage of AI technologies to reimagine how we use digital tools. I don’t think I agree with the characterization that “the most banal use cases [will] have the most transformative impact.” It’s very possible that as AI capability and adoption both increase, and the technologies diffuse through the economy, some root-node problems will be addressed with meaningful impact, enabling new ways of solving problems that have previously been intractable. If you look at the area under the curve, yes, it might be that bundling up the banal use cases like automating insurance claims processing or personalizing digital ad copy has the largest impact, but not the most transformative one.

Fully agree. The promise of AI should be to deliver unto the human race an abundant future, free of material scarcity, where we can use the technological, societal, and economic scaffolding of society to increase flourishing of the human race. Cheaper spreadsheets will not do this. Hopefully, cheaper spreadsheets allow more work to be done with less human labor, and we can direct that labor up the value chain of human flourishing.

Here we have it – our first mention of the mythical “artificial general intelligence” in the article. As AI has become more mainstream in the first half of the 2020s, AGI as a concept has evolved from a distant sci-fi consideration to a distinct near-term possibility. Ask 10 AI researchers what their definition of AGI is and you’ll probably get 11 different answers. Because of the lack of a rigorous definition and consensus on what AGI means, I think it’s probably time to move away from AGI as an actual milestone. As far as what Marcus says here, he’s probably right about 2027. There are still lots of things missing from a system that would represent what I’ll call “digital AGI.” I equate this version of AGI with a system you could drop in to most remote-work-friendly occupations and have it perform as well as the median human at the work tasks within that occupation. I think that probably will happen by 2032, at least in some fields/occupations.

Agree with this fully – it is a goal within many frontier AI research labs to automate the process of AI research, the first step necessary to kick off the recursive self-improvement loop. Within this construct, AIs would be in charge of planning, executing, and evaluating AI research, and implementing these changes into subsequent versions. In five years, it’s very likely that humans are still in the loop in this process, both from an operational and safety perspective. I’m not sure that an increase in research speed will have as many consumer-facing benefits as internal benefits. Put another way, I’m not sure that speed of AI progress will be the only bottleneck to new capabilities that make the technology more impactful.

Strongly agree with this, provided that AI’s capability to actually do useful things in the world improves. Right now, agentic AI is still in its relative infancy. However, I’ve started using Claude Code recently and have seen the light. Agentic AI will be extremely powerful when either its capabilities improve in areas like memory, statefulness, and continual learning, or the systems that AI interacts with evolve and start to be designed with AI agents in mind as users, not just humans. I think we will see Apple, Google, and OpenAI take big steps this year in the realm of truly personal AI assistants, as I predicted in my 2026 AI predictions post.
My biggest bet about the future of AI in five years
The most meaningful disruptions in the business landscape will result from enterprises that are designed from the ground up with AI in mind. Because organizational change and digital transformation take a long time in large companies, and the people that are currently employed within those large companies have a vested interest in their labor not being replaced by AI, adoption is probably going to be slower than most AI-boosters think. But as an example, I think an insurance company created with 5% of the headcount of a comparable insurance company might disrupt that industry. Sure, there will probably still have to be humans in the loop for legal reasons or physical reasons, like going out into the world and verifying claims, but I can foresee a company that subcontracts a lot of that work and is run mostly by AI. A company like that might be able to offer similar services and coverage for a much lower price because they are paying for 5% of the human labor that their competitors are paying for. This is more verbose than the responses given in the NYT piece, but that’s the benefit of inviting myself to the panel.
Will The AI Doctor See You Now?

I would probably put myself here in the “Moderate” category, with the hope that the actual result is “Large.” I think Marcus here is focusing a little too much on LLMs alone, not thinking about things like AlphaFold from Google DeepMind. There is so much potential for AI in medicine that I couldn’t do justice in a short paragraph to all the ways I believe it could be useful. In the clinical setting, I think LLM-based tools could definitely aid in the way Frosst mentions and thereby increase the actual amount of time doctors spend focused on patients, not paperwork. Ideally, agentic AI systems could reduce friction within notoriously arcane and byzantine healthcare workflows. Here’s a humorous tweet that might not be that far-fetched in the coming years.
On the research and development side of the house, I believe AI could be extremely useful in helping explore the very large problem space of diseases and potential treatments. I’m not quite sure that our medical system is ready to take advantage of the speed that AI can deliver though – if a new drug candidate can be proposed with the help of AI but it still takes 5-7 years to get to market, we’ve only sped up a portion of the process. This would still be a good outcome, and hopefully AI would increase the hit-rate for clinical trial success, but we should be looking to apply AI to every step in the drug development, testing, and approval process to get treatments into the hands of doctors and patients sooner. We should also be focused on reviewing the current regulatory processes that govern getting these therapies to market and ensure they are updated appropriately to take advantage of a potentially vastly increased volume of new treatment candidates.
An Army of Superhuman Coders in a Data Center

“Large” – obviously. In the extremely polarized world of AI opinions, it’s nice to stumble upon a point of unity. All but the most ardent AI skeptics admit that current AI systems provide a substantial power-up to human programmers. As Harari notes, coding is an ideal arena for AI to showcase its skills. Programming tasks are easily verifiable (does the code do what you wanted it to do?) and there’s an enormous amount of training data in existence for models to learn from. As I mentioned previously, I have only recently gotten around to experimenting with Claude Code and have been overjoyed with the results. If you haven’t vibe-coded anything yourself, please jump in – it’s a real joy to see an idea come to life in a few minutes on a screen. Frey’s point about human review is important, but I don’t know how long it will remain necessary.
An Eager Collaborator

I would place myself in the “Large” camp here. I don’t fully agree with Mitchell’s assertions. In December, OpenAI published research showing that GPT-5 created novel wet lab protocols that increased the efficiency of a molecular cloning protocol by 79x. AI is solving Erdős problems now. AlphaFold famously won its creators a Nobel Prize for its advancement in the realm of protein folding. The frequency and impact of these accomplishments is much more likely to accelerate in the future than to slow down or stop.
I watched a great podcast episode recently from the Cool Worlds podcast by Professor David Kipping of Columbia University. In the episode, he details a recent meeting he attended where the consensus was that AI is absolutely vital to the practice of science moving forward. He thinks, as do I, that AI will soon be viewed as a tool as indispensable to science as the calculator or the computer. He also wrestled with the possibility of an identity crisis among scientists who may feel rendered obsolete by the technology. This is a valid concern, but I find solace in Srinivas’s response to this question – the role of humans will be even more concentrated on asking the right questions, and choosing where to focus their research efforts. If we can use AI to speed up the process of science, and also increase the volume of science we do as a species, the ceiling for progress is raised significantly.
Improving the way we move through the world

Frosst’s point is spot on. I think it’s very likely that the short term benefits of AI in transportation will be less flashy than self-driving cars, but only if the entities in charge of maintaining and building these systems embrace the technology. If we can use AI to more proactively approach problems in the realm of transportation, or tackle challenges that would have previously been cost-prohibitive, we could start to realize some great benefits. An example of this is a company I encountered last year at O3’s 1682 Summer Session conference on Innovation and AI – Scout Robotics. They are focused on using AI and robotics to reimagine inspection and maintenance for transportation infrastructure. There are thousands of miles of railroad in this country, the inspection of which is currently constrained by the cost of labor to carry out those inspection operations. If we can use AI to alleviate some of these constraints, we can make grand problems, in the realm of transportation and elsewhere, more tractable.
On the self-driving cars front, I’ve written previously about why I think this is a tricky problem worth solving. I’ll point you to that post for a relatively recent vision related to autonomous cars.
Education – What, how, and why

I would probably put myself off the charts here, somewhere in the realm of “Monumental”. I think AI has the potential to not only disrupt the way education is delivered, but at an even more important level, AI’s effects on the world may call into question what the true goals of education are. If we find ourselves in a future scenario, let’s say in 2040 (about the time my son will be going through his first year or two of college), where AI labor has proven to be a sufficient substitute for human labor in a wide variety of fields, what exactly will we be preparing our children for? This problem is so large, so consequential, and would require such coordination to re-think current systems, that I believe it will probably just be kicked down the road, where the collective hope is that “things will just work out.” I am very averse to this approach and would agree with my good friend Henry Cavill here – hope is not a strategy.
On the positive side of things, as Frey and Toner indicate here, I think AI can help rethink and remake education for the modern era. By providing personalized tutoring and tailoring educational content to individual pupils’ needs and aptitudes, we may be able to better serve students and allow them to progress at the pace that fits them, not the pace at which the entire class progresses. One of the most interesting podcasts about AI that I listened to in 2025 was from one of my favorite AI content creators – Nathan Labenz at The Cognitive Revolution podcast. This episode from June of 2025 is an interview with Mackenzie Price, founder of Alpha School, a school that uses personalized AI instruction to accomplish the learning of a traditional school day in two hours a day, with the rest of the time focused on passion projects, sports, arts, and other activities where “guides” (not teachers) work with the students.
Education is a third rail on most parts of the political spectrum. There is widespread agreement that our education system is outdated and not fit for purpose in many aspects, but almost no one can agree on the ways to make it better. A theme that underlies lots of my thinking about AI is that AI technologies may help us reimagine better ways of doing things that would have been unthinkable in the past. What if we really could enable every child on earth to access a free personal tutor? What if we could reorient the goal of education to not simply prepare workers, but to build a population that learns, grows, and thrives to bring about positive changes in the world? These may seem like pipe dreams for now, and Marcus’s observation that AI is used for rampant “cheating” and assignment completion is inarguably correct. The negative effects/uses of these tools today should not preclude us from realizing a better future by the smart integration of the tools.
An alien entity traverses our minds and our world

I’m in the “Moderate” camp here in the short term and the “Large” camp in the long term. We have already seen documented cases of horrific outcomes related to LLM psychosis, and I expect that to continue to be a problem. Any technology in the hands of hundreds of millions of users is going to generate adverse outcomes. I don’t want readers to view this as insensitive or as downplaying the negative impact in these cases, which include suicide and murder. These are seriously challenging ethical situations, and they have devastating consequences in the most extreme cases. Add to that the fact that these systems are non-deterministic – each interaction is created anew rather than produced by software that follows traditional IF-THEN rules of behavior – and no provider will ever fully be able to account for every edge case. I want to see AI companies put real effort into explaining how they monitor for conversations and interactions that look like they may result in real mental or physical harm to people. I don’t know how you solve for this legislatively.
Alternatively, and similarly to the education point above, it might be hugely beneficial to put a therapist in the pocket of billions of people around the world at very low cost, in the same way it would be to put a tutor or personal assistant in their pocket.
I do love Harari’s point here, and it’s something I think about consistently as it relates to AI. We are all guinea pigs in the AI revolution. You may still be able to insulate yourself by not using the tools right now, but my guess is that in five years, AI will saturate nearly every part of society, and removing yourself fully would require a lifestyle change similar to removing yourself from the influence of electricity. Add in the potential existential crises that arise from the diminished need for a huge swath of human labor at some point in the future, and you get a huge long-term impact on human mental health.
Creation and worth – slop or transcendence?

“Large” – positively large in the most optimistic scenarios and negatively large in the most pessimistic scenarios. I find the proclamations about AI as it relates to art and entertainment very interesting, and could probably talk at length about it. If you’ve gotten this far, I don’t want to lose you, so I’ll keep it succinct. AI is probably going to be able to enable lots of people to create material they would not otherwise be able to create. AI itself cannot generate “art” but it can generate “content.” Art, in my opinion, has to have an intentional component, and intentionality (also in my opinion) requires sentience. Hollywood is not “dead”, the video game industry is not “dead”, influencers are not “dead.” These landscapes will have new entrants to compete with that rely heavily (short term) or fully (medium term) on AI, but there will be a continued preference for human-involved material for the foreseeable future.
Update Your Priors – AI misconceptions worth dispelling
Panel question: What’s one misconception about AI that you think is worth dispelling?

I don’t necessarily see this as a misconception. Disregarding “magic” and focusing on “emergent” is important here, as semi-conventional wisdom (Clarke’s third law) holds that any sufficiently advanced technology is indistinguishable from magic. I think as AI tools become more advanced, and especially more agentic, we will see emergent capabilities that were not predictable or part of the training process. Semantics are important but sometimes frustrating, so I’ll leave you with this thought – we probably will not always be able to predict the capabilities of AI tools, and the result of that will be both incredible and frightening.

For right now, AI is, for all intents and purposes, under human control when it comes to the motivation to act. You don’t birth an AI into the world and, without any instruction, watch it go start accomplishing things. There is no root motivation or direction. Now, this could change in the future if sentience develops. I’m going to put it on my vision board to one day talk to Harari about sentience and consciousness, as these aspects of AI development probably have the most impact on the extreme long term (100, 1,000, or 10,000 years into the future). I do agree with the second part, that models will be able to pass a consciousness Turing test within a decade. Agentic AIs are starting to become more commonplace, and they aren’t asking permission to carry out tasks, so I guess you could in some sense say they are “deciding” which actions to take. Currently, though, I would think of that more like my heart “deciding” to continue beating.

Agreed. I’ve already used AI tools to help troubleshoot issues around my home or work on other handyman-style projects. I wouldn’t trust it in areas with potentially hazardous or costly mistakes (e.g., electrical or plumbing work), but it serves as a very useful first pass in this realm. I do think, however, that manual work will probably be “safer” from labor displacement for a decent amount of time. The world is very tricky. I think it’ll probably be about eight years until a humanoid robot can assemble a never-before-seen Lego set (I’m working on a future post about lots of different physical benchmarks I’d like to see and predictions related to them), so if that’s the case, it will be more than eight years before a humanoid plumber is showing up to fix my leaky sink or install a new water heater.

This one is hard for me – people are greatly confused about large language models, and do attribute humanlike intelligence to an intelligence that is structured and trained very differently from our own. There are many philosophical thought experiments that are useful to consider when thinking about AI, and one of the most famous is John Searle’s Chinese Room. In short, the thought experiment considers a computer that can carry on a convincing conversation in Chinese by passing pieces of paper back and forth with a human participant. The computer follows a rulebook to accomplish this task, painstakingly looking up the incoming symbols and transcribing the prescribed responses, then handing them back to the human participant. From the human’s perspective, they seem to be conversing with a fluent speaker. But the computer doesn’t actually understand anything about Chinese, let alone how to speak it.
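The core of the thought experiment can be sketched in a few lines of code – a toy responder that produces fluent-looking replies purely by matching symbol strings against a rulebook, with no notion of what any symbol means. (The rulebook entries here are invented for illustration; Searle’s argument doesn’t depend on any particular implementation.)

```python
# Toy illustration of the Chinese Room intuition: the "room" maps incoming
# symbol strings to canned replies via pure lookup. Nothing in this program
# interprets the meaning of any string it handles.
RULEBOOK = {
    "你好": "你好！很高兴见到你。",          # greeting -> greeting reply
    "你会说中文吗？": "当然，我说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def room_reply(symbols: str) -> str:
    # Look the incoming symbols up in the rulebook and hand back the
    # prescribed response; fall back to "please say that again."
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room_reply("你好"))  # a fluent-seeming reply, zero understanding
```

From the outside, the exchange looks like conversation; inside, it is only symbol shuffling. The open question the post raises is whether that distinction still matters once the rulebook gets good enough.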
I think the real question here is: if the output of a “mimicry machine” is indistinguishable from a human’s, does it matter that it is simply a mimicry machine? I also think Marcus’s second point about reasoning flexibly in the face of the unknown is a test where LLM-based tools have made continuous progress, and they don’t have nearly as hard a time with new idea generation as they once did. If you type into an LLM right now and say “think of a new idea,” a new idea will be provided to you. How “new” the idea is will probably be defined by personal preference, and unfortunately, by your priors related to how you think about AI. You might say “oh, it’s just a combination of existing ideas,” and if that’s the case I’d point you to tabula rasa and let you wrangle with that for a few minutes/the rest of your life.

Agreed. Again, sentience and consciousness are key areas to consider here when you start using the word “thinking.” However, it’s worth thinking about how much of human cognition is comprised of “sophisticated pattern matching.” My guess is, based on the ability of AI tools today to produce meaningful outputs, that this portion is huge.

I am standing, hooting, and hollering for this one. The real world is tricky, annoying, rough, complex, and chaotic. AI capability in a perfectly organized and receptive environment is likely far greater than AI utility in the real world full of humans and other unpredictable entities. Every technology has skeptics, bubbles, hucksters, vanguards, visionaries, and Cassandras. In the age of AI, all of these people have social media megaphones and captive audiences. The future is really, really unpredictable. My advice – don’t be definitive, update constantly, and try to balance emotion and objectivity. I also don’t think this should be viewed only from a catastrophic standpoint – a new capability or new model might be the butterfly wing flap that leads to a world of abundance, even if it simply looks like a dot release from a frontier AI lab today.

This is far too confident a take for me. Like I said above, we have no idea what the employment effects of AI will be. New technologies almost always shift the nature of work in society, and in some cases they do remove types of work from society. How many horse-drawn carriage drivers or telephone switchboard operators have you seen recently? I think AI is different from these previous creatively destructive technologies because it has the potential to substitute not simply the output of human economic endeavor, but the substrate of labor itself. We have had tools in the past that accomplished this for physical labor, but we have never had it for cognitive labor at the scale we are on the precipice of. The first computers were people – they were in charge of actually computing mathematical operations. We began automating this cognitive labor in the mid-20th century, and it unleashed an unprecedented wave of economic growth and invention. If the previous 80 years of technological progress grew from that seed, what could the next 80 years look like if we can conquer all cognitive domains?

Absolutely agree, especially in the short term. Now, if we tiled the entire planet with data centers that use gas turbines for power, that would be extremely bad. And I think the race dynamics at play today, combined with the United States’ notoriously slow pace of building new infrastructure, might lead to some short-term deleterious environmental impacts. But overall, in the end, environmental worries will be low on the list of concerns about AI. A particular shout-out here to one of my favorite voices in the AI space – Andy Masley, who has worked tirelessly to accurately quantify water usage in the realm of AI. My hope and prediction is that AI will speed up our ability to make advances in electricity generation, storage, and transmission, hastening our departure from fuels that dump carbon into the atmosphere.
Fast Forward – Life in 2030
Question: Which of the following statements do you think will be true by 2030?

I say “False.” Many factors – at a minimum, friction within organizations around adopting these tools in ways that displace labor, and the lack of physical capability in the real world – mean that the employment impact of AI won’t be that large by 2030. I’m fairly confident in that prediction, and I also predict the answer will be different in 2035, 2040, 2050, 2100, and so on. Looking forward to continuing to elaborate on this over the next 80+ years.

“True” as long as you don’t specify that AI accomplished this feat “itself.” Scientists will be supercharged by AI, and humanity will benefit greatly from this fact.

“True” and terrifyingly so. With the emergence of agentic AI, and the desire among optimists to dangerously skip permissions, it’s very likely we will see a major global warning shot, intentional or unintentional, by 2030. I hope that it will be in a relatively immaterial domain, and that we can recover from it and learn from it. I’m worried that an inverse scenario (major material harm or loss of life, from which we learn nothing) is also fairly likely.

“True” but it might not be in chatbot form. I think it will be in ambient assistant form, like a Siri or a Gemini. This will depend on AI’s ability to actually “do stuff”, but I think it will be there and will be as omnipresent as the smartphone itself.
The Arrival of AGI

I would put myself in a new category here – “Likely.” Harari is correct that defining AGI as comparable to human intelligence is meaningless these days. I think we will definitely have something that one could reasonably consider “Digital AGI” by 2036, previously defined in this post as a drop-in remote worker. I don’t think we will have “Physical AGI”, meaning an embodied AI that can accomplish most physical tasks in the real world that a human can by 2036.
Historical Analogs – Have we been here before?


I love this question. It’s so hard to answer definitively, but luckily, since this is my blog, I’ll just tell you what I think right at this moment in February of 2026. I’m going to go out on a limb here and say that in the short term, let’s say between now and 2100, it will be as impactful as the steam engine. In the long term, it will land somewhere between the emergence of Homo sapiens as a species and the invention of agriculture.
Wisdom for the age of AI


I think these are all great pieces of advice, honestly. And I really credit the Times for getting a diversity of opinion and experience for this piece. I’ll be happy to add my own if I can make my own luck and get an invite to a future version of this panel. My favorite piece of advice here is probably Srinivas’s about learning to ask the right questions. This is probably due to my deep appreciation of design thinking, which is all about asking the right questions to understand the problem you’re truly solving. In the age of AI, we will have more intelligence than ever before to solve any challenge we want. It will be crucial to point this intelligence at the right problems to create a better future where we can all thrive and benefit from this miraculous and scary time to be alive.
As far as my own advice to a high school student, I would say the most important characteristics to have are flexibility, creativity, and adaptability. The world is changing quickly – having the ability to adapt to the new realities we encounter each day is going to be key to being successful and avoiding being overwhelmed by the pace of the present.
A note on AI use: Here at Clearly Intelligent I’ll be adopting a scale suggested by Seb Krier that explains how I use AI in generating my posts. I’ll file this under 1.2 on this scale. I used AI to refresh my memory on the Chinese Room and tabula rasa, as well as to search and find some links used in the article that I’ve collected over the past few years thinking about AI. I also used Gemini to generate the cartoon image at the beginning of the post.