Does AI Know Everything?
The answer to that question is a deep, resounding “No.”
In fact, it may be the most human thing about AI. Not only does it not know everything, but it doesn’t know what it doesn’t know — just like you and me.
AI is trained on existing data, and it functions by predicting what comes next based on what has happened in the past. In that way, AI is even more limited than the human mind: it cannot foresee a future that doesn’t resemble the past.
And that is perhaps one of the greatest limitations of AI — and one reason AI still can’t fully replace humans, particularly in creative fields. It can only guess what may happen based on a statistical model trained on what has happened in the past. It literally has no imagination.
What does AI know?
In some ways, this is an easy question to answer; in other ways, it can be profoundly difficult.
Before we dig into that question, however, it’s important to state that AI is developing and evolving at an astounding pace, so what was true yesterday may not be true today, let alone tomorrow. Your answer also depends somewhat on which model you’re using and what access it has.
At the most basic level, AI knows what it has been trained on — or what it can access in real time, if it has that capability.
All AI models are trained on a certain set of data; oftentimes, this dataset is an enormous slice of all available content on the internet. In 2022, IPXO estimated that there were about 200 million active websites and about 5 billion internet users. Those numbers have surely only grown since then. AI companies “scrape” content from the web to train their models. The more data a model is trained on, the more adept it becomes at responding to a user’s query.
But that still doesn’t fully answer the question. AI companies train their models on as much data as they can get their hands on, but they have little reason or obligation to disclose exactly what data they’ve used. Much like the algorithm used by Facebook or TikTok, the exact training of an AI model is the company’s secret sauce. That’s where the “profoundly difficult” part of answering what AI knows comes in.
What doesn’t AI know?
So AI models know a whole lot. But there are limitations to what they know. Many AI models have a training cutoff date — the last date for which the model had data when it was trained.
An example of this would be current events. If the model you are using was trained on data that ran through last Sunday, it wouldn’t know what happened the following Tuesday and couldn’t account for it in its responses.
That brings up the distinction between live and static models. A “live model” with access to the internet can search for new information to incorporate into its response, but that information was never part of its original training.
And even then, access to information is not the same thing as judgment. AI can retrieve, summarize, and predict based on available inputs. That does not mean it understands those inputs the way a human does.
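To make that distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical: search_web() and generate() are invented stand-ins for whatever retrieval and model APIs a given product actually uses.

```python
# Hypothetical sketch: how a "live" model differs from a static one.
# search_web() and generate() are invented stand-ins, not real APIs.

def search_web(query: str) -> str:
    # Stand-in for a real retrieval step (a search engine or news API).
    return "Headline published this morning, relevant to: " + query

def generate(prompt: str) -> str:
    # Stand-in for the frozen, pretrained model's text prediction.
    return f"[model response conditioned on: {prompt!r}]"

def answer(question: str, live: bool = False) -> str:
    # A static model sees only the question; a live model also sees
    # retrieved context. The model itself is unchanged either way;
    # fresh facts enter through the prompt, not through retraining.
    context = search_web(question) if live else "(training data only)"
    return generate(f"Context: {context}\nQuestion: {question}")

print(answer("What happened in the markets today?", live=True))
print(answer("What happened in the markets today?", live=False))
```

Either way, the model’s underlying weights stay frozen; a “live” model just reads the retrieved text the same way it reads your prompt.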
A quick aside about AI writing
One use of AI that has become incredibly popular as large language models (LLMs) have become widely available is help with writing. Whether that means using the LLM as a simple proofreader or having it do the writing for you entirely, consumers quickly discovered how easy it is to have AI do the work.
But it can be helpful to step back and understand what is happening when we have AI help us with writing.
LLMs run on statistical prediction. When you ask one to write something, it works by calculating which word is most likely to come next based on all the content and language it was trained on. At each step, it is a simple calculation of what word likely comes next in similar content and contexts, steered (in theory) by your specific prompt.
All this means is that whatever the LLM “wrote” for you is really just the output of a statistical prediction of what word should come next. It cannot “think” or “imagine.”
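If you want to picture that mechanism, here is a toy sketch of a single prediction step in Python. The vocabulary and scores are invented for illustration; a real LLM scores tens of thousands of possible tokens using billions of learned parameters, then repeats this step once per word of output.

```python
import math

# Toy version of one next-word prediction step. Suppose the text so far is
# "Tomorrow the weather will be" and the model has assigned these raw scores
# (logits) to a tiny made-up vocabulary. In a real LLM the scores come from
# billions of learned parameters, not a hand-written dictionary.
logits = {"sunny": 2.1, "rainy": 1.3, "cloudy": 0.8, "purple": -3.0}

# Softmax converts raw scores into a probability distribution.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# Pick a likely continuation (greedily taking the max here; real systems
# usually sample from the distribution, which is why outputs vary).
next_word = max(probs, key=probs.get)

print({w: round(p, 2) for w, p in probs.items()})
# {'sunny': 0.58, 'rainy': 0.26, 'cloudy': 0.16, 'purple': 0.0}
print(next_word)  # sunny
```

Notice that “purple” is never ruled out, just made improbable: everything the model produces is a weighted guess over patterns it has already seen.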
When people use AI to write, they are often not getting original thought. They are getting a prediction engine generating the next most likely sentence based on patterns it has already seen. That can be useful for structure, cleanup, and cohesion.
It can also produce generic language, false confidence, and borrowed thinking if the user stops paying attention.
AI can help you write. It cannot decide what is worth saying.
The limitations of prediction models
Let’s be a bit more practical. You pull up your LLM of choice and ask a question like, “What is likely to happen to the price of the S&P 500 over the next 30 days?”
The model will analyze available data, perhaps pulling from resources like the Financial Times or Fed meeting minutes, and give you a market outlook with some qualifiers for what may affect it (e.g., “if oil prices stabilize … then XYZ could happen…”).
But the model only knows what it knows or can find out. And then it will predict what comes next based on statistical probability and what has happened in the past.
It cannot imagine something that hasn’t happened yet, and in today’s geopolitical climate, what happens next has proven almost impossible to guess.
AI struggles with genuinely unprecedented situations. It can estimate based on what has happened before. It can recombine existing patterns. What it cannot do well is reason from real-world experience or foresee events that break sharply from the past.
We just don’t know … but we can imagine.
In his 2016 book But What If We’re Wrong?, author Chuck Klosterman examines how we as humans can be certain we understand something, only to be proven completely wrong much later. He uses examples such as our once-held certainty that the sun revolved around the earth. Later discoveries proved that something we originally couldn’t even conceive of was, in fact, true.
He also spends some time contemplating gravity, a force we once did not understand but that many people are now highly confident we have “figured out.” He speaks with physicists who believe that 500 years from now we may have a fundamentally different understanding of how gravity works.
We simply don’t know what we don’t know, and we often can only imagine something within the realm of what currently seems possible. He explains that our imagination of the future is shaped by the “tools” of the present — we can’t possibly predict a future that is shaped by tools we haven’t yet conceived of. So in that sense, there are limitations to our own imaginations.
But AI is limited even more by the fact that its ability to “predict” anything is really just a statistical guess based on what has happened previously — and what of that it has actually been trained on.
AI can’t imagine a future that isn’t based on data from the past. It cannot “predict” or “venture a guess” at what will happen or should happen in unprecedented times.
There are many things that AI is particularly adept at and many perfectly valid instances in which to lean on the technology. But anything that calls for responding to new situations or envisioning a future that does not currently exist is beyond its actual capability.
As with all things AI-related, the key for all of us humans utilizing AI-powered tools is to go in with a clear understanding of what we’re trying to achieve. Use AI where it is genuinely useful, but take the time to think critically for yourself about whether AI is really the best tool available for that particular instance.
At the end of the day, that’s really the point: AI can be a powerful tool, but it is still just a tool. It’s not a replacement for creativity or intuition or, more importantly, judgment.