6 Things Which Artificial Intelligence Can’t Do


Human intelligence is the benchmark for artificial intelligence, yet AI still has fundamental limitations, and we need to keep them in mind when deploying it.

A classic problem from econometrics illustrates the challenge of going beyond tacit knowledge and shows how failing to understand these limits can lead to unfair results from artificially intelligent agents.

Automation has not solved this problem, because it requires specifying exact rules that tell computers which tasks to perform. Tacit knowledge is difficult or impossible to express formally, however, because the skills that make it up evolved long before humans had formal methods of communication.

#1 Evolutionary Skills

Human skills are difficult to reverse-engineer because they have been evolving for a very long time. What is more, skills that feel effortless, like walking or breathing, are often harder to figure out than tasks we must practice extensively to master.

Artificial intelligence has no such evolutionary history to draw on.

#2 The Future of AI Is Complementary 

The conclusion is that the skills that are simplest for humans are often difficult or impossible for machines. Teaching machines these simple tasks takes a great deal of time and resources, and the payoff may not justify the effort.

Critics like to point out that these shortcomings are evidence that the pursuit of artificial intelligence is misguided or failing. That is not how I see it, though: instead, they should be thought of as inspiration, because these are the challenges AI must address in order to advance.

#3 Use “common sense.”

Humans automatically grasp that people eat at restaurants, order before their food arrives, and leave a tip before leaving, because we possess background knowledge of how society works. Common sense is a vast, shared, and often unspoken body of knowledge about the world we live in.

When we’re two or four years old, we never write down what we are learning. We absorb an enormous amount during those early formative years, but we never get the chance to document it, because each lesson is replaced by new information and experiences before the ink has dried from one page to the next.

AI systems can make basic mistakes because, unlike humans, they do not develop persistent mental representations of objects, people, and places. As a result, AI has to acquire “common sense” differently than we do.

#4 Learn continuously and adapt on the fly.

Today, the typical AI development process is divided into two phases: training and deployment.

AI models learn how to perform specific tasks by ingesting preexisting, static datasets. Once this phase is complete, all of the model’s parameters are fixed. In the deployment stage, the system generates insights about new data using what it learned during training. If we want the system to incorporate new information or adapt to changing circumstances, we have no choice but to retrain it offline on updated datasets, which is generally time-intensive.
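As a rough sketch of this two-phase pattern (the library, data, and model below are illustrative assumptions, not anything specified in this article), the model is fit once on a static dataset, its parameters are then frozen, and at deployment it only produces predictions; absorbing new data means retraining offline from scratch:

```python
# Minimal sketch of the train-then-deploy workflow, assuming scikit-learn
# and synthetic data purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Training phase: learn from a fixed, preexisting dataset ---
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)
# After fit() returns, the learned parameters (model.coef_) no longer change.

# --- Deployment phase: only generate predictions on new data ---
X_new = rng.normal(size=(10, 5))
predictions = model.predict(X_new)

# If circumstances change, the deployed model cannot adapt in place;
# the only option is an offline retrain on an updated dataset.
X_updated = np.vstack([X_train, X_new])
y_updated = np.concatenate([y_train, predictions])  # stand-in for newly labeled data
model = LogisticRegression().fit(X_updated, y_updated)
```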

Humans, by contrast, dynamically and smoothly incorporate continuous input from their environment, adapting their behavior as they go. In the parlance of machine learning, one could say that humans “train” and “deploy” in parallel and in real time. Unfortunately, today’s AI lacks this suppleness.

Humans can adaptively learn new information or skills based on current circumstances, without being explicitly told where to look for it, much as a person reacts when taught something new on the spot rather than being given explicit instructions in advance (for example, “before I give you your exam next week, please read chapter 5”). Artificial intelligence has not yet reached that level of sophistication, even though modern systems continue to improve.
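A very rough approximation of “training and deploying in parallel” is incremental (online) learning, where a model is nudged after each new batch of data instead of being retrained from scratch. The sketch below, which assumes scikit-learn’s SGDClassifier and a synthetic data stream, is only a crude stand-in for the fluid, on-the-fly adaptation described above:

```python
# Minimal sketch of incremental (online) updating on a stream of labeled batches.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()  # linear model trained by stochastic gradient descent

for step in range(100):
    # Each batch arrives while the model is already "deployed".
    X_batch = rng.normal(size=(32, 5))
    y_batch = (X_batch[:, 0] - X_batch[:, 1] > 0).astype(int)

    if step > 0:
        _ = model.predict(X_batch)      # deployment: act on the new data...
    model.partial_fit(X_batch, y_batch,  # ...then immediately learn from it
                      classes=np.array([0, 1]))
```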

#5 Understand cause and effect.

Researchers have found that machine learning is a correlative tool: it excels at identifying patterns and associations in data. However, AI has far less understanding of the causal mechanisms behind those correlations and how they shape our world. Humans, by contrast, reason about causes all the time; we might drop a vase to see what happens, or drink coffee because it makes us feel alert.
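A toy simulation (the numbers and variables here are invented purely for illustration) makes the point concrete: two quantities driven by a shared hidden cause look strongly associated to a correlative tool, even though neither causes the other, and the correlation alone says nothing about what an intervention would do.

```python
# Toy illustration: strong correlation without any causal link.
import numpy as np

rng = np.random.default_rng(42)
temperature = rng.normal(25, 5, size=1000)                       # hidden common cause
ice_cream_sales = 10 * temperature + rng.normal(0, 10, size=1000)
drownings = 0.5 * temperature + rng.normal(0, 2, size=1000)

# A pattern-finding (correlative) tool happily reports the association...
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation(ice cream sales, drownings) = {r:.2f}")      # roughly 0.8

# ...but it cannot tell us that banning ice cream would do nothing to
# reduce drownings; only causal reasoning (or an intervention) can.
```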

Machine Learning Is A Correlative Tool And That Limits Its Understanding Of How Our World Works.

Until artificial intelligence can reason causally, it won’t be able to understand the world fully. Instead of using natural language as we do, AI systems rely on logic and data structures that humans might not even think about or appreciate. That also means AI will have trouble communicating with us on our terms, because there is so much variation in how people communicate.

#6 Reason ethically.

Machine learning systems will be given increasingly high-level responsibilities as more real-life tasks are delegated to them, from lending money to making hiring decisions to reviewing parole applications. Resolving the alignment problem will therefore become an increasingly pressing challenge for society, and there is no straightforward solution to it.

Establishing specific guidelines for how our AI systems should be implemented would be a good place to start.

The future of AI lies in its complementarities with human skills rather than its substitutability for them.

Final verdict

AI and its interaction with human capabilities need a serious rethink regarding the kinds of problems it is being developed to address. History offers a useful parallel: until the computing revolution of the 1970s and 1980s, statisticians employed veritable armies of graduate students to hand-process reams of paper-based data into summary statistics like means, medians, and standard deviations. Computers took over that routine work and left statisticians free to focus on analysis, and artificial intelligence in defense is going to be a big factor in much the same way.

It is time to rethink how AI works alongside human capabilities and what problems it is possible to solve with it.