AIMultiple Research

In-Depth Guide to Future of AI in 2024, According to Top Experts

Investment and interest in AI are expected to increase in the long run, since major AI use cases (e.g. autonomous driving, AI-powered medical diagnosis) that will unlock significant economic value are within reach. These use cases are likely to materialize since improvements are expected in the three building blocks of AI: availability of more data, better algorithms, and computing power.

Short-term changes are hard to predict, and we could experience another AI winter; however, it would likely be short-lived. Feel free to jump to different sections to see the latest answers to your questions about the future of AI:

Will interest in AI continue to increase?

Short answer: Yes. There is no reason to expect any decline in AI hype even after COVID-19:

Interest in AI has been increasing

There has been a 14x increase in the number of active AI startups since 2000. Thanks to recent advances in deep learning, AI is already powering search engines, online translators, virtual assistants, and numerous marketing and sales decisions.

Figure 1 shows the volume of search queries including the term “artificial intelligence”. Between 2015 and 2018, the popularity of AI grew to 2-3 times its 2015 level. Interest then stayed relatively flat until two years ago, and it is now rising again.

Time series graph showing increased interest in AI since 2015.
Figure 1: Interest in “artificial intelligence” has grown in the past two years. Source: Google Trends

There are high value AI use cases that require further research

Autonomous driving is one popular use case with an increasing trend. While Tesla and Audi manufacture semi-autonomous vehicles today, these cars still require drivers to remain in control. The technology continues to improve rapidly toward fully automated driving. Though Elon Musk stated “Next year for sure, we will have over a million robotaxis on the road” in October 2019, we still don’t see robotaxis. This is because Elon Musk is a master of hype, and self-driving cars face complex regulatory issues such as liability in accidents. Elon Musk himself highlighted this issue in a tweet reply in April 2020.

However, McKinsey predicts that roughly 15% of vehicles sold in 2030 will be fully autonomous.

Automated content generation has also attracted the interest of businesses and AI experts since OpenAI released GPT-3 in June 2020. Compared to GPT-2, OpenAI increased the number of parameters from 1.5 billion to 175 billion. Yet GPT-3 still has weaknesses in tasks that require comparing two sentences, where its accuracy is below 70% with few-shot learning. As Natural Language Generation (NLG) technology advances, we will encounter higher-accuracy content automation solutions.

Another use case is conversational AI/chatbots. We commonly encounter AI agents in customer service and call centers. However, the capabilities of these agents are currently quite limited. As AI research progresses, conversational agents will improve until they can handle almost all customer requests.

AI research effort continues to grow

Between 1996 and 2016, the number of published papers on AI increased eightfold, outpacing the growth in computer science papers overall.

Graph showing that growth in the number of AI papers has outpaced growth in computer science papers overall.
Source: AI Index

Especially in 2016-2018, AI paper growth has accelerated:

AI Publications per year
Source: Stanford

In the late ’90s, AI papers accounted for less than 1% of journal articles and around 3% of conference publications. By 2018, AI’s share of published papers had roughly tripled, accounting for 3% of peer-reviewed journal publications and 9% of published conference papers.

Research may need to move in new directions beyond deep learning to achieve breakthrough AI. AI researchers like Gary Marcus believe that deep learning has reached its potential and that other AI approaches are required for breakthroughs. Marcus outlined his observations on the limitations of deep learning in a paper, answered the most critical arguments against it, and put a timeline on his predictions. He expects VC enthusiasm in AI to be tempered in 2021 but expects the next AI paradigm unlocking commercial opportunities (i.e. the successor to deep learning) to arrive sometime between 2023 and 2027.

AI systems have so far relied on four factors for improvement: increased computing power, availability of more data, better algorithms, and better tools. In all four areas there is potential for dramatic improvement, though it is hard to put these on a timeline. In addition, thanks to cryptography and blockchain, it is becoming easier to use the wisdom of the crowd to build AI solutions, which will also facilitate AI model building.

Advances in computing power

Deep learning relies on computing power to solve more complex problems. With current technology, training can take too long to be practical, so advances in computing power are needed. With new computing technologies, companies can train AI models that learn to solve more complex problems.

AI-enabled chips

Even the most advanced CPU may not run an AI model efficiently on its own. For use cases like computer vision, natural language processing, or speech recognition, companies need higher-performance processors. AI-enabled chips address this challenge: these specialized accelerators take over the highly parallel computations at the heart of AI workloads, freeing CPUs to work on their other duties and improving overall efficiency. New AI technologies will require these chips to solve complicated tasks and perform them faster.

Companies like Facebook, Amazon, and Google are increasing their investments in AI-enabled chips. Below you can find a chart of global equity funding for AI-enabled chip startups.

These chips will assist next-generation databases for faster query processing and predictive analytics. Industries like healthcare and automobiles heavily rely on these chips for delivering intelligence. We have prepared a comprehensive, sortable list of companies working on AI chips.

Advances in GPUs

GPUs are one of the most commercially used types of AI-enabled chips.

Rendering an image requires relatively simple computations, but they must be performed at a massive scale very quickly. GPUs are the best option for such cases because they can process thousands of simple tasks simultaneously. New GPU technologies render better-quality images because they complete these simple tasks much faster.

Modern GPUs have become powerful enough to be used for tasks beyond image rendering, such as cryptocurrency mining or machine learning. While CPUs were traditionally used for these tasks, data scientists realized that they too consist of repetitive, parallel computations, exactly what GPUs excel at. As a result, GPUs are now widely used for training AI models efficiently.
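To make the difference concrete, here is a minimal sketch, assuming PyTorch is installed and a CUDA-capable GPU is available, that times the same large matrix multiplication on the CPU and on the GPU:

```python
# A minimal sketch timing the same matrix multiplication on CPU and GPU
# (assumes PyTorch is installed; the GPU branch needs a CUDA device).
import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

start = time.perf_counter()
_ = a @ b                              # thousands of simple multiply-adds on the CPU
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()           # wait for transfers and one-time init to finish
    start = time.perf_counter()
    _ = a_gpu @ b_gpu                  # the same work, spread across GPU cores
    torch.cuda.synchronize()           # wait for the kernel to finish before timing
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
else:
    print(f"CPU: {cpu_time:.3f}s (no CUDA device found)")
```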

Quantum computing

Traditional computer systems work with binary states: 0 and 1. Quantum computing takes this to another level by exploiting quantum mechanics, working with qubits instead of bits. While a bit is either 0 or 1, a qubit can also be in a superposition, representing both states at the same time. This property opens new possibilities and provides faster computation for certain tasks, such as neural network optimization and digital approximation.

IBM states that it will be possible to build a quantum computer with 50-100 qubits in the next 10 years. Considering that a 50-qubit quantum computer can outperform today’s most powerful supercomputers on certain tasks, quantum computing has significant potential to provide additional computing power.
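As an illustration of the qubit concept, here is a minimal sketch using the open-source Qiskit library (assuming the qiskit and qiskit-aer packages are installed) that puts a single qubit into superposition and measures it:

```python
# A minimal qubit-superposition sketch with Qiskit
# (assumes the qiskit and qiskit-aer packages are installed).
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1, 1)
qc.h(0)           # Hadamard gate: puts the qubit into a superposition of 0 and 1
qc.measure(0, 0)  # measuring collapses the qubit to either 0 or 1

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)     # roughly half '0' and half '1', e.g. {'0': 507, '1': 493}
```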

For more on quantum computing, feel free to read our in-depth guide.

Advances in data availability

This is a point that does not need to be explained in much detail. Data availability has been growing exponentially and is expected to continue to do so with the increasing ubiquity of IoT devices.

Advances in algorithm design

While the capabilities of AI improve rapidly, the algorithms behind AI models will also evolve. Advances in algorithm design will enable AI to work more efficiently and become accessible to more people with less technical knowledge. Below you can find the prominent advancements in AI algorithm design.

Explainable AI (XAI)

One of the main weak points of AI models is their complexity. Building and understanding an AI model requires a certain level of programming skill, and it takes time to digest a model’s workflow. As a result, companies often benefit from the results of AI models without understanding how they work.

To address this challenge, Explainable AI aims to make these models understandable to anyone. XAI has three main goals:

  • How the AI model affects developers and users
  • How it affects data sources and results
  • How inputs lead to outputs

As an example, AI models will be able to diagnose diseases in the future. However, doctors also need to know how the AI arrives at a diagnosis. With XAI, they can understand how the AI makes its analysis and explain the situation to their patients accordingly. If you are interested, you can read more about XAI in our in-depth guide.
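As one illustration of how an XAI technique works in practice, the sketch below trains a model on scikit-learn’s classic diabetes-progression dataset and uses the open-source SHAP library to show which patient features drive each prediction. The model and dataset are illustrative stand-ins, not a real medical system:

```python
# A minimal XAI sketch: attribute a model's predictions to its input features
# with SHAP (assumes the shap and scikit-learn packages are installed).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Predict diabetes progression from patient features (age, BMI, blood pressure, ...)
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # efficient explainer for tree models
shap_values = explainer.shap_values(X.iloc[:100])

# Shows which features push predictions up or down, and by how much
shap.summary_plot(shap_values, X.iloc[:100])
```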

Transfer learning

Transfer learning is a machine learning method that lets users benefit from a previously trained AI model for a different task. This technique makes sense in cases like the following:

  • Some AI models aren’t easy to train and can take weeks to work properly. When another task comes up, developers can choose to adopt this trained model, instead of creating a new one. This will save time for model training.
  • There might not be enough data in some cases. Instead of working with a small amount of data, companies can use previously trained models for more accurate results.

As an example, an AI model that is well trained to recognize different cars can also be adapted to recognize trucks. Instead of starting from scratch, the insight gained from cars carries over to trucks, as the sketch below illustrates.
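This is a minimal sketch with PyTorch and recent torchvision: a network pretrained on ImageNet is reused, its backbone is frozen, and only a new classification head is trained. The two-class car/truck setup is an illustrative assumption:

```python
# A minimal transfer learning sketch (assumes torch and torchvision are installed).
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its learned features are kept as-is
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new task, e.g. "car" vs "truck"
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are trained
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```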

Reinforcement learning (RL)

Reinforcement learning is a subset of machine learning in which an AI agent learns to take actions that maximize its cumulative reward. Unlike traditional supervised learning, RL doesn’t look for patterns in labeled data to make predictions; it makes sequential decisions and learns from experience.

Today, the most famous example of RL is Google DeepMind’s AlphaGo, which defeated the world’s number one Go player, Ke Jie. In the future, RL will also power fully automated factories and self-driving cars.
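To show what “learning by experience” looks like in code, below is a minimal, illustrative sketch of tabular Q-learning, one of the simplest RL algorithms. An agent in a toy 5-cell corridor learns by trial and error that walking right earns the reward; all hyperparameters are arbitrary illustrative choices:

```python
# A minimal tabular Q-learning sketch: an agent in a 5-cell corridor learns
# that walking right reaches the rewarded rightmost cell. Values are illustrative.
import random

n_states, n_actions = 5, 2                      # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1           # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        step = 1 if action == 1 else -1
        next_state = max(0, min(n_states - 1, state + step))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # after training, "go right" has the higher value in every state
```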

Self-Supervised Learning (Self-Supervision)

Self-supervised learning (or self-supervision) is a form of autonomous supervised learning. Unlike supervised learning, this technique doesn’t require humans to label data; the labels are generated from the data itself. According to Yann LeCun, Facebook VP and chief AI scientist, self-supervised learning will play a critical role in achieving human-level intelligence.

While this method is mostly used in computer vision and NLP tasks like image colorization or language translation today, it is expected to be used more widely in our daily lives. Some future use cases of self-supervised learning include:

  • Healthcare: This technique can be used in robotic surgeries and in estimating the dense depth in monocular endoscopy.
  • Autonomous driving: It can determine the roughness of the terrain in off-roading and depth completion while driving.
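A minimal sketch of the idea follows, using a classic pretext task: predicting how an image was rotated (0, 90, 180, or 270 degrees). The labels come from the data itself, so no human annotation is needed; the model and data are illustrative placeholders:

```python
# A minimal self-supervised pretext task: predict an image's rotation.
# The labels are generated from the data itself; model and data are placeholders.
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by a random multiple of 90 degrees; that multiple is the label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

model = nn.Sequential(                  # tiny stand-in for a real vision backbone
    nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)     # placeholder for real unlabeled images
rotated, labels = make_rotation_batch(images)
loss = loss_fn(model(rotated), labels)  # learn to recognize orientation
loss.backward()
optimizer.step()
```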

Advances in AI-building tools

Though these are not novel algorithms, they can reduce the time to build models and enable both AI research and commercialization.

Neural network compatibility and integration

Choosing the best neural network framework is a challenge for data scientists. With many AI tools on the market, it is important to pick the one that best fits the neural network to be built. However, once a model is trained in one framework, it is hard to move it to another.

To solve this problem, tech giants like Facebook, Microsoft, and Amazon cooperated to build the Open Neural Network Exchange (ONNX), which lets trained neural network models move across multiple frameworks. In the future, ONNX is expected to become an essential technology for the industry.
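A minimal sketch of the workflow, assuming PyTorch with its built-in ONNX exporter, looks like this (the toy model is illustrative):

```python
# A minimal sketch of exporting a PyTorch model to the ONNX format
# (assumes torch is installed; the model here is a toy example).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
dummy_input = torch.randn(1, 10)   # an example input defines the exported graph

torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# The saved file can then be loaded by other ONNX-compatible runtimes, e.g.:
#   import onnxruntime
#   session = onnxruntime.InferenceSession("model.onnx")
#   outputs = session.run(None, {"input": dummy_input.numpy()})
```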

Automated machine learning

AutoML supports companies in solving complicated business cases. With this technology, analysts won’t need to go through manual machine learning training processes: steps such as model selection and hyperparameter tuning are automated. As a result, they can focus on the business problem instead of spending time on the modeling workflow.

AutoML also offers customization for different business cases, enabling flexible, portable models. To learn more about AutoML, you can check our article.
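Full AutoML frameworks automate the whole pipeline, but the core idea can be shown with scikit-learn’s GridSearchCV, which automates one step: hyperparameter search. This is a minimal, illustrative sketch rather than a production AutoML setup:

```python
# A minimal illustration of automated hyperparameter search with scikit-learn
# (full AutoML frameworks automate far more; dataset and grid are illustrative).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Instead of hand-tuning, try every combination in the grid with cross-validation
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```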

Advances in collaboration in AI model building

Improved tools lower the bar for model building, but human ingenuity remains a key ingredient in AI models. Data science competitions help companies attract thousands of data scientists to work on their problems.

In the past, challenges like data confidentiality slowed the growth of such platforms. However, modern encryption techniques are enabling companies to share their data publicly and benefit from the wisdom of crowds without giving away confidential information.

What are the future technologies to be enabled by AI?

AI use cases will shape the development of AI. The availability of capital depends on use cases, and more valuable use cases will motivate companies and governments to invest more.

The improvement of AI will make our intelligent systems even more capable. Our cars will drive themselves, houses will adjust their electricity usage, and robots will diagnose our illnesses. In other words, AI will cover more of our lives and automate more of our daily tasks. Here are a few AI technologies that currently exist with quite limited functionality or scope (research projects); improvement of these technologies will unlock significant value:

  • AI assistants
  • AI-based medical diagnosis
  • Autonomous payments
  • Autonomous vehicles
  • Bionic organs
  • Conversational agents
  • Smart cities
  • Smart dust

Cloud computing based use cases

Cloud computing aims to create a system where you can access computing resources whenever you need them. According to Gary Eastwood from IDG Contributor Network, cloud computing and AI will fuse in the future.

The integration will help AI models access information from the cloud, train themselves, and feed new insights back into the cloud, enabling other AI models to learn from them. This fusion improves computational power and the capacity to handle large volumes of data and intelligence.

Possible use cases of this fusion include AI-led drones, sensor networks, and smart dust.

Extended Reality (XR)

Besides technologies like Virtual Reality and Augmented Reality, start-ups are experimenting with bringing touch, taste, and smell into these immersive experiences with the support of AI. While Extended Reality (XR) may bring several security issues, it will be essential to improving worker productivity and the customer experience in the future.

According to Accenture, the designers at Volkswagen can experience the car’s look, feel and drive—spatially, in 3D—thanks to XR tools.

Convergence of IoT and AI

Another trending technology, IoT, will merge with AI in the future. AI can be used in IoT platforms for use cases like root cause analysis, predictive maintenance of machinery, and outlier detection. Devices like cameras, microphones, and other sensors collect data from video frames, speech, and other media; models are then trained on this data in the public cloud using advanced neural network techniques.
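As a minimal illustration of the outlier detection use case, the sketch below flags anomalous temperature readings from a simulated sensor using scikit-learn’s IsolationForest; the data and settings are synthetic assumptions:

```python
# A minimal outlier detection sketch for IoT sensor data with scikit-learn
# (the readings are synthetic; a real deployment would stream live sensor data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=20.0, scale=0.5, size=(1000, 1))  # healthy ~20 C readings
spikes = np.array([[35.0], [2.0]])                        # injected sensor faults
readings = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.002, random_state=0).fit(readings)
flags = detector.predict(readings)   # -1 = outlier, 1 = normal
print(readings[flags == -1])         # the injected faults should be flagged
```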


Sources:

IBM predictions on quantum computing

Cem Dilmegani
Principal Analyst

Cem has been the principal analyst at AIMultiple since 2017. AIMultiple informs hundreds of thousands of businesses (as per similarWeb) including 60% of Fortune 500 every month.

Cem's work has been cited by leading global publications including Business Insider, Forbes, Washington Post, global firms like Deloitte, HPE, NGOs like World Economic Forum and supranational organizations like European Commission. You can see more reputable companies and media that referenced AIMultiple.

Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised businesses on their enterprise software, automation, cloud, AI / ML and other technology related decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.

He led technology strategy and procurement of a telco while reporting to the CEO. He has also led commercial growth of deep tech company Hypatos that reached a 7 digit annual recurring revenue and a 9 digit valuation from 0 within 2 years. Cem's work in Hypatos was covered by leading technology publications like TechCrunch and Business Insider.

Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.


Comments


4 Comments
Nita Vandemark
Jan 12, 2022 at 00:30

IBM produced their quantum computer 2-3 years ago, tho’ it is still behind the D-Wave.

shikha gupta
Apr 23, 2021 at 09:19

Thanks for an informative blog sharing another insightful one “future of ai in business “

artificialintelligence
Jan 18, 2021 at 17:09

I have bookmarked your website because this site contains valuable information in it. I am really happy with articles quality and presentation. Thanks a lot for keeping great stuff. I am very much thankful for this site.

Cem Dilmegani
Jan 23, 2021 at 11:08

Thanks!

AI Bot Development
Jul 22, 2020 at 07:21

This has some really interesting ideas I’m looking forward to trying this year. Thanks for the informative and helpful walk-through.

Cem Dilmegani
Jul 25, 2020 at 06:27

Thanks! Apologies for removing the company link, we don’t include links in comments unless they are absolutely necessary. Or else we get more comments than we can deal with.
