Report: powerful, new foundation models set to transform how businesses use AI

Accenture report outlines the next generation of AI on its way.

A report from Accenture says that while artificial intelligence (AI) is still in its infancy, a subset of the field called foundation models could have an overwhelming impact on businesses.

Foundation models are generally defined as large AI models trained on vast quantities of data that can adapt to a wide range of tasks. Foundation models are generalists. One example is DeepMind’s Gato, which can complete more than 600 different tasks: it can chat, caption images, play Atari video games, stack blocks with a robotic arm, and more.

Gato can learn these various tasks simultaneously and switch between them without forgetting previously learned skills.

Accenture maintains foundation models could drive new data practices, transform strategy and business planning, write sophisticated code, and power novel products and services.

According to the Accenture report, 98 percent of global executives agree AI foundation models will play an important role in their organizations’ strategies in the next three to five years. And 97 percent of global executives agree AI foundation models will enable connections across data types, revolutionizing where and how AI is used.

Accenture Labs and Accenture Research collaborate on the annual research process, which this year included a survey of 4,777 C-level executives and directors across 25 industries. The surveys were fielded from December 2022 through January 2023 across 34 countries.

This new wave of AI comes from two key innovations. The first is transformer models, which Google researchers introduced in 2017: neural networks that identify and track relationships in sequential data, like the words in a sentence. Accenture noted such models are trained through self-supervised learning.

For a large-language model, like OpenAI’s ChatGPT, this could mean “pouring through billions of blocks of text, hiding words from itself, guessing what they are based on surrounding context, and repeating until it can predict those words with high accuracy,” according to the report.
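To make that idea concrete, here is a minimal, illustrative sketch of the “fill in the blank” training signal described above. It is a toy Python example that uses simple word counts instead of a neural network, and the tiny corpus is invented for the illustration; real transformer models learn the same kind of pattern from billions of examples and far richer context.

```python
# Toy illustration of self-supervised learning on text (not the report's or
# OpenAI's code): hide each word, treat its neighbours as the input, and keep
# counts of which word appears in which context. The "labels" come from the
# text itself, so no human annotation is needed.
from collections import Counter, defaultdict

corpus = (
    "foundation models are large models trained on vast data "
    "foundation models adapt to many tasks "
    "large models trained on vast data can adapt to many tasks"
).split()

context_counts = defaultdict(Counter)
for i, word in enumerate(corpus):
    # The hidden word's context is its immediate neighbours.
    context = (corpus[i - 1] if i > 0 else "<s>",
               corpus[i + 1] if i + 1 < len(corpus) else "</s>")
    context_counts[context][word] += 1

def guess(prev_word, next_word):
    """Predict a hidden word from the words on either side of it."""
    candidates = context_counts[(prev_word, next_word)]
    return candidates.most_common(1)[0][0] if candidates else None

print(guess("foundation", "are"))  # fills in the blank: "models"
```

A transformer does the same job with learned weights rather than raw counts, which is what lets it generalize to contexts it has never seen verbatim.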

The second innovation is scale: increasing the size of the models and the amount of computational power used to train them. The latter is one of the barriers to putting the technology to use. Accenture said the amount of computational power needed to train the largest AI models now doubles anywhere from every three to every 10 months.

It also takes time to load foundation models into cloud computing environments, and it is expensive to keep many of the models online in the cloud.

Anyscale, founded by a group of UC Berkeley researchers, is working to lower those barriers. Anyscale developed Ray, an open-source framework that makes it easier to scale and distribute machine learning workloads. Cohere, the Toronto startup building a natural-language processing developer toolkit, uses Ray to train its large language models, according to Accenture.
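The report itself doesn’t include code, but a minimal sketch of what distributing work with Ray looks like might run along the following lines. The preprocess_shard function and the workload are hypothetical stand-ins for illustration, not anything from Cohere’s or Anyscale’s actual pipelines.

```python
# Illustrative sketch of parallelizing a machine learning workload with Ray.
# The task here (preprocessing numbered data shards) is a made-up stand-in.
import ray

ray.init()  # start Ray locally; on a cluster this attaches to existing nodes

@ray.remote
def preprocess_shard(shard_id: int) -> str:
    # In a real pipeline this might tokenize a slice of a training corpus
    # or run a batch of model evaluations.
    return f"shard {shard_id} processed"

# Launch the tasks in parallel across available workers, then gather results.
futures = [preprocess_shard.remote(i) for i in range(8)]
print(ray.get(futures))
```

Ray handles scheduling those tasks across whatever CPUs, GPUs, or machines are available, which is what makes it useful for scaling up model training and data preparation.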

Accenture posits that foundation models have the potential to deeply transform human-AI interaction. Many foundation models use a large-language model as their interface, which is part of why they are gaining traction so quickly: people can easily engage with them.

Accenture said, for example, that Frame, a beta product from the metaverse startup Virbela, uses a code-generating large-language model to help teachers design 3D metaverse classrooms simply by describing out loud what they want in the room.

Foundation models are also transforming the way information-economy work is done, according to Accenture. Google used a foundation model to develop a code-completion tool, which more than 10,000 engineers tested over a three-month period, reducing iteration time on complex coding tasks by six percent.

“The potential of these models to transform workflows and improve productivity, even in highly complex tasks, is undeniable,” Accenture reports.

In fact, Accenture envisions all manner of uses for AI in the not-too-distant future.

“Early opportunities may start with generating marketing images and ad copy, but could grow into sophisticated auto-generated code and new ways to search and access information,” according to the report. “Analysts might use language to ask an AI system to describe patterns across thousands of satellite images. A piece of industrial equipment might use an AI system to translate data from dozens of sensors into a repair procedure for a mechanic. Or multimodal AI might help drastically improve the path planning and performance of robotic arms.”

Startups will need to identify the right role for foundation models. Some applications of the technology might be better served by a narrow AI trained specifically for a task or a clearly defined problem, such as medical image analysis.

More apparent still are some of the troubling aspects of AI. The report noted that foundation models have characteristics that, for now, make them questionable choices in certain situations.

Bias in foundation models is a common concern because of the datasets they have been trained on. The report pointed out that historic datasets have excluded certain populations and demographics, which can lead to “undesirable outcomes.”

Those concerns have led Yoshua Bengio, the co-founder of Montréal-based Mila, along with hundreds of tech leaders, AI researchers, policymakers, and other concerned parties, to sign an open letter urging all AI labs to agree to a six-month pause on training systems more powerful than GPT-4, the technology underlying the popular chatbot ChatGPT.

Also, Geoffrey Hinton, the Canadian computer scientist internationally recognized for his work on AI, recently resigned from Google to warn about the dangers of the technology.

According to the report, businesses will need to decide how to access the new AI and where it will sit within their operations. While some may choose to train new foundation models, many decisions will be about working with pre-trained models. Even that may not be the most desirable form of integration: a wave of companies has begun offering foundation-model capabilities as new B2B products and services.

Finally, another way to integrate foundation models may be to build on top of them. “Talent with the skills to take foundation models, adapt them to business needs, and integrate them into applications will become increasingly important,” Accenture said.
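As a rough illustration of what building on top of a pre-trained model can look like in practice, the sketch below loads an open-source pre-trained language model with the Hugging Face Transformers library and attaches a fresh classification head for a hypothetical business task, in this case support-ticket triage. The model name and the task are assumptions made for the example, not details drawn from the Accenture report.

```python
# Illustrative sketch of adapting a pre-trained model to a narrow business
# task (a hypothetical two-class support-ticket classifier) with the
# open-source Hugging Face Transformers library.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # a small, publicly available model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Reuse the pre-trained weights and bolt a new two-class head on top; only
# that head (plus optional fine-tuning) needs task-specific training data.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)

inputs = tokenizer("My invoice is wrong, please help", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The new head starts out randomly initialized, so predictions are
# meaningless until the model is fine-tuned on labelled tickets.
print(logits.argmax(dim=-1))
```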

The report concludes: “Whether it means training every employee on foundation model capabilities, or creating teams dedicated to integrating these models into different parts of the business, bringing your wider organization into this new era of AI is critical.”

Feature image courtesy Pixabay. Photo by Gerd Altmann.

Charles Mandel

Charles Mandel's reporting and writing on technology has appeared in Wired.com, Canadian Business, Report on Business Magazine, Canada's National Observer, The Globe and Mail, and the National Post, among many others. He lives off-grid in Nova Scotia.
