How governments and businesses can build trust in and realise the potential of AI

Hype, excitement and fear-mongering threaten to corrode business and community confidence in AI. However, simplifying terminology, highlighting successes and learning the lessons of challenging AI experiences can increase awareness of AI’s potential, according to KPMG Australia.

The professional services firm recently released its Navigating AI report, which aims to establish a framework for businesses and individuals to move towards trustworthy AI. In an email interview with iTnews, Dean Grandy, Lead Partner, Technology – Infrastructure, Government & Healthcare at KPMG Australia, and Dhawal Jaggi, Partner, Data & Cloud at KPMG Australia, elaborated on the themes and findings of the report.

However, trust in AI remains low, with the Navigating AI report finding that only 34% of Australians are willing to trust the fast-moving technology.

Participating in AI discussions

Grandy argued governments need to carefully consider their role in relation to AI. The Australian Government, he said, should decide whether to be a regulator; a broker that coordinates governments, business, technology and citizens to optimise AI outcomes; an investor in AI that builds citizen-centric solutions; or a combination of all three.

“It will be important for the public sector to participate in AI-related discussions with the business and technology sectors, and the broader community,” said Grandy. “By doing so, it will stay informed of risks and opportunities, ensure consistency in AI design and delivery, and be able to target where and how AI solutions should be used to optimise adoption.”

Avoiding a one-size-fits-all approach

For businesses and government organisations, trust is critical to realising return on investment and value from any AI initiative, irrespective of the solution or the size and complexity of the organisation involved. “Without trust there is limited user adoption, and lack of trust is often one of the key drivers of AI programs not realising expected value,” said Jaggi.

Establishing trust means tailoring strategies to the requirements of different groups of stakeholders. “For example, for customers to trust AI, they need to know that they have total control of their data; transparency around where it is stored, how it is being used and who is using it; understand any financial gains made in the data trade; and the options they have to retract permission or to opt out,” said Jaggi.

Internally, organisations need to mitigate the risk of workers viewing AI as a threat to their jobs, the KPMG partners argued. They must then ensure workers trust AI systems’ output by establishing transparency around the data used and providing assurances that IP and copyright obligations are adhered to.

Practical governance is key to AI solution design, development and implementation, and starts at the system and design level, added Jaggi. A governance regime must apply to all business stakeholders, including data scientists, ML engineers, AI system end-users, executive leadership and the board.

He nominated the key questions leaders need to ask about AI:

  • Are our governance systems and structures fit for purpose to identify and manage AI risks?
  • Do we have the right capabilities and skills within the leadership team to respond? To what extent, and how, should we draw upon external expertise?
  • Is there appropriate reporting to the leadership on the strategic opportunities and risks posed by AI to the organisation?

Delivering education around AI across the organisation

So what are the key characteristics of leaders in the use of AI? According to KPMG, leaders in AI, machine learning and data science avoid siloed or partial approaches, instead providing formal education programs around AI and data literacy across all levels of the organisation.

They build on this by fostering a culture of collaboration and grassroots innovation for whole-of-business impact.

KPMG sees leaders and disruptors in AI across almost all industries in Australia, including financial services, mining, healthcare, telecommunications and government.

“The bottom line is that the time-lag between AI invention and AI adoption by corporates is closing fast,” said Jaggi. “For example, the first chatbot backed by natural language processing was invented at MIT in 1966. It took almost 50 years for mainstream organisations like Pizza Hut to adopt it and launch their chatbot in 2016.

“Fast forward to November 2022, when ChatGPT 3.5 broke the internet, and here we have the likes of Nike, Morgan Stanley, Heinz, Nestle and many others across all major industries already adopting generative AI to drive market differentiation.”

 

Creating the Trustworthy AI framework

To build trust in AI, KPMG and the University of Queensland developed the Trustworthy AI framework, which identifies six dimensions that need to operate in a connected way to ensure trust across the entire AI lifecycle. These dimensions are data, algorithms, security, legal matters, ethics and organisational alignment.

The Navigating AI paper applies the framework to develop a checklist to guide the adoption and use of AI in any organisation, whether it is an AI developer, procurer or user.

Learning lessons from the rise of social media

Jaggi and Grandy both acknowledged that lessons from the rise of social media need to be applied to managing rapidly emerging technologies such as AI. According to Jaggi, questions need to be asked about the impact of any new technology, not just AI, on the psychological, geopolitical, economic and social aspects of society. “We need to ensure the next new shiny technology being launched is appropriately assessed in terms of its impacts.”

 
