Why every investor should embrace Responsible AI

360info
June 22, 2024 06:00 MYT
As the world of AI expands and becomes more complicated, Responsible AI usage is becoming an important pre-condition for investment in any firm. - Getty Images/iStockphoto/via 360info
ARTIFICIAL intelligence (AI), including machine learning and generative AI, is transforming the investment landscape. From factory floors to financial institutions, this fast-moving technology is being adopted across the economy — and creating unprecedented opportunities and risks.
To succeed in this new environment, investors must keep pace. That means not only sharpening their understanding of how AI works but also ensuring that significant risks are mitigated, particularly those associated with generative AI. Investors understand the importance of adopting principles and policies that ensure AI is developed and deployed in a manner that is valid and reliable, safe, fair, secure and resilient, accountable and transparent, explainable and interpretable — what’s known as Responsible AI.
However, for many companies and investors it is hard to know where to start, particularly given generative AI’s seemingly unparalleled speed of development and a rapidly changing regulatory and commercial environment.
The Responsible AI Playbook for Investors, a collaboration between the World Economic Forum and CPP Investments Insights Institute, aims to bridge this gap. It argues investors can and should exercise the influence of their capital to promote Responsible AI in their portfolios of direct investment, in their work with investment partners and in the ecosystem at large. And it offers practical tools and approaches to help them do it.
Are boards and investors ready for AI?
Though AI has been advancing behind the scenes for decades, the excitement surrounding generative AI has sparked a more recent rush to adoption. In last month’s McKinsey Global Survey on AI, 65% of respondents said their organizations were regularly using generative AI, nearly double the percentage from 10 months ago. Three quarters of the survey’s respondents predicted generative AI will lead to significant or disruptive change in their industries in the years ahead.
Yet directors are struggling with oversight. Some 36% of directors in the 2024 National Association of Corporate Directors Governance Outlook identified AI as one of the most challenging areas to govern. Only 15% of large US companies disclosed any board oversight of the technology.
These statistics should raise eyebrows among investors, who depend on boards to be responsible for overall corporate governance.
Responsible AI: A forethought, not an afterthought
Appropriate governance is critical to ensuring boards and management balance the competitive deployment of AI against its potential risks. Responsible AI is a powerful tool for achieving that balance. By setting clear expectations for boards based on Responsible AI principles, investors can ensure foundational concerns are addressed.
Under Responsible AI, AI technologies are developed and used in ways that avoid social risks and respect ethical standards and legal requirements, reducing potential liabilities. For example, a proactive Responsible AI framework can prevent costly lawsuits and fines resulting from failures to comply with emerging global regulations like the European Union's AI Act.
Moreover, robust AI governance can safeguard against technological failures. A study by Boston Consulting Group (BCG) found companies that prioritize scaling their Responsible AI programmes over simply scaling their AI capabilities experience nearly 30% fewer AI failures — or instances when AI systems function in an unintended way that impacts the company, employees, customers or society.
Boosting engagement, trust and value
Responsible AI’s advantages extend beyond this. AI systems designed with responsibility in mind can significantly enhance customer trust and brand reputation. Research from the Economist Intelligence Unit suggests that when customers know a company uses AI ethically, they are more likely to engage with the brand. They’re also more likely to become repeat customers, driving both top-line growth and sustained profitability.
Finally, research from Bain & Company finds that firms with a comprehensive, responsible approach to AI earn twice as much profit from their AI efforts. Leaders in these firms are not deterred by possible risks, and they gain value from AI by implementing use cases more rapidly and adopting more sophisticated applications.
Through proactive Responsible AI engagement and leadership, investors can drive the responsible development and deployment of AI technologies, ensuring that these innovations contribute positively to corporate performance and market dynamics.
Where can investors begin? 3 quick steps
Step 1: Develop Responsible AI commitments and apply its principles and practices to internal operations.
Investors looking to integrate Responsible AI across their portfolios should become knowledgeable about AI and Responsible AI and apply those principles to their own operations. This includes defining their own Responsible AI principles and priorities, including what they will not invest in.
Step 2: Conduct Responsible AI due diligence on the portfolio.
Investors should perform proper due diligence to assess how companies and investment partners are positioned to meet Responsible AI principles.
Step 3: Engage with companies, external managers, and the broader ecosystem.
Working with companies, external managers and other players can extend investors’ influence and help them maximize the value derived from their AI-enabled investments.