By Saskia Courtchick and Jonathan Berko
From potentially brand-damaging ethical risks to regulatory uncertainty, AI poses challenges for investors. But there is a way forward.
Artificial Intelligence (AI) raises many ethical issues that can turn into risks for consumers, companies and investors. And AI regulation, which is progressing unevenly across multiple jurisdictions, adds to the uncertainty. The key for investors, in our view, is to focus on transparency and clarity.
The ethical issues and risks of AI begin with the developers creating the technology. From there, they move to developer clients – companies that integrate AI into their businesses – and to consumers and society more broadly. Through their holdings in AI developers and companies that use AI, investors are exposed to both ends of the risk chain.
AI is advancing rapidly, far ahead of most people's understanding of it. Among those trying to catch up are global regulators and lawmakers. At first glance, their activity in the AI area has grown exponentially over the past few years: many countries have issued relevant strategies, and others are close to introducing them (Display).
In reality, progress has been uneven and far from complete. There is no uniform approach to AI regulation across jurisdictions, and some countries introduced their own regulations before the launch of ChatGPT in late 2022. As AI expands, many regulators will need to update and potentially expand the work they already do.
For investors, regulatory uncertainty compounds AI's other risks. To understand and anticipate how to address these risks, it helps to assess the AI business, ethical and regulatory landscape.
Data risks can damage brands.
AI includes an array of technologies directed towards performing tasks normally performed by humans, and performing them in a human-like manner. AI and business can intersect through generative AI, which involves different forms of content creation, including video, voice, text and music; and large language models (LLMs), a subset of generative AI that focuses on natural language processing. LLMs serve as the foundational model for various AI applications – such as chatbots, automated content creation, and analyzing and summarizing large amounts of information – that companies are increasingly using in their customer engagement.
As many companies have found, however, AI innovations can carry potentially brand-damaging risks. These may arise from biases inherent in the data on which LLMs are trained: as a result, for example, banks may inadvertently discriminate against minorities in approving home loans, and a US health insurance provider faces a class-action lawsuit alleging that extended-care claims for elderly patients were wrongly denied due to the use of AI algorithms.
Bias and discrimination are just two risks that regulators target and should be on investors' radar. Others include intellectual property rights and data privacy protections. Measures to mitigate risk – such as developer testing of the performance, accuracy and robustness of AI models, and providing transparency and support to companies in implementing AI solutions – should also be examined.
Dive deeper to understand AI regulations.
The AI regulatory environment is evolving in different ways and at different speeds across jurisdictions. Recent developments include the European Union's (EU) Artificial Intelligence Act, which is expected to enter into force in mid-2024, and the UK government's publication of its response to the consultation on the AI Regulation White Paper launched last year.
Both efforts illustrate how AI regulatory approaches can differ. The UK is adopting a principles-based framework that existing regulators can apply to AI issues in their respective domains. In contrast, the EU Act introduces a comprehensive legal framework, with compliance obligations based on risk classification for developers, companies, and importers and distributors of AI systems.
Investors should, in our view, do more than drill down into the details of each jurisdiction's AI regulations. They should also familiarize themselves with how jurisdictions are regulating AI issues using laws that predate and extend beyond specific AI regulations – for example, to address AI-related violations of copyright law and cases where AI affects labor markets.
Fundamental analysis and engagement are key.
A good rule of thumb for investors trying to assess AI risk is that companies that proactively make full disclosures about their AI strategies and policies are likely to be well positioned for new regulations as they develop. More generally, fundamental analysis and issuer engagement – the fundamentals of responsible investing – are central to this area of research.
Fundamental analysis should test insights against not only company-level AI risk factors, but also the business chain and regulatory environment, as well as core responsible-AI principles (Display).
Engagement conversations can be structured to cover AI issues not only as they affect business operations, but also from environmental, social and governance perspectives. Questions investors may ask the board and management include:
- AI integration: How has the company integrated AI into its overall business strategy? What are some specific examples of AI applications within the company?
- Board oversight and expertise: How does the board ensure it has sufficient expertise to effectively oversee the company's AI strategy and implementation? Are there specific training programs or initiatives?
- Public commitment to responsible AI: Has the company published a formal policy or framework on responsible AI? How does this policy align with industry standards, ethical AI considerations and AI regulation?
- Active transparency: Does the company have any proactive transparency measures in place to counter future regulatory implications?
- Risk management and accountability: What risk management processes does the company have in place to identify and mitigate AI-related risks? How is responsibility for monitoring these risks assigned?
- Data challenges in LLMs: How does the company address privacy and copyright challenges associated with input data used to train large language models? What steps are taken to ensure input data complies with privacy regulations and copyright laws, and how does the company handle restrictions or requirements related to input data?
- Bias and fairness in generative AI systems: What steps does the company take to prevent and/or reduce biased or unfair results from its AI systems? How does the company ensure that the output of any generative AI systems used is fair, unbiased, and does not perpetuate discrimination or disadvantage against any individual or group?
- Incident tracking and reporting: How does the company track and report incidents related to its development or use of AI, and what mechanisms are in place to deal with and learn from these incidents?
- Metrics and reporting: What metrics does the company use to measure the performance and impact of its AI systems, and how are these metrics reported to external stakeholders? How does the company maintain due diligence in monitoring the regulatory compliance of its AI applications?
Ultimately, the best way for investors to find their way through the maze is to stay grounded and skeptical. AI is a complex and fast-moving technology. Investors should insist on clear answers and not be unduly swayed by broad or complicated explanations.
The authors would like to thank Roxanne Low, ESG analyst with AB's responsible investment team, for her research contributions.
The views expressed herein do not constitute research, investment advice or trading recommendations and do not necessarily represent the views of all AB Portfolio management teams. Views are subject to revision from time to time.
Editor's note: Summary bullets for this article were selected by the Seeking Alpha editors.