Businesses are moving forward with the development and implementation of artificial intelligence solutions, despite the uncertainty in the regulatory environment.
This month, European legislators passed the most detailed artificial intelligence legislation seen globally, while other jurisdictions, including the U.S. federal government and individual states, remain further behind.
Regulators are scrutinizing the risks to consumer privacy, the protection of user data, and the possible biases and inaccuracies in AI algorithms.
Ed McLaughlin, President and Chief Technology Officer at Mastercard, said that many questions remain unresolved, which means companies must build their systems early in preparation for requirements that have yet to be determined.
Firms are moving ahead with technology adoption without waiting for regulators to set the terms and conditions. Meanwhile, chief information officers report that they are blending established protocols for handling customer data with some educated speculation to ensure AI applications comply with potential regulations, all while keeping communication lines open with policy makers.
Companies such as Nationwide Mutual Insurance and Goldman Sachs have developed their own internal rules and structures for data and AI usage, partly by predicting what future regulations might entail.
Jim Fowler, the CTO of Nationwide, mentioned that any upcoming AI regulations at the state level would probably require a degree of openness regarding the use of customer data in AI-driven decision processes.
Nationwide adopted a 'red team' and 'blue team' strategy: the blue team is tasked with exploring new AI opportunities, while the red team assesses when to pull back because of cybersecurity concerns, bias, or the need to comply with government regulations.
The red team's work, Fowler explained, produced a set of AI guidelines intended to help build solutions that generate business value while aligning with the risks states are expected to regulate.
Fowler noted that the potential for AI regulations to differ from one country to another, or even among states, adds complexity to the way firms manage their AI technologies. However, he mentioned that the company is already familiar with this scenario, as insurance underwriting rules already vary by state.
Fowler observed that the company's product offerings vary by state, and he anticipates that technology, particularly AI, might follow a similar pattern, with customer benefits differing from one state to another.
Marco Argenti, the CIO at Goldman Sachs, mentioned that the firm formed a committee dedicated to evaluating the risks of implementing AI. He highlighted his ongoing conversations with regulatory bodies and his commitment to ensuring that the company's internal applications of AI are developed with these risks in mind, particularly with regard to data security.
However, establishing internal safeguards is not a complete answer to complying with upcoming regulations; companies must recognize that policymakers could introduce further rules, he noted.
Earlier this month, European legislators enacted the AI Act, the first regulations poised to directly govern how AI is used. The rules, which will take effect progressively over the next few years, prohibit certain applications of AI and set new standards for transparency. Developers of the most advanced AI systems, those the EU identifies as posing a 'systemic risk,' must subject their models to safety assessments and report significant incidents involving their models to regulators.
Although the legislation is enforceable only within the EU, it is anticipated to affect the worldwide landscape since major AI firms would likely prefer not to lose access to the European market.
Eric Loeb, Salesforce's Executive Vice President of Global Government Affairs, mentioned that more than 500 AI-related legislative proposals have been introduced in the U.S. alone. He emphasized the significant level of activity even at the state level within the U.S., highlighting the need for vigilance and discernment.
Loeb mentioned that his team, responsible for integrating safety and compliance advice into Salesforce products, is engaging with policymakers to foresee future developments. However, he also noted that the situation is always evolving, implying that it's impossible to assume all issues are permanently resolved.