Artificial Intelligence and the Rise of Regulators

  • Speaking at the American Bar Association’s annual white-collar crime conference, Deputy Attorney General Lisa Monaco announced that the US Department of Justice will “begin evaluating a company’s ability to manage AI-related risks as part of its overall compliance efforts.” Monaco suggested that “compliance officers should take notice,” detailed the department’s new “Justice AI” initiative, and warned companies that “[f]raud using AI is still fraud.” The remarks came shortly after the department filed charges against a Google engineer for allegedly stealing AI secrets from the tech giant and transferring them to foreign entities.
  • The California Privacy Protection Agency (CPPA) released draft regulations on “automated decision-making technology.” The revised regulations include several notable updates to key definitions, as well as the addition of new disclosures that explain how the CPPA expects businesses to comply with the notice requirements. The formal adoption and rulemaking process based on these draft regulations will continue throughout the year.
  • And after months of effort, members of the European Parliament finally pushed the EU’s Artificial Intelligence Act over the finish line this week. The Act imposes obligations on businesses according to risk category, including data governance and human oversight requirements. It is the most comprehensive AI framework to date and is sure to have a global impact.

Actionable next steps for businesses

As you can see, regulators at all levels are looking for a piece of the AI pie. So, how should your company prepare to protect its intellectual property and technology from both risk and regulatory scrutiny?

An initial and easy step for most organizations is to extend existing acceptable use policies (AUPs). Organizations with more mature risk management and technology governance structures, especially companies evaluating early deployment or use of AI, may consider developing responsible use policies (RUPs) that complement existing policies but address the emerging risks of emerging technologies.

As companies draft revisions to an existing AUP or develop a new RUP, we recommend reviewing the following considerations and potential provisions:

  1. Principles, Purpose and Definitions: The foundation of any responsible use policy for AI and emerging technologies must be a clear statement of the principles and purpose that guide development and deployment within the organization. These principles should be consistent with the company’s values, ethics and commitment to responsible innovation. To ensure clarity and consistency, the policy should also provide a precise definition of AI and emerging technologies. This helps establish a common understanding among all stakeholders and prevents ambiguity in interpretation and implementation.
  2. Use cases, pitfalls and testing: Identifying potential use cases and their associated risks or pitfalls is critical to developing an effective risk management strategy. Companies should conduct thorough assessments to anticipate and mitigate potential negative impacts, such as bias, discrimination, privacy violations, or unintended consequences. Rigorous testing and validation of AI systems before deployment is essential to identify and correct potential problems or biases (for a concrete illustration, see the sketch following this list).
  3. Governance and risk management: Strong governance frameworks and risk management processes are essential for the development, deployment and monitoring of AI systems. This includes establishing clear roles and responsibilities, reporting lines, and accountability mechanisms to ensure adequate monitoring and control. As companies increasingly rely on external partners for AI development and deployment, managing these relationships becomes critical. The policy should also address contractual obligations, data sharing agreements, and compliance requirements to ensure that third parties adhere to the same rules and standards for responsible AI use.
  4. Selection and use processes: Companies should establish well-defined criteria and procedures for selecting and using AI technologies, vendors and partners. Factors such as transparency, accountability, and ethical considerations should be emphasized in both the selection and application process to ensure alignment with the company’s principles and values. Responsible use of AI requires clear guidelines for data handling and human oversight.
  5. Cybersecurity and privacy: And of course, the policy should outline best practices for protecting company information and stakeholder privacy throughout the data lifecycle, including collection, storage, transfer, and deletion, as well as procedures to ensure meaningful human control and intervention where necessary. Responsible use of emerging technologies demands strict administrative, physical, and technical safeguards to protect privacy and proprietary interests.
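
To make the testing point in item 2 concrete: below is a minimal, hypothetical Python sketch of one pre-deployment bias check, a demographic parity test that compares a model’s positive-prediction rates across groups. The function name, group labels, sample data, and policy threshold are all assumptions for illustration only; a real testing regime would be far more extensive and should be defined with your governance team and counsel.

```python
# Hypothetical illustration of a pre-deployment bias check: compare a model's
# positive-prediction rates across demographic groups (demographic parity).
# The data, group labels, and threshold below are assumptions for this sketch.

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rate between any two groups."""
    counts = {}  # group -> (total predictions, positive predictions)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example run: flag the model for human review if the gap exceeds a
# policy threshold set by the governance team (0.2 is an assumed value).
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["group_a", "group_a", "group_a", "group_a",
          "group_b", "group_b", "group_b", "group_b"]
GAP_THRESHOLD = 0.2

gap = demographic_parity_gap(preds, groups)
if gap > GAP_THRESHOLD:
    print(f"Bias review required: demographic parity gap = {gap:.2f}")
else:
    print(f"Within policy threshold: gap = {gap:.2f}")
```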

Bottom line: As both the risks and regulation of AI continue to grow, now is the time for business leaders to address these considerations and undertake the additional assessments described above.
