How AI is poised to unlock innovation at an unprecedented pace – POLITICO

Artificial intelligence (AI) has rapidly evolved from a future promise to a current reality. Generative AI has emerged as a powerful technology that can be applied in countless contexts and use cases – each carrying its own potential risks and involving a diverse set of stakeholders. As enterprise adoption of AI accelerates, we find ourselves at a turning point. Proactive policies and smart governance are needed to ensure AI develops as a reliable, equitable force. Now is the time to create a policy framework that unlocks the full beneficial potential of AI while minimizing the risks.

The EU and the pace of AI innovation

The European Union has been a leader in AI policy for years. In April 2021, it presented its AI package, which included its proposal for a regulatory framework on AI.

These early initiatives fueled AI policy discussions amid the rapid pace of innovation and technological change. Just as personal computing democratized access to the Internet and to coding, fueling the creation of more technology, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But such powerful capabilities come with great responsibility: we must prioritize policies that allow us to harness AI's power while protecting against its harms. To do this effectively, we must recognize and bridge the gap between enterprise and consumer AI.

Enterprise vs. Consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionalities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our first value. That's why our AI offerings are built on trust, security and ethics. As with many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) through consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

Enterprise AI is designed and trained specifically for business settings, while consumer AI is open and available for anyone to use. Salesforce is not in the consumer AI space—we create and deploy enterprise customer relationship management (CRM) AI. This means our AI is customized to help our customers meet their unique business needs. We've done this with Gucci by using Einstein for Service. Working with Gucci's global client service center, we helped create a framework that is standardized, flexible and aligned with the brand's voice, empowering client advisors to personalize each customer experience.

In addition to their target audience, consumer and enterprise AI differ in a few other key areas:

  • Context – Enterprise AI applications often have limited potential inputs and outputs because their models are designed for specific business purposes. Consumer AI typically performs general tasks that can vary greatly depending on the application, making it more vulnerable to misuse and harmful effects, such as biased outputs from unvetted data sources or the reproduction of copyrighted material.
  • Data – Enterprise AI systems rely on curated data, typically obtained with the consent of enterprise users and deployed in a more controlled environment, which limits the risk of misuse and improves accuracy. Consumer AI data, meanwhile, can come from a wide range of unverified sources.
  • Data privacy, security and integrity – Enterprise customers often have their own regulatory requirements and may require that service providers maintain strong privacy, security and accountability controls to prevent bias, toxicity and fraud. Enterprise AI companies are incentivized to offer additional protections, as their reputation and competitive advantage depend on it. Consumer AI applications are not subject to such stringent requirements.
  • Contractual Obligations – The relationship between an enterprise AI provider and its customers is based on contracts or procurement rules, which define each party's rights and obligations and how data is handled. Enterprise AI offerings undergo regular review cycles to ensure continuous alignment with high customer standards. In contrast, consumer AI companies provide terms of service that tell consumers what data will be collected and how it may be used, with no ability for consumers to negotiate appropriate safeguards.

A policy framework for ethical innovation

Salesforce serves organizations of all sizes, jurisdictions and sectors. We are uniquely positioned to observe global trends in AI technology and identify developing areas of risk and opportunity.

Humans and technology work best together. To facilitate human oversight of AI technology, transparency is critical. This means that humans must be in control and understand the proper use and limitations of AI systems.

Another important element of an AI governance framework is context. AI models used in high-risk contexts can have a profound impact on an individual's rights and freedoms, including economic and physical impacts, as well as a person's right to dignity, right to privacy, and right to be free from discrimination. These 'high-risk' use cases should be a priority for policymakers.

The EU AI Act does just that: it addresses the risks of AI and safeguards people and businesses. It creates a regulatory framework that defines four levels of risk for AI systems (minimal, limited, high and unacceptable) and allocates responsibilities accordingly.

Comprehensive data protection laws and sound data governance practices are fundamental to responsible AI. For example, the EU's General Data Protection Regulation (GDPR) shaped global data privacy regulation, using a risk-based approach similar to the EU AI Act's. It contains principles that bear on AI regulation: accountability; fairness; data security; and transparency. GDPR sets the standard for data protection laws and will be a determining factor in how personal data is managed within AI systems.

Partnership for the future

Navigating the enterprise AI landscape is a multi-stakeholder endeavor that we cannot tackle alone. Fortunately, governments such as the United States, the United Kingdom and Japan, along with multilateral organizations such as the United Nations, the European Union, the G7 and the OECD, have begun collaborative efforts to create regulatory frameworks that promote both innovation and safety. By establishing the right cross-sector partnerships and aligning behind a principled governance framework, we can harness the full transformative potential of AI while prioritizing humans and ethics.

Learn more about Salesforce's enterprise AI policy recommendations.
