As we ventured into 2023, the launch and rapid proliferation of generative artificial intelligence (AI) tools, marked by their broad accessibility and ease of use, sparked an immense surge in interest and demand for guidance. This era has been characterized by rapid technological adoption, presenting legal, commercial, ethical, and societal implications that are unparalleled in their breadth and pace.
The dynamic landscape has significantly shaped our clients’ concerns and the counsel we provide. Reflecting on these developments, we aim to shed light on a novel challenge for General Counsels (GCs), offer insights into the evolving regulatory environment, and propose a strategic framework to navigate AI-related decisions, whether they pertain to new tools or applications. Amidst a deluge of AI discourse, our goal is to equip GCs with practical, actionable guidance.
Addressing Overreliance: A Crucial Challenge for GCs
The initial phase of generative AI adoption has unveiled critical insights into its potential pitfalls, such as the phenomenon of AI “hallucinations,” in which a model confidently produces plausible but false or fabricated information. As we progress into 2024, with AI becoming more entrenched in business operations, GCs must vigilantly manage the emerging risk of overreliance on AI.
Overreliance stems from a lack of understanding of how an AI tool functions or how it reaches its outputs. This knowledge gap can lead to misplaced trust, particularly when a tool, shaped by its training data and the biases embedded in it, repeatedly delivers seemingly accurate results. Such scenarios foster a false sense of security, diminishing the urgency to verify AI-generated recommendations and increasing the likelihood that inaccurate advice is accepted.
This issue is not trivial; the implications of overreliance span organizational performance, profitability, and people, affecting critical business domains such as marketing, product development, and operations. Inaccurate AI advice can lead to misrepresentations, misguided product development, or legal liability, underscoring the necessity of accuracy checks and human oversight whenever AI is used.
Organizations might be tempted to demand explanations from AI systems for their conclusions. This approach has limits, however: complex explanations may be overlooked or accepted uncritically by users, which is why GCs should foster a culture of rigorous evaluation of, and healthy skepticism toward, AI-generated outputs.
Understanding the AI Regulatory Landscape
As AI becomes more integral to business processes, GCs must navigate a complex web of federal and state regulations. The current U.S. regulatory framework for AI is fragmented, with sector-specific laws and a mosaic of state regulations adding layers of complexity.
Federal agencies such as the Consumer Financial Protection Bureau (CFPB) and the Equal Employment Opportunity Commission (EEOC) have highlighted how AI applications could contravene existing laws, such as the Equal Credit Opportunity Act (ECOA) and Title VII of the Civil Rights Act, due to potential bias or opaque decision-making. Similarly, the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC) are scrutinizing AI use in business practices for deceptive or fraudulent conduct, irrespective of intent.
Recent federal and state initiatives signal that formal regulation is forthcoming, and GCs should anticipate the legal and operational standards it will impose. Additionally, state consumer privacy laws and industry-specific regulations further complicate compliance, making it imperative for GCs to remain vigilant and adaptable.
A Framework for AI Tool Evaluation
In this uncertain landscape, GCs can adopt a five-part framework to assess AI tools critically; a brief sketch of how this checklist might be tracked in practice follows the list:
- Understanding the Tool: Evaluate whether the AI tool genuinely employs machine learning and consider the model type, data training, terms of use, and potential legal or IP concerns.
- Assessing the Use Case: Determine the suitability of AI for the intended application, considering the tolerance for error and the necessity of human judgment or empathy.
- Considering the Data: Prioritize data privacy and confidentiality, assessing the type of data processed and the potential for sensitive information to influence future outputs or be accessed by others.
- Analyzing the Output: Reflect on the nature of the AI’s output, its intended audience, and the potential for overreliance based on the format and context of the information provided.
- Evaluating Accuracy: Critical to the entire process, the accuracy of an AI tool must be thoroughly verified, ensuring it meets the necessary standards for its intended application.
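For teams that want to operationalize the framework, the five dimensions translate naturally into a structured review checklist. The Python sketch below is a minimal, hypothetical illustration: the dimension names come from the list above, but the `Dimension` class, the sample questions, and the `review_report` helper are our own illustrative assumptions, not a prescribed or authoritative implementation.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the five-part framework as a review checklist.
# The dimension names mirror the framework above; the questions are
# illustrative prompts a legal team might track, not an exhaustive set.

@dataclass
class Dimension:
    name: str
    questions: list[str]
    answers: dict[str, bool] = field(default_factory=dict)  # question -> resolved?

    def open_items(self) -> list[str]:
        """Return questions not yet resolved during review."""
        return [q for q in self.questions if not self.answers.get(q, False)]


FRAMEWORK = [
    Dimension("Understanding the Tool", [
        "Does the tool genuinely employ machine learning?",
        "Are the model type and training data documented?",
        "Have the terms of use and IP provisions been reviewed?",
    ]),
    Dimension("Assessing the Use Case", [
        "Is the tolerance for error acceptable for this application?",
        "Does the task require human judgment or empathy?",
    ]),
    Dimension("Considering the Data", [
        "Is personal or confidential data processed?",
        "Could inputs influence future outputs or be accessed by others?",
    ]),
    Dimension("Analyzing the Output", [
        "Who is the intended audience for the output?",
        "Does the output's format or context invite overreliance?",
    ]),
    Dimension("Evaluating Accuracy", [
        "Has accuracy been verified against the intended application?",
    ]),
]


def review_report(framework: list[Dimension]) -> None:
    """Print any unresolved questions so reviewers see gaps at a glance."""
    for dim in framework:
        for question in dim.open_items():
            print(f"[OPEN] {dim.name}: {question}")


if __name__ == "__main__":
    # Example: mark one item resolved, then report the remaining gaps.
    FRAMEWORK[0].answers["Does the tool genuinely employ machine learning?"] = True
    review_report(FRAMEWORK)
```

Recording the questions as data rather than prose makes it easy to track who resolved each item and to surface unresolved gaps before a tool is approved for use.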
Conclusion
As we navigate the AI revolution, GCs play a pivotal role in establishing norms and frameworks that mitigate risks associated with AI adoption. By applying a thoughtful, rigorous approach to AI tool evaluation and vendor management, organizations can balance innovation with prudence. As legal and regulatory landscapes evolve, staying informed and proactive will be key to navigating the complexities of AI integration in business. Our team remains committed to providing timely, insightful guidance to help our clients thrive in this dynamic environment.