Overview
Why You Should Attend
Technological advances and legal complexity in the age of Artificial Intelligence (AI) have never been greater for companies in every industry. That complexity applies more than ever to the interaction between AI and legal obligations related to privacy, security, and global data flows. Businesses cannot effectively compete in today’s marketplace without leveraging AI, yet new legislative proposals seek to rein in perceived AI risks via new privacy laws. Are new privacy laws the right approach? Should we look to enact new specialized AI laws? And what steps can companies take now to mitigate the broader legal and societal risks posed by AI? What role can algorithmic impact assessments, transparency in algorithmic decision-making, stakeholder involvement, algorithmic destruction, and other programmatic frameworks play in the ultimate solution?
In the last few years, we have seen a surge of legislative proposals, regulatory frameworks, and reform initiatives as policymakers around the world grapple with how to regulate AI. The European Union is, once again, leading the way with the Artificial Intelligence Act, which, taken together with the General Data Protection Regulation (GDPR), currently offers the most robust legal framework for balancing AI and privacy. The AI Act defines three categories of risk (unacceptable, high, and limited) and bans or restricts AI systems in proportion to their risk category. The GDPR gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, and to obtain human intervention and contest such decisions. Finally, the California Consumer Privacy Act (CCPA) gives consumers the right to opt out of automated decision-making, to learn about the algorithmic logic involved, and to know the likely outcome.
In the midst of this transformational landscape, and complicating an already complex area, companies are dealing with a spike in AI-aided cybersecurity disruptions, deepfakes, and sophisticated ransomware attacks. These concerns are all the more pronounced amid an increasingly hostile political environment, including upcoming presidential election cycles.
This program – now in its ninth year – brings together individuals charged with formulating their organization’s global AI and data privacy governance strategy. It is for those of you who must implement responsible AI frameworks amid proliferating global privacy laws and determine the right approach for your organization. What are the practical implications of your chosen approach? What are the risks? And how do you exploit the opportunities of generative AI to your advantage? This program is for privacy and data governance practitioners in every organizational function – legal, compliance, IT security, and audit – hoping to gain insights and practical information about the ongoing conversation surrounding the global regulatory landscape on data.
What You Will Learn
After completing this program, participants will be able to:
• Gain insights into key substantive and procedural compliance recommendations for managing AI in the context of laws such as the GDPR, the AI Act, and the CCPA
• Build on the fundamentals of AI governance through transparency, fairness, privacy, and human oversight
• Set your organization’s compliance strategy
• Heed advice from distinguished government and industry experts on legislative developments and on managing and avoiding enforcement risks
Who Should Attend
This program is intended for general and solo practitioners, transactional attorneys, general and corporate counsel, in-house lawyers, and legal professionals supporting any client with information risk issues.
Special Feature
Earn Continuing Privacy Education credit
Program Level: Intermediate
Prerequisites: An interest in AI and its interplay with global data protection issues.
Advanced Preparation: None