An Evolving Privacy Landscape: The Rise of Generative AI

In conversations about Artificial Intelligence (AI), data privacy and security have always played a significant role. This year, the issue has taken on even more prominent and consequential dimensions. Generative AI has seen tremendous growth and public adoption, as evidenced by ChatGPT amassing one million users in just five days. As AI becomes increasingly ubiquitous, its output is becoming harder to distinguish from human-generated content. As the technology continues to evolve rapidly, Big Tech companies are shifting their focus toward developing generative AI applications.

However, according to Nina Schick, a generative AI expert, the widespread availability and popularity of open-source AI models are expected to intensify the AI frenzy. Because these platforms rely on personal data for training, they raise the need for risk management and new solutions to address privacy-related concerns. One approach may be to authenticate AI-generated content by cryptographically signing it, with work already underway on an open standard for cryptographic content authentication (the Coalition for Content Provenance and Authenticity, or C2PA). There are also services like Truepic that promote transparency and shared trust in digital content across the internet.
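To make the idea concrete, here is a minimal sketch of content signing and verification using the open-source Python cryptography library. It illustrates the general sign-and-verify pattern that provenance standards like C2PA build on, not the standard itself; the key handling and content payload are simplified placeholders for illustration.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A creator (or capture device) holds a private key and signs the content bytes.
private_key = Ed25519PrivateKey.generate()
content = b"Photo or article bytes would go here."
signature = private_key.sign(content)

# Anyone holding the matching public key can check that the content is unaltered.
public_key = private_key.public_key()
try:
    public_key.verify(signature, content)
    print("Content is authentic and unmodified.")
except InvalidSignature:
    print("Content was altered or the signature is invalid.")
```

In practice, provenance standards bind the signature and signer identity to the content's metadata so that downstream viewers can trace where a piece of media came from and how it was edited.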

Federal Trade Commission (FTC) Commissioner Alvaro Bedoya recently cautioned against overlooking the privacy risks associated with AI technology. He stressed the importance of proactive and transparent efforts to identify and mitigate those risks, with the FTC playing its part through privacy enforcement activities. Bedoya also highlighted the applicability of existing laws and regulations to AI, including Section 5 of the Federal Trade Commission Act, and encouraged companies leveraging AI for significant eligibility decisions to carefully consider the risks and ensure compliance with the law.

The potential implications of AI have raised concerns globally, leading to investigations and temporary bans in some countries. For example, Italy's data protection authority temporarily banned ChatGPT over privacy concerns, the Office of the Privacy Commissioner of Canada opened an inquiry into OpenAI following a complaint alleging "the collection, use, and disclosure of personal information without consent," and German authorities have launched an investigation into OpenAI's compliance with the General Data Protection Regulation (GDPR). Despite these and other concerns, the end goal must include transparency, embedded into a system's design from the start, along with redress for individuals harmed by the improper use of such systems. After all, governments are accountable to the people, and they must hold companies responsible.

The current notice-and-consent regime for online privacy has been deemed inadequate to address the modern challenges stemming from these new technologies. Recent discussions among privacy and security professionals have focused on the likelihood of a shift toward a data minimization regime, which would likely become part of federal law. Such a regime would emphasize purpose specification and data retention limits within a comprehensive privacy program. Privacy protections would also be incorporated into national security strategies, with algorithms subject to impact assessments. Closer coordination between privacy and related functions, such as between the Chief Privacy Officer (CPO) and the General Counsel, will also be required.

Compliance tools and mechanisms must adapt as regulations evolve to encompass AI. Privacy Enhancing Technologies (PETs), which minimize personal data use, maximize data security, and empower individuals, have been highlighted as one such tool; a brief sketch of two common PET patterns follows. Additionally, the Fair Information Practice Principles (FIPPs) and other traditional privacy frameworks have been discussed in the context of incorporating privacy protections into AI policy and design.
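For illustration, here is a minimal sketch of two PET patterns often cited in this context: keyed pseudonymization of direct identifiers and differentially private noise added to published aggregates. The secret key, epsilon value, and data below are hypothetical placeholders; a production system would need managed key storage and a vetted differential privacy library.

```python
import hashlib
import hmac
import numpy as np

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; keep in a secrets manager

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated for epsilon-differential privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: store a token instead of the raw email, and add noise before publishing a total.
record = {"user": pseudonymize("jane.doe@example.com"), "opted_in": True}
published_total = dp_count(true_count=1042, epsilon=0.5)
print(record["user"][:16], round(published_total))
```

Pseudonymization reduces the identifiability of what is stored, while differential privacy bounds how much any published statistic can reveal about a single individual.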

In October 2022, the White House Office of Science & Technology Policy issued its Blueprint for an AI Bill of Rights, outlining principles for protecting individuals from specific threats and for using technology to reinforce societal values. The blueprint is accompanied by From Principles to Practice, a handbook that provides detailed steps toward actualizing these principles in the technological design process for those seeking to incorporate privacy protections into policy and practice. The framework uses a two-part test to determine which systems are in scope: it applies to automated systems that have the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services, and it describes protections that should be applied to all such systems.

Another viable approach to data privacy gaining traction is co-regulation, in which industry develops enforceable codes or standards in collaboration with government and grounded in legal requirements. Co-regulation is seen as adaptable and compatible with both comprehensive privacy models like the European GDPR and the sectoral model used in the United States, and it offers benefits such as increased assurance and compliance with privacy-by-design features.

As AI-generated content becomes increasingly difficult to distinguish from human work, concern is growing about the privacy implications of AI and its potential to be used to discriminate against individuals or groups. To address these concerns, several organizations have developed guidelines and best practices for developing and using AI. These guidelines emphasize transparency, accountability, and privacy by design while calling for new technologies and approaches to protect personal data and mitigate bias in AI systems. As the technology continues to develop, we must stay up to date on the latest developments and explore new ways to protect privacy and security.

At Dignari, our team of privacy subject-matter experts (SMEs) supports federal identity management programs in maintaining privacy compliance by drafting Privacy Threshold Analyses (PTAs), Privacy Impact Assessments (PIAs), and Privacy Act Statements (PAS), and by reviewing System of Records Notice (SORN) coverage. We also provide legal and policy expertise on the Paperwork Reduction Act (PRA), the Federal Records Act (FRA), and other statutes, and we draft and review biometrics-related legislative and regulatory proposals, policies, directives, communications materials, Memoranda of Understanding (MOUs), and Memoranda of Agreement (MOAs).

With memberships in industry professional organizations, including the International Association of Privacy Professionals (IAPP), Dignari’s privacy SMEs are committed to keeping pace with the latest technologies, implementing best practices, tracking industry trends, and advancing privacy management and policy issues.
