Preparing for the Virginia Consumer Data Protection Act

On January 1, 2023, the Virginia Consumer Data Protection Act (CDPA) will take effect for Virginia businesses and consumers.

What is the CDPA?

At its core, the CDPA is a data privacy law intended to provide guardrails on how businesses use and store the data of Virginia consumers. Virginia was the second state to pass a comprehensive data privacy law, after California enacted the California Consumer Privacy Act (CCPA).

The CDPA will apply to covered businesses that conduct business in Virginia or that target products or services to Virginia residents. For the CDPA to apply to a company, the company must either:

  • Control or process the personal data of at least 100,000 consumers during a calendar year; or
  • Process the personal data of at least 25,000 consumers and derive more than 50 percent of its gross revenue from selling personal data.

Personal data in this context includes “any information that is linked or reasonably linkable to an identified or identifiable natural person.”
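The two applicability thresholds above form a simple either/or test. As an illustration only (the function name and parameters are hypothetical, not drawn from the statute), the logic can be sketched as:

```python
# Hypothetical sketch of the CDPA applicability test described above.
# This is an illustration of the two statutory thresholds, not legal advice.
def cdpa_applies(consumers_processed: int,
                 revenue_share_from_data_sales: float) -> bool:
    """Return True if either CDPA applicability threshold is met.

    consumers_processed: number of Virginia consumers whose personal
        data the company controls or processes in a calendar year.
    revenue_share_from_data_sales: fraction of gross revenue derived
        from selling personal data (e.g., 0.50 for 50 percent).
    """
    # Threshold 1: 100,000 or more consumers in a calendar year.
    meets_volume_test = consumers_processed >= 100_000
    # Threshold 2: 25,000 or more consumers AND more than half of
    # gross revenue from selling personal data.
    meets_sales_test = (consumers_processed >= 25_000
                        and revenue_share_from_data_sales > 0.50)
    return meets_volume_test or meets_sales_test
```

Note that the second threshold requires both conditions together: a company processing 25,000 consumers' data is covered only if it also derives more than 50 percent of gross revenue from selling personal data.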

What are the CDPA requirements?

The CDPA draws on concepts from the California Privacy Rights Act (CPRA), the CCPA, and the General Data Protection Regulation (GDPR) by establishing consumer rights relating to privacy.

The main areas of the CDPA that businesses should prepare for are as follows:


Artificial Intelligence in the Workforce

On June 7, 2022, Conn Maciel Carey LLP partners Kara Maciel and Jordan Schwartz interviewed EEOC Commissioner Keith Sonderling about the EEOC’s recent focus on Artificial Intelligence (AI) and its impact on workplace discrimination. 

AI refers to a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1] It can feature in software used to complete tasks previously performed by human beings. Relevant to the discussion with Commissioner Sonderling, employers can use AI in most employment and/or hiring decisions, such as whom to inform about a new position, whom to interview, and whom to select for a position.

When making those decisions, employers may face liability if they discriminate against an individual based on race, color, religion, sex, national origin, age, pregnancy, disability status, or genetic information.[2] Unlawful discrimination can occur in two ways: disparate treatment and disparate impact. Disparate treatment occurs when an employer intentionally discriminates against individuals, whereas disparate impact refers to unintentional discrimination, where an employer’s neutral policies or procedures negatively affect individuals in a particular protected class.

Employers should be aware, as Commissioner Sonderling stressed in his remarks, that AI technologies are only as good as the data and training used to develop them.  There have been numerous instances where employers who used AI tools to assist in employment and/or hiring decisions have been left with discriminatory results and potential disparate impact liability as a direct result of the technology.

Commissioner Sonderling offered some examples of ways that AI could unintentionally produce discriminatory results in employment decisions: