At Intersection of Marginalized Groups and Tech, Legal Questions Abound

CHBA CLE shines light on gray areas of race, language and gender issues in AI

Many aspects of our daily lives are increasingly dominated by technology and data. While debates surge about equality and representation in government, some are raising concerns about the supposed neutrality and objectivity of technology.

In turn, this can create many legal questions surrounding equality and diversity at the intersection of data, technology and law. And those questions don’t always have clear paths to legal recourse.

“What we’ve seen a lot of times with technology is that there’s a disproportionate impact on marginalized and vulnerable populations,” said D. Marty Esquibel, a data protection professional and current HIPAA Privacy and Security Officer at the Colorado Department of Human Services, though he said he was sharing his own opinions and not speaking on behalf of his organization.

Esquibel presented a CLE for the Colorado Hispanic Bar Association on Dec. 17 introducing the intersection of race and technology. That disproportionate impact can raise legal questions for marginalized groups broadly, not just Latinos. Both a lawyer and a data professional, Esquibel questioned assumptions about the objectivity of data and technology, and especially the awe that tech tends to inspire.

One issue, especially in American society, is the fetishization of technology and data, Esquibel said. Americans tend to assume that delegating social problems to data or technology will solve them because those tools are seen as objective, neutral and efficient. That may not be the reality, however, as “technology’s just a reflection of us,” Esquibel said. He sees danger in placing power in technology, and his particular concern is what gets built into the tools being used. Because humans are flawed, he said, those flaws can be reflected in software and data.

Esquibel said programming will do what it is told to do, not what someone intends it to do, and those mismatches between intention and programming can let bias enter the technology anywhere from who develops the tech to how the tools are implemented and used. He pointed to three areas where bias can arise: the development, the data and the deployment of technology, and harmful bias can enter at any of those points.

“One of the things to recognize about development is that it’s coded to societal defaults,” he said, adding that usually means “affluent white males.” This can come into play with settings, features and implementation.

For example, Esquibel said that artificial intelligence image software and camera settings are often designed using lighter-skinned individuals. As a result, AI and image software tends to be more accurate for lighter-skinned individuals than for people of color.
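
For readers who want to see what that kind of disparity looks like in practice, here is a minimal sketch of a per-group accuracy audit. It is not from Esquibel’s presentation, and every group name and result in it is an invented placeholder.

    # Hypothetical audit: per-group accuracy of a face-matching system.
    # The records below are made-up placeholders, not real evaluation data.
    from collections import defaultdict

    # Each record: (skin_tone_group, model_was_correct)
    results = [
        ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
        ("darker", True), ("darker", False), ("darker", False), ("darker", False),
    ]

    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok

    for group in totals:
        accuracy = correct[group] / totals[group]
        print(f"{group}: {accuracy:.0%} accuracy on {totals[group]} test images")

A wide gap between the two printed accuracy figures is the kind of disparity Esquibel described, and the same comparison scales to real evaluation sets.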

Even attempts to address these societal defaults have had mixed results. Esquibel pointed to 2019 reporting by the New York Daily News that Google had offered $5 Starbucks gift cards “specifically” to homeless people with “darker skin” in order to collect 3D biometric and facial scans for training recognition tech to recognize people of color. Reportedly, homeless people were chosen because they were less likely to talk to the media.

But issues with equality in tech don’t just surround skin color. Another concern is that tech is often coded from a male-centric point of view, Esquibel said. He offered the example of an app for finding parking spaces: some women may prefer to be able to filter for spots that are well-lit and open after dark, rather than being shown just any space, such as one in a parking garage.

He said he often checks whether software allows for proper “gender coding” that addresses such concerns, and not only whether gender coding exists in the tech but whether it is applied consistently.

Another concern is the language barrier that can exist for some groups. Esquibel said some applications oriented toward Hispanic and Latino populations do not provide Spanish language options for users. Surprising as that may be, he said details such as language options are often not raised in development because of who is developing the application.

“It’s just this one-size-fits-all,” Esquibel said. He added that many involved with tech development are white males, and that a major issue is the lack of diversity at tech development tables.

But who can play a part in addressing bias and trust in tech and data? Esquibel said the players should be technologists, public interest professionals and lawyers working together. Each holds a piece of the puzzle, and the challenge is addressing the issues in an integrated way.

The impacts aren’t restricted to tech development. AI and tech used by law enforcement can affect marginalized groups because of those limited voices in development, according to Esquibel. Policing, risk-assessment and predictive crime tools are often based on criminal history and activity, which can mean that tools such as COMPAS are more likely to predict future interactions with police than future crimes.
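
A tiny simulation helps show why training on arrest records can make a tool predict police contact rather than crime. It is not part of Esquibel’s materials, and every rate in it is an invented assumption for illustration.

    # Toy simulation: a "risk" signal built from arrest records reflects policing
    # patterns, not underlying behavior. All rates here are invented assumptions.
    import random

    random.seed(1)
    N = 10_000  # people per neighborhood

    # Assume the true offense rate is identical in both neighborhoods...
    offense_rate = 0.05
    # ...but the chance an offense leads to an arrest differs with policing intensity.
    arrest_rate_if_offense = {"heavily_policed": 0.60, "lightly_policed": 0.15}

    for hood, catch_rate in arrest_rate_if_offense.items():
        offenses = sum(random.random() < offense_rate for _ in range(N))
        arrests = sum(random.random() < catch_rate for _ in range(offenses))
        print(f"{hood}: {offenses} offenses, {arrests} arrests on record")

A tool scored on the arrest history above would rate the heavily policed neighborhood as higher risk even though the underlying behavior is the same in both, which is the gap between predicting police interactions and predicting crime.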

Harassment can be another issue, Esquibel said. He spoke of a case involving a young Black man who was misidentified by facial recognition and arrested; the technology, he said, should have been used as an investigative tool, not as a primary factor for arrest. He later mentioned that some jurisdictions have prohibited the use of biometric technology in some situations.

Throughout the presentation, Esquibel raised legal questions about whom to sue if something goes wrong in the process of using the tech. Do you sue the judge? The company that made the software, or the company that bought it? Do you sue a lawyer for malpractice?

Esquibel said he advises young Black and Latino men not to enable biometric and facial unlocking on their phones, because if they are under police investigation, officers can use their facial and biometric data to open the phone. A PIN, by contrast, offers more security and leaves more of the right to silence intact.

These issues can surface as problems with how the tech interpolates and extrapolates, he said. For example, if the tech is developed by a group of white men and another group of white men is brought in to test it, the results will look good because the test subjects match the population it was built around. If a group of Black women is tested against it instead, the results may not be accurate.
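
The interpolation-versus-extrapolation point can be shown with a toy model. The sketch below is an assumption-laden illustration rather than anything presented at the CLE: a classifier is fit only on data drawn from one population, then scored on fresh data from that population and on data from a differently distributed one.

    # Toy illustration: a model trained on one population extrapolates poorly to
    # another. The distributions and labels below are arbitrary assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(center, n=500):
        # One measurement per person, with the "correct" answer defined relative
        # to that group's own typical value.
        x = rng.normal(loc=center, scale=1.0, size=(n, 1))
        y = (x[:, 0] > center).astype(int)
        return x, y

    x_train, y_train = make_group(center=0.0)   # population the developers used
    x_same, y_same = make_group(center=0.0)     # fresh data from the same population
    x_other, y_other = make_group(center=3.0)   # differently distributed population

    model = LogisticRegression().fit(x_train, y_train)
    print("accuracy on matching test group:  ", model.score(x_same, y_same))
    print("accuracy on mismatched test group:", model.score(x_other, y_other))

Run as written, the first score should come out near perfect while the second should hover around chance, which mirrors Esquibel’s point that testing a tool only on people who resemble its developers can hide how it behaves for everyone else.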

In addition, he asked whether the U.S. Supreme Court’s anti-classification guidance and its emphasis on remaining “blind” to categories such as race and gender apply to AI. “I don’t know,” he said, but added that the technical community has not had close exchanges with the legal community.

Esquibel said that people should be questioning technology. From a legal standpoint, he wondered whether tech should carry certifications covering accuracy and quality, fairness and performance, and algorithmic effects.

Esquibel also asked whether review boards should be in place to provide transparency and oversight, and whether tech should be certified through fairness and performance tests both in the “real world” and in controlled settings, along with certifications for safety and security, limited or prohibited use and accountability.

“The important thing behind this is that technology does not take into account social and cultural factors behind data,” Esquibel said. “To the machine — data is just data — and it has no meaning.”

— Avery Martinez
