The interface of a facial recognition technology system that has been installed at the entry point to a common space at a public U.S. university. The system automatically scans a user's face, matches facial information with records in the database, and displays the matching results on screen. Credit: Yao Lyu

Many people use facial recognition technology on their personal devices to quickly and securely unlock a phone in place of a password or to complete an online transaction. But when that same technology is deployed in public settings, such as screening airport passengers or granting access to a secure location, how do the individuals whose images are captured feel?

According to a new study led by Penn State and the University of Alabama, an organization's decision to publicly deploy facial recognition technology, and whether or not stakeholders are involved and informed in that decision-making, could not only raise users' concerns about privacy, mass surveillance and bias against minority groups, but also reveal issues of organizational justice, or perceptions of fairness, within the organization itself.

"Technology is created by humans, and humans can be easily biased, so technology is never neutral," said Yao Lyu, doctoral student at the Penn State College of Information Sciences and Technology. "The human-computer interaction community has been paying growing attention to justice issues in technology design and development. But as we highlight in our study, besides design and development, the implementation of technology could also engender justice issues, especially in an organizational setting where various stakeholders are involved."

Unlike facial recognition on a personal device, where the user has full control over the decision to use the tool as a means of authentication, recent implementations of facial recognition technology in public settings capture and use individuals' images without their consent. This has led to growing controversy, most notably over privacy and the disproportionate misidentification of women and people of color. Algorithmic bias in the facial recognition tools that law enforcement agencies use to identify suspects and witnesses has often led to false arrests. Ongoing debates have prompted U.S. cities and states, including San Francisco, Boston and Maine, to limit or ban the use of facial recognition technology in public spaces. Yet its deployment is on the rise in certain sectors, including public U.S. universities, where Lyu and his team focused their study.

"I was surprised by the fact that the education sector is among the public settings in which this technology is being implemented at scale, through campus security, attendance monitoring and virtual learning," said Hengyi Fu, assistant professor in the College of Communication & Information Sciences at the University of Alabama and co-author of the study. "In contrast to more contentious discussions about facial recognition technology in other areas of society, there is little sustained opposition to its implementation in schools."

According to Fu, in a higher education setting, where many young people shape their sense of social identity and may be more willing to share personal information, implementation of facial recognition technology could lead to the "normalized elimination of practical obscurity." Practical obscurity is a pre-internet concept holding that private information in publicly accessible records, such as police logs, is largely protected by the practical difficulty of accessing it.

"Attempting to manage what is known and disclosed about oneself can be seen as a legitimate way of students ensuring that their actions and intentions are correctly interpreted and understood," she said.

The entry point to a common space at a public U.S. university, which requires individuals to use facial recognition technology to access the space. Credit: Yao Lyu

In the study, the first of its kind to investigate how participants respond to facial recognition technology in a university setting, the researchers examined the perspectives of faculty, staff and students at a public university who, without advance notice or training, were required to use facial recognition in place of the previous card-swipe system to access a shared collaborative workspace. They interviewed 19 users of the system at least two weeks after each participant's initial use of the technology, asking about each user's previous experience with and knowledge of facial recognition technology; their initial impressions and reactions; their overall attitudes toward the system and the decision to install it; and whether their opinions changed after their initial interactions.

They found that more than half of the users were initially uncomfortable with or confused by the decision to install the system, while a minority of users were curious or even impressed. Over time, all participants with negative perceptions remained steadfast; most who initially were in favor of the installation gradually came to feel less supportive; and most who initially felt neutral came to accept the technology, but with some level of reluctance.

Additionally, most participants complained about the decision-making process and considered it unfair that their consent was assumed—regardless of their impression of the technology itself. No users reported that they had received official training on the technology or information on why the system was installed or how the decision to install it was made.

"New technologies are introduced to our lives every day, including facial recognition, oftentimes (with promise) of bringing us more convenience," said Lyu. " It is fancy, fast and especially convenient, but (there are) also people concerned with the potential problems of the technology. The question is: does the convenience outweigh the concerns?"

Through this study and other projects in which the researchers investigate emerging technologies and how people react to them, they hope to arm organizations' management with evidence-based information that could shape how decisions are made when implementing novel or sometimes controversial technology.

"As facial recognition technology deployment continues and public criticism mounts, research is needed to understand how controversial technologies are implemented, what kinds of experiences users have and how users are likely to respond," said Fu. "This study provides empirical and conceptual insights into users' interactions with a facial recognition system encountered in a real-world setting. The qualitative data offer a foundation for design-related quantitative research that will facilitate comparisons between attitudes toward technology over time."

The researchers' paper, "Facial Recognition Technology Interaction in a University Setting: Impression, Reaction, and Decision-making," was presented at iConference 2022 this week, where it was a finalist for the Lee Dirks Award for Best Full Research Paper. Yubo Kou, assistant professor of information sciences and technology at Penn State, also contributed early insight to the project.