Reducing Online Proctoring Discrimination Is Possible With AI

When commercial facial-analysis systems from IBM and Amazon misidentified women of color in MIT audits published in 2018 and 2019, the fault was easily traced to training data that included few women of color. A truism of computing is “garbage in, garbage out,” and the maxim holds especially true in machine learning applications.

For online proctoring to avoid systemic bias against people of color, the training data must include (you guessed it) lots of people of color. More broadly, it must include people who represent every dimension of human diversity: men and women, the neurotypical and the neurodivergent. Building a representative training set quickly becomes subtle and requires digging deeper than the spectrum of skin tones to consider culture. Regardless of body shape or skin color, students’ culture may be expressed through facial hair or the lack of it, head coverings, jewelry, clothing, and common gestures, postures, or vocalizations. Training on this broad range of human expression is crucial to reducing discrimination in online proctoring.
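One concrete step toward this goal is auditing how well each group is represented before training begins. The sketch below is illustrative only (the function name, labels, and 10% threshold are assumptions, not Rosalyn's actual pipeline): it counts subjects per self-reported group and flags any group whose share falls below a minimum.

```python
from collections import Counter

def underrepresented_groups(labels, min_share=0.10):
    """Return groups whose share of `labels` falls below `min_share`.

    `labels` is one self-reported category per training subject;
    the 10% floor is an arbitrary illustrative threshold.
    """
    counts = Counter(labels)
    total = len(labels)
    return sorted(g for g, n in counts.items() if n / total < min_share)

# Toy example: a skewed set of self-reported skin-tone categories.
labels = ["light"] * 70 + ["medium"] * 25 + ["dark"] * 5
print(underrepresented_groups(labels))  # → ['dark']
```

An audit like this only surfaces gaps; closing them still requires deliberately collecting data from the flagged groups.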

Many online proctoring companies monitor test-takers with generic, off-the-shelf image recognition and machine learning packages. Although these systems may be robust across a broad range of applications, they often lack the depth of functionality that online proctoring specifically demands. They are also trained on close-to-optimal data feeds with high resolution and bandwidth, which favors test-takers from higher socioeconomic classes. In the production environment of online assessment, connection quality varies widely, and some students’ bandwidth is tightly constrained. AI trained on high-bandwidth, high-connectivity datasets may struggle to treat lower-bandwidth feeds equally.
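One common way to reduce this gap, hedged here as a general technique rather than any vendor's actual method, is to augment training data with artificially degraded frames so the model sees both connection qualities. The sketch below simulates a low-bandwidth webcam feed by downsampling a frame and upsampling it back with nearest-neighbor repetition, discarding detail the way a heavily compressed stream would; the function name and factor are illustrative assumptions.

```python
import numpy as np

def simulate_low_bandwidth(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Downsample a frame by `factor`, then nearest-neighbor upsample
    back to the original size, mimicking a low-resolution feed."""
    small = frame[::factor, ::factor]  # keep every `factor`-th row/column
    restored = small.repeat(factor, axis=0).repeat(factor, axis=1)
    return restored[: frame.shape[0], : frame.shape[1]]

# A training pipeline could mix original and degraded frames per batch.
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
degraded = simulate_low_bandwidth(frame, factor=4)
assert degraded.shape == frame.shape  # same size, far less detail
```

Real pipelines would also vary compression artifacts and frame rates, but the principle is the same: train on the conditions students actually have.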

No matter how good generic off-the-shelf software is initially, it relies on a standard dataset that is refreshed only on the vendor’s schedule, typically a couple of times a year, so it may miss the continual improvements needed to address newly uncovered deficiencies. Short of petitioning the vendor to retrain the model frequently on new data, there may be no easy way to expand the facial recognition and identification dataset to better reflect the student body. The same criticism applies to the image recognition models used to identify cell phones, books, or other prohibited material in the testing environment.

No matter how good the AI is at proctoring, suspected test violations need to be validated by a human. Human-in-the-loop systems like Rosalyn’s ensure that a human proctor confirms any incident flagged during a test session before it ever reaches an instructor or disturbs the student’s testing experience. This reality check is crucial to ensuring that inherent biases in the system don’t impact students without clear accountability.
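The human-in-the-loop flow described above can be sketched as a simple review gate. This is a minimal illustration, not Rosalyn's actual API; every name here is hypothetical. AI-raised flags enter a pending queue, and only incidents a proctor confirms ever reach the instructor.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Hypothetical human-in-the-loop gate between AI flags and instructors."""
    pending: list = field(default_factory=list)    # AI flags awaiting review
    confirmed: list = field(default_factory=list)  # proctor-validated incidents

    def flag(self, incident: str) -> None:
        """An AI-raised suspicion: queued for review, invisible to the instructor."""
        self.pending.append(incident)

    def review(self, proctor_decision) -> list:
        """A proctor evaluates each pending incident; only confirmed ones pass."""
        self.confirmed += [i for i in self.pending if proctor_decision(i)]
        self.pending.clear()
        return self.confirmed

queue = ReviewQueue()
queue.flag("second face detected")
queue.flag("student looked away briefly")
# The proctor confirms only the genuine violation.
report = queue.review(lambda i: i == "second face detected")
print(report)  # → ['second face detected']
```

The design point is that the AI can only propose; a human disposes, which keeps a biased false positive from ever reaching the student's record.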

There are no easy fixes for bias in online proctoring, but many specific initiatives can make a huge difference. The most important is to put students first, and Rosalyn has built that principle into our design process.

This year, we empaneled the industry’s first Student Advisory Board (SAB) to bring student voices much closer to our developers. The nine members of the Rosalyn SAB hail from top universities worldwide and represent a range of cultures, ethnicities, and neurotypes. In addition to sharing their experiences of potential bias in online proctoring, they offer insights into the online testing experience and how we as a company can better serve them. Our commitment to putting students first goes beyond improving the robustness of the AI to how we treat students’ data, how we communicate the rules of a test, and how we respect the rights and dignity of students throughout the process. Conversations with the SAB and students worldwide have influenced the design direction of Rosalyn’s proctoring systems and the development of specific features in our latest release.

Rosalyn’s mission is to keep assessments fair. We’re devoted to ensuring that no one gains an unfair advantage or compromises the integrity of assessments. A key part of that is ensuring that innocent students aren’t more likely to come under suspicion because of their skin color or cultural and religious traditions. We also want to ensure that the process doesn’t add undue stress on any particular group of people because of their ethnicity or background. Evidence of this commitment is our use of purpose-built artificial intelligence trained on a quarter of a million actual test sessions from all over the world. Rosalyn continually updates the model with new data, and each iteration improves its ability to detect test violations and distinguish them from an ever-expanding range of innocuous, normal test-taking behavior.

Everyone in higher education, from students to instructors to administrators, has a stake in making sure that online assessments are fair and that students play by the rules. Online proctoring is a vital tool for keeping tests secure, but it must not disadvantage or stress some students more than others because of inherent bias. Rosalyn’s system strikes the balance between diligent proctoring to ensure academic integrity and deep respect for the rights and privacy of students. This imperative is baked into our design philosophy and permeates every part of our services to higher education institutions.
