
Proctoring During Covid Requires Change: How Higher Ed Can Adapt More Effectively

Distance learning is here to stay. In a recent survey, 79% of college presidents said that they are likely to reassess the long-term mix of in-person vs. virtual education at their institution. But as almost all schools have learned during the last year of the pandemic, adapting to the virtual model is laden with pitfalls. 

Accurate assessment is one of the most complex challenges to overcome. Fair tests are the cornerstone of a high-quality education: instructors need to make sure that students work independently and that no one gains an unfair advantage. Maintaining the integrity of remote assessments is difficult, and schools that want to protect their reputation for high-quality education need a strategy and a plan for proctoring during and after Covid.

At the start of the pandemic, some schools immediately relaxed their rules to give students a better chance at preserving their grades during the disruption. Some gave up on proctoring during Covid altogether and even allowed students to switch to pass/fail if they ran into trouble at some point in the semester. Merciful as a stopgap measure, this policy is unsustainable.

Students need an incentive to do good work, and schools need to measure how well they are educating students remotely. Lower-division undergraduate courses in STEM fields often enroll 70 or more students, and first-year core lectures can exceed 400. Even with adequate TA resources for personalized instruction, objective assessments are the only way to normalize grades based on performance in classes that large.

In the days before Covid, students took tests in a classroom or test center under the surveillance of human proctors. When students checked into the test location, proctors made sure they had access only to allowed aids (if any), such as a calculator or a blank notebook. Under the proctors' watchful eyes, students completed the exam within the time limit and handed the answer sheet directly to the proctor for grading.

With remote testing, professors can't secure the room to deter cheating the way a test center can. Students usually take the test at home, where they are used to having any resource on the internet at their fingertips and to collaborating with classmates to get the correct answers. Over the course of a semester, they may have taken many open-book quizzes in their dorm rooms.

When the rules change for a remotely proctored test, students may have difficulty adjusting. They may not consider looking something up online or conferring with a classmate "cheating" per se, even if it is explicitly called out in the test rules or the professor's instructions.

Offering students a proctoring platform they respect is essential to ensuring independent work and overcoming the challenges of delivering proctored assessments remotely. Students need to know that instructors will reward them fairly, with scores and grades that reflect their hard work. They are also wary of systemic bias and of intrusions on their privacy by remote proctoring.

The remote proctoring solutions that universities have adopted have had mixed results. Although the monitoring is not qualitatively different from what students would get in a test center, remote proctoring has generated resistance from students and faculty groups. Critics question the fairness of these solutions, the invasions of privacy they entail, and the anxiety they needlessly inflict.

  • Privacy - Remote proctoring needs to look at the test-taker's face and listen to the room in which they are taking the test. Systems may also capture keystrokes and other biometric data. Given the abuses of facial recognition technology in law enforcement, students at several schools are wary of its potential for abuse in remote proctoring. The lack of transparency from proctoring companies about what they do with the data and how long they retain it adds to the unease. This movement has culminated in an EPIC complaint (literally and figuratively), filed by the Electronic Privacy Information Center with the Washington, DC Attorney General.
  • Bias - Given the stories about algorithmic bias in artificial intelligence systems, students and professors are wary of the potential for bias in proctoring systems. They have good reason to be: general-purpose facial recognition AI treats dark-skinned and light-skinned subjects differently. A 2018 MIT study showed that widely used facial recognition software misclassified gender less than 1% of the time for light-skinned men, but the error rate rose to roughly 35% for dark-skinned women.
  • Fairness - How well do these systems deter cheating? It's impossible to know precisely. It is well documented that when proctored test administrations are compared against unproctored ones, the unproctored students score significantly higher, and researchers have concluded from the data that students gain an unfair advantage when tests are unproctored. On the flip side, many online proctoring systems flag innocent test-taking behavior as a potential violation. These false positives make students less trusting that the systems catch real violators and raise anxiety that they may be unfairly labeled as cheaters.
  • Accessibility - Successful proctoring relies on adequate bandwidth and a reliable laptop. UC Berkeley decided to drop its remote proctoring product because of concerns about its accessibility and its incompatibility with equipment loaned to students during the Covid-19 shelter-in-place.

Some proctoring companies take these student concerns seriously. At Rosalyn, for example, we have empaneled a first-of-its-kind student advisory board (SAB). The students give us the unvarnished truth about their experience with remote proctoring, and their insights have compelled us to design our proctoring system to address these critiques and to earn acceptance from students and faculty.

  • Increased Privacy – Rosalyn has a fully transparent data capture and usage policy. We believe that users should know what data we capture, who sees it, how we analyze it, and what we do with it afterward. We ask for consent before using the data for any purpose other than proctoring the test the student is currently taking.
  • Minimizing Bias (Proctor and Algorithmic) - Rosalyn's platform uses human-in-the-loop artificial intelligence to monitor test sessions. Having a human proctor confirm any flagged behavior during a test counters potential algorithmic bias. Rosalyn reduces human bias by routing the review of these flagged events to several different proctors on a round-robin basis; many different proctors may check the events in a single test session, lowering the probability that any one reviewer's bias decides the outcome of the session as a whole (a minimal sketch of this round-robin approach follows this list).
  • Efficacy - Rosalyn’s system emphasizes deterrence over detection. Communicating to students how the system works and what it is monitoring goes a long way towards reducing their anxiety. Knowing that there is a human-in-the-loop to confirm or correct the AI and that the school is rigorously proctoring the exam makes students confident that the test is fair.
  • Accessibility - Rosalyn's system reaches student devices anywhere with internet connectivity. It has modest bandwidth and connectivity requirements and works in any modern browser on Mac and PC laptop computers.
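
To make the round-robin idea above concrete, here is a minimal sketch of how flagged events from a single test session might be spread across a pool of human reviewers. The ReviewerPool class, the event fields, and the proctor names are illustrative assumptions for this post, not Rosalyn's actual implementation.

```python
from dataclasses import dataclass
from itertools import cycle
from typing import Optional

# Illustrative sketch only: distribute flagged events from one test session
# across several human reviewers so that no single person judges the whole session.

@dataclass
class FlaggedEvent:
    session_id: str
    timestamp_s: float
    description: str                  # e.g. "second face detected"
    reviewer: Optional[str] = None    # filled in when the event is assigned

class ReviewerPool:
    """Assigns flagged events to human reviewers on a round-robin basis."""

    def __init__(self, reviewers: list[str]) -> None:
        if not reviewers:
            raise ValueError("at least one reviewer is required")
        self._rotation = cycle(reviewers)

    def assign(self, event: FlaggedEvent) -> FlaggedEvent:
        # Each successive event goes to the next reviewer in the rotation,
        # so events from the same session land with different people.
        event.reviewer = next(self._rotation)
        return event

if __name__ == "__main__":
    pool = ReviewerPool(["proctor_a", "proctor_b", "proctor_c"])
    events = [
        FlaggedEvent("session-42", 12.5, "gaze away from screen"),
        FlaggedEvent("session-42", 97.0, "second voice detected"),
        FlaggedEvent("session-42", 140.2, "window focus lost"),
        FlaggedEvent("session-42", 300.8, "second face detected"),
    ]
    for event in events:
        pool.assign(event)
        print(f"{event.timestamp_s:>6.1f}s  {event.description:<25} -> {event.reviewer}")
```

Because the judgment on a session aggregates confirmations from several reviewers rather than one, a single reviewer's bias carries less weight; a production system would of course also need audit trails, reviewer training, and escalation paths.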

We have designed our proctoring system from the ground up to meet schools’ needs for reliable, secure assessments and students’ need for a fair, comfortable test experience.
