As colleges and universities rely increasingly on remote exams, serious concerns about the systems used to proctor these tests have emerged. These encompass both practical and ethical challenges, including poor facial recognition capabilities, biometric privacy and data use issues, and bias against certain student populations.
Now, students around the world are petitioning schools to reconsider the use of remote proctoring systems due to their potential detrimental effects. But these risks aren't inherent to all systems. By learning how remote proctoring works and understanding the available options, educators can find solutions that respect student privacy and dignity while maintaining exam integrity. This primer explains the features and techniques next-generation remote proctoring systems use to secure online test environments transparently and unobtrusively.
Human proctors have been a part of academic test-taking for as long as there have been tests. As testing moved online, however, artificial intelligence (AI) introduced new possibilities, including fully automated AI-based solutions and AI-assisted human-in-the-loop (HITL) systems that rely on webcam and microphone monitoring along with a variety of other security and privacy-protecting measures.
- Human proctors typically use purpose-built systems that combine video meetings and screen sharing to monitor the remote test environment and the exam as the test-taker completes it. Often, this means dividing their attention among 10 or more simultaneous video feeds.
Despite the precise judgment human proctors can offer, relying on them alone is impractical: people simply can't give their full attention to everyone taking an online test, and the model scales poorly. Proctors' decisions may also be influenced by personal bias, lack the clear rationale and documentary evidence that automated systems offer, and be applied unevenly.
- Automated AI-based proctoring systems without human supervision rely on AI analysis of video feeds to detect patterns that suggest a test violation automatically. These systems can monitor hundreds or thousands of test sessions simultaneously and typically detect the presence of prohibited items like cell phones or earphones, or another person in the room.
Automated proctoring systems without human supervision can present significant challenges for both students and educators due to their lack of reliability; with no opportunity for human review, these systems may identify violations where none exist. These risks are amplified when an AI model relies on datasets that don’t represent the student population. In particular, the bias built into many fully automated systems has been shown to disadvantage women, people of color, and people with certain disabilities or medical conditions.
- AI-assisted human-in-the-loop (HITL) proctoring systems seek to strike a balance between human and AI oversight to offer the benefits of automated proctoring, such as real-time alerts, consistency, and scalability, along with the accuracy and judgment that only a person can provide. While the AI may flag potential violations, the final decision-making authority rests in the hands of a person—either the instructor or a designated proctor.
Advanced HITL systems are best positioned to correct the latent algorithmic bias of AI and to check the human bias inherent in any proctoring system. In HITL systems, a real person must confirm or dismiss potential test violations flagged by the AI. Conversely, advanced systems refer specific queued events, rather than whole sessions, to individual proctors, forestalling potential collusion or malicious action between a particular student and proctor.
The accuracy of remote proctoring is a central concern for students, educators, and institutions. Because HITL systems are designed to improve accuracy, they are rapidly gaining popularity. However, as recent developments have shown, accuracy is not the only—or even primary—concern.
Current systems, whether fully automated or HITL, are typically browser-based and give proctoring companies complete control over everything on the student’s computer, including stored passwords and other sensitive information. This introduces significant security and privacy risks. Additionally, many online proctoring companies are not forthcoming about what data is collected or how it is used, profoundly eroding student trust. Preserving academic integrity and protecting student dignity, therefore, requires next-generation solutions that provide ethical, comfortable testing experiences, like Rosalyn.
Rosalyn is a purpose-built HITL invigilation system designed to respect student privacy and dignity. Because it is an application rather than browser-based, it doesn't take full control of students' computers and cannot access personal information unrelated to the exam process. Additionally, no feature allows a particular proctor to monitor a particular test-taker; events are queued automatically. Spreading review decisions across individual events rather than whole sessions minimizes the impact of conscious or unconscious human bias. There is also full transparency in data collection and data use. With a suite of thoughtfully designed features, this state-of-the-art system ensures fairness and supports positive student experiences.
Here’s how remote proctoring works with Rosalyn:
The most common test violations relate to students accessing outside sources and unreliable verification of the test-taker’s identity. Rosalyn addresses both situations by locking down the student’s system and confirming their identity.
- ID verification occurs at test start and throughout the test.
- External devices are blocked from connecting to the student’s computer.
- Use of any other application, including recording and other communication apps, is prevented.
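To make the lockdown idea concrete, here is a minimal sketch of how a client *might* compare running applications against a denylist. This is purely illustrative: the process names and the denylist are hypothetical, and a real proctoring client would use OS-level APIs to enumerate and block processes rather than a simple name check.

```python
# Illustrative sketch only. The denylist entries are hypothetical examples,
# not Rosalyn's actual configuration.
DENYLIST = {"obs", "zoom", "teamviewer", "quicktime"}

def find_prohibited(running_processes: list[str]) -> set[str]:
    """Return the subset of running process names that match the denylist."""
    return {p.lower() for p in running_processes if p.lower() in DENYLIST}
```

In practice, detection would run continuously during the session, with any match surfaced to the test-taker and logged for review.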
When students know the rules of the testing process, the consequences of violations, and the methods used to ensure the integrity of the results, they understand that the school takes test security seriously.
- Test-takers agree to the test rules and understand how remote proctoring works.
- The system alerts test-takers to potential violations in real time.
Rosalyn analyzes many data streams from the test session, including webcam video and audio, to detect potential test violations.
- The system continuously verifies that the test-taker is who they say they are.
- AI analyzes webcam video to look for gesture, gaze, and movement patterns that may suggest violations, such as reading prohibited materials off-camera (a phone, second screen, or cheat sheet).
- Other data streams can identify when a test-taker is consulting with another person off-camera or searching online for an answer.
- Objects such as cameras or cell phones are identified and flagged.
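The signals from these data streams can be thought of as scored observations that are only queued for review when a model is sufficiently confident. A toy sketch, with hypothetical stream names and a hypothetical confidence threshold:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    stream: str       # e.g. "gaze", "audio", "object_detection" (hypothetical labels)
    score: float      # model confidence, 0 to 1
    timestamp: float  # seconds into the session

# Hypothetical threshold; a real system tunes this per signal and per model.
FLAG_THRESHOLD = 0.8

def flag_events(signals: list[Signal]) -> list[Signal]:
    """Keep only the signals confident enough to queue for human review."""
    return [s for s in signals if s.score >= FLAG_THRESHOLD]
```

The point of the threshold step is that low-confidence noise never reaches a human reviewer, which keeps review queues small and interruptions rare.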
Rosalyn’s AI component flags events as potential violations, allowing a human proctor to make the final call.
- The AI-based system uses clear and objective criteria to flag events.
- Human proctors can review events occurring during the test session in real time or after the test session.
- Queuing distributes events efficiently among human proctors, preventing bottlenecks and supporting unlimited scaling. This also mitigates unconscious human bias by limiting how long any one proctor views a given test-taker.
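The queuing model described above can be sketched as a simple round-robin assignment of individual events, rather than whole sessions, to proctors. This is a minimal illustration of the idea, not Rosalyn's actual scheduling logic:

```python
from itertools import cycle

def distribute_events(events: list[str], proctors: list[str]) -> dict[str, list[str]]:
    """Assign individual events round-robin, so no single proctor
    reviews one student's session end to end."""
    assignments: dict[str, list[str]] = {p: [] for p in proctors}
    for event, proctor in zip(events, cycle(proctors)):
        assignments[proctor].append(event)
    return assignments
```

Because any given proctor sees only scattered events rather than a full session, no reviewer builds an extended impression of one test-taker, which is what limits both bias and collusion.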
The real-time nature of Rosalyn ensures that potential violations are addressed promptly, and only when intervention is necessary.
- Recordings are annotated to pinpoint evidence of potential violations.
- Proctors can focus their attention on just the relevant events in the test session, making the review fast and efficient.
- Test-takers can receive real-time alerts to prevent minor attempts from becoming serious violations.
A key feature of Rosalyn is its ability to learn as it monitors, so it can identify novel attempts to gain an unfair advantage.
- Machine learning speeds up the discovery of new forms of potential violations. The system identifies outlier patterns of behavior that may indicate a test violation, and models can be adjusted and refined to detect that behavior going forward.
- Continual improvements in the models also improve the system’s ability to determine which actions by students aren’t violations. This leads to a better experience for all test-takers while allowing interruption only when necessary.
- Unlike the time-consuming training required to update proctoring protocols when using human proctoring alone, updated AI models are quickly deployed for the immediate benefit of students and educators.
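The outlier detection described above can be illustrated with a simple statistical sketch: flag behavior metrics that sit far from the population mean. Real behavioral models are far richer than a z-score test; the threshold below is an assumption for illustration.

```python
import statistics

def outlier_scores(values: list[float], threshold: float = 3.0) -> list[float]:
    """Flag values more than `threshold` standard deviations from the mean,
    a toy stand-in for detecting outlier patterns of behavior."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

Once an outlier pattern is confirmed as a genuine violation tactic, the models can be refined to detect it directly going forward, which is the feedback loop the bullets above describe.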
The more information schools have about the effectiveness of their testing processes, the better equipped they are to conduct tests fairly and equitably.
- By analyzing aggregated data from sessions, Rosalyn can spot and report on patterns that can’t be discerned from analyzing a single test session.
- Psychometrics applied to session data allows the system to distinguish test-takers who are getting live assistance from another person during the exam from those who had prior access to the test questions and answers.
- Knowing when test content has been compromised allows educators to revise test forms or take other actions to preserve the integrity of their assessments.
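One intuition behind the psychometric distinction above is that prior access and live assistance leave different timing signatures: implausibly fast correct answers suggest prior exposure to the questions, while unusually long pauses before correct answers can suggest consulting someone else. The sketch below is a toy heuristic with hypothetical thresholds, not an actual psychometric model:

```python
import statistics

def classify_pattern(response_times: list[float], correct: list[bool]) -> str:
    """Toy timing heuristic. Thresholds (5s, 60s) are illustrative assumptions."""
    correct_times = [t for t, ok in zip(response_times, correct) if ok]
    if not correct_times:
        return "inconclusive"
    median = statistics.median(correct_times)
    if median < 5:
        return "possible prior access"     # uniformly fast correct answers
    if median > 60:
        return "possible live assistance"  # long pauses before correct answers
    return "typical"
```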
Fairness should lie at the heart of any examination. Similarly, the integrity of exams depends on providing students with a level playing field on which they can demonstrate their knowledge. Students can give their best efforts only if they are assured that factors beyond their control won't inappropriately skew results. They also need to be confident that the proctoring process is ethical and that the test-taking process itself won't endanger their privacy.
Understanding how remote proctoring works with a next-generation system like Rosalyn is the first step toward establishing trust, obtaining reliable test results, and creating comfortable, equitable experiences for students.