The overall aim of the usability tests we conducted during the Peer Evaluation (with classmates) and User Evaluation (with potential users) stages was to identify any usability problems in degree.ly so that they could be addressed.
The tests were conducted with the following goals:
Our main method of evaluation was user testing. We conducted usability tests twice: the first round with 4 classmates taking the same module, and the second round with 4 potential users.
All of our participants were students from the National University of Singapore (NUS) with the following key characteristics:
The tests required participants to perform a series of critical tasks necessary to the success of degree.ly. The nature of these tasks is elaborated on in a later section (Test Tasks).
To conduct the user evaluation, each of us was assigned a participant whose test we would facilitate. Tests were conducted remotely via Zoom, a video-conferencing platform, and participants completed tasks by interacting with our high-fidelity prototype on Figma. As facilitators, we recorded our observations as participants worked through the tasks we had prepared ahead of time. Once the tasks were completed, we asked a series of questions for participants to reflect on their experience with our prototype, and recorded those responses as well.
Once the A/B testing was completed, the facilitator asked several questions to get participants to reflect on their experience with our prototype.
Questions asked:
Note: Full observations and responses recorded by facilitators in templates are available here.
At the end of the evaluation, the results were parsed and compiled into a single document. Problems that more than one participant brought up were given higher priority, so that we could address as many prominent flaws as possible within the limited time we had for this project. We also chose to use the following dimensions to categorise each reported user problem; this helped indicate how serious a problem might be and what actions we could take to resolve it.
Keeping those dimensions in mind, our group compiled all the feedback we gathered from users into two summary sheets, which we used to reach a consensus on what should be changed in each iteration of the prototype. Within these documents, we recorded the changes made and the reasoning behind them.
These were some of the major design changes we made to address aspects of our design that were unsuccessful.
Note: v1, v2 and v3 refer to Prototype Iterations 1, 2 and 3 (final) respectively.