PRE2017 3 Groep9 - Results

From Control Systems Technology Group

Revision as of 12:24, 29 March 2018

Back to the PRE2017 3 Groep9

Results

This part of the wiki describes the results of this project. The results are split into two parts: the test results and the user experience results. Both can be further divided into two categories: quantitative and qualitative results. The quantitative results are the numerical test scores, i.e. how many points people scored on the tests, and the ratings that people gave the different methods of studying. The qualitative results were obtained from the questionnaires and interviews with test participants; they can be used to explain the numerical results and to give more insight into the enjoyability of the game.

Test Results


Figure 1: Test scores

Figure 1 displays the test results of both the test group (also called the game group) and the control group (the traditional group). It shows the number of people on the y-axis and the number of points scored on the test on the x-axis, with a maximum of 15 points. As can be seen, the test group converged around an average score of 4.92, with a score of 5 being the most common. The scores for the control group are harder to evaluate, since they do not converge: the graph appears to split into three groups, one converging around 4.5, one around 8.5, and the last around 13. The average of the entire control group was a score of 8.36. The averages of both groups can be found in figure 2, in the second column.

We will now explain how we obtained these results. First, the test group. Based on feedback from the test subjects, the most important factor behind the low test scores was the amount of time it took to finish one rotation of the game. Most test subjects did not manage to repeat every word twice, and since repetition is very important when studying, this lack of repetition is a major reason for the low scores. Other factors were that people did not think the minigames were connected enough to the words, and that the game only taught words from Italian to English and not the other way around.

Second, the control group. The thing to notice here is that the results are very spread out. Looking at the answers given to the questionnaires before taking the test, the three clusters correspond to people answering that they were bad, average, or good at studying languages. From this we can assume that with more test subjects the graph as a whole would converge to the average of 8.36 and be less spread out. One other thing to mention is that the test group also contained people who said they were bad or good at learning a language, yet those results are still clustered.

To summarize, the game was less efficient than the traditional method of studying, the main reasons being:
- The game took a long time.
- The minigames were not connected enough to the words.
- The game did not teach English to Italian, only Italian to English.
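The claim that a larger control group would converge to its average can be illustrated with a small simulation. Note that the subgroup means below mirror the three clusters described in the text (roughly 4.5, 8.5, and 13), but the equal weights, spread, and generated scores are purely hypothetical, not the actual study data:

```python
import random

random.seed(0)

# Hypothetical subgroups mirroring the three clusters seen in figure 1;
# equal weights and a standard deviation of 1.0 are illustrative assumptions.
subgroup_means = [4.5, 8.5, 13.0]
weights = [1 / 3, 1 / 3, 1 / 3]
population_mean = sum(w * m for w, m in zip(weights, subgroup_means))

def sample_mean(n):
    """Average test score over n simulated participants."""
    total = 0.0
    for _ in range(n):
        # Pick a subgroup, then draw a score around its mean, clipped to 0..15.
        mean = random.choices(subgroup_means, weights)[0]
        score = min(15.0, max(0.0, random.gauss(mean, 1.0)))
        total += score
    return total / n

for n in (10, 100, 10000):
    print(n, round(sample_mean(n), 2))
```

With only a handful of participants the sample mean jumps around, but as the sample grows it settles near the population mean, which is the behaviour the paragraph above predicts for a larger control group.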

User Experience


Figure 2: Results

As for the user experience, we asked people to rate all the different methods of studying; the results can be seen in figure 2. Both groups rated the traditional method about the same, around 5.9, and both rated alternative methods (mostly WRTS) with an 8.1. The game received a rating of 5.73, about the same as the traditional method. Looking into why the game scored about the same as the traditional method, we noticed that most people did enjoy the game more, but rated it about the same because they thought it was less efficient, which is a correct evaluation as shown in the previous section.

As for the difference between WRTS and the traditional method, we did not get many explicit reasons, but from our own experience and by looking at the criteria used for the other ratings, we assume it is because WRTS automates a lot of the work a student otherwise has to do themselves when studying with the traditional method. It is also worth mentioning that not everyone used an alternative method, and those people did not rate it.

As stated before, even though the game's rating was about the same as the traditional method's, almost all of the test subjects considered the game the more enjoyable method of learning. From this we conclude that if this method of studying were improved to the point where it is just as efficient as the traditional method, it has the potential to score about the same as other alternative methods.

Future Research

Now that we have these results, we believe it is worth trying to improve the game using the feedback we received. The ideas for improving the game are discussed further here: PRE2017 3 Group9 - Future Research