Web Application - Group 4 - 2018/2019, Semester B, Quartile 3

Introduction

When introducing a decision model, it is important to both verify and validate that model. This is especially important for computational models. Model verification asks the question: `Does the model perform as intended?'. It checks, for example, that the model has been programmed correctly, that the algorithm has been implemented properly, and that the model does not contain errors, oversights, or bugs. Model validation asks: `Does the model represent and correctly reproduce the behaviors of the real-world system?'. Validation ensures that the model meets its intended requirements in terms of the methods employed and the results obtained. The ultimate goal of model validation is to make the model useful, in the sense that it addresses the right problem, provides accurate information about the system being modeled, and is actually used[1].

What now?

Unlike physical systems, for which there are well-established procedures for model validation, no such guidelines exist for social modeling. Unfortunately, there is no easy or clear way to validate and verify the implemented decision model. This is mainly because the model contains a great deal of subjectivity through human decision making. Users of the decision model have to provide input themselves, and these inputs are not just numbers; they express whether the user agrees or disagrees with a proposition. This makes it hard to validate and verify the model in a traditional way. For models that contain elements of human decision making, validation becomes a matter of establishing credibility in the model. Verification and validation work together to remove barriers and objections to model use. The task is to establish an argument that the model produces sound insights and sound data based on a wide range of tests and criteria that `stand in' for comparing model results to data from the real system[1]. This process is akin to building a legal case in which a preponderance of evidence is compiled about why the model is valid for its purported use. To still perform some verification, we rely on subject matter experts to gain a grasp of the credibility of the model, and we measure this credibility through evaluation and role-playing.

Credibility

As mentioned earlier, we want to make the credibility of the model tangible. We do this through evaluation and role-playing. A group of domain experts performs the evaluation. These domain experts consist of both the group working on this project and the people at Eindhoven Airport responsible for anti-drone mechanisms. We asked these contacts at Eindhoven Airport to distribute the decision model questionnaire and have it filled in by several individuals who all agree on the interests, needs, and characteristics of Eindhoven Airport. Furthermore, we ask each of them which solution from the list we compiled they initially consider the best. It is then interesting to see whether these individuals get the same results from the decision model and whether they agree with it. Additionally, it is interesting to compare the solution they initially thought would be best with the solution the model recommends, and to hear what they think of that recommendation. Are they surprised? Not surprised at all? Does the recommended solution provide new insights?

As we do not want to depend on a select few individuals from Eindhoven Airport alone, we also propose an example scenario in which the person taking the questionnaire plays the role of a decision maker at a clearly defined airport that has to design a mechanism against unwanted UAVs. This is the role-playing method for establishing credibility. The scenario includes the needs, wants, and beliefs of this airport. We take this questionnaire internally as well. Afterward, we compare the initially preferred solutions, the recommended solutions, and the opinions on the recommended solution for the proposed airport.

Methods

Let us consider the two methods introduced earlier for testing, to a certain degree, the credibility of the decision model.

Evaluation

Testing the credibility of the model through evaluation will be done, as briefly introduced above, by sending a questionnaire to the people at Eindhoven Airport responsible for mechanisms to counter illegal drone activity around their airport. What we are particularly interested in with this form of verification is whether the recommendation the decision model produces from the individuals' inputs fits the airport. With this, we want to test whether the decision model proposes solutions that would actually work for the airport.
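
To make this comparison concrete, the analysis of the returned questionnaires could be organized along the lines of the sketch below. The respondent entries and solution names are hypothetical placeholders rather than actual results.

# Hypothetical sketch of how the evaluation responses could be analyzed.
# Each respondent record holds the solution they initially preferred and
# the solution the decision model recommended based on their answers.
from collections import Counter

respondents = [
    # (initial preference, model recommendation) -- placeholder entries
    ("geofencing", "geofencing"),
    ("net gun", "geofencing"),
    ("radio jamming", "radio jamming"),
]

# Do respondents who share the airport's interests converge on the same recommendation?
recommendation_counts = Counter(recommended for _, recommended in respondents)
print("Recommendations:", recommendation_counts)

# How often does the model's recommendation match the initial preference?
matches = sum(1 for initial, recommended in respondents if initial == recommended)
print(f"Agreement with initial preference: {matches} out of {len(respondents)}")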

Role-playing

Testing the credibility of the model through role-playing will be done by proposing an example scenario in which the individual acts as a decision maker at an airport. This individual will decide which mechanisms to consider for countering illegal drone activity, based on the information about the airport given to them.
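
Because every participant in the role-playing exercise receives the same airport profile, the resulting recommendations can be compared directly for consistency. A minimal sketch of such a check is shown below; the recommendation values are hypothetical placeholders.

# Hypothetical sketch: all participants role-play the same example airport,
# so the model's recommendations can be checked for consistency.
from collections import Counter

# Placeholder recommendations produced for the fixed example airport.
roleplay_recommendations = ["geofencing", "geofencing", "detection radar", "geofencing"]

counts = Counter(roleplay_recommendations)
top_solution, frequency = counts.most_common(1)[0]
consistency = frequency / len(roleplay_recommendations)
print(f"Most frequent recommendation: {top_solution} ({consistency:.0%} of participants)")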

References

  1. Charles M. Macal, Model Verification and Validation. http://jtac.uchicago.edu/conferences/05/resources/V&V_macal_pres.pdf