# Decision Model Investigation

In this section, we investigate several alternative approaches to decision models. These models were considered but ultimately not chosen as the final decision model that we will implement. For the sake of completeness of this wiki, we describe our findings on them here.

## Nearest Neighbour Strategy

Nearest Neighbour, NN for short, is a mathematical decision model. It is a machine learning model in the sense that existing solutions, often called training data, allow NN to make accurate predictions about new data, such as a user who wants a solution for their airport. The model can then choose which solution fits that user best. Nearest Neighbour is based on the machine learning technique k-Nearest Neighbours [1][2][3].

### Picking variables / attributes

For Nearest Neighbour to work, we need to quantify our problem numerically, i.e. split it up into variables with numerical data. This can be done in the same way as we picked the attributes in the section on the implemented decision model. These variables should indicate which type of solution fits a given case best. Examples of such attributes for a solution are cost (in €), reliability (in %), range (in m), hindrance to surroundings (scale from 1 to 10), CO2 emission (in kg CO2 / year), et cetera.

### How does NN work?

We have now defined a solution in terms of only numerical variables. For each solution that we have found, we then assign the corresponding values to these attributes. An example of how this is done can be found in this part of the solutions section.

NN then works as follows: it plots the solutions as points in the n-dimensional plane, where n is the number of variables/attributes that each solution consists of. The first variable corresponds to the first coordinate, the second variable to the second coordinate, et cetera. Using these n predetermined variables or attributes, we obtain a plot of the solutions in the n-dimensional plane.

Now that all solutions are quantified in the n-dimensional plane, we simply ask the user to fill in these attributes for their desired airport: the desired/optimal cost for the solution, the desired/optimal range, et cetera. This again results in a point in the n-dimensional plane.

After that, the decision rule is quite simple: we compute the Euclidean distance between this point, which represents the user's optimal solution, and all the other 'solution points', and we check for which solution point this distance is minimal. In practice, this solution should be the one that best fits the user's demands and desires out of all possible solutions. We decided that, instead of only giving the single best solution, we would list all solutions and rank them by distance.
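The ranking step above can be sketched as follows. The solution names and attribute values are hypothetical placeholders, not part of the actual solution catalogue:

```python
import math

# Hypothetical solutions: each is a point whose coordinates are attribute
# values, here (cost in EUR, reliability in %, range in m).
solutions = {
    "net gun": [12000.0, 85.0, 100.0],
    "jammer": [30000.0, 90.0, 2000.0],
    "eagle": [8000.0, 70.0, 500.0],
}

def euclidean(p, q):
    """Euclidean distance between two points in the n-dimensional plane."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def rank_solutions(user_point, catalogue):
    """List all solutions by increasing distance to the user's ideal point."""
    return sorted(catalogue.items(), key=lambda item: euclidean(user_point, item[1]))

# The user's ideal solution, expressed as a point in the same plane.
user = [10000.0, 95.0, 800.0]
for name, point in rank_solutions(user, solutions):
    print(name, round(euclidean(user, point), 1))
```

Note that this is the unnormalised distance; the cost coordinate dominates the ranking here, which is exactly the scaling problem discussed further below.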

### Problems and Improvements to Nearest Neighbour

Some problems come with the development of Nearest Neighbour, but fortunately they can be overcome quite easily. First of all, we need to define what NN should do when two solutions have the same distance. In that case, we simply pick one of the two at random, so as not to unfairly prioritise either solution over the other.
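The random tie-break can be implemented in a few lines. The points below are made up purely to demonstrate a tie:

```python
import math
import random

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_with_tiebreak(user_point, catalogue):
    """Return the nearest solution; break exact distance ties at random."""
    distances = {name: euclidean(user_point, point) for name, point in catalogue.items()}
    best = min(distances.values())
    tied = [name for name, d in distances.items() if d == best]
    return random.choice(tied)  # uniform pick, so no tied solution is favoured

# Hypothetical example: "A" and "B" are equidistant from the user's point.
user = [0.0, 0.0]
catalogue = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [3.0, 3.0]}
print(nearest_with_tiebreak(user, catalogue))  # "A" or "B", never "C"
```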

Furthermore, as it stands, some variables weigh more heavily than others simply because of their scale: a one-unit difference in cost (one euro) contributes as much to the distance as a one-unit difference in reliability (one percentage point), so the model would weigh a 10-euro price difference almost as heavily as an 11-percentage-point reliability difference. To tackle this, we normalise all attributes. Normalising means rescaling the values of each attribute so that they lie between zero and one: the lowest value maps to zero and the highest to one. Since we do not focus on the mathematical background, we do not discuss this normalisation in great detail; further explanation can be found here [4].
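This min-max normalisation is straightforward to sketch; the cost values below are hypothetical:

```python
def min_max_normalise(values):
    """Rescale attribute values so the minimum maps to 0 and the maximum to 1."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # All solutions share this value; the attribute carries no information.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical costs (in EUR) of three solutions:
costs = [8000.0, 12000.0, 30000.0]
print(min_max_normalise(costs))  # cheapest -> 0.0, most expensive -> 1.0
```

In practice this rescaling is applied per attribute (per coordinate), after which all distances are computed on the normalised values.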

Another problem is that, after normalisation, all attributes contribute equally, while some attributes may be more important than others; in general, cost carries a higher weight than CO2 emission. We can counter this by multiplying each normalised attribute by a predetermined weight. These weights can be determined together with all stakeholders; alternatively, the decision model can ask the user which variables they find most important, and base the weights on the user's preference.

### Strengths of Nearest Neighbour

One reason we chose Nearest Neighbours is that it is quite easy to grasp: you simply ask the user which values they would most preferably assign to the attributes of the anti-UAV system, and which attributes they find most important. Then all that remains is listing the existing solutions by increasing distance to this point. This also makes the decision model quite easy to implement, were we to pick it. Lastly, new solutions can easily be added to the model by adding their points to the n-dimensional plane as described before, and solutions can just as easily be removed by deleting their points from that plane.

## Voting Advice Applications

A voting advice application (VAA), vote compass, or election compass is not a traditional decision model. A VAA is often a web application that helps voters find the candidate or party that stands closest to their preferences. VAAs are a rather new phenomenon in modern election campaigning. A VAA consists of a preparation stage and a running stage. In the preparation stage, issues are selected that reflect the most important dimensions of political competition, a database of the parties' or candidates' positions on these issues is compiled, and a formula is chosen to calculate the proximity of a voter's positions to those of the parties or candidates. During the running stage, voters express their views on the aforementioned policy issues, after which the application provides a personalised voting recommendation for each user: usually a list of parties or candidates ranked by the calculated proximities.

According to Ruusuvirta and Rosema [5], the tendency to vote stimulates the use of VAAs, rather than the reverse. During the 2007 Swiss federal elections, 16% of users claimed that VAAs had motivated them to participate in the elections, and another 25% said they had been partially motivated [6]. Many popular VAAs exist today: Germany has the 'Wahl-O-Mat' with 6.7 million users, Switzerland the 'Smartvote' with 938,403 users, the Netherlands the 'StemWijzer' with 1.5 million users, and the EU the 'EU Profiler' with 919,422 users.

### Stemwijzer Decision Model

The second decision model that we investigated was the Dutch StemWijzer [7], on which our own decision model will for a large part be based. The StemWijzer is a website that helps people find the political party that best fits their point of view. It works as follows: the website presents the user with thirty propositions, to each of which the user can answer agree, disagree, or neutral. The user can then indicate which subjects he or she finds more important than the others. StemWijzer scores each political party by counting the number of propositions on which the user and the party share the same point of view, where the more important propositions give double the score when the points of view align. In the end, the scores for each political party add up, and the parties with the highest scores fit best with the user's point of view. This model is explained in greater detail in Implemented Decision Model.
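The StemWijzer-style scoring rule described above can be sketched as follows. The party names, answers, and the choice of important propositions are all hypothetical:

```python
def stemwijzer_scores(user_answers, party_answers, important):
    """Score each party: +1 per matching answer, +2 when the proposition
    is one the user marked as important (double score)."""
    scores = {}
    for party, answers in party_answers.items():
        score = 0
        for i, (user_view, party_view) in enumerate(zip(user_answers, answers)):
            if user_view == party_view:
                score += 2 if i in important else 1
        scores[party] = score
    return scores

# Hypothetical example with three propositions instead of thirty.
user = ["agree", "disagree", "neutral"]
parties = {
    "Party X": ["agree", "agree", "neutral"],
    "Party Y": ["agree", "disagree", "agree"],
}
important = {1}  # the user marks the second proposition as extra important
print(stemwijzer_scores(user, parties, important))
```

Party X matches on propositions 1 and 3 (score 2); Party Y matches on proposition 1 and on the doubly weighted proposition 2 (score 3), so Party Y ranks highest.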


# References

1. "K-nearest Neighbors", Brilliant.org. Retrieved 12 March 2019.
2. Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning with Applications in R. Springer, first edition, 2017.
3. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, second edition, 2017.
4. "Feature Scaling", Wikipedia. Retrieved 12 March 2019.
5. Ruusuvirta, O., & Rosema, M. (2009, September). Do online vote selectors influence electoral participation and the direction of the vote? In ECPR general conference (pp. 13-12).
6. Ladner, A., & Pianzola, J. (2010, August). Do voting advice applications affect electoral participation and voter turnout? Evidence from the 2007 Swiss Federal Elections. In International Conference on Electronic Participation (pp. 211-224). Springer, Berlin, Heidelberg.
7. "StemWijzer". Retrieved 14 March 2019.