Types of Decision Models - Group 4 - 2018/2019, Semester B, Quartile 3

Decision Model Investigation

In this section, we investigate several alternative approaches for decision models. These models were considered but were not chosen as the final decision model that we will implement. However, for the sake of completeness of this wiki, we describe our findings on these other decision models here.

Nearest Neighbour Strategy

Nearest Neighbour, abbreviated NN, is a mathematical decision model. It is a machine learning decision model in the sense that existing solutions, often referred to as training data, are used by NN to make accurate predictions about new data, such as a user who wants a solution for their airport. This decision model can then determine which solution fits the user best. Nearest Neighbour is based on the machine learning strategy k-Nearest Neighbours [1].

Picking variables / attributes

In order for Nearest Neighbour to work, we need to quantify our problem in numerical values. For this, we split the problem up into variables with numerical data. This can be done in the same way as we picked the attributes in the section implemented decision model #attributes. These are variables that indicate which type of solution will fit this case best. Examples of such attributes are cost (in €), reliability (in %), range (in m), hindrance to surroundings (scale from 1 to 10), CO2 emission (in kg CO2 / year), etc. A sketch of how a quantified solution could look is given below.
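
To make this concrete, the sketch below shows how a single solution could be stored as a set of numerical attributes. The attribute names and values are purely illustrative and not actual project data.

```python
# Hypothetical example values for one solution, quantified by the
# attributes named above; the numbers are illustrative only.
solution_example = {
    "cost": 25000.0,        # in euros
    "reliability": 95.0,    # in percent
    "range": 300.0,         # in metres
    "hindrance": 4.0,       # scale from 1 to 10
    "co2_emission": 120.0,  # in kg CO2 per year
}
print(solution_example)
```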

How does NN work?

We have now defined a solution in terms of only numerical variables. Then, for each solution that we have found, we assign corresponding values to these attributes. An example of how this is done can be found in this part of the solutions section.

NN then works as follows: it plots the solutions as points in n-dimensional space, where n is the number of variables / attributes that each solution consists of. The first variable corresponds to the first coordinate, the second variable to the second coordinate, and so on. Using these n predetermined variables or attributes, we obtain a plot of all solutions in n-dimensional space.

Now all solutions are quantified as points in this n-dimensional space. We then ask the user to simply fill in these attributes for their desired airport; that is, the user fills in the desired / optimal cost for the solution, the desired / optimal range, etc. This again results in a point in the n-dimensional space.

After that, the decision rule is quite simple: we compute the Euclidean distance between this point, which represents the optimal solution for the user, and all of the 'solution points'. We then check for which solution point this distance is minimal. In practice, this solution should be the one that best fits the user's demands and desires out of all possible solutions. We decided that, instead of only giving the single best solution, we list all solutions and rank them based on this distance, as sketched below.
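
A minimal sketch of this decision rule, assuming each solution and the user's ideal airport are stored as plain coordinate lists; the solution names and coordinate values here are hypothetical.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two points given as coordinate lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_solutions(solutions, user_point):
    """Return all solutions ordered from smallest to largest distance
    to the user's ideal point."""
    return sorted(solutions, key=lambda s: euclidean_distance(s["point"], user_point))

# Illustrative data: coordinates are (cost, reliability, range).
solutions = [
    {"name": "Solution A", "point": [25000.0, 95.0, 300.0]},
    {"name": "Solution B", "point": [18000.0, 90.0, 250.0]},
]
user_point = [20000.0, 98.0, 280.0]  # the user's ideal airport

for s in rank_solutions(solutions, user_point):
    print(s["name"], round(euclidean_distance(s["point"], user_point), 1))
```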

Problems and Improvements to NearestNeighbour

There are some problems that come with the development of Nearest Neighbour, but fortunately they can be overcome quite easily. First of all, we need to define what Nearest Neighbour should do in the case that two solutions have the same distance. If this happens, we simply pick one of the two at random, so as not to unfairly prioritize either solution over the other; one way to implement this is sketched below.
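
One possible way to implement this random tie-break, assuming the same solution format as in the ranking sketch above, is to shuffle the solutions before a stable sort.

```python
import math
import random

def rank_with_random_tiebreak(solutions, user_point):
    """Shuffle first, then sort by distance; Python's stable sort keeps the
    random order among solutions whose distances are exactly equal."""
    dist = lambda p: math.sqrt(sum((x - y) ** 2 for x, y in zip(p, user_point)))
    shuffled = list(solutions)
    random.shuffle(shuffled)
    return sorted(shuffled, key=lambda s: dist(s["point"]))
```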

Furthermore, as it stands now, some variables dominate others simply because of their scale. For example, a difference of one unit in cost (one euro) contributes as much to the distance as a difference of one unit in reliability (one percentage point). This would mean that the decision model prefers a solution that is 11 euros cheaper over a solution that is 10% more reliable. To tackle this, we normalize all the attributes: each value is rescaled so that it lies between zero and one, with the lowest value of an attribute mapped to zero and the highest to one. Since we do not focus on the mathematical background, we do not discuss this normalization in great detail; further explanation can be found here [2], and a small sketch is given below.
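
A small sketch of this min-max scaling for one attribute column; the cost values are hypothetical.

```python
def normalize_column(values):
    """Min-max scaling: the lowest value maps to 0 and the highest to 1."""
    lo, hi = min(values), max(values)
    if hi == lo:                 # constant attribute: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Example: the costs (in euros) of three hypothetical solutions.
print(normalize_column([25000.0, 18000.0, 30000.0]))  # -> [0.58..., 0.0, 1.0]
```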

Another problem is that, after normalization, all attributes contribute equally, while some attributes might be more important than others. In general, for instance, cost carries a higher weight than CO2 emission. We can account for this by multiplying each normalized attribute by a predetermined weight. These weights can be determined together with all stakeholders; another option is for the decision model to ask the user which variables they find most important, and then base the weights on the user's preference. A sketch of such a weighted distance is given below.
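
A sketch of the weighted variant, assuming the attributes have already been normalized; the weight values and attribute order are hypothetical.

```python
import math

def weighted_distance(solution_point, user_point, weights):
    """Multiply each normalized attribute by its weight, then take the
    ordinary Euclidean distance between the weighted points."""
    a = [w * x for x, w in zip(solution_point, weights)]
    b = [w * y for y, w in zip(user_point, weights)]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical weights on normalized attributes (cost, reliability, CO2
# emission): here cost counts twice as heavily as CO2 emission.
weights = [2.0, 1.5, 1.0]
print(weighted_distance([0.2, 0.9, 0.4], [0.5, 1.0, 0.1], weights))
```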


Back to the root page.

References

  1. "Brilliant.org: K-nearest Neighbors", Retrieved 12 March 2019
  2. "Wikipedia: Feature Scaling", Retrieved on 12 March 2019