Decision Model - Group 4 - 2018/2019, Semester B, Quartile 3

From Control Systems Technology Group
Revision as of 16:11, 29 March 2019


Decision Model

Introduction

In this section, we will describe our decision model. First, we give a description of what a decision model actually is, to provide a basic understanding of the concept. We then explain what our decision model does on a higher level, without going into the details inside the decision model. Finally, we explain how the decision model is derived and how it works on a lower level.

What is a decision model?

A decision model is an intellectual template for perceiving, organising, and managing the business logic behind a business decision[1]. An informal definition of business logic is that it is a set of business rules represented as atomic elements of conditions leading to conclusions. A decision model is not simply a list of business rules or business statements. Rather, it is a model representing a structural design of the logic embodied by those statements. In our case, we modify the decision model such that it poses questions and uses the answers given to those questions to label the solutions with a score based on how well they fit each answer. If a certain solution fits a certain answer to a specific question better, this solution gets a higher score than a solution that does not fit that answer at all; we elaborate on this later. Then, when all attributes are scored, the scores are combined, and the solution that aligns with the most answers obtains the highest score. A list containing the three most appropriate solutions for each of the categories (detection, identification, neutralisation), based on which solutions have the highest scores, is displayed to the user of the decision model.

As described before, our decision model outputs the best solution for anti-UAV systems based on the input of the user. This user can be, for example, an airport seeking to improve its anti-UAV systems. Due to the rapidly growing list of solutions, airports may find it difficult to decide for themselves. After our thorough analysis of solutions and types of airports, we have seen that some solutions fit certain airports better than others, and we therefore decided to provide a systematic model to guide users in this difficult choice.

Decision Model Investigation

There are many different types of decision models, so before implementing one, we investigated some of the decision models that are available. Based on this investigation, we decided on the decision model that fits our approach best. That model is described in the sections below; the other models that we investigated are discussed in types of decision models.

How does our decision model work?

The decision model that we will use is for a large part based on the Dutch StemWijzer[2], a website that helps people find the political party that best fits their point of view. It works as follows: the website presents the user with thirty propositions, to each of which the user can answer agree, disagree, or neutral. Then, the user can indicate whether there are subjects that he or she finds more important than others. StemWijzer[2] scores each political party by counting the number of times the user and the party have the same point of view on a proposition, where the more important propositions give double the score if the points of view align. In the end, the scores for each political party are added up, and the parties with the highest score fit best with the point of view of the user.

The reason for choosing this type of decision model is that it is easy and straightforward for the users, since for each question the user only needs to fill in whether he or she agrees, disagrees, or is neutral. Furthermore, the user can indicate whether he or she finds certain subjects more important than others. This is really useful in our situation, as airports might find specific attributes of a solution a lot more critical for their airport than others.

In our situation, where we would like to find the anti-UAV system that fits best with a particular airport, the decision model of StemWijzer needs a few minor changes to make it work. Instead of asking about the propositions of political parties, we will ask the user questions on the attributes a solution has, based on the recommendation report. The attributes used for this are explained in the next section. The questions asked on the attributes of the solutions will be based on the comparison of those attributes between the solutions. However, since a solution of an anti-UAV system can consist of three parts (detection, identification and neutralisation), questions will be asked on all three of these `sub-solutions'. Users can also indicate whether they want a complete solution including detection, identification, and neutralisation, or whether they do not need one or more of these `sub-solutions'. It might be the case that an airport only wants a UAV detection system, in which case the questions on identification and neutralisation are skipped, and only a `sub-solution' for detection is given. Then, the user is asked to indicate the attributes that are most important to the airport. The scoring of the solutions works in the same way as StemWijzer calculates the score for political parties, and is explained in more detail below.

Goal

Let us take a closer look at the goals of this model. One of the goals of the model is to take the needs and beliefs of an individual as input and propose a solution against unwanted UAVs around airports based on that input. Furthermore, the user can indicate which issues weigh more and are thus more critical. While this might seem like the only goal of the model, it most definitely is not. With this model and the produced results, we also want to spark a debate when it comes to anti-UAV measures around airports. From our research, we know that many airports (still) do not have appropriate countermeasures against UAVs. These airports would only start looking for measures and solutions after an unwanted UAV threatens their airspace. This model directly goes against the passive behaviour we have seen so much from airports and promotes the discussion of suitable solutions for airports. Furthermore, as this type of model is primarily used for elections, it would be interesting to do further research on it, as it is still rather new. Extending this type of model to fields other than elections could then lead to exciting results.

Attributes

As described above, we will create a decision model that airports can use to decide on which type of anti-UAV system to deploy. For this decision model, we have deconstructed the needs of the airports into particular attributes. These attributes are based on the analysis done on both the solutions and the airports. We distinguished three different types of airports and identified all the USE-stakeholders for each type. Furthermore, we did a risk analysis for each type of airport and a stakeholder analysis. Using this stakeholder analysis, we were able to set up attributes that different airports are interested in. From these interests, we have derived core attributes. We first summarise these attributes in a list, to give a clear overview of the attributes that are taken into account when creating the decision model. It is, of course, hard to write down the internal processes that take place during the design of all of the attributes. This is why there is no separate section that explains this internal process, other than an Excel sheet that contains all of the attributes against the solutions.

The list of current attributes is as follows:

  • Range
  • Speed of operation
  • Disturbance to the environment
  • Effect on different types of drones
  • Scalability
  • Number of drones it can concurrently handle
  • Emission
  • Size
  • Identification
  • Level of autonomy
  • Power Outage risks
  • Weather
  • Uptime
  • Portability
  • Danger to humans
  • Destructivity
  • Level of training needed

Note that this list is non-exhaustive and that the decision model will be made such that extensions to this list can easily be implemented.

Scoring the solutions

The next step is for the decision model to rank or score these attributes so that it can link the outcome of the attributes to actual solutions. To score the solutions, multiple-choice questions are used in the same way as StemWijzer[2] asks its users questions. An example of scoring the attributes based on the questions is as follows:

Q: "The budget for a UAV 'detection' system is 10.000 euro or less."

A:

  • Agree
  • Neutral
  • Disagree

Based on this question, all detection solutions that cost less than 10.000 euro obtain a point (or two, depending on the weight, which is explained in the next section) if the user answers 'Agree'. On the other hand, all detection solutions that cost more than 10.000 euro obtain a point (or two) if the user answers 'Disagree'. If the user answers 'Neutral', none of the solutions obtain a point for this question. All questions are justified and explained in greater detail (see the section on questions), so that each attribute can get a justified and well-calculated score. The main point of this example is to show how we are going to score solutions based on the questions that we ask.
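The per-question scoring described above can be sketched as follows. This is a minimal illustration, not the actual implementation; the function name, the solution names, and their costs are all made up for the example.

```python
def score_budget_question(solutions, answer, threshold=10_000, weight=1):
    """Award points to detection solutions based on the budget answer.

    solutions: dict mapping solution name -> cost in euro (illustrative data).
    answer: 'agree', 'disagree', or 'neutral'.
    Returns a dict mapping solution name -> points gained for this question.
    """
    scores = {name: 0 for name in solutions}
    for name, cost in solutions.items():
        if answer == "agree" and cost <= threshold:
            scores[name] += weight      # cheap solutions match 'Agree'
        elif answer == "disagree" and cost > threshold:
            scores[name] += weight      # expensive solutions match 'Disagree'
        # 'neutral' awards no points to any solution
    return scores

# Hypothetical detection solutions and their costs:
detectors = {"radar": 25_000, "acoustic sensor": 8_000, "camera array": 9_500}
print(score_budget_question(detectors, "agree"))
# the acoustic sensor and camera array each gain a point; the radar gains none
```

Repeating this for every question and summing the per-question results gives the overall score of each solution.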

Weighing the attributes

Now that our decision model has calculated the score of each attribute with respect to the preferences of the user, we must also weigh the attributes appropriately. In most cases, for example, emission does not contribute as much to the choice of solution as the safety of the solution does. We weigh the attributes as follows: we ask the user to indicate the attributes that they find more important than others. For these attributes, we double the score for a solution that aligns with the answer of the user.

Translating the attributes to advised solutions

After the user finishes stating whether they agree with, disagree with, or feel indifferent towards all propositions, an actual combination of solutions for each of the chosen sub-systems can be proposed. Each of the solutions proposed under the section solutions is grouped into either the `agree', `disagree', or `neutral' category for each proposition. We consider three broad categories for the propositions themselves, namely identification, detection, and neutralisation. Each of these categories contains propositions that relate to the essential attributes coined previously. Scoring the solutions for illegal drone activity is done based on the answers that the individual using the decision model provides.

So, we now have a way of scoring and weighing the demands of airports based on the propositions given below. We also have a way to score the given solutions based on the attributes that we have deconstructed from the needs of the stakeholders. What now remains is linking the solutions for each category and the outcomes of the propositions together. For each proposition, if the user agrees, all solutions in the `agree' category for that proposition gain 1 point. If the user instead disagrees, all solutions in the `disagree' category for that proposition gain 1 point. Furthermore, the user can skip a proposition if they do not care about the attribute coined for that proposition. In the end, the user can also indicate which attributes are more important to them; these attributes gain a multiplier of 2. Additionally, the user can deselect solutions that they do not want to be taken into account during the final result presentation.

For example, let us consider a solution `x' for category `y' and the attributes `cost', `scalability', and `safety'. Let us assume there is only a single proposition for each attribute. Let the user answer the propositions such that solution `x' is the right solution for the attributes `cost' and `safety', but not for `scalability'. Furthermore, the user has indicated that cost is more important than the other attributes. The final score that solution `x' then gets is: 2 (cost) + 0 (scalability) + 1 (safety) = 3. By scoring each of the solutions in this manner, we can, in the end, advise the solutions with the highest scores for each of the categories (detection, identification, and neutralisation) as fitting best with the demands of the airport in question. Note that these are not final decisions that the airport should blindly follow. Rather, we intend to provide a recommendation based on the needs of the airport.
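The worked example above can be sketched in a few lines. The function and variable names are made up for illustration; the per-attribute match results mirror the example for solution `x'.

```python
def total_score(matches, important, multiplier=2):
    """Sum the score of one solution over all propositions.

    matches: dict mapping attribute -> True if the solution's category
             matched the user's answer on that attribute's proposition.
    important: set of attributes the user marked as more important;
               a match on these counts double.
    """
    score = 0
    for attribute, matched in matches.items():
        if matched:
            score += multiplier if attribute in important else 1
    return score

# Solution `x' matched on cost and safety, but not on scalability;
# the user marked cost as more important:
matches_x = {"cost": True, "scalability": False, "safety": True}
print(total_score(matches_x, important={"cost"}))  # 2 (cost) + 0 + 1 (safety) = 3
```

Running this for every solution and sorting by score yields the ranking from which the top three per category are presented.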


Note that this decision model recommends a type of solution rather than an actual solution. That is, if one were to simply buy the `recommended' solution suggested by the decision model, this solution might not work. Instead, the type of method to be used for each of the subcategories is given rather than an actual solution.

Propositions

In this section, we consider propositions regarding each of the attributes coined earlier. Since we have three categories of solutions (detection, identification, and neutralisation), propositions are drawn up for each of these categories.

The individual stating whether or not they agree with the propositions should be as unrestrictive as possible. That is, one should only agree with a proposition when one really needs to place that restriction upon the solution.

How are the propositions made?

The procedure is as follows: in a brainstorm meeting with all group members, the most essential attributes regarding anti-UAV mechanisms are discussed. These have been found through exhaustive research of existing anti-UAV mechanisms and illegal UAV activity around airports. Furthermore, exhaustive research of existing solutions, and of solutions that might become possible in the near future, guides the solutions that are considered. Based on all this information, the group members compile a list of around fifty statements. The statements are then considered from the point of view of each solution under consideration. From this point of view, it is possible to indicate whether the solution agrees or disagrees with the statement, or takes a neutral position. In particular, propositions on which the solutions clearly disagree are included in the list of propositions. Statements that all solutions agree on are dropped; after all, the individual filling in the list has nothing to choose from when all solutions are on the same side of the fence. Ultimately, a list of at most thirty statements is created that forms the final list. Throughout this whole process, the group members strive to make the statements as clear and objective as possible.
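The selection step described above, keeping only statements on which the solutions actually disagree and capping the list at thirty, can be sketched as follows. The stances and statement texts are illustrative, not the real list.

```python
def select_propositions(stances, limit=30):
    """Keep only discriminating propositions, capped at `limit`.

    stances: dict mapping a proposition to the list of per-solution
    answers ('agree', 'disagree', or 'neutral'). A proposition is kept
    only if the solutions do not all give the same answer.
    """
    kept = [prop for prop, answers in stances.items() if len(set(answers)) > 1]
    return kept[:limit]

# Hypothetical stances of three solutions on two candidate statements:
stances = {
    "The budget is under 10.000 euro": ["agree", "disagree", "agree"],
    "The system must comply with the law": ["agree", "agree", "agree"],  # dropped
}
print(select_propositions(stances))  # only the discriminating statement remains
```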

See the image below for a visualisation of this process.

(Image missing)
Figure 1: Visualisation of the design process of propositions.

Scoring

Let us consider in a bit more depth how the scoring of each proposition works. We use the method of `most similarities'. It works as follows: every statement where the user and a solution give the same answer (agree, disagree, or neutral) counts as a point. Double points are given for the statements to which the user assigns extra weight, which is done at the end. All points are then summed up, and the user can observe which solution has the highest score. This solution having the highest score means that it shared the position of the user most often.

A visualisation of the scoring is displayed below in Figure 2. Note that this figure does not consider the multipliers yet.

Wrong solution?

We cover a maximum of thirty statements and, based on those statements, examine which solution you agree with most. We try to focus on topics that are important and current when it comes to the technological development of UAVs and their countermeasures. It may, however, happen that you end up at a solution that you did not expect or agree with. When this happens, it means that, apparently, you have many similarities with that solution, despite the reasons that initially caused you to be surprised.

When you arrive at a solution other than the one you expected, it is advisable to look carefully at the points on which there are similarities. Perhaps this makes you reconsider your choice of solution. Of course, there can also be very good reasons to go for a different solution. We only provide a tool to discover substantive differences between the solutions. You are always encouraged to keep thinking for yourself!

Propositions

The propositions can be found in the `Categorising the detection solutions' section of the categorising page.

Unevenness

If one were to perform 100, 1.000, or even 10.000 random walks through the proposed decision model, not all solutions would appear equally likely. This phenomenon is due to the fundamental construction of the decision model. As the propositions are designed to really highlight the differences between the solutions, a large percentage of solutions may either agree or disagree with a given proposition. This way, we often get uneven splits: for example, 80% of all solutions `agree' with a proposition, whereas only 20% of all solutions `disagree' with it. We then already observe that not all solutions are equally likely to appear when the complete decision model is filled in. It is even the case that quite a few solutions are always more inclined to appear high in the ranking, assuming that the user assigns extra weights to specific attributes. This `bias' is introduced because a certain solution is simply `better' in most aspects than another solution. Does this then invalidate the other solution? No, it certainly does not. We would argue that it is worth considering any solution that differs from (most) other solutions, even if this concerns only a single attribute. Such a different solution brings more variety to the decision model, which in turn results in more options that can be taken.

Thus, the uneven distribution of solutions over the categories, and the `bias' towards specific solutions, is intentional and follows from the fundamental construction of both the decision model and its propositions.