WIP group 8

From Control Systems Technology Group
Latest revision as of 15:09, 2 February 2021

Open Questions

  • What is important for effective anthropomorphism? Avoiding repeated answers, showing some (abstract) face visualization, audio output?
  • Are we going to separate input processing and output production? This would allow us to specialize. One can for example focus on the input, another on the output, another on the database, another on the GUI, and the last of us on the user experience.
  • Are we going to add additional buttons? E.g. to explicitly add/remove goals, to toggle between different kinds of inputs, etc?

Not-yet-organized references

Possibly relevant papers for theoretical background:

  • Literature review about different kinds of robotics (ours is assistive, we think)

Royakkers, L., & van Est, R. (2015). A literature review on new robotics: Automation from love to war. International Journal of Social Robotics, 7(5), 549-570. (Summarized)

  • ANTHROPOMORPHISM:

Melson, G. F., Kahn, P. H., Beck, A., & Friedman, B. (2009). Robotic pets in human lives: Implications for the human-animal bond and for human relationships with personified technologies. Journal of Social Issues, 65(3), 545-569.

  • ANTHROPOMORPHISM:

Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3-4), 170-190.

  • DESIGNING and taking into account human emotions:

Chapter 5 of Affective Interaction: Understanding, Evaluating, and Designing for Human Emotion (see summary)

  • SOCIAL FACILITATION

Riether, N., Hegel, F., Wrede, B., & Horstmann, G. (2012, March). Social facilitation with social robots? In 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 41-47). IEEE. (see summary)

  • SOCIAL FACILITATION

Woods, S., Dautenhahn, K., & Kaouri, C. (2005, June). Is someone watching me? Consideration of social facilitation effects in human-robot interaction experiments. In 2005 International Symposium on Computational Intelligence in Robotics and Automation (pp. 53-60). IEEE. (see summary)

  • LONELINESS

Eyssel, F., & Reich, N. (2013, March). Loneliness makes the heart grow fonder (of robots)—On the effects of loneliness on psychological anthropomorphism. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 121-122). IEEE.

  • INCREASING MOTIVATION USING ROBOTICS

van Minkelen, P., Gruson, C., van Hees, P., Willems, M., de Wit, J., Aarts, R., ... & Vogt, P. (2020, March). Using self-determination theory in social robots to increase motivation in L2 word learning. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 369-377).


Some Background Information & Literature

Below, some background information is provided that we first have to investigate before proceeding to the actual prototyping process. The information mainly covers the cognitive topics needed to understand the end users in an objective way. All sources are listed above; in the WIP drive file, the sources are embedded in the text itself.


Anthropomorphism: the attribution of human features (or behaviour), including emotions, to non-human entities such as robots. Easy examples are WALL-E, Winnie the Pooh, etc. When we see an object (or animal) acting more human-like than it actually is, we still tend to behave more human-like towards it. BUT: anthropomorphism can lead to an inaccurate understanding of biological processes in the natural world. It can also lead to inappropriate behaviours towards the object, or simply “faulty behaviour”.

Structural anthropomorphic form: the presence of shapes, volumes, mechanisms, or arrangements that mimic the appearance or functioning of the human body. Gestural anthropomorphic form: the use of motions or poses that suggest human action to express meaning, intention, or instruction. The biggest risk of anthropomorphic design is that people expect the interface to be more human-like than it really is, ultimately leading to disappointment and frustration when the interface fails to be that intelligent. So it is important that the interface's appearance matches its capabilities.

Biases and heuristics: people have a very complex way of making decisions, and we do not always make the rationally right choice. Heuristics are mental shortcuts used to make decisions, but heuristics have a vulnerability: biases. Since heuristics are simplifications, they occasionally lead to systematic flaws and errors. These systematic flaws represent deviations from the normative decision-making model (how we should make decisions) and are called biases. Heuristics/biases in the first step of decision making (acquiring and integrating cues): attention to a limited number of cues // anchoring and cue primacy // cue salience // overweighting of unreliable cues.

Heuristics/biases in the second step of decision making (interpreting and assessing information): availability (we tend to base decisions on information we already have available) // representativeness // overconfidence // cognitive tunneling (underutilizing subsequent information) // simplicity seeking & choice aversion // confirmation bias.

Heuristics/biases in the third step of decision making (planning & choosing): planning bias // retrieving only a small number of actions // availability of actions // availability of possible outcomes // hindsight bias // framing bias // default heuristics.

Possible biases that we can use for the project:

Availability: this reflects the tendency of people to make certain types of judgments or assessments, such as estimates of frequency, by assessing how easily the relevant state is brought to mind. We can exploit this in the system, because we provide our users with statistics.

Overconfidence: people are often biased in their confidence in the hypotheses they hold in working memory, believing they are correct more often than they actually are, reflecting a more general tendency towards overconfidence in metacognitive processes. For example, when asked to report how far we ran during a jog, we will most likely overestimate ourselves.

Planning bias: simply put, humans are bad at planning for the future. We are, in other words, cognitively blind to unexpected delaying effects.
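As a hypothetical illustration of how the overconfidence bias could be countered (the function name, the 0.5 km threshold, and the feedback messages are our own assumptions, not from any cited source), the system could compare a user's self-reported distance with what it actually logged and report the gap:

```python
# Hypothetical sketch: make the overconfidence bias visible by comparing
# the user's self-reported activity with the distance the system logged.

def report_overconfidence(self_reported_km: float, measured_km: float) -> str:
    """Return a short feedback message comparing the user's estimate
    with the distance actually logged by the system."""
    gap = self_reported_km - measured_km
    if gap > 0.5:  # overestimated by more than half a kilometre
        return (f"You estimated {self_reported_km:.1f} km, but we logged "
                f"{measured_km:.1f} km. People tend to overestimate!")
    return f"Good estimate: we logged {measured_km:.1f} km."

print(report_overconfidence(6.0, 4.2))  # overestimate case
print(report_overconfidence(5.0, 4.8))  # accurate case
```

The exact threshold and wording would of course need tuning with real users; the point is only that the system holds objective data the user's memory does not.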


Loneliness: Eyssel, F., & Reich, N. (2013, March). Loneliness makes the heart grow fonder (of robots)—On the effects of loneliness on psychological anthropomorphism. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 121-122). IEEE.

Increasing Motivation using Robots: van Minkelen, P., Gruson, C., van Hees, P., Willems, M., de Wit, J., Aarts, R., ... & Vogt, P. (2020, March). Using self-determination theory in social robots to increase motivation in L2 word learning. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 369-377).

Questions regarding the innovation

Why is our (AI) avatar more effective than an application that gives ‘normal’ notifications? A social robot, either in the form of a humanoid or just an application (i.e. a more ‘real’ way of interacting with the system), has several advantages that fit our purpose. First of all, an article by Andrea Deublein and Birgit Lugrin shows that social systems outperform tablets in terms of social perception and the overall evaluation of the interaction. Usability and user-friendliness are hereby a key priority in the design. Another article states that elderly people rated a social-robot front end to an Assistant Recommender System as less complex and reported a stronger willingness to use the system at home. So an AI avatar can be perceived as a buddy and can help the user better than a tablet alone. Also, we can clearly see (and could verify this with a survey) that people tend to base their conclusions on ‘a few days of feeling the same’, and that keeping a logbook yourself often falls out of the daily routine within a short time.
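To sketch the logbook point (purely illustrative; the data fields, value ranges, and names are assumptions on our part, not a committed design), the avatar could keep a simple well-being diary automatically, so that conclusions rest on statistics rather than on ‘a few days of feeling the same’:

```python
# Hypothetical sketch: an automatic well-being logbook the avatar keeps
# for the user, summarized as simple statistics.
from dataclasses import dataclass
from statistics import mean

@dataclass
class DayEntry:
    date: str
    mood: int            # 1 (low) .. 5 (high), from a one-tap prompt
    active_minutes: int  # logged from phone/wearable sensors

def weekly_summary(entries: list[DayEntry]) -> str:
    """Summarize the logged entries as simple averages."""
    return (f"Average mood: {mean(e.mood for e in entries):.1f}/5, "
            f"average activity: {mean(e.active_minutes for e in entries):.0f} min/day")

week = [DayEntry("Mon", 3, 20), DayEntry("Tue", 4, 35), DayEntry("Wed", 2, 10)]
print(weekly_summary(week))
```

The design idea here is that the user only answers a one-tap mood prompt while the activity data is collected passively, so the diary does not depend on the user's discipline.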

What kind of data will it extract? (What are the possible privacy threats?)

What exactly is the MAIN goal of the system? (e.g. a healthier lifestyle? Getting motivated to eat healthily or to exercise?)

What will the system MAINLY do? (Get data and perform stats on it, make personal preferences, exercise/food-related? What KIND of statistics?)

What will the system cost and who will pay for it? (7k whoopwhoop, jk)

Will the user experience cognitive deception when they are motivated by a system rather than by humans, and why (not)?