Related Literature Group 4, design

Excerpts & citations

Gender

  • Females chose to gender-match and to interact with a more realistic VA. Males exhibited little preference for either gender, and a greater preference than females for realistic VAs. Thus, where it is not feasible to gender-match in SSCO, the recommendation is to implement a realistic female VA. [1]
  • A young and attractive female agent positively impacts interest in learning, whereas an older and unattractive male agent does not impact motivation. [2]
  • As research suggests that the combination of an agent’s gender and personality can play an important role in user perceptions and expectations [1, 37], employing a male agent instead may result in some significant differences in user perceptions or ratings of the agent. We encourage future work to investigate how gender and personality of a workplace productivity agent might influence user experience. [3]
  • Participants prefer same-gender agents when they are asked to choose their preferred agent as presenter for a multimedia slideshow. [2]
  • We found that the designed characteristics of VAs affect some aspects of user impressions (i.e. personality and trustworthiness) of the VA, while other impressions are not affected (i.e. social ability). We also found that gender matching between the agent and the user affects user impressions. // Gender similarity --> more trustworthy [4]

Human-likeness

  • In this study, it was noted that PIAs’ human-like features could influence the manner in which participants (i.e. males or females) related to a particular PIA and participants’ preference level of a particular PIA. The Fisher’s exact test conducted in this study (Table V) found no statistically significant association between participants’ gender (male or female) and their preferred gender of PIAs. It is concluded that the gender of participants (i.e. male or female) has no role (impact) on their preferences regarding the type, features, or gender of PIA, nor on their preference level of PIA. [5] // See the sketch after this list for how such a test is typically run.
  • On the other hand, previous studies have also shown that developing overly humanized agents results in high expectations and uncanny feelings. [6]
  • Many participants saw the human-like appearance of the VA prototype as setting the wrong expectations in terms of its capabilities, and they were disappointed when the agent’s intelligence only extended to responding to their emotions and prompting more self-reflection. (…) We believe incorporating human-like qualities and emotional intelligence into future agents to be worthwhile; however, intelligence should also extend into other aspects of the agent’s capabilities in order to better help users be as efficient as possible in achieving their work goals. [3]
  • A study by Go and Sundar [7] states that revealing the identity of a chatbot as non-human can have a positive effect: users will have lower expectations about the conversation and will be impressed when the agent shows human-like behaviour. Furthermore, they emphasize the importance of the conversational style between a human and a computer. When the dialogue resembles that of an actual human, perceived feelings of social presence and homophily will increase, leading to more positive attitudes towards the agent (and in turn potential desired behavioural consequences).
  • The presence of a representation produced more positive social interactions than not having a representation. [8]
    • human-like representations with higher realism produced more positive social interactions than representations with lower realism; however, this effect was only found when subjective measures were used. Behavioral measures did not reveal a significant difference between representations of low and high realism.
      • the difference we found may also be driven by demand characteristics. Participants interacting with an animated character (as opposed to a photograph) may suppose that the researcher is expecting a high appraisal.
    • While the presence of a face is better than no face at all, the quality of the face matters much less.
      • It is quite possible that animating highly realistic faces inherently allows for residual attributes of the faces that are negative—for example, making 3D human faces may produce gestures and animations that appear unnatural or disturbing.
    • While most studies have found that interface agents have positive effects on task performance, these effects are overall actually quite small.
  • In addition to understanding human social behavior around computers, another extensive line of work examines humans’ responses to computers with more expressive, human-like qualities, such as faces and facial expressions. In general, these studies have found that anthropomorphic properties of computers influence users’ perceptions (e.g., [13, 70, 86]), attitude (e.g., [13]), and behavior around such systems (e.g., [48, 86, 105]). For example, Sproull et al. [86] report that people respond to a text-based interface differently than to a talking face. On the one hand, users are aroused more and present themselves more positively when interacting with a talking face. On the other hand, users are less relaxed or assured when interacting with a talking face. Another study shows that users find the interface with the anthropomorphic qualities—faces and facial expressions—more likeable and engaging, although such an interface takes the users’ effort to interpret the meaning of the human-like expressions and may even be a distraction [48, 86]. More recent studies show that human-like features with higher realism elicit more positive social interactions while having no significant impact on user task performance [105]. Furthermore, a study reveals that anthropomorphism may even elicit user objections due to users’ own biases (e.g., sexism) [70]. [9]
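
The Fisher’s exact test in the Mabanza (2019) excerpt above checks whether two categorical variables (here, participant gender and preferred PIA gender) are associated. Below is a minimal sketch of how such a test could be run in Python; the contingency-table counts are hypothetical and not taken from the paper.

  # Hypothetical sketch of a Fisher's exact test on participant gender vs. preferred PIA gender.
  # The counts below are invented for illustration; they are NOT the study's data.
  from scipy.stats import fisher_exact

  # Rows: participant gender (male, female)
  # Columns: preferred PIA gender (male PIA, female PIA)
  table = [
      [12, 18],  # male participants
      [14, 16],  # female participants
  ]

  odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
  print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

  # A p-value above the usual 0.05 threshold would mirror the paper's conclusion that
  # participant gender and preferred PIA gender are not significantly associated.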

References

  1. Payne, J., Szymkowiak, A., Robertson, P., & Johnson, G. (2013). Gendering the machine: Preferred virtual assistant gender and realism in self-service. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8108 LNAI, 106–115. https://doi.org/10.1007/978-3-642-40415-3_9
  2. Shiban, Y., Schelhorn, I., Jobst, V., Hörnlein, A., Puppe, F., Pauli, P., & Mühlberger, A. (2015). The appearance effect: Influences of virtual agent features on performance and motivation. Computers in Human Behavior, 49, 5–11. https://doi.org/10.1016/j.chb.2015.01.077
  3. Grover, T., Rowan, K., Suh, J., McDuff, D., & Czerwinski, M. (2020). Design and evaluation of intelligent agent prototypes for assistance with focus and productivity at work. International Conference on Intelligent User Interfaces, Proceedings IUI, 20, 390–400. https://doi.org/10.1145/3377325.3377507
  4. Akbar, F., Grover, T., Mark, G., & Zhou, M. X. (2018, March 5). The effects of virtual agents’ characteristics on user impressions and language use. International Conference on Intelligent User Interfaces, Proceedings IUI. https://doi.org/10.1145/3180308.3180365
  5. Mabanza, N. (2019). Gender influences on preference of pedagogical interface agents. 2018 International Conference on Intelligent and Innovative Computing Applications, ICONIC 2018. https://doi.org/10.1109/ICONIC.2018.8601292
  6. Chaves, A. P., & Gerosa, M. A. (2021). How Should My Chatbot Interact? A Survey on Social Characteristics in Human–Chatbot Interaction Design. International Journal of Human–Computer Interaction, 37(8), 729–758. https://doi.org/10.1080/10447318.2020.1841438
  7. Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. https://doi.org/10.1016/j.chb.2019.01.020
  8. Yee, N., Bailenson, J. N., & Rickertsen, K. (2007). A Meta-Analysis of the Impact of the Inclusion and Realism of Human-Like Faces on User Experiences in Interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2007).
  9. Zhou, M. X., Yang, H., Mark, G., & Li, J. (2019). Trusting Virtual Agents: The Effect of Personality. ACM Transactions on Interactive Intelligent Systems, 9(3). https://doi.org/10.1145/3232077