PRE2023 3 Group4
This study was approved by the ERB on Sunday 03/03/2024 (number ERB2024IEIS22).
 
Below, links are listed to important documents that were used for this study:
 
* ERB form: ERB form.pdf
* Research proposal: Research proposal.pdf
* Consent form: Consent Form.pdf
* Research protocol: Research protocol.pdf
 
== Group members ==
{| class="wikitable"
!Name
!Student number
!Current study program
!Role or responsibility
|-
|Margit de Ruiter
|1627805
|BPT
|Note-taker
|-
|Danique Klomp
|1575740
|BPT
|Contact person
|-
|Emma Pagen
|1889907
|BAP
|End responsible Wiki update
|-
|Liandra Disse
|1529641
|BPT
|Planner
|-
|Isha Rakhan
|1653997
|BPT
|Programming responsible
|-
|Naomi Han
|0986672
|BCS
|Programming responsible
|}




== Research proposal ==
'''<big>Project team: PRE2023 3 Group 4</big>'''
'''Researchers:'''
* ''Liandra Disse - 1529641''
* ''Margit de Ruiter - 1627805''
* ''Danique Klomp - 1575740''
* ''Isha Rakhan - 1653997''
* ''Emma Pagen - 1889907''
* ''Naomi Han - 0986672''
'''Study name'''
The influence of congruent emotion displayed with an emotional message on students’ acceptance of a social robot.
'''Study description'''
The study aims to research the reactions of students when interacting with robots. More specifically, the research question that will be studied is “To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?”. We expect that participants will prefer interacting with the robot when it displays the emotion that fits the content of its message, and that they will be open to more future interactions like this with the robot. On the other hand, we expect that a mismatch between the emotion displayed by the robot and the story it is telling will make participants feel less comfortable and therefore less accepting of the robot. The main focus of this research is thus on how accepting the students are of the robot after interacting with it, but we will also try to gain insight into potential underlying reasons, such as the amount of trust the students have in the robot and how comfortable they feel when interacting with it. The results could provide insight into whether robots could be used on university campuses as assistant robots.
'''Participants'''
As mentioned before, the study investigates the viewpoint of students, and therefore the participants will be gathered from the TU/e. We have chosen to target this specific group because of their generally higher openness to social robots and the increased likelihood that this group will deal a lot with social robots in the near future ​<ref name=":11">Manzi, F., Sorgente, A., Massaro, D., Villani, D., Di Lernia, D., Malighetti, C., Gaggioli, A., Rossignoli, D., Sandini, G., Sciutti, A., Rea, F., Maggioni, M. A., Marchetti, A., & Riva, G. (2021). Emerging Adults’ Expectations about the Next Generation of Robots: Exploring Robotic Needs through a Latent Profile Analysis. ''Cyberpsychology, Behavior, and Social Networking'', ''24''(5), 315–323. <nowiki>https://doi.org/10.1089/CYBER.2020.0161</nowiki> </ref>​. Therefore, we expect that this is an interesting user group during the beginning of the implementation stage of social robots that we are currently in ​<ref name=":11" />​. We aim to gather around 10 participants from our own circle of fellow students, preferably with an equal gender distribution.
'''Method'''
The study design consists of a 2 (positive/negative emotional message) x 3 (happy/neutral/sad emotion displayed by robot) within-subject experiment. Each participant will thus be told two stories, three times each, as can be seen in Table 1. This results in six conditions that differ in terms of the match between the content of the story and the emotion displayed by the robot.
{| class="wikitable"
|+Table 1: The six conditions in the experiment
!Story / displayed emotion
!Happy
!Neutral
!Sad
|-
|Positive
|Congruent
|Emotionless
|Incongruent
|-
|Negative
|Incongruent
|Emotionless
|Congruent
|}
For this experiment, the robot Pepper will be used. Pepper will be programmed to display happiness and sadness based on pitch, body posture and LED eye color. The behavior that Pepper will display is shown in Table 2 and Figure 1, based on the research of Bishop et al. <ref name=":12">Bishop, L., Van Maris, A., Dogramadzi, S., & Zook, N. (2019). Social robots: The influence of human and robot characteristics on acceptance. ''Paladyn'', ''10''(1), 346–358. <nowiki>https://doi.org/10.1515/PJBR-2019-0028/MACHINEREADABLECITATION/RIS</nowiki> </ref> and Van Otterdijk et al.<ref name=":13" />. Facial expressions cannot be used, since the morphology of Pepper does not allow for it.
{| class="wikitable"
|+Table 2: Pepper's behavior per emotional condition<ref name=":12" />
!
!Happy
!Neutral
!Sad
|-
|Pitch of voice
|High pitch
|Average of happy and sad condition
|Low pitch
|-
|Body posture
|Raised chin, extreme movements, upwards arms
|Average of happy and sad condition
|Lowered chin, small movements, hanging arms
|-
|LED eye color
|Yellow
|White
|Light-blue
|}
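To make the mapping in Table 2 concrete, below is a minimal sketch of how the three conditions could be implemented with the NAOqi Python SDK (Python 2.7) that Pepper ships with. The module names (ALTextToSpeech, ALLeds, ALMotion) are part of the public NAOqi API, but the robot address, pitch percentages, LED colors and joint angles are illustrative assumptions, not the exact values used in this study.
<syntaxhighlight lang="python">
# -*- coding: utf-8 -*-
# Sketch of mapping the Table 2 conditions onto NAOqi modules (values are assumptions).
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559  # placeholder address of the robot

CONDITIONS = {
    # vct: inline pitch tag for the TTS engine (100 = default voice pitch)
    "happy":   {"vct": 120, "eyes": 0x00FFFF00, "head_pitch": -0.2},  # yellow eyes, chin up
    "neutral": {"vct": 100, "eyes": 0x00FFFFFF, "head_pitch": 0.0},   # white eyes
    "sad":     {"vct": 80,  "eyes": 0x0087CEEB, "head_pitch": 0.3},   # light-blue eyes, chin down
}

def tell_story(condition, story_text):
    cond = CONDITIONS[condition]
    tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)
    leds = ALProxy("ALLeds", PEPPER_IP, PORT)
    motion = ALProxy("ALMotion", PEPPER_IP, PORT)  # assumes the robot is awake (stiffness on)
    leds.fadeRGB("FaceLeds", cond["eyes"], 1.0)                 # fade eye LEDs to the condition color
    motion.setAngles(["HeadPitch"], [cond["head_pitch"]], 0.1)  # raise or lower the chin
    tts.say("\\vct=%d\\ %s" % (cond["vct"], story_text))        # speak with shifted pitch

tell_story("happy", "When Arctic gold miners were working on their base...")
</syntaxhighlight>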
[[File:Pepper examples.png|thumb|''Figure 1: Example of Pepper's behavior for (a) happy, (b) neutral, and (c) sad condition of displayed emotion'' ​<ref name=":12" />]]
The aforementioned research question will be investigated using a qualitative approach. First of all, participants will be asked to read and fill in the consent form. Next, they will complete a short survey about their demographic information, including age, gender, study program and general acceptance of robots. Then, the experiment will start in small groups of participants. After the participants have listened to all three versions of the same, either positive or negative, story, they will be asked to engage in an interview with one of the researchers. During the interviews, an audio recording will be made, which will later be transcribed and depersonalized. After the completion of this interview, the participants will listen to the three versions of the other story and once again take part in a similar interview with the researcher. To prevent order bias, participants will be randomly assigned to hear either the positive or the negative story first. The study will end with thanking the participants, answering potential questions and explaining the aim of the study to them. The experiment will take about an hour in total.
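The random assignment of story order could be done with a small helper like the sketch below (an illustrative assumption, not part of the study materials): half of the participants hear the positive story first, the other half the negative story first.
<syntaxhighlight lang="python">
import random

def assign_story_orders(participant_ids):
    """Counterbalance story order: half of the participants hear the
    positive story first, the other half the negative story first."""
    ids = list(participant_ids)
    random.shuffle(ids)  # random assignment to the two order groups
    half = len(ids) // 2
    return {pid: ("positive", "negative") if i < half
                 else ("negative", "positive")
            for i, pid in enumerate(ids)}

print(assign_story_orders(["P01", "P02", "P03", "P04"]))
</syntaxhighlight>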
The in-between interview will be semi-structured (see the Survey and interview questions section). The interview questions will be based on literature and careful consideration by the researchers. The questions will target the participants' opinions about how they felt during the interaction with Pepper, which version of the story they liked best and why, and how they feel about having a Pepper robot on campus. The answers to the interviews will be qualitatively evaluated by performing a thematic analysis.
'''Materials'''[[File:Positive story.png|thumb|''Figure 2: Picture to accompany the positive story<ref>​''Tribune''. (2018, January). Retrieved from Berlin mourns sudden death of month-old polar bear cub: <nowiki>https://tribune.com.pk/story/1599017/berlin-mourns-sudden-death-month-old-polar-bear-cub</nowiki> </ref>'']]In the study, the Pepper robot will be used. Pepper was chosen because it is a well-known robot on which multiple studies have been conducted and which is already being applied in different settings, such as hospitals and customer service. Based on young adults' preferences for robot design, Pepper would also be most useful in student settings, given its human-like shape and ability to engage emotionally with people ​<ref name=":8" />​. The experiment itself will be conducted in one of the robotics labs on the TU/e campus, where the robot Pepper is readily available.
[[File:Negative story.png|thumb|''Figure 3: Picture to accompany the negative story''<ref>​''The Telegraph''. (2012, June). Retrieved from Pictures of the day: <nowiki>https://www.telegraph.co.uk/news/picturegalleries/picturesoftheday/9326418/Pictures-of-the-day-12-June-2012.html</nowiki> </ref>]]
The positive and negative stories that the robot will tell are fictional stories about polar bears, inspired by the study of Bishop et al. ​<ref name=":12" />​. The content of the stories is based on non-fictional internet sources and rewritten to best fit our purpose. We have decided to keep the stories fictional and about animals rather than humans because of the lower risk of doing emotional damage to the participants through feelings elicited by personal circumstances. If possible, we would like to accompany the stories with an image of polar bears displayed on Pepper's screen to make them more visual (see Figures 2 and 3).
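Showing a picture on Pepper's tablet could go through the ALTabletService module of the NAOqi API. The sketch below is an assumption of how this might look; the robot address and the image URL are placeholders for wherever the polar-bear pictures would be hosted.
<syntaxhighlight lang="python">
from naoqi import ALProxy

# ALTabletService loads images from a URL reachable by the tablet
tablet = ALProxy("ALTabletService", "192.168.1.10", 9559)       # placeholder address
tablet.showImage("http://example.com/polar_bear_positive.png")  # placeholder URL
# ... the robot tells the story while the image is shown ...
tablet.hideImage()
</syntaxhighlight>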
Positive story: When Arctic gold miners were working on their base, they were greeted by a surprising guest, a young lost polar bear cub. It did not take long for her to melt the hearts of the miners. As the orphaned cub grew to trust the men, the furry guest soon felt like a friend to the workers on their remote working grounds. Even more surprising, the lovely cub loved to hand out bear hugs. Over the many months that followed, the miners and the cub would create a true friendship. The new furry friend was even named Archie after one of the researcher’s children. When the contract of the gold miners came to an end, the polar bear cub would not leave their side, so the miners decided to arrange a deal with a sanctuary in Moscow, where the polar bear cub would be able to live a happy life in a place where its new-found friends would come to visit every day.
This story is an adaptation of a news article <ref>Cole, J. (2021, April). ''Good News Network''. Retrieved from Orphaned Polar Bear That Loved to Hug Arctic Workers Gets New Life: <nowiki>https://www.goodnewsnetwork.org/orphaned-polar-bear-rescued-russian-arctic/</nowiki> </ref>.
Negative story: While shooting a nature documentary on the Arctic Ocean island chain of Svalbard, researchers encountered a polar bear family of a mother and two cubs. During the mother's increasingly desperate search for scarce food, the starving family was forced to use precious energy swimming between rocky islands due to melting sea ice. This mother and her cubs should have been hunting on the ice, even broken ice, but they were in water that was open for as far as the eye could see. The weaker cub labored to keep up, strained to pull itself ashore and then struggled up the rock face. The exhausted cub panicked after losing sight of its mother, and its screaming could be heard from across the water. That is the reality of the world they live in today. Seeing this family, with the cub struggling through no fault of its own, is extremely heartbreaking.
This story is also an adaptation of a news article <ref>Alexander, B. (n.d.). ''USA Today Entertainment''. Retrieved from Polar bear cub's agonizing struggle in Netflix's 'Our Planet II' is telling 'heartbreaker': <nowiki>https://eu.usatoday.com/story/entertainment/tv/2023/06/15/netflix-our-planet-2-polar-bear/70296362007/</nowiki> </ref>.
'''Feasibility'''  
We believe the study is feasible in the time we have available. Since this is qualitative research, not many participants are needed to perform the study. Realistically, we want to have 10 participants, which comes down to each member of our research team finding about two participants. Moreover, the interaction with Pepper will be done in small groups of about five participants at the same time to limit the total duration of the study. The research will be done with the Pepper robot in a laboratory, both of which are available at TU/e. Our research team consists of somewhat experienced programmers who have worked with other robots, such as Misty, before, and if needed, experts are available to provide support with the programming of Pepper. We may also be able to receive the code for Pepper's emotional behavior that was used in a similar study ​<ref name=":13">Van Otterdijk, M. I. (2021, July). ''Preferences of Seniors for Robots Delivering a Message With Congruent Approaching Behavior''. Retrieved from ResearchGate: <nowiki>https://www.researchgate.net/publication/354699157_Preferences_of_Seniors_for_Robots_Delivering_a_Message_With_Congruent_Approaching_Behavior</nowiki> </ref>.
'''Societal importance/application context'''  
Human-robot interaction (HRI) is a rapidly growing field. In many sectors, such as healthcare and education, social robots must be able to communicate with people in ways that are natural and easily understood. In order to make this human-robot interaction feel natural and enjoyable for humans, robots must make use of human social norms ​<ref name=":2" />​. Therefore, it is important to gain knowledge on how humans react to robots showing emotions, which is what will be studied in this project.
The aim of this study is to provide insight into whether a robot such as Pepper could be used on campus as an assistant, either as a tutor or for more general questions about the campus. This research provides a first step by investigating how comfortable students feel around the Pepper robot, and whether they would want such a robot on campus in the first place. Based on the results, more research can be done into how exactly the robot can be used in student settings.  


== Research protocol ==
General preparation:
* Book the interview rooms (5 total). This can only be done 1 week in advance.  
* Book the lab and make sure the robot Pepper is present.  
* Create a paper overview for the programming responsible of the order in which the stories will be presented to the participants.
* Create a paper overview of only the interview questions (no category names), leaving space for notes.
* Rehearse the procedure and make sure that everything is clear to the researchers.  
Experiment preparation (24h in advance):
* Print the signed consent form
Note: the researchers' signature has to be present on the consent form; this is also the signature that the text above refers to. The participants will read and sign the consent form when the experiment starts.
* Print out the contact information on separate papers to hand out to the participants after the experiment.
* Make sure recording software is present on everyone's laptop/phone; download it if necessary.
Lab preparation before participant enters (30-45 minutes before participants arrive):
* Get the key to the experiment room
* Prepare the experiment room:
** Set out the chairs, turn on the lights and remove anything that we will not use.
*** Chairs need to be in the following position:
[[File:Chairs setup.png|left|frameless]]
* Connect the robot and test the protocol.
* Move the robot to a suitable location in terms of lighting and visibility.
** See the above configuration of chairs and robot.
* Make sure the consent forms and pens are present.
* Open the survey on the laptops.
* Make sure cups are present so the participants can get some water.
* Prepare the interview rooms (arrange them in a comfortable setting).
* Make sure the participant's chair is directly across from the interviewer's.
* See the setup below:
[[File:Chairs setup two.png|frameless]]
The participant will be seated in front of the interviewer, not next to them or close by.
* Make sure the audio recorder works. Test for artifacts and background noise.
* Prepare the laptop with the interview questions on screen.
* Make sure there is water present to give to the participants.  
Preparation of participants and start of the experiment (when participants arrive):
* Let the participants into the lab and greet them briefly as they enter.
* When all the participants have arrived, you can use the below text as a welcome word:
''Welcome to our experiment. We hope that you are comfortable. We will shortly start with the experiment itself, but first we would like you to read and sign the consent form and answer the questions in the online survey that is opened on the laptop''.
{wait for the participants to fill out the demographics questionnaire and consent form. The consent form will be printed on paper and presented to the participant, and the LimeSurvey demographic survey will be opened on a laptop}
''Thank you for filling out the form. I will now explain what we will be doing. The experiment itself will consist of you watching the robot tell you three iterations of the same story. After these three iterations you will go with one of us to do a short interview of about 10 minutes about the interaction and the robot. Afterwards you will come back to the lab and will see three iterations of a new story. Again, you will go with one of us to do a short interview of about 10 minutes about the new interaction and robot. After the last interview you will receive some closing remarks and the experiment will be done.''
''If something is unclear, please do not hesitate to ask any of us questions during the experiment. If you no longer want to participate in the experiment, you are allowed to leave at any moment without giving any reason. This will not have any consequences for you. The data that you have provided up until the moment of your withdrawal will be used. In any other case your data will be stored until the 7th of June.''
''Are there any questions at the moment? If there are no further questions, we will start the experiment.''
During the experiment (participant interaction with the robot and interview):
* When the participants have been briefed and there are no further questions, the robot protocol begins. Two researchers will be present in the experiment room.
* Manually select either the positive or the negative protocol (see the control-script sketch after this list).
** Experiment group 1 (positive – negative story):
*** Positive: neutral - congruent - incongruent
*** Negative: neutral - congruent - incongruent
** Experiment group 2 (negative – positive story):
*** Negative: neutral - congruent - incongruent
*** Positive: neutral - congruent - incongruent
* After the three positive/negative stories have been presented, the participants will be divided among the experimenters for the interview. The assigned researcher will lead the participant into the private room.
* When the participant enters the room, they will be asked to take a seat and make themselves comfortable.
* The interview will start with the following text:
''You have now watched three iterations of the robot telling a story. During each iteration the robot had a different character. We will now ask you some questions about the experience you had with the robot. We would like to emphasize that there are no right or wrong answers. If there is a question that you would not like to answer, we will skip it.''
* After the participant has briefly been given the necessary information, the interview will start using the following questions (please do not name the categories of the questions; these are for researchers' eyes only).
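As referenced above, a minimal sketch of what the manual protocol selection could look like on the control laptop follows (an assumption, not the study's actual control script; Python 2.7 style, matching the legacy naoqi SDK, and reusing the tell_story helper sketched under the research proposal):
<syntaxhighlight lang="python">
# -*- coding: utf-8 -*-
# Fixed iteration order per story: neutral, congruent, incongruent (see Table 1).
ORDERS = {
    "positive": ["neutral", "happy", "sad"],  # congruent = happy, incongruent = sad
    "negative": ["neutral", "sad", "happy"],  # congruent = sad, incongruent = happy
}

def run_protocol(story, story_texts, tell_story):
    """story: 'positive' or 'negative'; story_texts: story name -> text;
    tell_story: e.g. the helper sketched in the research proposal."""
    for condition in ORDERS[story]:
        # the researcher starts each iteration manually
        raw_input("Press Enter to start the %s iteration..." % condition)
        tell_story(condition, story_texts[story])
</syntaxhighlight>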
After the experiment (when participants are still present):
* After the second interview is concluded, the participants will be brought back to the experiment room.
* The participants will receive the following debriefing:
''Thank you for participating in our experiment. The purpose of the experiment was to gain an in-depth understanding of how the match and mismatch of Pepper's emotion with the story it told you influenced your attitude towards Pepper.''
''We want to emphasize that you can still withdraw from the study up to 24 hours after today, which means {date of tomorrow}. When you do, you are not required to give any reason. You can contact us through the email address that will be handed to you.''
{give note with email of the contact person on it}.  
''If you are interested in the results of the research or if you have any questions in the days after the experiment, you are free to contact the same email address with your concerns, questions or remarks.  ''
''If there are no questions left, we would like to thank you again for your participation and time.''
* Answer all the questions that participants have and let participants leave the experiment room.
After the experiment (when the participants have left):
* Check and collect all signed consent forms  
* Check if LimeSurvey surveys are correctly saved
* Clean up the experimental setting, move everything back
* Disconnect laptop and shut down Pepper
* Turn lights off, close door and hand in key to lab coordinator




== Introduction to the course and project ==

=== Problem statement ===

Modern media is filled with images of highly sophisticated robots that speak, move and behave like humans would. The many movies, plays and books that are created speculate that these types of robots will integrate into our daily lives in the near future. The idea of robots becoming increasingly like humans is thus ingrained in our thinking. However, modern technology has not yet been able to catch up to this futuristic idea of what an artificial agent, like a robot, is able to do. This delay mainly comes from the lack of knowledge on how to replicate the behavior of humans in the hardware and programming of artificial agents. One of the main areas of growing interest is the implementation of emotions in robots and other artificial agents. Human emotions are not easy to replicate, as they consist of many different factors. The research presented in this wiki will also focus on emotions, but it will look at how these emotions affect the acceptance of the robot. The question that will be answered is:

“To what extent does a match between the displayed emotion of a social robot and the content of the robot’s spoken message influence the acceptance of this robot?"

=== Objectives ===

As a group, we outlined our objectives for this project. Our main objectives are contributing to knowledge about the role of emotions in social robot interactions and extending knowledge on the reliability of the acceptance measurement with a focus on young adults, as the Almere model is yet to be extensively tested on younger adults. In order to achieve these two main objectives, we have some smaller objectives that will guide us towards them. These concern conducting lab research and doing statistical and qualitative data analysis related to social and psychological research. Next to that, we are a multidisciplinary group and aim to work together in such a manner that every single group member is able to bring their own discipline to the table. Finally, properly programming and working with a robot is crucial to achieve our main objectives.

=== Users ===

The users in this research are young adults. They have specific needs and require certain characteristics of the social robot in order to have a pleasant social interaction. In general, these users would like robots to be authentic, imperfect, and active listeners. Active listening helps to build trust between the human and the robot. Also, by listening and showing that it understands the conversation and the emotional state of the person, the robot can adapt its interactions accordingly, which will lead to a more personalized and meaningful interaction. The users require the robot to be easy to understand, and it should have an intuitive interface. As mentioned above, the users like robots that can understand and respond to human emotions in order to have a meaningful interaction.

== Planning ==

Each week, there will be a mentor meeting on Monday morning followed by a group meeting. Another group meeting will be held on Thursday afternoon and by Sunday afternoon the wiki will be updated for work done that week (weekly deliverable).

=== Week 1 ===
* Introduction to the course and team
* Brainstorm to come up with ideas for the project and select one (inform course coordinator)
* Conduct literature review
* Specify problem statement, user group and requirements, objectives, approach, milestones, deliverables and planning for the project

=== Week 2 ===
* Get confirmation for using a robot lab, and which robot
* Ask/get approval for conducting this study
* Create research proposal (methods section of research paper)
* If approval is already given, start creating the survey, programming the robot or creating a video of the robot

=== Week 3 ===
* If needed, discuss final study specifics, including planning the session for conducting the study
* If possible, finalize creating the survey, programming the robot or creating a video of the robot
* Make consent form
* Start finding and informing participants

=== Week 4 ===
* Final arrangements for study set-up (milestone 1)
* Try to start with conducting the study

=== Week 5 ===
* Finish conducting the study (milestone 2)

=== Week 6 ===
* Conduct data analysis
* Finalize methods section, such as including participant demographics, and incorporate feedback
* If possible, start writing results, discussion and conclusion sections

=== Week 7 ===
* Finalize writing results, discussion and conclusion sections and incorporate feedback; all required research paper sections are written (milestone 3)
* Prepare final presentation

=== Week 8 ===
* Give final presentation (milestone 4)
* Finalize wiki (final deliverable)
* Fill in peer review form (final deliverable)

== Individual effort per week ==

=== Week 1 ===
{| class="wikitable"
!Name
!Total hours
!Break-down
|-
|Danique Klomp
|13.5
|Intro lecture (2h), group meeting (2h), group meeting (2h), literary search (4h), writing summary LS (2h), writing problem statement first draft (1.5h)
|-
|Liandra Disse
|13.5
|Intro lecture (2h), group meeting (2h), searching and reading literature (4h), writing summary (2h), group meeting (2h), updating project and meeting planning (1.5h)
|-
|Emma Pagen
|12
|Intro lecture (2h), group meeting (2h), literary search (4h), writing a summary of the literature (2h), writing the approach for the project (1h), updating the wiki (1h)
|-
|Isha Rakhan
|11
|Intro lecture (2h), group meeting (2h), group meeting (2h), collecting literature and summarizing (5h)
|-
|Margit de Ruiter
|13
|Intro lecture (2h), group meeting (2h), literature research (4h), writing summary literature (3h), group meeting (2h)
|-
|Naomi Han
|2
|Group meeting (2h)
|}

=== Week 2 ===
{| class="wikitable"
!Name
!Total hours
!Break-down
|-
|Danique Klomp
|16.5
|Tutor meeting (35min), group meeting 1 (2.5h), group meeting 2 (3h), send/respond to mail (1h), literature interview protocols and summarize (3h), literature on interview questions (6.5h)
|-
|Liandra Disse
|12
|Tutor meeting (35min), group meeting (3h), write research proposal (3h), group meeting (3h), finalize research proposal and create consent form (2.5h)
|-
|Emma Pagen
|11.5
|Tutor meeting (35min), group meeting (3h), write research proposal (2h), group meeting (3h), finalize research proposal and create consent form (1.5h), updating wiki (1.5h)
|-
|Isha Rakhan
|10
|Research on programming (7h), group meeting (3h)
|-
|Margit de Ruiter
|11.5
|Tutor meeting (35min), group meeting (3h), read literature Pepper and summarize (3h), group meeting (3h), research comfort question interview (2h)
|}

=== Week 3 ===
{| class="wikitable"
!Name
!Total hours
!Break-down
|-
|Danique Klomp
|14
|Tutor meeting (35min), group meeting 1 (3h), meeting Task (3h), preparation thematic analysis & protocol (2h), mail and contact (1.5h), meeting Zoe (1h), group meeting (3h)
|-
|Liandra Disse
|12
|Tutor meeting (35min), group meeting 1 (3h), meeting Task (3h), update (meeting) planning (1h), prepare meeting (1h), group meeting (3h), find participant (30min)
|-
|Emma Pagen
|12
|Tutor meeting (35min), group meeting 1 (3h), finish ERB form (1h), create LimeSurvey (1.5h), make an overview of the content sections of final wiki page (1h), group meeting 2 (3h), updating the wiki (2h)
|-
|Isha Rakhan
|12.5
|Tutor meeting (35min), group meeting 1 (3h), meeting Zoe (1h), group meeting (3h), programming (5h)
|-
|Margit de Ruiter
|
|''Was not present this week, but told the group in advance and had a good reason.''
|-
|Naomi Han
|16.5
|Tutor meeting (35min), group meeting 1 (3h), group meeting (3h), writing introduction (4.5h), programming (2.5h + 3h)
|}

== Literary review ==

=== State of the art ===

The use of social robots has increased rapidly over time. Social robots are being developed specifically for interacting with humans and other robots. They use artificial intelligence and are equipped with tools such as sensors, cameras and microphones. These tools enable the robot to interact with humans ​[1]. These robots come in all kinds of shapes and sizes. For example, there are social robots such as Pepper, which looks more humanlike, and robots such as PARO, which is seal-like ​[2]. These types of robots are currently mostly used in service settings ​[3].

As human-robot interactions grow more important, more research is being done on aspects of these interactions, such as acceptance, trust, responsibility and anthropomorphism. This can be done by investigating different properties of the robots, such as voice, appearance and facial expressions. These properties can be programmed into the robot in such a way that people can recognize certain emotions in the robot ​[4]​. For example, a study has been done with the robot Sophia, which was developed to eventually work as a service robot in, for example, healthcare and education. She was given different emotional expressions, and pictures of Sophia were posted on Instagram. The comments on these posts were then analyzed to examine people’s responses to emotions in robots ​[4]​. This is only one example of many similar studies.

While research is still being done on human reactions to social robots, many of these robots are already being used in real-world settings. They are mainly used as companions and support tools for children, but they are also used for providing services such as cleaning ​[1]. Two examples of social robots that are applied in the real world will be given. The first is the robot Misty. This robot is capable of many different facial expressions and can move its arms and head. Moreover, it has face and speech recognition to remember people and recognize intents ​[1] [5]. Another example is the robot Pepper. This robot has a more humanoid appearance than Misty and is more advanced in its movements. Pepper is also capable of perceiving human emotions and adapting its behavior appropriately. The robot is mostly used in companies and schools, such as Palomar College, where it is used to highlight and promote programs and services at the college. The students are able to ask it questions, such as “How do I get to my class?” ​[6]​.

=== How do students interact with robots? ===

Students are an important user group for robots, since robots can be helpful educational tools. They could help students grasp difficult concepts, and they can be especially useful in providing language, science and technology education. A robot could take on the role of a peer, a tool or a tutor in the learning activity [7]. Also, students can learn a lot from interacting with robots. Building teamwork and improving communication skills are just some examples of the multiple benefits of using robotics in education [8]. However, the implicit and multi-faceted impacts that this might bring into educational environments as a whole should be considered [9]. Another important aspect to stress is that exposure to robots at a relatively young age prepares students for the future, as it is likely that they will encounter robots in multiple industries. By familiarizing themselves with robots at an early stage, they will gain knowledge and benefit from this in their future careers.

Students are an important target group for robots because they represent the future workforce and future innovations. Understanding the needs of students is therefore important, since it can help developers design robots that are engaging, user-friendly and educational. Teens have a desire for robots to be authentic, imperfect, and active listeners [10].

In former research, a field study was conducted with qualitative interviews. The results showed a positive perception of the robot-supported learning environment, indicating a positive impact on the learning outcomes. Most students saw additional value in the presence of the robot compared to a traditional on-screen scenario or self-study, and the robot increased their motivation, concentration and attention [11].

Students also prefer robots that are easy to use and understand, with an intuitive interface and clear instructions. Apart from this, students like robots that can perform a wide range of activities and tasks, providing challenges and opportunities over time. They also like robots to be adaptable. Finally, robots that can interact socially are interesting to students: they like robots that can understand and respond to human emotions, speech and gestures in order to have meaningful interactions and relationships.

=== The importance of social robots being able to display emotions ===

In many sectors, such as healthcare and education, social robots must be able to communicate with people in ways that are natural and easily understood. In order to make this human-robot interaction (HRI) feel natural and enjoyable for humans, robots must make use of human social norms [12]. This requirement originates from humans anthropomorphizing robots, meaning that we attribute human characteristics to robots and engage and form relationships with them as if they were human [12][13]. We use this to make the robot’s behavior familiar, understandable and predictable to us, and to infer the robot’s mental state. However, for this to be a correct as well as intuitive inference, the robot’s behavior must be aligned with our social expectations and interpretations of mental states [13].

One very important integrated element in human communication is the use of nonverbal expressions of emotions, such as facial expressions, gaze, body posture, gestures and actions [12][13]. In human-to-human interaction as well as human-robot interaction, these nonverbal cues support and add meaning to verbal communication, and expressions of emotions specifically help build deeper and more meaningful relations, facilitate engagement and co-create experiences [4]. Besides adding conversational content, it has also been shown that humans can unconsciously mimic the emotional expression of a conversational partner, known as emotional contagion, which helps to empathize with others by simulating their feelings [4][12]. Due to our tendency to anthropomorphize robots, it is possible that emotional contagion also occurs during HRI and can help users feel positive affect while interacting with a social robot [4].

Artificial emotions can be used in social robots to facilitate believable HRI, but also to provide feedback to the user about the robot’s internal state, goals and intentions [14]. Moreover, they can act as a control system through which we learn what drives the robot’s behavior and how it is affected by, and adapts to, different factors over time [14]. Finally, the ability of social robots to display emotions is crucial in forming long-term social relationships, which is what people will naturally seek due to the anthropomorphic nature of social robots [12].

=== Measurements in HCI research ===

In the past, HCI (Human-Computer Interaction) and HTI (Human-Technology Interaction) research focused on improving the technological aspects of the interaction, but in recent years interest has increasingly shifted to user experience. User experience is still a broad area of research, yet in social robotics it has become increasingly relevant. User experience has many distinct aspects that are all part of the overall experience, yet its basis lies in a comfortable interaction with an agent. Making the interaction comfortable and likeable will create a sense of trust and eventually acceptance of the agent.

The three factors mentioned above are all connected to each other. Specifically, trust and acceptance are linked. A meta-analysis of acceptance in service robots by Wagner Ladeira et al. [15] described trust as a mediating factor between informational cues and acceptance of the service agent. However, there are also many contextual factors that play a role in this relationship. For example, acceptance of and trust in agents seem to be lower when the user is in a group than when the user uses the robot individually [16].

To find these relationships and correlations between the many varied factors influencing the overall user experience, there need to be reliable measures for the presence, extent and underlying principles of these factors. Yet one of the main challenges in HCI research is creating a reliable measurement. This challenge is present in most HCI domains, for example speech interfaces [17], user engagement [18] and online trust [19]. The main reason for the lack of reliable and valid measures in HCI research is that these measures are only needed in user experience research, which, as stated before, is relatively new.

Still, there have been several attempts to create reliable measures for artificial agent acceptance and trust. First of all, one of the more well-known measures of acceptance is the Almere model proposed by Heerink et al. [20]. This measure consists of a questionnaire that covers twelve basic principles, ranging from induced anxiety to enjoyment and trust. The questionnaire itself consists of 41 questions. This model has an acceptable Cronbach’s alpha score of ~0.7 when it is used in the older adult and elderly population [20][21]. When the measure is used for young adults, the reliability stays around the same [22]. Although this is the measurement that was also proposed in the article that inspired this research, there are many more measurements of acceptability and attitude towards robots [23].

Measuring trust is harder, as there is no single proposed method to measure it, but this does not mean it cannot be measured. In a literature review of several papers measuring trust, it was found that questionnaires are one of the most used methods [24]. Unfortunately, the questionnaires used are not one standard set of questions, but a variety, which creates the added problem that studies are hard to compare to each other. As stated before, trust is connected to acceptance. In the Almere model discussed in the previous paragraph, trust is included as a basic factor. However, it is only measured using two statements: “I would trust the robot if it gave me advice” and “I would follow the advice the robot gives me”. When trust is measured on its own, this will need to be extended. Luckily, some measures have been proposed. One of these comes from a review of several measurements of trust that were combined by Madsen and Gregor [25]. In the same paper they proposed a questionnaire that consists of five distinct factors, each with five questions related to them. The overall Cronbach’s alpha of this measurement was found to be 0.85, which is a good score.
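For reference, Cronbach's alpha, the reliability score mentioned above, is straightforward to compute from raw questionnaire data. The snippet below is a small illustration with toy data, not study code.
<syntaxhighlight lang="python">
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    for an (n_respondents x k_items) matrix of questionnaire scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()  # per-item sample variances
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of each respondent's total
    return k / (k - 1.0) * (1.0 - item_variances / total_variance)

# toy data: 4 respondents answering 3 Likert items
print(cronbach_alpha([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]))
</syntaxhighlight>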

The constructs measured above are not the only constructs that can be measured in human-computer interactions, but they are the most prevalent in recent HCI and HTI research. They give a good overview of the overall concepts in HCI research, as they contain most of the basic principles in their foundation. This means that these measures are a good starting point for more in-depth research.

=== How to display emotions as a robot ===

A robot can display emotions by combining body, facial and vocal expressions.

The way such an emotional reaction is expressed highly depends on the robot’s degree of anthropomorphism. For robots with a simple appearance, it may be sufficient to express emotions by means of, for example, lights or sounds. However, as the degree of anthropomorphism increases, it becomes necessary to match the robot's behavior with its appearance to avoid falling into the uncanny valley [26].

The idea behind the uncanny valley proposes that as robots approach a more human-like appearance, people can experience a feeling of uneasiness or disturbance [27]. These experiences also occur when the robot’s user perceives a mismatch between the robot’s appearance and behavior. There are also differences in the way that the uncanny valley is perceived across ages and cultures, as Eastern countries and children are less likely to be disturbed by this phenomenon [26].

Developers of humanoid robots found that, next to body posture, hands also play a role in conveying emotions, as human hands contribute to the human ability of emotional expression. These developers created the emotion expression humanoid robot WE-4RII, which integrates robot hands. This humanoid robot was eventually able to express emotion using facial expression, arms, hands, waist and neck motion. They also concluded that motion velocity is equally as important as body posture: “WE-4RII quickly moves its body for surprise emotional expression, but it slowly moves its body for sadness emotional expression.” [28]

Next to that, vocal prosody also contributes to the quality of the emotion that is being displayed. In human-to-human interaction, patterns of pitch, rhythm, intonation, timing and loudness contribute to our emotional expression. A sudden change in volume or pitch could convey excitement or emphasis, and when the pitch rises at the end of a sentence, it becomes clearer that the robot is asking a question, which could indicate confusion and/or curiosity [29].

Studies have shown that humans will interpret both linguistic and non-linguistic emotion-displaying sounds in an emotional way, but there is a preference for the linguistic type of robot, as research has shown that people prefer human-like voices. In the example of a virtual car passenger, the driver appeared to be more attentive and less involved in accidents when the virtual passenger’s speech matched the driver’s emotion. So not only is it beneficial to sound like a human being, but the capability of matching the user’s emotions also contributes to the emotion-displaying quality of the robot [29].

=== Pepper ===

''Pepper, the humanoid and programmable robot'' [30]

Currently, Pepper is deployed in thousands of homes and schools. However, Pepper was initially designed for business-to-business applications. It was launched in June 2014. Pepper then became of interest all over the world for multiple other applications, for example in the business-to-consumer, business-to-academics and business-to-developers fields. In the end, it was adapted for business-to-consumer purposes [31].

Pepper is capable of exhibiting body language, perceiving and interacting with its environment, and moving itself around. The robot can also analyse other people's expressions and voice tones, using emotion and voice recognition algorithms, in order to create interaction. It is equipped with high-level interfaces and features for multimodal communication with the humans surrounding it [31].

Pepper has a lot of capabilities, among which mapping and navigation, object detection, hearing, speech, and face detection [32]. Pepper is a humanoid robot, meaning it is designed to have a physical human appearance. Its sound and speech recognition capabilities yield good results, even with several accents. However, its built-in navigation system is unreliable, which makes it hard to reach destinations accurately. Sometimes, Pepper's object and face detection give inconsistent results, so Pepper can still be improved in those fields [32].

Pepper uses facial recognition to pick up emotions on human faces, like sadness or hostility, and it uses voice recognition to hear concern. It has tools like age detection and basic emotion recognition embedded into its framework [32]. It bases the recognition mostly on eye contact, the central part of the face and distance. It can not only detect emotions, but also knows how to respond and react to them appropriately. For example, it will detect sadness based on a person’s expression and voice tone, and by using built-in sensors and pre-programmed algorithms, the robot will react properly [30]. Several applications of this robot are answering questions, greeting guests and playing with kids in Japanese homes [33].
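As an illustration of the detection capabilities described above, the sketch below polls Pepper's face-detection extractor through the legacy naoqi SDK. The module and memory key are from the public NAOqi API, while the address and polling period are assumptions.
<syntaxhighlight lang="python">
import time
from naoqi import ALProxy

PEPPER_IP, PORT = "192.168.1.10", 9559  # placeholder address

faces = ALProxy("ALFaceDetection", PEPPER_IP, PORT)
memory = ALProxy("ALMemory", PEPPER_IP, PORT)

faces.subscribe("demo", 500, 0.0)      # start the extractor, update every 500 ms
time.sleep(2)                          # give the extractor time to publish results
print(memory.getData("FaceDetected"))  # empty when no face is currently seen
faces.unsubscribe("demo")
</syntaxhighlight>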

Pepper can also use several gestures while responding to a human, like waving and nodding. It has 12 hours of battery life, and it can return to its charging station if necessary. It is 1.2 meters tall, has 3 omnidirectional wheels in order to move smoothly and 17 joints for body language [31]. Pepper is designed to be appropriate and acceptable for interacting with human beings in daily life. Some design principles behind Pepper are: a pleasant appearance, safety, affordability, interactivity and good autonomy. The aim was not to make the robot too exact a human likeness, since the designers wanted to avoid the ‘uncanny valley’ [31].

== Survey and interview questions ==

=== Survey ===

At the start of the experiment, the participants are asked to fill in a survey on LimeSurvey. The questions are listed below.

# What is your age in years?
# What is your gender?
#* Male
#* Female
#* Non-binary
#* Other
#* Do not want to say
# What study program are you currently enrolled in?
# In general, what do you think about robots?
# Have you had contact with a robot before? Where and when?

=== Interview ===

In the study that we propose, the participants take part in the experiment, which is followed by an interview. This interview is done face-to-face with the participant in a quiet room. The interview will be semi-structured, as this allows us to prepare questions in advance of the interview while also allowing follow-up questions that are not scripted. This will give a slightly more complete way of answering questions; however, it will also make the interviews slightly harder to code, as some participants might be asked follow-up questions. The semi-structured design will allow for further conversation where necessary, and it seeks the fine line between a too structured interview that is devoid of free conversation, which makes it difficult to collect enough information, and an unstructured interview that consists only of free conversation, which makes it hard to compare the answers given by the participants [34].

In general, there are several important steps in preparing and conducting an interview. These are the steps that are normally taken:

# Design interview questions.
#* Think about who you will interview: in this case we will interview peer students. These students are similar to us, but not all students might understand the same jargon, as not all students will be familiar with robots and psychological terms. This is something to keep in mind when deciding upon questions. In addition, the student population is, on average, more intelligent than the general public, allowing for more advanced questions. Furthermore, the students are all from a technical university, creating the expectation that these students will have a positive attitude towards the robot, as they are often familiar with its workings and the ideas behind it.
#* Think about what kind of information you want to obtain from the interviews: the research question aims to look at three constructs. First of all, we want to look at the acceptance of robots by the students. Second, we want to focus on whether students trust the robot and what it says. Third, we would like to focus on how comfortable students are in interacting with the robot. These three constructs need to be included in the interview.
#* Think about why you want to pursue in-depth information around your research topic: the results could be used in robot design for different purposes. As the sample population consists of students, the results will only be applicable to them. This leads to applications like student counselors, assistive robots on campus and classroom bots. The study does not focus on the generalization of results, which might be future work, as the elderly and children might react to the robot in different ways than students would.
# Develop an interview guide (what you do during the actual interview, the protocol).
# Plan and manage logistics.
#* The interviews will be audio-recorded and transcribed using {name of transcription program}. The recordings will be destroyed on July 7th, 2024, for privacy reasons.
#* The interviews are done one-on-one, where the interviewer has a printed page of the questions with space to make notes.
#* Each interview will be about 10 minutes.


The interview questions are divided into three subjects: attitude, trust and comfort.

The interview questions that will be asked in between the interactions with Pepper are the following.

'''Manipulation check''' {see whether the story did come across as positive/negative}

# What was your impression of the story that you heard?
#* Briefly describe, in your own words, the emotions that you felt when listening to the three stories.

'''Attitude towards the robot:'''

# What do you think about the appearance of the robot during the three stories that you heard?
#* How was the robot feeling when it told the story?
#* How did the robot convey this feeling?
#* Did the robot do something unexpected?
# What did you like/dislike about each of the three robot characters?
#* What are concrete examples of this (dis)liking?
#* How did these aspects/examples influence your feelings about the robot?
#* What effect did the other characters have compared to each other?
#** What was the most noticeable difference?
# Which of the three robot interactions do you prefer to see in the robots that you will use?
#* Why do you prefer this character of the robot?

'''Trust:'''

# Which robot character did you find the most trustworthy? And which one the least trustworthy?
#* Why was this character the most/least trustworthy?
#** What did the robot do to convey this?
#* What did the other characters do to be less trustworthy?

'''Comfort:'''

# Which of the three robot characters made you feel the most comfortable in the interaction?
#* Why did this character make you feel comfortable?
#* What effect did the other characters have?

'''General:'''

# Do you think it would be suitable to use Pepper (with your preferred character) in real-life settings?
#* In what setting do you think it would be suitable to use Pepper?
#* Thinking about your daily life, where would you (not) like to encounter Pepper?
# Are there any other remarks that you would like to leave, that were not touched upon during the interview, but that you feel are important?

== Sources ==
<references />
# Biba, J. (2023, March 10). What is a social robot? Retrieved from Built In: https://www.nature.com/articles/s41598-020-66982-y#citeas
# Geva, N., Uzefovsky, F., & Levy-Tzedek, S. (2020, June 17). Touching the social robot PARO reduces pain perception and salivary oxytocin levels. Retrieved from Scientific Reports: https://www.nature.com/articles/s41598-020-66982-y#citeas
# Borghi, M., & Mariani, M. (2022, September). The role of emotions in the consumer meaning-making of interactions with social robots. Retrieved from ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0040162522003687
# Chuah, S. H. W., & Yu, J. (2021). The future of service: The power of emotion in human-robot interaction. Journal of Retailing and Consumer Services, 61, 102551. https://doi.org/10.1016/J.JRETCONSER.2021.102551
# Misty Robotics. (n.d.). Misty Robotics. Retrieved from Misty Robotics: https://www.mistyrobotics.com/
# Becerra, T. (2017, October 3). Palomar College welcomes Pepper the robot. Retrieved from The Telescope: https://www.palomar.edu/telescope/2017/10/03/palomar-robot-pepper-debut/
# Mubin, O., Stevens, C. J., Shahid, S., Mahmud, A. A., & Dong, J. (2013). A review of the applicability of robots in education. Technology for Education and Learning, 1(1). https://doi.org/10.2316/journal.209.2013.1.209-0015
# Center for Innovation and Learning. (2023, November 21). Explore the seven benefits of robotics in education for students. Center for Innovation and Education. https://cie.spacefoundation.org/7-benefits-of-robotics-for-students/
# Shin, N., & Kim, S. (2007). Learning about, from, and with robots: Students’ perspectives. IEEE Xplore. https://doi.org/10.1109/roman.2007.4415235
# Björling, E. A., Thomas, K. A., Rose, E., & Çakmak, M. (2020). Exploring teens as robot operators, users and witnesses in the wild. Frontiers in Robotics and AI, 7. https://doi.org/10.3389/frobt.2020.00005
# Donnermann, M., Schäper, P., & Lugrin, B. (2020). Integrating a social robot in higher education – A field study. IEEE Xplore. https://doi.org/10.1109/ro-man47096.2020.9223602
# Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. Robotics and Autonomous Systems, 58(3), 322–332. https://doi.org/10.1016/J.ROBOT.2009.09.015
# Breazeal, C. (2004). Designing sociable robots. MIT Press. https://doi.org/10.7551/MITPRESS/2376.001.0001
# Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3–4), 143–166. https://doi.org/10.1016/S0921-8890(02)00372-X
# Wagner Ladeira, M. G. P., & Santini, F. (2023). Acceptance of service robots: a meta-analysis in the hospitality and tourism industry. Journal of Hospitality Marketing & Management, 32(6), 694–716. https://doi.org/10.1080/19368623.2023.2202168
# Martinez, J. E., VanLeeuwen, D., Stringam, B. B., & Fraune, M. R. (2023). Hey?! What did you think about that robot? Groups polarize users’ acceptance and trust of food delivery robots. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 417–427. https://doi.org/10.1145/3568162.3576984
# Clark, L., Doyle, P., Garaialde, D., Gilmartin, E., Schlögl, S., Edlund, J., Aylett, M., Cabral, J., Munteanu, C., Edwards, J., & R Cowan, B. (2019). The state of speech in HCI: Trends, themes and challenges. Interacting with Computers, 31(4), 349–371. https://doi.org/10.1093/iwc/iwz016
# Doherty, K., & Doherty, G. (2018). Engagement in HCI: Conception, theory and measurement. ACM Computing Surveys, 51(5). https://doi.org/10.1145/3234149
# Kim, Y., & Peterson, R. A. (2017). A meta-analysis of online trust relationships in e-commerce. Journal of Interactive Marketing, 38(1), 44–54. https://doi.org/10.1016/j.intmar.2017.01.001
# Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: the Almere model. International Journal of Social Robotics, 2(4), 361–375. https://doi.org/10.1007/s12369-010-0068-5
# Heerink, M. (2011). Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. Proceedings of the 6th International Conference on Human-Robot Interaction, 147–148. https://doi.org/10.1145/1957656.1957704
# Guner, H., & Acarturk, C. (2020). The use and acceptance of ICT by senior citizens: a comparison of the technology acceptance model (TAM) for elderly and young adults. Universal Access in the Information Society, 19(2), 311–330. https://doi.org/10.1007/s10209-018-0642-4
# Krägeloh, C. U., Bharatharaj, J., Sasthan Kutty, S. K., Nirmala, P. R., & Huang, L. (2019). Questionnaires to measure acceptability of social robots: A critical review. Robotics, 8(4). https://doi.org/10.3390/robotics8040088
# Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2022.2138826
# Madsen, M., & Gregor, S. D. (2000). Measuring human-computer trust. https://api.semanticscholar.org/CorpusID:18821611
# Marcos-Pablos, S., & García‐Peñalvo, F. J. (2021). Emotional intelligence in robotics: A scoping review. In Advances in Intelligent Systems and Computing (pp. 66–75). https://doi.org/10.1007/978-3-030-87687-6_7
# Wang, S., Lilienfeld, S. O., & Rochat, P. (2015). The uncanny valley: Existence and explanations. Review of General Psychology, 19(4), 393–407. https://doi.org/10.1037/gpr0000056
# Effective emotional expressions with expression humanoid robot WE-4RII: integration of humanoid robot hand RCH-1. (2004). IEEE Conference Publication, IEEE Xplore. https://ieeexplore.ieee.org/abstract/document/1389736
# Crumpton, J., & Bethel, C. L. (2015). A survey of using vocal prosody to convey emotion in robot speech. International Journal of Social Robotics, 8(2), 271–285. https://doi.org/10.1007/s12369-015-0329-4
# Mlot, S. (2014, June 5). “Pepper” robot reads, reacts to emotions. PCMag. https://www.pcmag.com/news/pepper-robot-reads-reacts-to-emotions
# Pandey, A. K., & Gelin, R. (2018). A mass-produced sociable humanoid robot: Pepper: the first machine of its kind. IEEE Robotics & Automation Magazine, 25(3), 40–48. https://doi.org/10.1109/mra.2018.2833157
# Mishra, D., Romero, G., Pande, A., Bhuthegowda, B. N., Chaskopoulos, D., & Shrestha, B. (2023). An exploration of the Pepper robot’s capabilities: unveiling its potential. Applied Sciences, 14(1), 110. https://doi.org/10.3390/app14010110
# Glaser, A. (2016, June 7). Pepper, the emotional robot, learns how to feel like an American. WIRED. https://www.wired.com/2016/06/pepper-emotional-robot-learns-feel-like-american/
# Pollock, T. (2022, June 14). The difference between structured, unstructured & semi-structured interviews. Oliver Parks - Search Based Recruitment Experts. https://www.oliverparks.com/blog-news/the-difference-between-structured-unstructured-amp-semi-structured-interviews

== Appendix ==

The complete research protocol can be found via the following link: