Broad SotA

From Control Systems Technology Group
Revision as of 14:56, 24 February 2018 by S142803 (talk | contribs)

An introduction still needs to be added.

Museum/AI experience

Socially interactive robots need to provide a positive user experience in order to add long-term value to people’s lives [1]. In a museum, most visitors simply stroll around without acquiring much information [2]; they typically do not look at an object or artwork for more than 30–60 seconds. Texts and explanations therefore need to be provided near the displayed objects, ideally personalized to the visitor by incorporating modern interaction technologies [3]. It has already been shown that the presence of a social robot and interactions with it (although the robot was tele-operated in that study) can raise children’s interest in science [4], and this may well hold true for museums. Studies have shown that robots successfully helped primary school and university students in their learning [2], and since a museum also tries to educate, a robot can be helpful for this. People have been found to appreciate a robot calling them by name, and the physical presence and social interaction of the robot are necessary to encourage curiosity [5]. The elderly also seem to appreciate social robots as guides [6], although this was tested in a different setting.

Robots have already been used in museums ([7], [8] and [9]). Visitors greatly appreciated such a robot, and they especially liked it when the robot displayed free play with children alongside guiding the visitor [10], although that robot was not intended to give a personalized tour but rather to attract the visitor’s attention to a certain piece of art. Reference [3] describes a different kind of system: a mobile user interface with a friendly UI and a QR-code reader that provides extensive information about the collections to enrich the visitor’s experience, an AR narrator that plays an interactive role between visitors and the object, and a back-end semantic database that stores the object’s data and communicates with the front end to make personalized recommendations during a visit.
The common goal of these systems is to enhance personalization, education, visualization, and interaction between visitors and collections in museums and art galleries, with a focus on the visitor experience. There is a clear need for personalized social robots that can interact with visitors to create a more meaningful museum experience while educating the visitor in a more engaging way. However, a good balance between technology and art is needed; otherwise the technology may draw too much attention away from the artwork.

Person Localization and Path Planning

While searching for articles on path planning and localization, many different methods were found. These methods are discussed below.

Wireless based localization [11] A robot uses a ZigBee wireless network to localize itself in an area using a weighted centroid technique. This is a simple method that achieves localization with a desirable level of accuracy.
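The weighted centroid idea can be sketched as follows; this is a minimal illustration assuming the received signal strengths have already been converted into positive weights (the function name and the numbers are invented for illustration, not taken from [11]):

```python
def weighted_centroid(anchors, weights):
    """Estimate a 2-D position as the centroid of known anchor
    positions, weighted by received signal strength."""
    total = sum(weights)
    x = sum(w * ax for (ax, _), w in zip(anchors, weights)) / total
    y = sum(w * ay for (_, ay), w in zip(anchors, weights)) / total
    return x, y

# Three anchors at known positions; a stronger signal means a closer anchor.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
weights = [4.0, 1.0, 1.0]  # e.g. linearised RSSI values
print(weighted_centroid(anchors, weights))  # estimate is pulled toward (0, 0)
```

The estimate is deliberately biased toward the anchor with the strongest signal, which is what makes the method simple yet reasonably accurate.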

Indoor localisation with beacons [12] Localization with Bluetooth beacons, small devices that emit a Bluetooth signal, is used in a tour guide app. This is a low-cost localization method for indoor use that could also be used in our museum. The article then examines the contribution of localization to the usability of the application. It concludes that this localization method suffers from noise, but that localization can nevertheless improve the user experience of an application. [13]
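A common way to turn a beacon's noisy Bluetooth signal into a rough distance is the log-distance path-loss model; the sketch below assumes a calibrated signal strength at 1 m and an environment exponent, both of which are illustrative values, not parameters from [13]:

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Convert an RSSI reading (dBm) to an approximate distance (m)
    using the log-distance path-loss model.
    tx_power: calibrated RSSI at 1 m; n: path-loss exponent (2 = free space)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

print(rssi_to_distance(-59.0))  # ~1.0 m at the calibration point
print(rssi_to_distance(-79.0))  # ~10 m in free space
```

In practice the exponent n must be tuned per room, and the noise mentioned in the article means such estimates are usually smoothed or combined over several beacons.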

Embedded system controls [14] This article is about an autonomous robot designed to guide people through an engineering lab. The robot has several self-localization capabilities: autonomous navigation works by following walls with ultrasound sensors and by image processing with a webcam. The robot is built around a Raspberry Pi 2 minicomputer and four omni wheels driven by four motors, with the disadvantage of a run time of only 30 minutes.
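Wall following with an ultrasound sensor typically reduces to a simple feedback law; the proportional controller below is a generic sketch of the idea (target offset and gain are invented values, not taken from [14]):

```python
def wall_follow_steering(side_distance, target=0.5, gain=1.5):
    """Proportional steering correction from a side-facing ultrasound
    reading (m): positive output steers toward the wall, negative away."""
    error = side_distance - target  # deviation from the desired offset
    return max(-1.0, min(1.0, gain * error))  # clamp to the actuator range

print(wall_follow_steering(0.5))  # on track: no correction
print(wall_follow_steering(0.8))  # too far from the wall: steer toward it
print(wall_follow_steering(2.0))  # large error: saturates at the clamp
```

A real implementation would add derivative damping and fuse the webcam data, but the ultrasound-based distance-keeping core looks like this.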

Markov localisation [15] A version of Markov localization is presented that is suited to dynamic environments. The environment in our museum is also dynamic, since visitors are walking around. The method has been implemented and tested in different real-world applications for mobile robots, including as an interactive museum tour guide. It is a good method, but in a museum where many visitors are walking around, the robot's proximity sensors produce many corrupted measurements. This is a problem we would also encounter in our museum.
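The core of Markov localization is a discrete Bayes filter over a grid of possible positions. The toy correction step below shows how a sensor reading reshapes the belief over a one-dimensional corridor (the cell layout and likelihoods are invented for illustration):

```python
def bayes_update(belief, likelihood):
    """Correction step of discrete Markov localization: multiply the
    prior belief over grid cells by the sensor likelihood, renormalise."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    s = sum(posterior)
    return [p / s for p in posterior]

# Uniform prior over 4 corridor cells; the sensor strongly suggests cell 2.
belief = [0.25, 0.25, 0.25, 0.25]
likelihood = [0.1, 0.1, 0.7, 0.1]  # e.g. a proximity-sensor model
print(bayes_update(belief, likelihood))  # probability mass moves to cell 2
```

The noisy measurements mentioned above enter through the likelihood: a person standing in front of the sensor skews it, which is why the article's dynamic-environment handling matters.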

Particle filters [16] The Monte Carlo localization (MCL) algorithm is a particle filter combined with a probabilistic model of robot perception and robot motion. This approach has been tested in practice.
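A single MCL iteration consists of a motion update, a measurement weighting, and a resampling step. The one-dimensional sketch below illustrates that cycle; the corridor, noise levels, and sensor model are all invented for illustration, not taken from [16]:

```python
import math
import random

def mcl_step(particles, move, measurement, sensor_sigma=0.2):
    """One Monte Carlo localization step on a 1-D corridor."""
    # Motion update: apply the commanded move plus process noise.
    moved = [p + move + random.gauss(0, 0.05) for p in particles]
    # Measurement update: Gaussian likelihood of the sensed position.
    weights = [math.exp(-(p - measurement) ** 2 / (2 * sensor_sigma ** 2))
               for p in moved]
    # Resampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]  # unknown start
for _ in range(5):
    particles = mcl_step(particles, move=0.0, measurement=3.0)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # particle cloud clusters near the measurement
```

After a few steps the initially uniform particle cloud collapses around the measured position, which is exactly the global-localization behaviour that makes MCL attractive for a museum robot.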

Active Neural Localization [17] An active neural localiser is a neural network that learns to localize using a map of the environment and raw pixel-based observations. It is based on the Bayesian filtering algorithm, with reinforcement learning integrated. A limitation of this model is its poor adaptation to dynamic lighting.

Natural Language Processing

Sources still need to be added.

Currently, open-ended natural language processing can be achieved (e.g. IBM’s Watson) by employing deep neural networks; however, this needs a huge amount of processing power and requires a learning process before operation can start. On the small scale of a single museum, such an approach is not worth the trouble. Matching a natural-language query to a previously assembled set of options is more feasible at this smaller scale, and in the case of a museum it can be done fairly easily, since the subjects of the input, namely the exhibits, are known beforehand (the robot does not need to process questions unrelated to the museum). While the options might not be as extensive or natural as with a deep neural network approach, they will be sufficient for the goal of answering questions about a museum exhibit.

Thus, a command-based approach can be used, similar to how many virtual assistants operate: predefined commands or question structures approximate the experience of having natural language interpreted on the fly, while multiple variations are accepted so that asking a question feels natural (a visitor does not have to think about using the correct ‘command’ for his or her purpose). When large amounts of processing power fit in smaller form factors in the future, it might become possible to use deep-neural-network-based systems at this scale, possibly doing the learning in another, centralized location where more processing power is available and reusing the resulting system in multiple museums.
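The command-based approach with accepted variations can be sketched as a small pattern-to-intent matcher; the intents and phrasings below are hypothetical examples for a museum exhibit, not an actual design:

```python
import re

# Hypothetical question patterns; each intent accepts several phrasings
# so visitors need not learn an exact 'command'.
INTENTS = {
    "artist":  [r"who (made|painted|created)", r"artist"],
    "year":    [r"(when|what year).*(made|painted|created)", r"how old"],
    "meaning": [r"what does .* (mean|represent)", r"meaning"],
}

def match_intent(question):
    """Map a free-form question to a predefined intent, or None."""
    q = question.lower()
    for intent, patterns in INTENTS.items():
        if any(re.search(p, q) for p in patterns):
            return intent
    return None

print(match_intent("Who painted this?"))       # artist
print(match_intent("What year was it made?"))  # year
print(match_intent("Tell me a joke"))          # None: out of scope
```

Because the exhibits are known beforehand, the matched intent can be combined with the exhibit the robot is currently presenting to select a prepared answer.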


Speech recognition

Sources still need to be added.

Most of the articles found on speech recognition were outdated or not informative enough for this project; they mainly contain explanations of the models underlying speech recognition. Automated speech recognition (ASR) builds on machine learning techniques such as the (hidden) Markov model. Voice recognition is the technology of converting sounds, words, or phrases produced by human speech into electrical signals, after which these signals are coded in a meaningful manner. ASR is still a largely unsolved problem: it is not yet on par with human performance. In conclusion, several applications provide an ASR system for commercial purposes, and this is what this project should look into.
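The Markov-model basis of classic ASR can be illustrated with a toy Viterbi decoder that recovers the most likely hidden phoneme sequence from noisy acoustic symbols; the two-phoneme model and all probabilities below are invented for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence
    under a hidden Markov model (the core of classic ASR decoders)."""
    # Initialise with the start probabilities weighted by the first emission.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            # Keep only the best predecessor path into state s.
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

# Invented two-phoneme model with noisy 'acoustic' symbols a/b.
states = ["P1", "P2"]
start_p = {"P1": 0.6, "P2": 0.4}
trans_p = {"P1": {"P1": 0.7, "P2": 0.3}, "P2": {"P1": 0.3, "P2": 0.7}}
emit_p = {"P1": {"a": 0.9, "b": 0.1}, "P2": {"a": 0.2, "b": 0.8}}
print(viterbi(["a", "a", "b"], states, start_p, trans_p, emit_p))
```

Commercial ASR systems wrap far larger models of this kind (or neural replacements) behind an API, which is why relying on an existing system is the sensible route for this project.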

Sources

  1. Alenljung, B., Lindblom, J., Andreasson, R., & Ziemke, T. (2017). User experience in social human-Robot interaction, International Journal of Ambient Computing and Intelligence (ijaci),8(2), 12-31. doi:10.4018/IJACI.2017040102
  2. Höge, H. (2003). A Museum Experience: Empathy and Cognitive Restoration, Empirical Studies of the Arts ,21(2), 155-164. doi:10.2190/5j4j-3b28-782j-fak7
  3. Dal Falco, F., Vassos, S. (2017). Museum Experience Design: A Modern Storytelling Methodology. The Design Journal, 20(sup1). doi:10.1080/14606925.2017.1352900
  4. Shiomi, M., Kanda, T., Howley, I., Hayashi, K., Hagita, N. (2015). Can a Social Robot stimulate Science Curiosity in Classrooms? International Journal of Social Robotics, 7(5), 641-652 doi:10.1007/s12369-015-0303-1
  5. Shiomi, M., Kanda, T., Howley, I., Hayashi, K., Hagita, N. (2015). Can a Social Robot stimulate Science Curiosity in Classrooms? International Journal of Social Robotics, 7(5), 641-652 doi:10.1007/s12369-015-0303-1
  6. Montemerlo, M., Pineau, J., Roy, N., Thrun, S., Verma, V. (2002) Experiences with a mobile robotic guide for the elderly, Eighteenth national conference on Artificial intelligence, AAAI-02 Proceedings 587-592, Retrieved from http://www.aaai.org/Papers/AAAI/2002/AAAI02-088.pdf
  7. Li, Y.L., Liew, A.W. (2014) An interactive user interface prototype design for enhancing on-site museum and art gallery experience through digital technology. Museum management and Curatorship 30(3), 208-229, doi:10.1080/09647775.2015.1042509
  8. Verma, P. (2017). How Technology is transforming the Museum Experience, Dell Technologies, Retrieved from https://www.delltechnologies.com/en-us/perspectives/how-technology-is-transforming-the-museum-experience/
  9. van Dijk, M.A.G., Lingnau, A., Kockelkorn, H. (2012), Measuring enjoyment of an interactive museum experience, Proceedings of the 14th ACM international conference on multimodal interaction, ICMI 2012, 249-256, doi:10.1145/2388676.2388728
  10. Shiomi, M., Kanda, T., Ishiguro, H., Hagita, N. (2007) Interactive Humanoid Robots for a Science Museum, IEEE Intelligent Systems, 22(2), doi: 10.1109/MIS.2007.37
  11. MacDougall, J., Tewolde, G.S. (2013). Tour Guide robot using wireless based localization, IEEE International Conference on Electro-Information Technology , EIT 2013, doi: 10.1109/EIT.2013.6632690
  12. MacDougall, J., Tewolde, G.S. (2013). Tour Guide robot using wireless based localization, IEEE International Conference on Electro-Information Technology , EIT 2013, doi: 10.1109/EIT.2013.6632690
  13. Kaulich, T., Heine, T., & Kirsch, A. (2017). Indoor localisation with beacons for a user-Friendly mobile tour guide. Ki - Künstliche Intelligenz : German Journal on Artificial Intelligence - Organ Des Fachbereichs "künstliche Intelligenz" Der Gesellschaft Für Informatik E.v, 31(3), 239-248. doi:10.1007/s13218-017-0496-6 link: https://tue.on.worldcat.org/oclc/7088178387
  14. Diallo, A.D., Gobee, S., Durairajah, V. (2015) Autonomous Tour Guide Robot using Embedded System Control, Procedia Computer Science, 76, 126-133, doi: 10.1016/j.procs.2015.12.302
  15. Fox, D., Burgard, W., Thrun, S. (1999) Markov Localization for mobile robots in dynamic environments, Journal of Artificial intelligence Research 11, 391-427, doi:10.1613/jair.616
  16. Fox D., Thrun S., Burgard W., Dellaert F. (2001) Particle Filters for Mobile Robot Localization. In: Doucet A., de Freitas N., Gordon N. (eds) Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. Springer, New York, NY, doi: 10.1007/978-1-4757-3437-9_19
  17. Chaplot, D.S., Parisotto, E., Salakhutdinov, R. (2018). Active Neural Localization, ICLR 2018 Conference Blind Submission, Retrieved from: https://arxiv.org/pdf/1801.08214.pdf