PRE2019 3 Group15


Group Members

Name             | Study                    | Student ID
Mats Erdkamp     | Industrial Design        | 1342665
Sjoerd Leemrijse | Psychology & Technology  | 1009082
Daan Versteeg    | Electrical Engineering   | 1325213
Yvonne Vullers   | Electrical Engineering   | 1304577
Teun Wittenbols  | Industrial Design        | 1300148

Users

Primary users

  • Dance industry: the overarching industry that will own most of the robots.
  • Organizer of a music event: the user who will rent or buy the robot to play at their event.
  • Owner of a discotheque or club: for them the robot can be an artificial alternative to hiring a DJ every night.

Primary user needs

  • The DJ-robot is a smart, lucrative investment.
  • The user interface is easy to understand; no experts are needed.
  • The DJ-robot is easy to transport.
  • The DJ-robot is autonomous; no human in the loop is required.
  • The DJ-robot is valued at least as highly as a human DJ.

Secondary users

  • Attendees of a music event: these people enjoy the music and lighting show that the robot creates.
  • Human DJs: likely to "cooperate" with a DJ-robot to make their show more attractive.
  • Human lighting experts: can also "cooperate" with the robot to improve their part of the show.

Secondary user needs

  • The DJ-robot selects popular tracks that are valued by the audience.
  • The DJ-robot selects tracks of an appropriate genre.
  • The DJ-robot does not fall silent in between tracks.
  • The DJ-robot creates an attractive lighting show.
  • The accompanying lighting show fits the beat.
  • The accompanying lighting show fits the genre of the music.
  • Attending a set should be something extraordinary and special.
  • The music set played is structured and progressive.
  • The DJ-robot is able to handle requests from the audience.
  • The DJ-robot takes the audience reaction into account in track selection.
  • The transition between tracks is smooth.


Approach, Milestones, and Deliverables

Approach

Milestones

In order to complete the project and meet the objective, the following milestones have been determined:

  • A clear problem and goal have been determined
  • The literature research is finished
  • The research on how to create the design and prototype is finished
  • A design is created
  • A working prototype is constructed
  • The wiki is finished and contains all information about the project

Deliverables

The deliverables for this project are:

  • A product design (?)
  • A working prototype
  • The wiki-page
  • The final presentation in week 8

Planning

Based on the approach and the milestones, a plan has been made. This plan is not definitive and will be updated regularly; however, it will serve as a guideline for the coming weeks.

Weekly goals:

  • Week 1: Do literature research, define problem, make a plan
  • Week 2: Determine USE aspects, start research into design and prototype
  • Week 3: Continue research, start on prototype
  • Week 4: Finish first design, start on prototype
  • Week 5: Finalize design
  • Week 6: Finish prototype, do testing
  • Week 7: Finalize prototype
  • Week 8: Finish wiki, presentation

Individual tasks (week 1):

  • Mats: Work on SotA and evaluate design options
  • Sjoerd: Work on defining "users" and "user requirements"
  • Daan: -
  • Yvonne: Work on approach, milestones, and deliverables, and make concept planning
  • Teun: -
  • Everyone: Think about subject, do literature research and summarize

SotA (Summary of related research)

Yoshii, K., Nakadai, K., Torii, T., Hasegawa, Y., Tsujino, H., Komatani, K., ... & Okuno, H. G. (2007, October). A biped robot that keeps steps in time with musical beats while listening to music with its own ears. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1743-1750). IEEE.

This research presents a robot that can predict musical beats in real time and move in time with them. The results show that the robot can adjust its steps to the beat as the tempo changes.
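
As a minimal illustration of beat estimation (offline, not the real-time, microphone-based tracking in the paper), the sketch below uses the librosa Python library on a pre-recorded audio file; the file name is a placeholder.

```python
# Sketch: estimate tempo and beat times from an audio file with librosa.
# Offline analysis only; the paper tracks beats in real time on the robot.
import librosa

def estimate_beats(path):
    y, sr = librosa.load(path)                              # mono waveform + sample rate
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    return float(tempo), beat_times

if __name__ == "__main__":
    tempo, beats = estimate_beats("example_track.wav")      # hypothetical file
    print(f"Estimated tempo: {tempo:.1f} BPM, first beats at {beats[:4]} s")
```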


Kashino, K., Nakadai, K., Kinoshita, T., & Tanaka, H. (1995). Application of Bayesian probability network to music scene analysis. Computational auditory scene analysis, 1(998), 1-15.

In this paper a music scene analysis system is developed that can recognize rhythm, chords and source-separated musical notes from incoming music using a Bayesian probability network. Although a paper from 1995 is not particularly state of the art, this kind of technology could be used in our robot to process music.
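
The full Bayesian network in this paper goes well beyond a short example, but the underlying idea of inferring harmonic content from spectral evidence can be hinted at with a much cruder chroma-template comparison. The sketch below assumes librosa and numpy and is only an illustration, not the paper's method.

```python
# Crude stand-in for music scene analysis: guess the most likely triad per
# frame by correlating a chromagram with binary major/minor chord templates.
import numpy as np
import librosa

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord_templates():
    """Normalized 12-bin templates for the 12 major and 12 minor triads."""
    labels, templates = [], []
    for root in range(12):
        for quality, intervals in (("maj", (0, 4, 7)), ("min", (0, 3, 7))):
            t = np.zeros(12)
            t[[(root + i) % 12 for i in intervals]] = 1.0
            labels.append(NOTE_NAMES[root] + quality)
            templates.append(t / np.linalg.norm(t))
    return labels, np.array(templates)

def estimate_chords(path):
    y, sr = librosa.load(path)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)          # shape: 12 x frames
    labels, templates = chord_templates()
    scores = templates @ chroma                               # template match per frame
    return [labels[i] for i in scores.argmax(axis=0)]
```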


Huron, D. (2002). Music information processing using the Humdrum toolkit: Concepts, examples, and lessons. Computer Music Journal, 26(2), 11-26.

This article introduces Humdrum, a set of command-line tools that facilitates musical analysis (see also humdrum.org). It is often used from, for example, Python or C++ scripts to build programs with applications in music. This toolkit might therefore be of interest to our project.
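
As a small, hedged example of this kind of command-line workflow, the snippet below calls Humdrum's census tool on a **kern score from Python; it assumes the toolkit is installed and on the PATH, and the score file name is a placeholder.

```python
# Sketch: run the Humdrum "census" tool on a **kern file from Python.
# Assumes the Humdrum Toolkit is installed and available on the PATH.
import subprocess

def kern_census(kern_path):
    result = subprocess.run(
        ["census", "-k", kern_path],          # -k: kern-specific statistics
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(kern_census("example_score.krn"))   # hypothetical score file
```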


Choi, K., Fazekas, G., Cho, K., & Sandler, M. (2017). A tutorial on deep learning for music information retrieval. arXiv preprint arXiv:1709.04396.

This paper is meant for beginners in the field of deep learning for MIR (Music Information Retrieval). This technique is very useful for our project, as it could let the robot gain the musical knowledge and insight needed to play an enjoyable set of music.
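
To give a flavour of what such a model could look like, the sketch below defines a small convolutional network that classifies mel-spectrogram patches (for instance into genres). The layer sizes, number of classes, and input shape are illustrative assumptions, not taken from the tutorial; PyTorch is assumed.

```python
# Minimal sketch of a CNN for mel-spectrogram classification (e.g. genre),
# in the spirit of deep-learning-based MIR. All sizes are illustrative.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 128-bin mel spectrograms.
model = SpectrogramCNN()
dummy = torch.randn(4, 1, 128, 256)
print(model(dummy).shape)                      # torch.Size([4, 10])
```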


Pérez-Marcos, J., & Batista, V. L. (2017, June). Recommender system based on collaborative filtering for spotify’s users. In International Conference on Practical Applications of Agents and Multi-Agent Systems (pp. 214-220). Springer, Cham.

This paper takes a mathematical approach to recommending new songs to a person, based on similarity with previously listened-to and rated songs. These kinds of algorithms are very common in music services like Spotify and would be of great use in a DJ-robot: it has to know which songs fit its current set and therefore needs such algorithms for track selection.
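
A minimal sketch of the item-item collaborative filtering idea is given below: tracks are recommended because they are similar (by cosine similarity of play counts) to a track already in the set. The play-count matrix is a made-up toy example, not data from the paper.

```python
# Sketch: item-item collaborative filtering with cosine similarity.
# Rows are listeners/sessions, columns are tracks; values are play counts.
import numpy as np

plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def item_similarity(matrix):
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalized = matrix / np.where(norms == 0, 1, norms)
    return normalized.T @ normalized           # track-by-track cosine similarity

def recommend(track_index, k=2):
    sims = item_similarity(plays)[track_index].copy()
    sims[track_index] = -np.inf                # do not recommend the track itself
    return np.argsort(sims)[::-1][:k]

print(recommend(0))                            # tracks most similar to track 0
```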


Jannach, D., Kamehkhosh, I., & Lerche, L. (2017, April). Leveraging multi-dimensional user models for personalized next-track music recommendation. In Proceedings of the Symposium on Applied Computing (pp. 1635-1642).

This article focuses on next-track recommendation. While most systems base this recommendation only on the previously played tracks, this paper takes a multi-dimensional approach (incorporating, for example, long-term user preferences) in order to make a better recommendation for the next track to be played.
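
The multi-dimensional idea could be approximated, very roughly, by scoring each candidate track as a weighted combination of a short-term (current session) signal and a long-term preference signal. The scores and weight below are illustrative assumptions, not the paper's model.

```python
# Sketch: combine a session-based score with a long-term preference score
# to pick the next track. All numbers below are made up for illustration.
def next_track_score(track, session_score, long_term_score, alpha=0.7):
    """Weighted mix of short-term (session) fit and long-term preference."""
    return alpha * session_score[track] + (1 - alpha) * long_term_score[track]

session_score = {"track_a": 0.9, "track_b": 0.4, "track_c": 0.6}    # fits recent tracks
long_term_score = {"track_a": 0.2, "track_b": 0.8, "track_c": 0.7}  # overall crowd taste

best = max(session_score, key=lambda t: next_track_score(t, session_score, long_term_score))
print(best)    # -> "track_a" with these illustrative numbers
```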