Subjective Speech Quality Assessment using Mobile Crowdsourcing

In human-to-human communication via telecommunication systems, the Quality of Experience (QoE) is assessed by system providers to optimize their services. Traditionally, the QoE of transmitted speech is assessed with listening-only tests (LOTs). Methods for the subjective determination of transmission quality in laboratory studies are provided by ITU-T, the ITU Telecommunication Standardization Sector, in Recommendation P.800. Such lab-based LOTs provide reliable, valid results and are often used as the ground truth in research and industry. However, LOTs conducted in a laboratory setting also have notable limitations: they are 1) money intensive, regarding the costs for laboratories, participants, test conductors, and supervisors, 2) time intensive, concerning invitations and introductions, which scale with the number of participants, and 3) limited in external validity, as laboratory test environments differ significantly from the environments in which transmitted speech is actually experienced.
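As a rough illustration (not part of the lecture material itself), the outcome of such a listening-only test is commonly summarized as a Mean Opinion Score (MOS): listeners rate each stimulus on the 5-point Absolute Category Rating scale defined in P.800 (1 = bad to 5 = excellent), and the ratings for each test condition are averaged. A minimal sketch in Python, using hypothetical condition names and ratings:

```python
# Minimal sketch: computing a Mean Opinion Score (MOS) from
# Absolute Category Rating (ACR) judgments on the P.800 5-point scale
# (1 = bad, 2 = poor, 3 = fair, 4 = good, 5 = excellent).
# The condition names and ratings below are hypothetical example data.
from statistics import mean

ratings_per_condition = {
    "clean":       [5, 4, 5, 4, 4],
    "packet_loss": [2, 3, 2, 1, 2],
    "narrowband":  [3, 4, 3, 3, 4],
}

for condition, ratings in ratings_per_condition.items():
    mos = mean(ratings)  # MOS is the arithmetic mean of the listeners' ratings
    print(f"{condition}: MOS = {mos:.2f} (n = {len(ratings)})")
```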

In this lecture, recommendations on how to conduct speech quality assessment on crowdsourcing platforms will be given, with a focus on mobile crowdsourcing. The reliability of ratings collected through crowdsourcing will be discussed by comparing them with ratings collected in the laboratory. In addition, the user's surrounding environment, a crucial factor that may influence the results of speech crowdtesting, will be discussed, and results of empirical studies will be presented.
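To give a flavour of how such a reliability comparison might look in practice (a sketch under simple assumptions, not the speaker's actual analysis), one common approach is to compute the per-condition MOS from both the laboratory panel and the crowd panel and then correlate the two sets of values. The condition names and ratings below are hypothetical:

```python
# Sketch of a reliability check between crowdsourcing and laboratory ratings:
# compute per-condition MOS for both panels and their Pearson correlation.
# Condition names and ratings are hypothetical example data.
import numpy as np

lab_ratings = {
    "clean":       [5, 4, 5, 4],
    "packet_loss": [2, 2, 1, 2],
    "narrowband":  [3, 4, 3, 3],
}
crowd_ratings = {
    "clean":       [4, 5, 4, 4, 5],
    "packet_loss": [2, 1, 2, 2, 3],
    "narrowband":  [3, 3, 4, 3, 3],
}

conditions = sorted(lab_ratings)
lab_mos = np.array([np.mean(lab_ratings[c]) for c in conditions])
crowd_mos = np.array([np.mean(crowd_ratings[c]) for c in conditions])

# Pearson correlation between the two sets of per-condition MOS values;
# values close to 1 indicate that the crowd panel orders and spaces the
# conditions similarly to the laboratory panel.
r = np.corrcoef(lab_mos, crowd_mos)[0, 1]
print(f"Pearson r between lab and crowd MOS: {r:.3f}")
```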

About the speaker

Babak Naderi
Technical University Berlin
Germany
http://www.qu.tu-berlin.de/menue/team/researchers/babak_naderi/

Babak Naderi is a senior researcher at the Quality and Usability Lab, Technical University Berlin. He obtained his Bachelor's degree in Software Engineering and his Master's degree in Geodesy and Geoinformation Science at the Technical University Berlin. Since 2012, he has been actively researching usability, with a specific focus on the application of crowdsourcing and crowd-working, motivation, quality control, and HCI parameters. In his doctoral dissertation, he focused on the motivation of workers on micro-task crowdsourcing platforms. Additionally, he is the co-leader of the Crowdsourcing Task-Force of the Qualinet COST Action IC 1003.