


There is a wide range of visual appearances of captions during television programming (e.g. text color, typeface, caption background, number of lines, caption placement), especially during live or near-live broadcasts in local markets. The effect of these visual properties of captions on Deaf and Hard of Hearing (DHH) users' TV-watching experience has received little attention in existing research-based guidelines or in the design of state-of-the-art caption evaluation metrics. Therefore, we empirically investigated which visual attributes of captions are preferred by DHH viewers while watching captioned live TV programs. We convened two focus groups in which participants watched videos containing captions with various display properties and provided subjective open-ended feedback. By analyzing the focus-group responses, we observed DHH users' preference for specific contrast between caption text and background color, such as black text on a white background or vice versa, and for caption placement that does not occlude salient onscreen content. Our findings also revealed preferences for genre-adaptive caption typeface and movement during captioned live TV programming.
