Exploring the Role of Visual Representation Signals for Interactional Action in Conversation
Journal of Modern Research in English Language Studies
Article 1, Volume 2, Issue 2, Khordad (May-June) 2015, Pages 1-21; full text PDF (390.58 KB)
Authors
Arash Gholamy Saleh Abady 1; Sayyed Mohammad Alavi 2
1 Chairman of the English Department, AJA University of Medical Sciences, Tehran
2 Associate Professor of English Language Teaching, University of Tehran
Received: 17 Farvardin 1395 (April 5, 2016); Accepted: 17 Farvardin 1395 (April 5, 2016)
Abstract
The main approach to conversation analysis is multimodal analysis, which rests on the distinction between non-verbal and verbal expression of communicative functions (Haddington & Kääntä, 2011; Streeck et al., 2011). The purpose of this study was to investigate whether there was a significant difference between non-verbal and verbal signals in conveying information in conversation. The participants were 37 male Iranian B.S. paramedic students at the medical university of the Islamic Republic of Iran's Army (AJA University of Medical Sciences). Two video talk-show interviews were shown in order to determine the descriptive features used in exchanging information, and the ELAN video annotation tool was used to analyze them. To find out whether verbal or non-verbal resources were more effective in conveying information, the researchers also developed a questionnaire consisting of 19 items on verbal and non-verbal signals. The ELAN analysis of both interviews showed that descriptive visual cues such as hand movements, gaze, eyebrow motions, and torso movements were more frequent than the other non-verbal resources. The questionnaire data showed a significant difference between visual and verbal elements in the transmission of information from the students' viewpoint, as well as a significant difference among the non-verbal descriptive resources themselves. The findings revealed that non-verbal cues were more effective in transmitting information than verbal cues, and that hand movements and laughter were more effective than the other visual signals in conveying information.
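The abstract describes two computational steps: counting annotated visual cues per tier in ELAN, and testing the questionnaire scores for a significant visual-versus-verbal difference. The paper does not publish its analysis code, so the following is a minimal Python sketch under stated assumptions: ELAN saves annotations as .eaf files (plain XML, one TIER element per cue type); the file path, tier names, and Likert scores below are hypothetical illustrations; and a paired-samples t-test is only one plausible choice for the unspecified significance test.

    # Minimal sketch: ELAN tier frequencies plus a paired-samples t-test.
    # Assumptions (not from the paper): the file path "interview1.eaf", the
    # tier names, and the Likert scores below are hypothetical.
    import xml.etree.ElementTree as ET
    from collections import Counter
    from scipy import stats

    def tier_frequencies(eaf_path: str) -> Counter:
        """Count annotations per tier in an ELAN .eaf file. EAF is plain XML:
        each <TIER TIER_ID="..."> wraps one <ANNOTATION> per labeled span."""
        root = ET.parse(eaf_path).getroot()
        return Counter(
            {tier.get("TIER_ID"): len(tier.findall(".//ANNOTATION"))
             for tier in root.iter("TIER")}
        )

    # e.g. tier_frequencies("interview1.eaf")
    #   -> Counter({"hand_movement": 42, "gaze": 37, "eyebrow": 21, ...})

    # Per-participant mean scores on the questionnaire's non-verbal and
    # verbal items (invented values for 5 of the 37 students, 5-point scale):
    nonverbal = [4.2, 3.9, 4.5, 4.1, 3.8]
    verbal = [3.1, 3.4, 2.9, 3.3, 3.0]

    # One plausible way to test for the reported significant difference:
    t, p = stats.ttest_rel(nonverbal, verbal)
    print(f"t = {t:.2f}, p = {p:.4f}")

Counting per-tier frequencies directly reproduces the "hand movement, gaze, eyebrow, torso were most frequent" style of result, while the paired test compares each student's non-verbal and verbal subscale means rather than pooling the groups.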
Keywords
non-verbal signals; verbal signals; ELAN video annotation; interactional actions; multimodal signals
Title [Persian]
Investigating the role of visual signals with regard to their interactional effect in conversation
Authors [Persian]
Arash Gholamy Saleh Abady 1; Sayyed Mohammad Alavi 2
1 AJA University of Medical Sciences (Army of the Islamic Republic of Iran), Tehran
2 Associate Professor of English Language Teaching, University of Tehran
Abstract [Persian]
This study examines whether there is a significant difference between visual and verbal signals in the more effective transmission of information in conversation. The participants were 37 B.S. paramedic students at the AJA University of Medical Sciences. A questionnaire consisting of 19 items on visual and verbal signals was developed by the researchers, and two video interviews were also shown for further examination. The ELAN video annotation tool was used to analyze both video interviews. The ELAN results for the two interviews showed that descriptive visual signals such as hand movements, gaze, eyebrow motions, and movements of the upper torso were used more often than the other visual signals. The questionnaire results further showed a significant difference between visual and verbal signals in conveying information. The visual-signal items of the questionnaire were then compared to determine which descriptive visual signals the students considered more effective in conveying information; hand movements and laughter among the facial expressions proved most effective in transmitting information.
Keywords [Persian]
visual signals; verbal signals; ELAN video annotation software; interactional effects; multimodal signals
References
Allwood, J., Cerrato, L., Jokinen, K., Navarretta, C., & Paggio, P. (2008). The MUMIN coding scheme for the annotation of feedback, turn management and sequencing phenomena. Language Resources and Evaluation, 41(4), 273-287.
Aran, O., & Perez, D. G. (2011). Analysis of social interaction in group conversations: Modeling social verticality. In A. Salah & T. Gevers (Eds.), Computer analysis of human behavior (pp. 293-322). Springer.
Ba, S., & Odobez, J. M. (2011). Multi-person visual focus of attention from head pose and meeting contextual cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1), 101-116.
Boersma, P., & Weenink, D. (2007). Praat: Doing phonetics by computer (Version 5.0.02) [Computer software]. Institute of Phonetic Sciences, University of Amsterdam. Retrieved from http://www.praat.org
Carroll, W. R., & Bandura, A. (1982). The role of visual monitoring in observational learning of action patterns: Making the unobservable observable. Journal of Motor Behavior, 14(2), 153-167.
Chen, H. K. Y. (2011). Sound patterns in Mandarin recycling repair (Doctoral dissertation). Department of Linguistics, University of Colorado.
Esfandiari, G. B., & Ágnes, A. (2013). An overview of multimodal corpora, annotation tools and schemes. Debreceni Egyetemi Kiadó, 9, 86-98.
Foster, M. E., & Oberlander, J. (2007). Corpus-based generation of head and eyebrow motion for an embodied conversational agent. Language Resources and Evaluation, 41(3-4), 305-323.
Haddington, P., & Kääntä, L. (2011). Language, body and interaction: A multimodal perspective into social action. Helsinki: Finnish Literature Society (SKS).
Jewitt, C. (2006). Technology, literacy and learning: A multimodal approach. London: Routledge.
Jewitt, C. (Ed.). (2009). Handbook of multimodal analysis. London: Routledge.
Jokinen, K. (2009). Gaze and gesture activity in communication. In C. Stephanidis (Ed.), Universal access in human-computer interaction (pp. 495-506). Helsinki: Gaudeamus Helsinki University Press.
Jokinen, K., & Vanhasalo, M. (2009). Stand-up gestures: Annotation for communication management. In Proceedings of the Multimodal Workshop at the NODALIDA Conference, Denmark.
Kendon, A. (2004). Gesture: Visible action as utterance. Cambridge: Cambridge University Press.
Kipp, M. (2004). Gesture generation by imitation: From human behavior to computer character animation (Doctoral dissertation, Saarland University). Retrieved from www.Dissertation.com
Knight, D. (2009). A multimodal corpus approach to the analysis of back channeling behavior (Doctoral dissertation). The University of Nottingham. Retrieved from www.core.ac.uk/download/pdf/98821.pdf
Koutsombogera, M., & Papageorgiou, H. (2009). Multimodality issues in conversation analysis of Greek TV interviews. Multimodal Signals: Cognitive and Algorithmic Issues, LNAI 5398, 40-46. Retrieved from www.springer.com/content/pdf
Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Lawrence Erlbaum.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
McNeill, D. (2005). Gesture and thought. Chicago: University of Chicago Press.
Musgrave, D. J. (2012). A multimodal analysis of the communicative utterances of a language competent bonobo (Pan paniscus) (Master's thesis). English Studies Department, Iowa State University.
O'Halloran, K. L., Smith, B. A., Tan, S., & Podlasov, A. (2010). Challenges in designing digital interfaces for the study of multimodal phenomena. Information Design Journal, 18(1), 2-12.
O'Halloran, K. L., & Smith, B. A. (2012). Multimodal text analysis. Singapore: Singapore University Press.
Pajo, K. (2013). Joint multimodal management of hearing impairment in conversations at home (Unpublished doctoral dissertation). Faculty of Behavioural Sciences, University of Helsinki.
Pallant, J. (2010). SPSS survival manual. New York, NY: Open University Press.
Pastra, K., & Wilks, Y. (2004). Image-language multimodal corpora: Needs, lacunae and an AI synergy for annotation. In Proceedings of the Language Resources and Evaluation Conference (pp. 767-770). Athens: Institute for Language and Speech Processing.
Saferstein, B. (2004). Digital technology and methodological adoption: Text and video as a resource for analytical reflectivity. Journal of Applied Linguistics, 12, 197-223.
Skelt, L. (2006). See what I mean: Hearing loss, gaze and repair in conversation. Canberra: The Australian National University.
Streeck, J., Goodwin, C., & LeBaron, C. (2011). Embodied interaction in the material world: An introduction. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied interaction: Language and the body in the material world (pp. 1-26). Cambridge: Cambridge University Press.
Vilhjálmsson, H. H. (2009). Representing communicative function and behavior in multimodal communication. In A. Esposito, A. Hussain, M. Marinaro, & R. Martone (Eds.), Multimodal signals: Cognitive and algorithmic issues (Lecture Notes in Artificial Intelligence) (pp. 47-59). Berlin, Heidelberg: Springer-Verlag.
Statistics: article views: 951; full-text downloads: 719