Date and time: Monday, 6 August 2018, 15:00-17:00
Venue: Room 411 on the 1st Floor of Building 4, Ohashi Campus, Kyushu University, Fukuoka, Japan <http://www.design.kyushu-u.ac.jp/kyushu-u/english/access>
Language: English
Organizers: Hiroshige TAKEICHI (RIKEN) and Kazuo UEDA (Kyushu Univ./ReCAPS)
Cosponsors: RIKEN and Research Center for Applied Perceptual Science (ReCAPS), Faculty of Design, Kyushu University
1. Time-shrinking illusion in the tactile modality: modality-independent brain region for time perception
Takako MITSUDO*
*Kyushu University/JSPS
It has been reported that the 'time-shrinking illusion' occurs not only in the auditory but also in the tactile modality. Using magnetoencephalography, we observed brain responses while participants engaged in a tactile temporal judgment task that induces the time-shrinking illusion, with stimuli delivered to the left thumb. The time-shrinking illusion indeed occurred in the behavioral responses. In addition, temporal judgment led to increased activation in the right inferior frontal gyrus and the supplementary motor area. Modality-dependent and modality-independent brain mechanisms for time perception will be discussed.
2. Interaction among binocular disparity, motion parallax, and pictorial depth cues in perceiving large depth
Masayuki SATO*, Yuta OZAWA*, Daiki ARAMAKI*, Bungo SEKIJIMA*, and Yasuaki TAMADA*
*The University of Kitakyushu
To examine the interaction among binocular disparity, motion parallax, and pictorial depth cues in perceiving large depth, apparent depth was quantified when one of these cues was given in isolation or when multiple cues were available simultaneously. A substantial interaction between binocular disparity and motion parallax was found; however, perceived depth was much smaller than the geometrical prediction when a random-dot pattern was used. It appears that pictorial cues are crucial for perceiving large depth in natural scenes.
3. Irrelevant speech effects with locally time-reversed speech: Language familiarity
Kazuo UEDA*, Yoshitaka NAKAJIMA*, Florian KATTNER**, and Wolfgang ELLERMEIER**
*Kyushu University/ReCAPS, **Technische Universität Darmstadt
To disentangle the contributions of local and global temporal features of irrelevant speech in one's native and a non-native language, locally time-reversed speech was employed in the irrelevant speech effect (ISE) paradigm. ISE experiments were performed with German native listeners (n = 79) and with Japanese native listeners (n = 81), employing both German and Japanese speech with each sample. The results showed that the irrelevant speech effect worked differently for one's native and non-native languages.
4. Dynamic processing of visual information in the retina
Masao TACHIBANA*
*Ritsumeikan University
Although the whole retinal image is constantly in motion because of eye, head, and body movements, it is not fully understood how the retina processes global motion images. Applying electrophysiological techniques to the isolated goldfish retina, we recorded the responses of ganglion cells (GCs) to a moving target accompanied by global motion, which simulated a saccade following a period of fixational eye movements. Detailed analyses revealed that the moving target modulated the receptive field properties and evoked synchronized and correlated firing among local clusters of specific GC subtypes, providing a novel concept for retinal information processing of global motion images during eye movements.
We will get together after the Seminar.
The 42nd Perceptual Frontier Seminar: Perception and Communication
Date and time: Wednesday, 15 August 2018, 16:00-18:15
Venue: Room 601 on the 6th Floor of Building 3, Ohashi Campus, Kyushu University, Fukuoka, Japan <http://www.design.kyushu-u.ac.jp/kyushu-u/english/access>
Language: English
Organizer: Yoshitaka NAKAJIMA (Kyushu Univ./ReCAPS)
16:05-16:30 A spectral-change factor related to sonority makes noise-vocoded Japanese speech intelligible
Takuya KISHIDA*, Yoshitaka NAKAJIMA**, Kazuo UEDA**, Gerard B. REMIJN**, and Seiya UMEMOTO***
*Department of Human Science, Kyushu University, **Department of Human Science/ReCAPS, Kyushu University, ***21st Century Program, Kyushu University
This study investigated the effects of factor elimination on the intelligibility of noise-vocoded Japanese speech. Stimuli were resynthesized from spectral-change factors obtained from 20-critical-band power fluctuations of running speech, using origin-shifted principal component analysis followed by varimax rotation. Sixteen native speakers of Japanese listened to noise-vocoded speech resynthesized from factors obtained in 2-, 3-, and 4-factor analyses, and reported what they heard. The results showed that eliminating a factor around 500-1500 Hz, which was highly correlated with sonority, resulted in lower mora identification than in conditions in which this factor was retained while the total number of factors was the same. When the number of factors was 2, however, mora identification was relatively low (around 30%) even when this “sonority” factor was used. The results indicate that 1) using 3 or more factors and 2) using the “sonority” factor are necessary requirements for synthesizing intelligible noise-vocoded speech from spectral-change factors.
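For readers unfamiliar with this kind of analysis, the Python sketch below illustrates the general idea of extracting a few spectral-change factors from critical-band power fluctuations with principal component analysis followed by varimax rotation. It is a minimal sketch under assumptions, not the authors' exact pipeline: the treatment of the origin shift, the use of random data, and all parameters here are illustrative only.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a loading matrix (bands x factors)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        tmp = rotated ** 3 - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ tmp)
        rotation = u @ vt
        var_new = np.sum(s)
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ rotation

def spectral_change_factors(band_power, n_factors=3):
    """band_power: (frames, n_bands) smoothed critical-band power fluctuations.
    Returns varimax-rotated loadings (n_bands x n_factors) and factor scores."""
    x = np.asarray(band_power, dtype=float)
    # Assumption: the "origin shift" is approximated here by simply not
    # mean-centering the data, so the origin stays at zero power.
    cov = x.T @ x / len(x)                      # uncentered covariance-like matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    rotated = varimax(loadings)
    scores = x @ rotated                        # factor scores over time
    return rotated, scores

# Demo with random data standing in for 20-band power fluctuations of running speech.
rng = np.random.default_rng(0)
demo = rng.random((500, 20))
loadings, scores = spectral_change_factors(demo, n_factors=3)
print(loadings.shape, scores.shape)             # (20, 3) (500, 3)
```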
16:30-16:45 Multivariate acoustic analysis of initial consonant clusters in English
Yixin ZHANG*, Yoshitaka NAKAJIMA**, Xiaoyang YU*, Kazuo UEDA**, Takuya KISHIDA***, Gerard B. REMIJN**, Sophia ARNDT****, and Mark A. ELLIOTT****
*Human Science Course, Graduate School of Design, Kyushu University, **Department of Human Science/ReCAPS, Kyushu University, ***Department of Human Science, Kyushu University, ****School of Psychology, National University of Ireland, Galway
Two-consonant clusters at the initial positions of English syllables were analyzed, together with the following vowels, in terms of spectral changes. In a 3-factor analysis, a factor closely related to frequency components in a range around 1100 Hz and another factor related to components near or above 3300 Hz appeared. These two factors were mutually exclusive. In the consonant clusters, the factor score of one or the other changed in one direction.
16:45-17:00 An acoustic analysis of preposition phrases in English
Xiaoyang YU*, Yoshitaka NAKAJIMA**, Yixin ZHANG*, Takuya KISHIDA***, and Kazuo UEDA**
*Human Science Course, Graduate School of Design, Kyushu University, **Department of Human Science/ReCAPS, Kyushu University, ***Department of Human Science, Kyushu University
The present study examined how spectral-change factors behave in English preposition phrases consisting of prepositions and noun phrases. Spectral changes of spoken English sentences in our new database were subjected to origin-shifted factor analysis. One of the factors was related to a frequency range above 3300 Hz, and its factor scores tended to be higher in the noun phrases than in the prepositions. Frequency components above 3300 Hz may play an important role in clarifying noun phrases perceptually.
17:10-17:25 Shimeng LIU*
*Graduate School of Design, Kyushu University
17:25-17:40 Effect of sound on memorization of visual images
Natalia POSTNOVA*, Shin-ichiro IWAMIYA**, and Gerard B. REMIJN***
*Human Science Course, Graduate School of Design, Kyushu University, **Department of Communication Design Science, Kyushu University, ***Department of Human Science/ReCAPS, Kyushu University
A series of experiments on the effect of sound on the memorization of visual images has been conducted. Our recent experiment replicated and confirmed the previous findings (Postnova & Iwamiya, 2017) under more controlled conditions. The experiment consisted of a memory-span test, in which sequences of black-and-white visual images were presented under three different conditions. When half of the images were accompanied by a sound, the number of correctly recalled images was significantly higher (p < 0.01) for images with sound than for those without.
17:40-18:05 Principal component analysis of public speaking performance by English native speakers
Yuko YAMASHITA*
*Shibaura Institute of Technology
The aim of the present study was to quantitatively explore the characteristics of English-language public speaking. Fifteen native speakers of English studying at a university in Ireland participated in the study. They read aloud two scripts to a small audience while being audio-recorded. Speech rate, the durations of speech pauses, the durations of speech units, and the coefficient of variation of the pause durations were obtained as objective indices. The results will be discussed in the talk.
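As a rough illustration of these objective indices, the short Python sketch below computes them from already-segmented speech-unit and pause durations. The abstract does not specify the segmentation method, the thresholds, or the exact definition of speech rate, so the inputs, the function name, and the per-total-time definition of speech rate used here are assumptions for illustration only.

```python
import numpy as np

def speaking_indices(speech_unit_durations, pause_durations, n_syllables):
    """Durations in seconds; n_syllables is the syllable count of the read script."""
    units = np.asarray(speech_unit_durations, dtype=float)
    pauses = np.asarray(pause_durations, dtype=float)
    total_time = float(units.sum() + pauses.sum())
    return {
        # Assumed definition: syllables per second of total elapsed time.
        "speech_rate": n_syllables / total_time,
        "mean_pause_duration": float(pauses.mean()),
        "mean_speech_unit_duration": float(units.mean()),
        # Coefficient of variation = standard deviation / mean of pause durations.
        "cv_of_pauses": float(pauses.std() / pauses.mean()),
    }

# Hypothetical example values:
print(speaking_indices([2.1, 1.8, 2.5, 2.0], [0.4, 0.6, 0.3], n_syllables=70))
```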