The periphrastic anticipatory effect: An fMRI study of linguistically driven anticipatory activity of posterior brain areas in causal representation

Causal relationships can be either direct (e.g., when one ball strikes another) or indirect (e.g., when one ball strikes an intermediary object that then strikes a second ball). Whereas it has been hypothesized that direct causal relationships are detected automatically by visual brain regions, semantic representations have been shown to mediate the perception of indirect causal relationships. Experimental psycholinguistic research has shown that lexical sentences such as ‘the orange ball moves the purple ball’ tend to describe direct causal events exclusively, whereas periphrastic sentences such as ‘the orange ball causes the purple ball to move’ describe either direct or indirect causal events. Thus, the periphrastic structure might confer a semantic advantage in the representation of complex causal relationships. This advantage might be instantiated by top-down influences from frontal brain regions on parietal and posterior visual areas. Using functional magnetic resonance imaging, we aimed to identify the neural substrates underlying this hypothetical semantic advantage of the periphrastic causative structure while participants read periphrastic and lexical instructions. Greater activity in the frontal cortex, precuneus, and the secondary visual area was observed when participants read the periphrastic instruction compared to the lexical instruction. These findings are interpreted as reflecting anticipatory activity of visual areas modulated by frontal top-down influences of the semantic representation elicited by the periphrastic causative structure.


INTRODUCTION
Apprehending the causal structure of the world confers an adaptive advantage by allowing individuals to predict and control their environment (Tolman & Brunswik, 1935). Such causal structures may be either direct or indirect. Direct causation describes situations in which two actors directly interact, or interact via an enabler (an intermediate object such as a tool). For example, if a car knocks down a tree, no intermediate object interferes between the two actors (the car and the tree), and this situation is a direct causal event. Indirect causation describes situations in which two actors interact via a non-enabling intermediate object. For example, if a car strikes a tree, the tree falls down, strikes a window, and the window breaks, this event includes a non-enabling intermediary (the falling tree) and is thus an indirect causal event. Psycholinguistic research has shown that direct and indirect causal events are related to specific linguistic structures (Wolff & Zettergren, 2002; Wolff, 2003, 2007; Song & Wolff, 2003; Wolff & Song, 2003; Wolff, Klettke, Ventura & Song, 2005), and that language could mediate the representation of causal events. Several studies have examined the neural correlates of the perception of direct causal events. The neural basis of the relationship between language and causal representations, however, has scarcely been reported (Limongi Tirado, Habib, Young & Reinke, submitted). The purpose of this study was to examine, with functional magnetic resonance imaging (fMRI), the neural underpinnings of anticipatory linguistic representations of perceptual visual causal events.
To examine causality, researchers have sometimes adopted Michotte's launching paradigm (Thines, Costall & Butterworth, 1990; Kerzel, Bekkering, Wohlschläger & Prinz, 2000; Scholl & Tremoulet, 2000; Scholl & Nakayama, 2002; Choi & Scholl, 2004; Young, Rogers & Beckmann, 2005). In this paradigm, a shape (often a square or circle) moves across a computer display on a straight path until it strikes a stationary shape, at which point the first shape stops moving and the second shape begins to move along the same trajectory away from the first shape (Figure 1). Though the Michottean launching paradigm does not cover all the possible causal events humans might encounter, it allows researchers to reveal basic properties of causal representation. The paradigm can be adapted to examine indirect causal events by inserting various forms of intermediate objects between the two moving shapes while simultaneously controlling for the spatiotemporal contiguities that are critical to judging causality (Young & Falmier, 2008).
A main question in the study of causal knowledge is whether judgments about causality depend solely upon incoming sensory information or whether higher-order inferential processes and attention are also necessary. It has been hypothesized that direct causal events can be supported solely by perceptual, or bottom-up, processes. Blakemore, Fonlupt, Pachot-Clouard, Darmon, Boyer and Meltzoff (2001) provided evidence in support of this position by demonstrating with fMRI that perception of direct causal events in a Michottean paradigm resulted in increased activity in V5/MT, the superior temporal sulcus (STS), and the left intraparietal sulcus (LIPS). They hypothesized that these activations were independent of attentional processes and concluded that the visual system alone could support automatic bottom-up causal perception. In a follow-up analysis of the same data, Fonlupt (2003) reported greater bilateral activation in the superior frontal gyrus while participants made judgments about causal events relative to merely viewing them. Based on these results, Fonlupt (2003) suggested that two different systems process direct causal information. Initially, the visual system and associated areas (V5/MT, STS, and LIPS) automatically detect the causal structure of a stimulus. At this point in processing, the visual percept represents only the spatiotemporal contiguities of the stimulus. The participation of the superior frontal gyrus would then elucidate the causal nature of the percept.
By manipulating the spatiotemporal contiguities of direct causal events, Fugelsang, Roser, Corballis, Gazzaniga and Dunbar (2005) devised a task to contrast brain activity during three conditions: direct events (i.e., Michottean launching), spatial discontiguity, and temporal discontiguity. The spatial discontiguity condition included a spatial gap between the two colliding objects while keeping the temporal succession between them. The temporal discontiguity condition consisted of a delay in the movement onset of the second object while maintaining spatial contiguity. Using fMRI, Fugelsang et al. (2005) observed lateralized right posterior regions involved in detecting the spatiotemporal contiguities of direct causal events (e.g., during Michottean launching). Specifically, the right inferior parietal lobule (RIPL) was hypothesized to be involved in processing the temporal properties of the causal event, whereas the right middle temporal gyrus (RMTG) was hypothesized to process the spatial properties. Additionally, these authors observed that the detection of causal events, as opposed to their spatiotemporal contiguities, involves top-down processing that was independent of judging causality and was mediated by frontal brain regions including the right superior (RSFG) and middle (RMFG) frontal gyri and the right precentral gyrus (RPG). These results suggest that temporal and parietal areas might represent the spatiotemporal properties of causal events, whereas frontal areas might integrate such information to form a causal representation.
The studies described thus far have examined direct causal structures. As previously mentioned, however, causal representations may also be indirect and may be mediated by linguistic representations. Specifically, because indirect causal structures are necessarily more complex than direct causal structures, they might benefit from linguistic mediation. Experimental psycholinguistic research differentiates the linguistic structures that humans use to describe complex or indirect events. Two such linguistic structures are lexical causative and periphrastic causative sentences. A lexical causative sentence involves one clause with a transitive verb, such as 'Katrina struck New Orleans'. A periphrastic causative sentence contains multiple clauses and includes causative verbs (e.g., cause, have). An example of a periphrastic causative sentence is 'a supply shortage causes the gas prices to rise'. It has been suggested that while lexical structures refer to direct causal events, periphrastic structures can describe either direct or indirect causal structures (Wolff, 2003). Therefore, periphrastic structures may offer an advantage over lexical structures in describing indirect or complex causal events. Using a two-stage visual causal judgment task to examine the activity of the ventrolateral prefrontal cortex during causal judgments of visual events, Limongi Tirado et al. (submitted) found that Broca's area (BA 44 and BA 45) was more strongly activated when participants evaluated visual causal and non-causal events after reading a verbal instruction encoding a periphrastic structure than after reading an instruction encoding a lexical structure. From these results, they concluded that differences in the semantic representation of periphrastic and lexical causatives were related to differential activity in language-related areas during causal judgments.
Although the results reported by Limongi Tirado et al. (submitted) provide initial evidence supporting the hypothesis that lexical and periphrastic causatives elicit differential neural activity, their results refer only to the differential activity found in frontal language-related areas (i.e., pars triangularis and pars opercularis) during the actual judgments of visual events. In this work, we expand their findings by testing the hypothesis that differences in the semantic representation of periphrastic and lexical causatives are mapped onto differential neural activity during the reading of the verbal instructions. Those differences would be associated with differential neural recruitment related to top-down anticipatory attentional control before the actual evaluation of the events (i.e., during the reading of the verbal instructions). Specifically, we hypothesize that attention-related prefrontal areas, as well as the parietal and occipital areas necessary for the detection of direct and indirect visual causal events, would be more strongly activated during the reading of periphrastic causative instructions than during the reading of lexical causative instructions. To test this hypothesis, we analyzed the blood oxygen level dependent (BOLD) response collected in the first stage of the task reported by Limongi Tirado and coworkers.

METHOD

Participants
Fourteen right-handed normal volunteers (ages 18-36; 8 males, 6 females) each received a $25 gift card for participation. Three participants were excluded: two for computer failure and one for excessive head movement. Participants signed informed consent forms, and the study was approved by the Human Subjects Committee of Southern Illinois University Carbondale.

Materials
Judgment of causal events was assessed by creating three different forms of computer animation based upon the Michottean launching paradigm: 1) direct causation (DC), 2) indirect causation (IC), and 3) non-causal (NC). Each animation consisted of two balls: an orange ball at the left of the screen and a purple ball in the middle of the screen. In the IC and NC conditions, a blue cylinder lay equidistant from the two balls along the horizontal path between them. At the beginning of each animation sequence, the orange ball began to move to the right. In the DC condition, the orange ball would 'strike' the purple ball, at which point the orange ball would stop moving and the purple ball would begin moving to the right. In the IC condition, the orange ball would 'strike' the blue cylinder and stop moving. The blue cylinder would then begin moving towards the stationary purple ball. The cylinder would then 'strike' the purple ball and come to rest, at which point the purple ball would begin moving to the right. In the NC condition, the orange ball and the blue cylinder were located below the vertical position of the purple ball. As in the IC condition, the orange ball would strike the blue cylinder, at which point it would come to rest and the blue cylinder would begin moving to the right. When the right edge of the blue cylinder reached the same horizontal position as the left edge of the purple ball (even though they were not on the same vertical plane), the blue cylinder would come to a stop and the purple ball would begin to move to the right. The original visual animations can be found at http://bcs.siuc.edu/facultypages/young/youngHome.html.
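The launching logic of the DC and IC animations can be sketched as a simple one-dimensional simulation. This is an illustrative sketch, not the authors' stimulus code: the positions, speed, and time step are assumptions chosen for clarity, and the real animations also differ in vertical layout for the NC condition.

```python
# Minimal 1-D sketch of the Michottean launching animations.
# All numeric parameters (positions in px, speed in px/s) are
# illustrative assumptions, not the actual stimulus values.

def simulate(condition, speed=100.0, dt=0.01):
    """Return the time (s) at which the purple ball is launched.

    condition: 'DC' (orange strikes purple directly) or
               'IC' (orange strikes a cylinder, which strikes purple).
    """
    orange, cylinder, purple = 0.0, 150.0, 300.0  # assumed start positions
    t, mover, pos = 0.0, 'orange', orange
    target = purple if condition == 'DC' else cylinder
    while True:
        pos += speed * dt
        t += dt
        if mover == 'orange' and pos >= target:
            if condition == 'DC':
                return t                    # purple launched directly
            # hand motion off to the intermediary
            mover, pos, target = 'cylinder', cylinder, purple
        elif mover == 'cylinder' and pos >= target:
            return t                        # purple launched via intermediary
```

Note that because the total distance traveled is the same in both conditions, the purple ball is launched at the same moment in DC and IC, consistent with the authors' point that the paradigm controls the spatiotemporal contiguities across conditions.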

Procedure
The study used a 2 (Verbal Instruction: lexical vs. periphrastic) × 3 (Animation: DC, IC, NC) repeated measures design. A total of 12 three-minute fMRI runs were carried out. During each run, the order of verbal instructions (lexical vs. periphrastic) was randomized. Within each verbal instruction, the order of the three causal animations (DC, IC, and NC) was also randomized. Thus, participants would receive all animations under one verbal instruction before receiving all animations under the second verbal instruction. Each individual trial lasted 27 seconds (Figure 2) and consisted of the following sequence of events: (1) general instructions (2 s), (2) lexical or periphrastic task instructions (7 s), (3) one of three animation sequences (2 s) repeated six times with a 500 ms blank period after each repetition (15 s), and (4) a response window (3 s).
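The stated phase durations can be checked arithmetically. The sketch below assumes the reading under which the 500 ms blank follows each of the six animation repetitions, which is the only reading that makes the animation phase sum to the stated 15 s.

```python
# Check that the trial phases sum to the stated 27 s.
general = 2.0                          # general instructions
instruction = 7.0                      # lexical or periphrastic instruction
animation_phase = 6 * 2.0 + 6 * 0.5    # six 2-s animations, 500 ms blank after each = 15 s
response = 3.0                         # response window

trial = general + instruction + animation_phase + response
print(trial)  # 27.0
```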
Each trial began with the presentation of the general instructions. Participants were instructed to carefully read the lexical or periphrastic instruction that was to follow and to respond after the sixth presentation of the animation. Following the general instruction, participants received either the lexical or the periphrastic instruction. The lexical instruction stated 'Judge whether the orange ball moves the purple ball'. The periphrastic instruction stated 'Judge whether the orange ball causes the purple ball to move'. Participants then viewed one of the three animations six times. Following the sixth presentation of the animation, participants were given 3 s to respond to the lexical or periphrastic instruction. They were required to respond 'yes' or 'no' by pressing the right index or middle finger button of a response pad, respectively. The entire trial period was scanned.
Animations were presented at the center of an MRI-compatible LCD screen (IFIS-SA, InVivo, Orlando, FL). The LCD screen was attached to the back of a standard MRI head coil. Participants viewed the LCD screen via a mirror placed directly above their eyes. Responses were recorded with MRI-compatible response buttons.
Imaging data were analyzed with SPM5 (Wellcome Department of Cognitive Neurology, London, UK, http://www.fil.ion.ucl.ac.uk/spm/) implemented in Matlab 6.51 (Mathworks, Natick, MA). Functional EPI volumes were 1) slice-time corrected for acquisition order, 2) realigned and motion corrected to the first image of the session, 3) normalized to a common template (Montreal Neurological Institute EPI template), 4) resliced to 2 × 2 × 2 mm voxels, and 5) spatially smoothed with a 10 mm Gaussian filter. A 128 s high-pass filter was applied to each time course in order to eliminate low-frequency noise. Data analysis was performed in the Laboratory of Neurophysiology at the Venezuelan Institute for Scientific Research. fMRI data are available upon request to the authors.
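SPM implements the 128 s high-pass filter by regressing a discrete cosine basis out of each voxel time course; components with periods longer than the cutoff are removed. The sketch below (an illustrative reimplementation, not the authors' SPM code; the repetition time and series length are assumptions) shows the idea.

```python
# Illustrative DCT high-pass filter in the spirit of SPM's 128-s cutoff.
import numpy as np

def dct_highpass(y, tr, cutoff=128.0):
    """Remove drifts slower than `cutoff` seconds from time course y.

    y  : 1-D voxel time course (one value per volume)
    tr : repetition time in seconds
    """
    n = len(y)
    # number of DCT-II regressors whose period 2*n*tr/j exceeds the cutoff
    k = int(np.floor(2.0 * n * tr / cutoff)) + 1
    t = np.arange(n)
    X = np.array([np.cos(np.pi * (t + 0.5) * j / n) for j in range(1, k)]).T
    if X.size == 0:
        return y.copy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the slow components
    return y - X @ beta                           # and subtract them

# usage with an assumed TR of 2 s and 200 volumes
n, tr = 200, 2.0
t = np.arange(n)
drift = np.cos(np.pi * (t + 0.5) * 1 / n)    # one very slow cycle (period 800 s)
fast = np.cos(np.pi * (t + 0.5) * 50 / n)    # fast signal (period 16 s)
clean = dct_highpass(drift + fast, tr)       # drift removed, fast signal kept
```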
Single-subject statistical contrasts were created using the general linear model. Conditions of interest (the main effects of verbal instruction and causality, and their two-way interaction) were modeled using a canonical hemodynamic response function. Here, we focused the analysis on the 7-second verbal instruction phase. Group comparisons were created using a random effects model. All contrasts were thresholded at p < 0.001, uncorrected for multiple comparisons. All coordinates are presented in the Talairach and Tournoux (1988) coordinate system.
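Modeling the 7-second instruction phase with a canonical hemodynamic response function amounts to convolving a boxcar over that phase with a double-gamma HRF. The sketch below illustrates this; the gamma parameters follow common defaults and the onset time is taken from the trial structure, but this is a schematic, not the authors' SPM design matrix.

```python
# Sketch: GLM regressor for the 7-s instruction phase as a boxcar
# convolved with a canonical double-gamma HRF (common default
# parameters: peak ~6 s, undershoot ~16 s, peak/undershoot ratio 6).
import numpy as np
from math import gamma as gamma_fn

def hrf(t, p1=6.0, p2=16.0, ratio=6.0):
    """Canonical double-gamma HRF sampled at times t (s)."""
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / gamma_fn(a)
    return g(t, p1) - g(t, p2) / ratio

dt = 0.1
t = np.arange(0, 30, dt)
# instruction phase: 7 s, starting after the 2-s general instruction
boxcar = ((t >= 2.0) & (t < 9.0)).astype(float)
regressor = np.convolve(boxcar, hrf(np.arange(0, 32, dt)) * dt)[:len(t)]
```

The resulting regressor is zero before the instruction onset and peaks several seconds after it, reflecting the hemodynamic lag that the GLM accounts for.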

RESULTS

Behavioral
Figure 3 shows the behavioral performance on the causal decision task, which replicates previous psycholinguistic findings and the predictions reported by Limongi Tirado et al. (submitted). The lexical verbal instruction elicited higher proportions of positive responses to direct causation than to indirect causation. In contrast, no difference in the proportions of positive responses between direct and indirect causation was detected under the periphrastic instruction. There was a significant main effect of animation, F(2, 6) = 36.48, p < .001, no main effect of verbal instruction, F(1, 3) = 4.09, p = 0.13, and a marginal Animation × Verbal Instruction interaction, F(2, 6) = 3.93, p = 0.08. A planned contrast between the lexical and periphrastic conditions during the judgment of the IC event was performed to assess the specific simple effect of the verbal instruction. This contrast yielded a significant effect of verbal instruction during the processing of the IC animation: the mean proportion of 'yes' responses was significantly higher in the periphrastic condition (M = .96, SE = .03) than in the lexical condition (M = .53, SE = .07), F(1) = 11.65, p < .01.
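The planned contrast above is a one-degree-of-freedom within-subject comparison, for which F equals the square of a paired t statistic. The sketch below illustrates the computation on hypothetical per-subject proportions (the study's subject-level data are not reproduced here).

```python
# Sketch of a 1-df planned contrast as a paired t test (F = t**2).
# The per-subject proportions below are hypothetical, chosen only
# to match the reported pattern (periphrastic >> lexical on IC trials).
import numpy as np

def paired_f(a, b):
    """Return (F, error df) for a 1-df within-subject contrast."""
    d = np.asarray(a) - np.asarray(b)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t ** 2, n - 1

peri = [0.95, 1.00, 0.90, 1.00, 0.95, 0.92, 1.00, 0.96, 0.98, 0.94, 1.00]
lex  = [0.50, 0.60, 0.40, 0.55, 0.65, 0.45, 0.50, 0.60, 0.55, 0.48, 0.52]
F, df = paired_f(peri, lex)  # large F: periphrastic > lexical on IC trials
```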

fMRI
During the verbal instruction phase, activity was greater for the periphrastic representation of causality in the left precuneus (BA 7/31), the left middle occipital gyrus (BA 18), and the left middle frontal gyrus (BA 8; see Table 1 and Figure 4). No regions were more active for the lexical representation relative to the periphrastic representation.

DISCUSSION AND CONCLUSIONS
The main purpose of this study was to examine the impact of the semantic representation of the periphrastic causative structure on anticipatory non-language neural activity in preparation for the detection of direct and indirect causal events. The prefrontal cortex is thought to exert top-down control over visual representations of different types (Miller & Cohen, 2001; Ullman, 2004; Herd, Banich & O'Reilly, 2006). Specifically, active representations associated with prefrontal activity, such as the middle frontal gyrus activity reported here, are critical for resolving conflict in decision-making tasks (Liu, Banich, Jacobson & Tanabe, 2004; Fugelsang & Dunbar, 2005; Herd et al., 2006).
Previous research has implicated posterior visual areas in anticipatory activity. For example, in spatial cueing tasks, neural activity in the topographic region corresponding to the cued visual location is enhanced prior to the presentation of the target stimulus (Pessoa, Kastner & Ungerleider, 2003; Heeger & Ress, 2004; Pessoa & Ungerleider, 2004; Sylvester, Shulman, Jack & Corbetta, 2007). Stimuli falling within this region are processed more accurately. This implies that anticipation of the target results in increased activity in the topographic area, causing more efficient processing of stimuli that fall in that region.
Accordingly, the current results lead us to propose that the periphrastic semantic representation, as opposed to the lexical representation, confers an advantage for causal detection by recruiting brain regions associated with the dorsal top-down attentional control system before the processing of the actual visual event. Here, the semantic representation of the periphrastic verbal instruction might have primed the system, by activating posterior visual areas, to evaluate the spatiotemporal structure of the animations. This anticipatory activity might have been initiated by feedforward influences from the frontal cortex that underlie top-down attentional control (Pessoa et al., 2003; Pessoa & Ungerleider, 2004; Fox, Corbetta, Snyder, Vincent & Raichle, 2006; Buracas & Boynton, 2007; Gabrieli & Whitfield-Gabrieli, 2007; Gomez, Flores & Ledesma, 2007).
A recent study by Fugelsang and Dunbar (2005) supports the present findings. In their study, participants performed a two-stage task. In the first stage, they read linguistic narratives of complex causal chains classified as either plausible or implausible causal theories. In the second stage, they observed cartoon-like causal events and ranked the causal effectiveness of the events based on the previous narratives. Fugelsang and Dunbar found that the left superior (BA 9) and right inferior (BA 45/47) frontal gyri were activated during the processing of plausible (i.e., causal) theories as opposed to implausible (less causal) theories. Moreover, the precuneus (BA 7) and occipital (BA 17/18) visual areas were also activated. It is worth noting that in our work the activity in the secondary visual area (middle occipital cortex, BA 18) could have been elicited by differences in the physical features of the two instructions. Although the detection of physical features of words is more likely to recruit activity in the primary visual cortex (inferior occipital area, BA 17) than in the secondary visual area, this alternative explanation should be considered when interpreting our results.
In conclusion, judging the causal nature of complex information requires the simultaneous representation of the spatiotemporal features of the visual event (as reflected in the recruitment of posterior occipital, temporal, and parietal regions) and the linguistic representation of the verbal instructions, as reported previously. These two representations might come into conflict, which can be anticipated by the top-down attentional system, as shown in this work. Within this context, the semantic representation of periphrastic causative structures would fulfill the neural processing demands imposed by the representation of causal knowledge. In future research, it would be interesting to address how the parietal and occipital anticipatory activity elicited during the reading of the periphrastic verbal instruction is functionally related to the left ventrolateral prefrontal activity elicited during the actual judgments of causal events reported previously.

Figure 2. Timeline of the causal judgment task as reported by Limongi Tirado et al. (submitted). Participants read either the lexical (left) or the periphrastic (right) verbal instruction. Next, they observed six consecutive presentations of one of three animations: direct causal (left), indirect causal (middle), or non-causal (right). Finally, they made judgments resulting from the integration of the semantic representation elicited by the verbal instruction and the perceptual representation elicited by the visual event. Data reported in this study correspond to the 7-second block of the verbal phase. The general instruction at the beginning of each trial (2 s) is not shown.

Figure 3. Mean proportions of 'yes' responses to the lexical and periphrastic instructions during the scanning sessions (Limongi Tirado et al., submitted).

Figure 4. Regional activity elicited during the reading of the periphrastic verbal instruction compared to the lexical instruction. Coactivation of prefrontal and occipital areas reflects anticipatory preparation for the incoming visual animation.

Table 1. Peak activations in regions associated with the processing of causal and non-causal events during the verbal instruction phase, p < .001, uncorrected.