Stephen Wilson 0:06 Welcome to Episode 14 of the Language Neuroscience Podcast. I'm Stephen Wilson and I'm a neuroscientist at Vanderbilt University Medical Center in Nashville, Tennessee. My guest today is Ina Bornkessel-Schlesewsky. Ina is a professor of cognitive neuroscience at the University of South Australia in Adelaide. She does extremely interesting work on language in the brain using fMRI and EEG, and I've been following her work for a long time. Her work is cross-linguistic and reveals common principles as well as intriguing differences between the ways that different human languages are processed by the brain. Today, we're going to talk about her work on argument structure, predictive coding, and dorsal and ventral streams of language processing. Okay, let's get to it. Hi, Ina, how are you?
Ina Bornkessel-Schlesewsky 0:46 Yeah, well, thanks Stephen. How are you?
Stephen Wilson 0:48 Yeah, so it's very early in the morning in Adelaide, right?
Ina Bornkessel-Schlesewsky 0:53 Yeah, not too bad. Just after seven o'clock.
Stephen Wilson 0:55 And are you always an early riser?
Ina Bornkessel-Schlesewsky 0:58 Yeah, this tends to be the time of morning where I can, you know, get some quiet work done before the madness of the day starts. So yeah, it's a good time for me.
Stephen Wilson 1:09 And you were telling me on email that you all live on a farm of about seven acres?
Ina Bornkessel-Schlesewsky 1:17 Yeah, so it's not a farm per se. It's mostly bushland, so we get to have a bit of land, and then, in, I guess, a very stereotypical Australian way, we've got lots of kangaroos and koalas and birds, and so on and so forth. So yeah, it's very nice.
Stephen Wilson 1:38 I hope we pick up a bit of Australian bird sound in the background, with any luck.
Ina Bornkessel-Schlesewsky 1:44 There are actually some birds in the background, but they're probably too quiet for you to hear. I just had a few lorikeets flying past.
Stephen Wilson 1:52 Oh lovely! (Laughter) So I only learned that you were Australian when I met you at a conference, maybe ten-ish years ago. I thought you were German before that. But what's the actual answer, or is it just a yes-and-yes kind of answer?
Ina Bornkessel-Schlesewsky 2:12 I guess so. I mean, it's always a hard one, as I guess is probably also the case for yourself now, having been in the US for so long.
Stephen Wilson 2:20 Ah, no! (Laughter)
Ina Bornkessel-Schlesewsky 2:20 But, um, no. (Laughter) I mean, for me, the story was basically that I was born in Germany, so you were right about that, but then moved to Australia with my mum when I was seven. So I basically spent most of my childhood years and all my teenage years in Tasmania, in fact, and then I moved back to Germany to go to university and started my career there. And then I came back in 2014. So that's the short version. (Laughter)
Stephen Wilson 2:55 Right, yeah. So you've really kind of been twice in each country, I guess.
Ina Bornkessel-Schlesewsky 3:00 Yeah, pretty much, pretty much.
Stephen Wilson 3:04 And did you learn English for the first time when you moved to Australia when you were seven, or had you learned it already in Germany?
Ina Bornkessel-Schlesewsky 3:11 I hadn't learned very much. We used to go to Ireland for summer holidays, and so I picked up a few words there.
But I hadn't had any formal instruction at school, which, retrospectively, I think was probably a good thing, because it allowed me to learn just through immersion in the first, you know, two or three months, and then that was sort of done. (Laughter) I was lucky.
Stephen Wilson 3:36 Apparently it was successful, because you sound like an Aussie. (Laughter)
Ina Bornkessel-Schlesewsky 3:41 Yeah, I think I've gotten a little bit of the twang back since I've moved back.
Stephen Wilson 3:45 Right.
Ina Bornkessel-Schlesewsky 3:45 I think a lot of it got washed out when I was living in Europe for a long time, and yeah, I think it's maybe broadened out a little bit again.
Stephen Wilson 3:53 Yeah. And do you remember that time? Like, do you have vivid memories of being a seven-year-old in a new country where you didn't speak the language?
Ina Bornkessel-Schlesewsky 4:04 I have a few memory fragments from primary school. So, from things happening that I didn't understand, because I didn't speak the language, or, you know, me trying to pick words out of a dictionary. And there's this one really weird one, where I realized retrospectively that I was actually trying to find a pronoun, I think, in the dictionary to try to translate something, and I translated it as "pron", because I didn't realize (Laughter) that that was the word category. It may not have been until I started studying linguistics that it actually occurred to me that that's what happened, and that's why no one knew what I was talking about. But it was also just such a different school experience, because my year three teacher, who was my first teacher there, rescued injured animals. So we had injured baby wallabies and wombats and all sorts of interesting creatures in our classroom. So that was...
Stephen Wilson 5:07 Oh, wow!
Ina Bornkessel-Schlesewsky 5:08 That was very, very different to the primary school that I went to in Germany for the first two years. (Laughter) But thankfully, I was still young enough, I guess, to be able to pick up the language pretty quickly, and then it all started to make sense and I knew what was going on. So... (Laughter)
Stephen Wilson 5:26 Right. So you learned English by talking to injured wombats. That's what you're saying? (Laughter)
Ina Bornkessel-Schlesewsky 5:32 Sort of, or talking to other people in the vicinity of injured wombats, maybe?
Stephen Wilson 5:38 Right. But anyway, so that sort of multilingual upbringing, did that end up playing a part in the career that you ended up developing?
Ina Bornkessel-Schlesewsky 5:51 I think, yeah, I think it definitely got me interested in language. I mean, when I was finishing school, I had a range of things that I was interested in. I had no idea really what I wanted to study; it ranged from chemistry to economics to what have you. But language was always something that I found really fascinating, and I think that was probably the case, at least to a certain extent, because I'd been thinking about it a bit more consciously than I otherwise would have, because of that experience. So, yeah.
And then at some point, and I can't quite remember when this happened, my mum also came across Steven Pinker's 'The Language Instinct' and gave me a copy of that. So I devoured that, and then...
Stephen Wilson 5:51 Yeah.
Ina Bornkessel-Schlesewsky 5:52 Yeah, and then it was that interest in language, plus, I guess, the idea, after reading that book, that by studying linguistics I could combine my interest in the humanities with something a little bit more analytical as well, which was something else that I was trying to do. I always had trouble choosing between the sciences and the arts, I guess, to a certain extent. So yeah, language seemed to combine that quite nicely.
Stephen Wilson 7:09 Yeah. Did you study linguistics as an undergrad in Germany?
Ina Bornkessel-Schlesewsky 7:14 I did, yeah. I actually started out studying computational linguistics, because I had this vision, and it's really sort of weird to go back and think about it now, but I had this idea that, you know, if we could build a computer model of how the human brain processes language, then we'd have a good idea of how the human brain processes language. That was actually my understanding of what I wanted to do going into undergraduate linguistics at the University of Potsdam. Then, after a few semesters, I actually realized that computational linguistics, at least as it was being taught, was a little too, let me say, engineering-focused. I thought, I'm not really interested in building speech recognition systems for large car companies if the way in which those systems are built has nothing to do with how the human brain actually processes speech. And then I switched into the general linguistics stream and became interested in psycholinguistics and language in the brain. That's how that switch happened, after a few semesters.
Stephen Wilson 8:20 Okay, so you were initially interested in language in the brain from the get-go, but you just weren't quite sure what was the way to get there.
Ina Bornkessel-Schlesewsky 8:28 Yeah. (Laughter)
Stephen Wilson 8:28 Yeah, that's quite interesting.
Ina Bornkessel-Schlesewsky 8:30 I have to say, you know, those computational skills, doing some coding, all of that has stood me in good stead later on. So I'm really glad I did that. It's still something I enjoy quite a lot. Now I'm teaching data science to our own undergrads, which is not quite computational linguistics, but it does seem like it's come full circle to a certain extent.
Stephen Wilson 8:55 Yeah, it's amazing the way things just keep coming back from our early lives, you know. Like, my dad was a computer programmer who worked on satellite imagery, and I learned some of that from him as a teen, and I had no idea that five or ten years down the road I was going to use a lot of those concepts in working on MRI data analysis. So, after you finished your undergrad, where did you go next?
Ina Bornkessel-Schlesewsky 9:25 That's when I moved to the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, to take up my PhD studies. I'd actually already completed my master's project there while I was still in Potsdam, but then I was fortunate enough to be able to switch to the institute to do my PhD.
Stephen Wilson 9:51 Right, and whose lab did you work in at the Max Planck Institute?
Ina Bornkessel-Schlesewsky 9:56 I worked with Angela Friederici, both as a PhD student and then as a postdoc after that as well.
Stephen Wilson 10:04 Right. Well, you made a good choice, right? (Laughter)
Ina Bornkessel-Schlesewsky 10:08 Yeah, absolutely. It was a very, very fortuitous thing that happened to me at that time. Yeah, absolutely.
Stephen Wilson 10:14 Yeah. So I encountered some of your earlier papers back when I was probably an end-stage grad student, and your early papers are about argument structure, which is a topic that I'm really, really interested in, because of one of my undergrad teachers, Bill Foley, who was professor of linguistics at the University of Sydney. I don't know if you've ever met him?
Ina Bornkessel-Schlesewsky 10:39 Well, I've heard him speak. I haven't really met him.
Stephen Wilson 10:44 Yeah, he's great. He taught me about a lot of things, and he's really passionate about argument structure; he kind of got me very interested in it as an undergrad. So your papers always appealed to me because of that. And you have this paper that I kind of wanted to maybe focus on a bit, called 'Who did what to whom?', which is kind of argument structure in a nutshell.
Ina Bornkessel-Schlesewsky 11:10 Yeah.
Stephen Wilson 11:11 So could we start by having you explain what argument structure is? Like, why do we need it? Basically, how do the world's languages tell us who did what to whom? And then we'll talk about the neural basis of it.
Ina Bornkessel-Schlesewsky 11:27 (Laughter) Okay, that's a big question if we're extending it to the world's languages. But I guess, in a nutshell, the concept of argument structure refers to the fact that certain verbs are associated with certain requirements for the arguments that they go with, so that they impose restrictions on the participants, if you will, taking part in the event or state that's described by a sentence. There are different ways of representing argument structure; I guess what I focused on in my work was theories of thematic or semantic roles, how those are represented, and how they relate to each other. So the idea that there is actually a hierarchical structure to semantic roles, and the idea that we might be able to capitalize on this as we're actually comprehending language. I guess that was the starting point, and it is true that I was quite interested from early on in how this might take place in different languages. Interestingly enough, one of the reasons for that was actually an Australian guest professor who spent some time in Potsdam, Alan Dench from the University of Western Australia, who taught a course on Australian languages. So he talked a lot about ergative languages and about some of the super interesting phenomena that you find in Australian languages, and that just got me absolutely fascinated with how diverse, I guess, linguistic forms are all over the world. And, you know, the idea of how does our brain actually deal with that? We've got a language that's like English, but then we've also got a language that's like Warlpiri, or Mandarin Chinese, and they're so different. So...
Stephen Wilson 11:31 Right.
Ina Bornkessel-Schlesewsky 11:31 How does our brain cope with that in real time?
Stephen Wilson Yeah. So, I also really love ergativity, and was just very fascinated when I learned about it. But I'm kind of thinking that only a minority of listeners of the Language Neuroscience Podcast will know what ergativity is. So can we spell that out? Let's use an example sentence like 'the man speared the kangaroo'. Okay, so in English, if I say 'the man speared the kangaroo' versus 'the kangaroo speared the man', obviously that changes who did what to whom, and I accomplish that by word order. Can you tell us about ergativity and other ways that we can accomplish that in different languages?
Ina Bornkessel-Schlesewsky So when you started launching into that question, I thought, oh no, he's gonna ask me to explain ergativity on a podcast. (Laughter)
Stephen Wilson 14:22 And I did! (Laughter)
Ina Bornkessel-Schlesewsky 14:23 And you did! Okay, let's see how it goes. (Laughter) It is hard without a visual aid. I mean, in a way, ergativity refers to one way of aligning how different arguments are coded. So basically, in English, we have the subject of an intransitive verb being treated in the same way as the subject of a transitive verb. So if you say "he slept", where you only have that one participant, and "he speared the kangaroo", you have the "he"; you can really only see it in the pronouns in English, and in word order. But then if you look at the transitive object, you know, "the kangaroo speared him", then you have a different pronoun; you have the remnants of the accusative there. So the idea is that in a nominative-accusative system, which is what the English pronominal system has remnants of, and which languages like German or Japanese have, you're treating the only participant in an intransitive event like "sleep" in the same way as the more agent-like participant in a transitive event. Whereas in an ergative system, you're choosing the opposite alignment: the only participant in an intransitive event like "he slept" gets the same marking as the patient in a transitive event. And this is where there are no pronouns to show it in English.
Stephen Wilson 16:03 Yeah.
Ina Bornkessel-Schlesewsky 16:04 You know, in "he speared the kangaroo", the pronoun that you would represent "the kangaroo" with would have the same form as the "he" in "he slept". And I know that is going to sound awfully convoluted to someone who's never heard about this before, without, you know, those little diagrams where you show which arguments are treated the same.
Stephen Wilson 16:25 It's so crazy hard sometimes to talk about things without diagrams. This comes up in, like, every conversation that I have. But basically, in Warlpiri or any ergative language, it's almost like, you know, "he speared the kangaroo", but if you're going to say "he slept", you'd say something like "slept him". Or in other words, the single argument of the intransitive verb would be patterning just like the object of the transitive verb; that's called absolutive case. And then the subject of the transitive, which is "he", would be ergative case.
Ina Bornkessel-Schlesewsky 16:56 Right, exactly.
Stephen Wilson 16:57 But yeah, I don't know...
Ina Bornkessel-Schlesewsky 16:58 I think you should have just explained it in the first place rather than me. (Laughter)
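Since those little alignment diagrams came up, here is one in code form: an editorial sketch for readers following along, not anything from the conversation itself. The S/A/P labels are the standard typological shorthand (sole argument of an intransitive, agent-like argument of a transitive, patient-like argument of a transitive); Warlpiri is simplified, and split ergativity is set aside.

```python
# A minimal sketch of the two alignment systems discussed above, using the
# standard typological labels:
#   S = sole argument of an intransitive ("HE slept")
#   A = agent-like argument of a transitive ("HE speared the kangaroo")
#   P = patient-like argument of a transitive ("he speared THE KANGAROO")

def case_marking(argument, alignment):
    """Return the case an argument receives under a given alignment system."""
    if alignment == "nominative-accusative":   # e.g. English pronouns, German, Japanese
        return "nominative" if argument in ("S", "A") else "accusative"
    if alignment == "ergative-absolutive":     # e.g. Warlpiri (simplified; splits ignored)
        return "ergative" if argument == "A" else "absolutive"
    raise ValueError(f"unknown alignment: {alignment}")

for alignment in ("nominative-accusative", "ergative-absolutive"):
    print(alignment, {arg: case_marking(arg, alignment) for arg in ("S", "A", "P")})

# nominative-accusative groups S with A (both nominative),
# ergative-absolutive groups S with P (both absolutive): the crucial reversal.
```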
Stephen Wilson 16:58 Yeah, I don't know, I'm sorry.
Ina Bornkessel-Schlesewsky 17:06 I think what also makes this so fascinating is that if you haven't really thought about it, if you speak a nominative-accusative language and you hear about this for the first time, it just seems so strange, right?
Stephen Wilson 17:22 Yeah.
Ina Bornkessel-Schlesewsky 17:22 It seems so counterintuitive, because you think, wow, I never knew that a language would work like that, and I think that was the effect that it had on me.
Stephen Wilson 17:29 Yeah.
Ina Bornkessel-Schlesewsky 17:30 And I thought, whoa.
Stephen Wilson 17:33 Because it makes you realize that your whole notion of a subject is problematic. To us, it just seems so natural that in "he slept", the "he" role is the same as in, you know, "he speared the kangaroo". It seems so natural, and yet in ergative languages that's not the alignment; the argument of the intransitive is aligned with the object of the transitive. That's why I got excited about argument structure.
Ina Bornkessel-Schlesewsky 18:02 Right, yeah, and then you get the split ergative languages as well, which have multiple patterns. So yeah, absolutely.
Stephen Wilson 18:09 Yeah.
Ina Bornkessel-Schlesewsky 18:09 Yeah, anyway...
Stephen Wilson 18:10 Split ergativity, look it up. We won't talk about that today, but it is so interesting. I am just really trying to restrain myself right now from saying something further about ergative languages. But okay, let's talk about the brain and argument structure. So in that 2005 paper, you talk about the work that had been done to date, and how it kind of confounded syntactic structure with argument structure, and you wanted to get around that by using German, where you could dissociate them. Now, I know it's more than 15 years ago, but do you think you can explain the central logic of the paper?
Ina Bornkessel-Schlesewsky 19:01 Sure. So I guess one of the things we were interested in looking at in that paper was the idea that the order of arguments in German is not just governed by syntactic concepts, and I guess this builds quite nicely on what we were just talking about. You know, there had been a lot of discussion in the literature about how object-before-subject structures were possibly more complex and required the engagement of particular brain networks to be processed. And just based on theoretical knowledge about how word order in German works, it's been well known for some time that it's really not just subject and object, but a whole range of other factors that are maybe not syntactic in nature, like argument structure, but also things like animacy, that govern order. The idea was to say, well, can we have a look at whether reordering arguments in a sentence according to these other information sources leads to the same results as changing the order of object and subject? That really was what we were trying to do. And that paper was the first in a series where we used thematic roles and argument structure to try to look at that, specifically by using verb types where the object actually has a higher thematic role in German than the subject.
And again, that's something that's really hard to explain using English examples, because there aren't very good English analogues. The best translation we were able to come up with for the types of verbs that we used was something along the lines of, you know, 'he's pleasing to me', that sort of thing. And there, the 'to me', which in German is expressed as a dative, the experiencer, is generally thought to be the higher thematic role, in fact the more agent-like thematic role, because it's that person who experiences the situation, and in a way the responsibility for that situation lies with that person: whether you're pleasing to me depends on me rather than you. So that was the idea, and what we basically found was that, indeed, this basically reversed the ordering preference. We didn't entirely reverse the pattern, but it seems to play as important a role as the desire to place the subject before the object. And that was then, I guess, an indication that for observations of word-order-related activation changes in regions like the left inferior frontal gyrus or left posterior superior temporal cortex, it didn't just seem to be a matter of syntactic complexity, in the sense that object-before-subject leads to a more complex structure and that leads to more activation. Rather, what we suggested there is that it may actually reflect a more subtle interplay of these different factors that govern how things are ordered, and then, by extension, that it probably isn't about syntactic complexity per se; it's more about these different factors that actually influence the sequencing.
Stephen Wilson 22:32 Right. Yeah, I mean, you really kind of were overturning that syntax-centric view that had prevailed at that time, right? Because you showed that what drove higher signal in these left hemisphere language regions was not really just syntactic complexity, but was more like violations of expectation related to a whole host of factors that influence argument structure, such as animacy, word order, case, and agreement, and that really put it on a different footing. And I think it actually ends up tying into some of your later work, at least in my reading of it, in terms of predictive coding.
Ina Bornkessel-Schlesewsky 23:21 Yeah, absolutely. Absolutely, and I guess also into the crosslinguistic work that followed, not so much in the fMRI domain, but, you know, following that 2005 paper, that was when I was fortunate enough to have my own research group at the MPI (Max Planck Institute), which was entitled Neurotypology, where the idea was to try to compare more systematically how human brains process languages of very different types. And yes, I still think it's quite fascinating that there are just some questions that are really difficult to ask in a language like English, even just that one from the 2005 paper. In English, there's really not very much you can do to change word order; you have to use, say, object relative clauses.
And so you're already having to employ a very specific construction to even try to get at these kinds of questions, whereas in other languages it's much more readily accessible.
Stephen Wilson 24:33 Yeah. Do you think, as a result of that work, that the processing of different languages has more in common, or do you think that there are important differences between languages in the way that they're processed in the brain?
Ina Bornkessel-Schlesewsky 24:47 I'd say actually both, interestingly enough, would be my answer, and I'll try and explain what I mean by that. On the one hand, I do think that the basic mechanisms are the same, or tend to be very similar, at least, you know, my focus being the sentence level and above, so let's concentrate on that. And because you mentioned predictive coding before, I think in the end it probably does come down to some sort of mechanism like that. So essentially trying to use your current model of where you're at in building a representation for a particular sentence, trying to understand what that sentence means as a comprehender, and then trying to use that to set up expectations for the next piece of input that you're going to get. But at the same time, I think where we do see profound differences is with regard to the information sources that that draws on, and, you know, to a certain extent this is already what some proponents of the competition model were saying decades ago: they said it's about these different cue weightings in different languages.
Stephen Wilson 26:09 That's Brian MacWhinney, Liz Bates.
Ina Bornkessel-Schlesewsky 26:13 Yeah, yeah, and colleagues. So I still think that's a really valuable concept, and what we've proposed more recently, within this predictive coding view, is that depending on the type of language, and also, to get more specific, the type of construction within a language, the type of cue that seems to be weighted more strongly as part of this predictive coding mechanism will differ from language to language. What we found in particular is that in a language like English, where everything is very much order-driven, the bottom-up cues, so the more specific cues of an incoming word, like animacy, almost don't matter as much, because they can't override what that order-driven top-down representation is already giving you. Whereas in a language where you have to pay much more attention to the bottom-up cue, the cue that comes with the word itself as it's been perceived, there seems to be a stronger weighting towards these bottom-up cues. So that's why I would say that, on one hand, the basic mechanism is similar, but the way that mechanism operates is conditioned quite strongly by the features of the language that we learn to attend to in order to be able to build up a meaning representation as we're processing that language online.
Stephen Wilson 28:03 Right. Cool.
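To make the cue-weighting idea concrete, here is a toy editorial sketch of competition-model-style cue weighting. The weights and the example are invented purely for illustration; the actual competition model estimates cue validities from behavioral data, and nothing here comes from the papers discussed.

```python
# Toy illustration of cue weighting: which argument is the agent? Each cue
# "votes" for a candidate, and votes are combined with language-specific
# weights. All numbers below are invented for illustration only.

CUE_WEIGHTS = {
    "English": {"word_order": 3.0, "animacy": 0.5},  # heavily order-driven
    "German":  {"word_order": 1.0, "animacy": 1.5},  # bottom-up cues weigh more
}

def pick_agent(cues, language):
    """Each cue names the argument it favors; return the weighted winner."""
    scores = {}
    for cue, favored in cues.items():
        scores[favored] = scores.get(favored, 0.0) + CUE_WEIGHTS[language][cue]
    return max(scores, key=scores.get)

# A case-ambiguous clause, schematically "the kangaroo ... the man ... (verb)":
# word order favors the first noun phrase as agent, animacy favors the human.
cues = {"word_order": "the kangaroo", "animacy": "the man"}
print(pick_agent(cues, "English"))  # -> 'the kangaroo' (order dominates)
print(pick_agent(cues, "German"))   # -> 'the man' (a bottom-up cue can win out)
```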
Stephen Wilson So, since we're talking about predictive coding, maybe we can talk about this recent paper that you published in Frontiers in Psychology that I think is really interesting. It's sort of stepping away from argument structure now, but kind of building on that idea. You have this really interesting attempt to explain the N400 and to connect it with other negative components. So can we start by, I mean, I guess we've been throwing the term around, but can you explain what you mean by predictive coding, first of all?
Ina Bornkessel-Schlesewsky 28:43 Sure. So this of course draws on a broader theory that has been quite prominent in cognitive neuroscience outside of language for some time now, and I guess the basic idea of predictive coding, to start with, is that the human brain is constantly generating predictions for upcoming inputs in order to test whether its internal models of the external world are correct. Those predictions are matched against the incoming input, and if the incoming input does not fully match the prediction, then that error signal, that difference, I guess, between the prediction and the incoming sensory input, is what is processed further, if you will. So the idea is, first of all, that when we actually perceive something, we don't really need to generate brain activity associated with the entire perception of that thing; we really only need to encode the discrepancy between what we're expecting and what our sensory input actually is. That, I guess, is the first aspect of predictive coding: it's an efficient coding scheme, because we don't need to represent everything, we only need to represent what we weren't expecting. And then this ties into, in most theories, the idea that we process these predictive signals in a hierarchically organized set of cortical structures, the cortical hierarchy, where we have very specific sensory predictions at the bottom of the hierarchy, but the predictions can get more abstract and also operate on longer timescales as you move up the hierarchy. So there's not just one level of prediction; basically, you're getting a whole range of predictions being generated and tested at the same time, at the different levels of this hierarchy.
Stephen Wilson 31:13 Right. And I guess another thing that seemed really important from your paper, as far as I understood it, was that that encoded difference signal is also exactly what you need to update your internal model, right? Not only is it an efficient representation of the sensory input, but it's also the exact information that you need to update your model. Is that kind of what you were saying?
Ina Bornkessel-Schlesewsky 31:36 Right. So that's the idea: that error signal is basically, as you say, a trigger for model updating.
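The loop just described can be written down in a few lines. The following is a generic, single-level editorial sketch of predictive coding, with scalar inputs and a made-up learning rate; it illustrates the general idea and is not code from the paper under discussion.

```python
# Minimal single-level predictive coding loop: the "internal model" is just a
# running estimate of the input, and only the prediction error is passed on.

def predictive_coding(inputs, learning_rate=0.5, initial_prediction=0.0):
    prediction = initial_prediction
    for x in inputs:
        error = x - prediction                # encode only what was NOT expected
        prediction += learning_rate * error   # the error triggers model updating
        yield error

# A mostly predictable stream: errors shrink as the model adapts,
# then spike at the one surprising input (5.0).
for error in predictive_coding([1.0, 1.0, 1.0, 1.0, 5.0, 1.0]):
    print(f"{error:+.3f}")
```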
Ina Bornkessel-Schlesewsky And I think that's also an important thing to note, because when you talk about error signals, that always sounds like something negative, but in this case it's actually a really useful thing, because it's the error signal that allows you to update your internal models and make sure that they more accurately reflect the external world. So rather than being something that's problematic or something to be avoided, it's a key part of the overall architecture.
Stephen Wilson 32:18 Yeah. So in your paper, you talk about the mismatch negativity, or the MMN, and how that has been thought about a lot in other literature, outside of the neuroscience of language, with respect to this predictive coding idea, right?
Ina Bornkessel-Schlesewsky 32:35 Yep.
Stephen Wilson 32:36 Can you talk about what those experiments look like? What's the basic claim there? What does that interpretation of the MMN look like?
Ina Bornkessel-Schlesewsky 32:46 Yeah, so the MMN is a really well-established ERP component, which is most typically elicited by presenting people with an auditory oddball paradigm. So basically, they're presented with a sequence of repeating standard tones, and then every once in a while there's an oddball tone that doesn't match the standard on some dimension, often frequency, but it can be other things as well. And the mismatch negativity is basically the ERP component that is elicited by the deviant or oddball tones, in comparison to the standards. Now, the MMN has been looked at quite extensively in the predictive coding literature in regard to how it might actually reflect internal model updating in quite a simple setting, and there are some really interesting observations that people have made. One being that, rather than there always being this error signal, if you will, that's generated for the deviant and is more pronounced than what's happening for the standard, it's actually the signal for the standard that decreases as the number of standards presented in a row increases. So it's as though, when you've just launched your oddball experiment and you hear that first stimulus, the response will be equally high for whatever tone you present; it's just that as the standard is established across a sequence of tones, your predictive model for the next stimulus gets better and better, so the error signal is reduced more and more. And so the MMN for the deviant, for the oddball, really just means that there's no prediction that was made; it's like hearing the standard for the first time. So that was one of the things that I found interesting about this. The other thing that you can see, with more elaborate MMN paradigms, is the idea that the MMN not only reflects an error signal per se, in what you're predicting about upcoming information, but also reflects what is referred to as the precision weighting of that error signal. So the idea being that an error signal is not always taken into account in the same way; rather, it can be weighted, depending on the circumstances.
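Running the toy loop from above over an oddball-style sequence reproduces the qualitative pattern described here: the very first tone elicits a large error because no prediction exists yet, the error to repeated standards shrinks as the prediction sharpens, and the deviant elicits a fresh spike. The frequencies and learning rate are invented, and this is not an MMN model from the literature; a precision-weighted version would additionally scale each error by the estimated reliability of the input before using it.

```python
# Toy oddball sequence: a standard tone at 1000 Hz with one deviant at 1200 Hz,
# fed through the predictive_coding() sketch above.

tones = [1000] * 6 + [1200] + [1000] * 4
for tone, error in zip(tones, predictive_coding(tones)):
    kind = "deviant " if tone == 1200 else "standard"
    print(f"{kind} {tone} Hz -> |error| = {abs(error):8.2f}")

# First standard: huge error (no prediction yet). Later standards: the error
# decays toward zero. Deviant: a fresh error spike, like the MMN pattern.
```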
Ina Bornkessel-Schlesewsky Perhaps the easiest way to think about this in a more general sense is: if we're listening to something in noise, whether it's a piece of music or speech or any auditory input in a noisy environment, and we hear something that we didn't expect, then we might be less likely to trust our perception of that surprising incoming input, because we think, I just didn't hear that properly. Intuitively speaking, we might tend to trust our internal model of what we should have heard more strongly than our representation of the incoming stimulus itself. By contrast, if we have a very clear auditory signal and we hear something unexpected, then the likelihood that we're going to trust our expectations more than what we thought we actually heard is considerably lower. So what I'm trying to say with this is that the way in which the predictive coding system uses an error signal is not set in stone; it depends on a whole range of different circumstances. And the MMN literature has looked at this quite nicely, both by varying expectations, by changing the nature of the sequences at certain intervals, and also by looking at how this precision weighting might change as people age. That idea, that we have a way of measuring these prediction errors in the brain, but at the same time that it's not always just the same error signal, it's actually a weighted error signal depending on the circumstances, then ties in very nicely, in our view, to what we can actually observe with the N400.
Stephen Wilson 37:34 Yeah, alright, so let's relate the N400 to the MMN then. The MMN is a negativity, as its name suggests, and it happens at about 100 milliseconds; the N400, obviously, based on its name, peaks around 400 milliseconds. But in your view, they're really just two instantiations of the same underlying process of coding prediction error, right? Or precision-weighted prediction error.
Ina Bornkessel-Schlesewsky 38:06 Right. Yeah. So the idea is that essentially we're looking at, if you like, a family of components. I mean, this idea has been around in the P300 literature for a long time, right? The idea that we're looking at fundamentally similar processes, but that they might manifest themselves in electrophysiological terms with different latencies and also slightly different topographies in terms of scalp-recorded EEG, depending on a whole range of things: it could be the task, it could be the complexity of the stimulus, etc. So that, in a way, is the idea here as well.
Stephen Wilson 38:56 Yeah, so now, what's really cool is: why, then, is it 400 milliseconds instead of 100? What's going on in that temporal domain that makes these different components?
Ina Bornkessel-Schlesewsky 39:09 I think it really is the complexity of what we're processing here, right?
Because if we're trying to extract meaning from a linguistic stimulus, then, I guess, in a way there's just more that needs to be done, as opposed to when we're just processing a tone and the key difference to the tone that we processed before is just that this one has a higher frequency. So that, I guess, would be the key. And of course, we also know that the N400, particularly in the auditory domain, can shift its latency depending on when information becomes available to either confirm or, in most cases, actually disconfirm a prediction. I'm just blanking on who actually published this study, it may have been Van Petten and colleagues, and hopefully I'm not getting this wrong, but they compared the N400 responses to words where the very initial sounds of the word confirm or disconfirm your prediction about what you're going to encounter, as opposed to the last few sounds of the word, and showed that when the initial few sounds already basically tell you that your prediction was incorrect, you see a profound shift of the N400, even to before the word recognition point. So really, you're not waiting until you can recognize the word to be able to see that signal; as soon as the information becomes available, that's when it shows up. So I think, in the end, there has been a lot of discussion about the N400 being remarkably stable in terms of its latency, but at least in the auditory domain, I think there is quite a bit of evidence that it does move around depending on just when the information becomes available. That's also what I would say is happening here with regard to the comparison between the MMN and the N400.
Stephen Wilson 41:30 Right. And I guess I'll just say that in the paper, and I don't think we should talk about it now because it's too complicated, even more complicated than ergativity, you also talk about the left anterior negativity and the ELAN, and kind of build up this model where they're all related components. So, you've got this kind of prediction error view of the N400. How do you distinguish that from this other view that's out there, whereby it reflects the extent of semantic integration that has to take place into the existing context?
Ina Bornkessel-Schlesewsky 42:10 Yeah, I mean, there's been a lot of discussion in the language domain with regard to this prediction versus integration distinction, and likewise, with regard to the N400, there's also been a big discussion about whether the N400 actually reflects something like pre-activation, as opposed to, say, an error signal per se in response to something that wasn't expected. I actually think that from a predictive coding perspective, there's not really a need to have a hard and fast distinction between those concepts, in either of those cases, because I think, in actual fact, it's both.
On the one hand, and it really comes back to that interplay of top-down versus bottom-up, there's always, I would say, the predictive model, which is going to mean that there's a stronger expectation for something that matches up with the input that you've processed so far. Hence you get lower error signals, you get something that you might call pre-activation, if that's what you want to call it, and you also get something that is going to be easier to integrate. At the same time, there's still a need for a residual error signal for things that you weren't able to predict. So I think, in my view at least, the predictive coding account actually says we need to have both, and really we're just looking at two sides of the same coin. So there's not really a need for us to keep debating over the next 20 or 30 years whether it's actually prediction or integration, or an error signal versus pre-activation. I think in both cases they're much more integrated than...
Stephen Wilson 44:08 The model kind of captures it.
Ina Bornkessel-Schlesewsky 44:10 Yeah, maybe, than we thought in the past.
Stephen Wilson 44:12 The similarity between those two interpretations.
Ina Bornkessel-Schlesewsky 44:15 Yeah.
Stephen Wilson 44:16 Yeah. It's a very interesting paper. I really enjoyed it. I don't do EEG myself, but I found it easy to read, and it made a lot of things make sense.
Ina Bornkessel-Schlesewsky 44:27 Well, thank you! Not sure everyone feels that way, but... (Laughter)
Stephen Wilson 44:35 Okay, so can we talk last about your work on dorsal and ventral streams? You have a couple of papers from the middle of the last decade, in Brain and Language and TICS. As we know, there are many people in the field that have dual stream models of language, but yours is unique in several ways. What do you see as the fundamental differences between the dorsal and ventral streams in your model?
Ina Bornkessel-Schlesewsky 45:03 Yeah, so I guess, in a way, this mirrors some of the other things that we've talked about in regard to my work, which has for a long time, I guess, rested on the contention that we probably don't want to be separating out fundamental linguistic sub-domains in the brain. Rather than thinking about syntax versus semantics in the brain, maybe there are other mechanisms at work which provide the fundamental representational and mechanistic differences with regard to how linguistic information is processed, and the dorsal and ventral streams view that we put forward is, to a certain extent, another take on that. Essentially, what we proposed was that the fundamental divide between the dorsal and ventral streams is that the dorsal stream processes information where sequence is very important, so basically the order of things, the timing of things, whereas for the ventral stream, the order is less important, and the ever more complex auditory objects, as we call them, in keeping with Rauschecker and Scott's view of this, are essentially put together by combining simple elements in a non-sequence-based way.
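An editorial analogy, not from the papers, for the sequence-based versus non-sequence-based distinction: combine the same elements once in a way where order matters, and once in a way where it does not.

```python
# The same elements, combined sequence-sensitively vs. order-free.
from collections import Counter

elements = ["the", "kangaroo", "speared", "the", "man"]
reordered = ["the", "man", "speared", "the", "kangaroo"]

# Sequence-based combination (dorsal-stream-style in this analogy):
# order matters, so the two sequences are different representations.
assert tuple(elements) != tuple(reordered)

# Non-sequence-based combination of an "object" from its parts
# (ventral-stream-style in this analogy): the same parts always
# yield the same representation, regardless of order.
assert Counter(elements) == Counter(reordered)
print("order-sensitive: different; order-free: identical")
```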
Ina Bornkessel-Schlesewsky So basically, the idea is that both streams serve to establish more and more complex representations as you travel along the streams, in keeping with this view of hierarchical processing, but the auditory objects that are processed more strongly by the ventral stream are constructed in a non-sequence-based way, whereas the dorsal stream is concerned more strongly with this sequence-based processing, as we call it.
Stephen Wilson 47:19 Right. So it's this fundamental distinction between time-dependent and time-independent processing. And I guess a whole lot of what might seem to be linguistic domain distinctions could map onto that, right? Because that kind of immediately starts to predict that syntax will, in some respects, be a dorsal stream function, because it's so order-dependent. But the actual conceptual structure that's built up as a result of the comprehension process is going to be represented in the ventral stream, because you're building up a complex representation out of the parts that's no longer inherently auditory, right? Is that kind of the idea?
Ina Bornkessel-Schlesewsky 48:04 Right, but then, coming back to what we were discussing at the beginning, I would say that the order-dependent aspects of processing are not exclusively syntactic. Of course syntax plays a role there, but as we were discussing before, order can also be determined by a range of other factors that are not necessarily syntactic, so there's a...
Stephen Wilson 48:24 Oh yeah, yeah, I guess I got that in your papers. But I have a different view of syntax than you; it's probably only a terminological difference. Like, to me, all of that stuff is kind of syntax. I mean, to me, morphosyntax is syntax; I would see no reason not to think of things like case marking, animacy, and definiteness as part of syntax. To me, those are all very much part of syntax.
Ina Bornkessel-Schlesewsky 48:47 They are all part of syntax. That's fair enough.
Stephen Wilson 48:50 I mean, you could use the term differently. Maybe I'm using it wrongly. But you know, I guess we can just define the term however you want. But yeah, I agree with you. So in your work, you talk about how, with this sort of view, where it's not based so much on language domains but on something more fundamental, you can then argue that the primate auditory system can be a good model for the human language system. Can you tell me about that perspective?
Ina Bornkessel-Schlesewsky 49:18 Yeah, so the idea there was basically to say that if these two fundamental mechanisms that we're assuming, in this view of what the dorsal and ventral streams do in language, are essentially this sequence-based processing of information and the non-sequence-based construction of auditory objects, then if you look at what the primate auditory system does when it processes auditory information, those fundamental mechanisms are actually already present. Right? So again, like the work with regard to the ventral stream that Josef Rauschecker and colleagues were doing in the 1990s, where they looked at these auditory objects and this idea of hierarchical processing of auditory objects in the ventral stream.
And then at the same time, the idea that the dorsal stream of the nonhuman primate auditory system seems to be very good at, say, processing the location of sounds in space, where essentially what you need to be able to process that is likely also some sort of internal predictive model, which in turn lends itself very well to processing information in time. So that, I guess, was how this idea came about. Of course, there are vast differences in terms of the scale and complexity of the information that's able to be processed. But really, we are claiming that these are the two fundamental mechanisms, and there's at least evidence to suggest that the non-human primate auditory system is already able to process that type of information.
Stephen Wilson 51:18 Right. And the difference between humans and non-human primates, you kind of suggest that maybe it's the way that prefrontal cortex integrates the streams.
Ina Bornkessel-Schlesewsky 51:30 Yeah, we had a few speculations. (Laughter)
Stephen Wilson 51:33 You want to characterize that as a speculation?
Ina Bornkessel-Schlesewsky 51:35 I'd have to say it's a speculation. I'm not going to claim here that we have the answer to the evolution of language by comparing non-human primate auditory systems and the human auditory system. But yes, that indeed was one of the ideas: that the ability for the information from the two streams to be integrated, which we would assume is at least in part accomplished through top-down control from prefrontal cortex, might be one key mechanism. Of course, there are also possibilities such as higher complexity within each stream; I think that was another possibility that we talked about. And of course, there are all sorts of additional complications that this type of dual stream model just tends to ignore, like how information might be crossed over between the streams, not just via frontal cortex but by, say, subcortical structures, and so on and so forth, which we didn't even touch upon. So obviously there's a lot of additional complexity there that would need to be looked at in potential future models.
Stephen Wilson 53:03 Yeah, the brain is very complicated, and I think any of our models of it really just scratch the surface. But you are a maker of models, you know; you've got models in many different domains, and I think that's super interesting. It takes a lot of boldness to make a model, I think, because you have to commit. It's easy to just have general ideas about how things work and wave your hands, but you're someone who takes the time to write papers and actually put models down on paper. So, I like it.
Ina Bornkessel-Schlesewsky 53:38 Thank you. (Laughter)
Stephen Wilson 53:40 So what are you working on these days? Are there any projects that you're particularly excited about right now?
Ina Bornkessel-Schlesewsky 53:46 I'm actually doing quite a lot of work on individual differences at the moment. Not a lot of that has been published yet, but there's quite a bit in the pipeline, so hopefully over the next few years there'll be more on that coming out.
But basically, this is continuing on from this idea of predictive coding, and the idea that there might be differential weightings in how the different components of the system interact. So, you know, how strongly you might weight your internal model in comparison to the incoming input, and how that might change depending on the circumstances. We're actually doing some work at the moment to see how that might differ between different individuals and what factors that might be correlated with. So it's looking at individual differences with regard to how people process language, but also extending to other domains of information processing.
Stephen Wilson 54:51 Oh, cool. I'll look forward to seeing what comes out of that line of work.
Ina Bornkessel-Schlesewsky 54:55 Yeah, just additional complexity, for starters.
Stephen Wilson 55:00 Okay.
Ina Bornkessel-Schlesewsky 55:01 And I think some interesting findings. So, stay tuned.
Stephen Wilson 55:08 Alright! Will do. Well, I guess I should let you get to your day, because I don't even know if you've had breakfast yet. But thanks so much for taking the time to talk with me.
Ina Bornkessel-Schlesewsky 55:20 Yeah, thanks very much for having me.
Stephen Wilson 55:21 Yeah. It's great to talk. Great to catch up, even if it's in a virtual form.
Ina Bornkessel-Schlesewsky 55:26 Yeah, absolutely.
Stephen Wilson 55:27 Well, hopefully I'll see you in real life before too long.
Ina Bornkessel-Schlesewsky 55:31 Yeah, that'll be lovely. We can get back to a bit of real life. (Laughter)
Stephen Wilson 55:39 Yeah, okay. Bye for now.
Ina Bornkessel-Schlesewsky 55:40 All right, bye bye.
Stephen Wilson 55:42 Okay, that's it for Episode 14. If you'd like to learn more about Ina's work, I've linked her website and the papers we discussed on the podcast website at langneurosci.org/podcast. Thank you for listening, and see you next time.