Stephen Wilson 0:05 Welcome to Episode 26 of the Language Neuroscience Podcast. I'm Stephen Wilson and I'm a neuroscientist at the University of Queensland in Brisbane, Australia. My guest today is Laura Gwilliams. Laura is currently a postdoctoral researcher at the University of California, San Francisco. But in a few months, she'll be starting a new position at Stanford University, as an Assistant Professor of Psychology, Neuroscience and Data Science. Laura is an outstanding up-and-coming researcher, who uses magnetoencephalography and electrocorticography to study how the brain derives representations from auditory input, and to investigate the computational processes involved. Today we're going to focus on her recent paper, Neural dynamics of phoneme sequences reveal position-invariant code for content and order, with co-authors Jean-Remi King, Alec Marantz and David Poeppel, that just came out in Nature Communications. Okay, let's get to it. Hi, Laura. How are you? Laura Gwilliams 0:58 Hi, I'm good. How you doing? Stephen Wilson 1:00 I'm good, too. So, it's very early in the morning in Brisbane and I think, you know, the sun's just coming up. How about you? Where are you at and what time is it for you? Laura Gwilliams 1:11 Yeah, it's just past lunchtime. So, the sun is very high in the sky and I'm well fed, ready to go. (Laughter) Stephen Wilson 1:19 Alright. I've just got my coffee that I'm working on here. So, I don't think I've met you, but I certainly have seen your talks at some conferences and I know that you're friends with some of my students, right? Laura Gwilliams 1:31 Yeah. Yeah. Some of my favorite humans are your students actually. Yeah. Deb Levy and Anna Kasdan. So, I feel like I know you very well, because I know them so well. Stephen Wilson 1:44 Yeah. I mean, they've probably told you some things about me that may or may not be true. (Laughter) Yeah, so that's like a really neat connection. And, where are you working at the moment? Laura Gwilliams 1:58 So, I'm at UCSF, right now. I'm coming up on my three-year anniversary of being a postdoc here with Eddie Chang. But, some exciting news, come September 1st, I'm actually starting my own lab at Stanford. So yeah, really… Stephen Wilson 2:18 That's so great! Laura Gwilliams 2:18 Really excited for that. Yeah. Stephen Wilson 2:20 Yeah. You mentioned that that was in the works, and I'm really glad that it's all panned out and that you'll be starting there. I mean, what a great place to land for your first faculty job. Laura Gwilliams 2:33 Yeah, yeah. And it's gonna be really ideal, I think, for the type of multidisciplinary work I try to do, because I'm going to be officially and jointly appointed between Psychology, Neuroscience and Data Science, with a link to the linguistics department as well. So I think it's going to be a pretty nice combo. Stephen Wilson 2:56 Oh, it'll be perfect. And they have such a great linguistics department there, you know, because I got my first academic training in Australia, working on Australian Aboriginal languages. And a lot of the theory behind all that work came out of Stanford, people like Joan Bresnan, Lexical Functional Grammar, because, like, kind of mainstream generative grammar didn't really do very well with the free word order languages of Australia.
And, you know, Bresnan and, you know, the work that she'd done there at Stanford, was definitely like, sort of the guiding theory behind like the Australian linguistics when I was an undergrad. So, I’ve always really appreciated that, that department. Laura Gwilliams 3:42 Yeah, yeah. I think it's going to be a great mix of minds of people with different expertise. So yeah, I think it's going to be great. Stephen Wilson 3:50 And do you even have to, are you going to keep living, are you living in San Francisco right now, in the city? Laura Gwilliams 3:54 Yeah, that's right. And I'm going to continue living in San Francisco. Stephen Wilson 3:58 And just commute down? Laura Gwilliams 4:00 Yea. See, I'll see how the commute is. Luckily, I'm an early bird. So I don't mind kind of traveling before the majority of people are traveling down. So I'm hoping that that will work in my favor and won't make it too bad. Stephen Wilson 4:15 It's not that far. Laura Gwilliams 4:16 Yeah. I think on a good day, it's about 40 minutes door to door, which isn't terrible. Stephen Wilson 4:23 Well, you just have to find some things to do in the car. Like listen to my podcast. (Laughter) Laura Gwilliams 4:26 I can listen to your podcast. Yeah, exactly. Stephen Wilson 4:29 Or the train. Are you going to get the train or are you going to drive? Have you figured that out yet? Laura Gwilliams 4:33 Yeah, I'll figure that out. I might do a, do a switch up, depending on how I'm feeling. Stephen Wilson 4:38 Yeah. So can you tell me about how you came to be in this world? You sound like you're from Britain. Where were you born and where did you grow up? Laura Gwilliams 4:49 Yeah, so I grew up in a pretty small town called Shrewsbury, which, is right on the border between England and Wales in the Midlands and would probably take about three hours to drive up from London. So it's a pretty small town like 60,000 people in the countryside. Most people haven't heard of it. But it does have one claim to fame, which is that Charles Darwin was born there. They don't mention the part that he left really probably, when, when he could. But yeah, that's my one claim to fame from where I grew up. And I then studied my undergrad at Cardiff, in linguistics. Stephen Wilson 5:41 What did you, well, before that, like, were you, so you kind of started, came into the field from the language side of things. So, were you interested in languages as a kid? Like, how did you get to ending up in a linguistics program? Laura Gwilliams 5:54 Yeah, I guess I guess I can say this, now that I have a faculty position. Honestly, I wasn't really interested in language or anything school related at all. I was much more interested in underage drinking with my friends in fields. (Laughter) But one thing that I did know about myself is that I really liked to write. And, actually, I nearly didn't go to undergraduate University at all, it was a very last minute decision that, I actually didn't I didn't have good grades or anything. I was like, Okay, well, if I'm going to do this, I should probably have reasonable grades to go to like, have a bit more choice. So I just kind of scrambled in my last year of high school to redo all of my tests to get like, a, like reasonable grades, and then decided to go. And I chose linguistics, because I was like, Okay, I don't know what it is I want to do really, but I know that I like writing and so it sounds like linguistics is probably a topic that would allow me to do something write writing related in the future. 
So that's, that's why I picked it. So yeah, I definitely don't feel like language studies were really kind of written in the stars, from, from my childhood or anything, it was definitely just making making a choice based on the few things that I knew about myself at that time. Stephen Wilson 7:33 That's really funny. And I like to the specific detail that you give that the underage drinking happened in fields. (Laughter) It definitely reminds me of when I went to England, with my parents and my my brother, when I was eighteen, for a family holiday. They let, my parents like, generously let me go out on the town and by myself in Oxford. And I don't know why they, I mean, that would be, now that I have kids, I'm like, that would be terrifying. But maybe you get more like you loosen up as they get older, I guess. Anyway, so I wandered around Oxford and like, somehow I met some hippies and they took me out to a field where we drank like, and there, there was a tent there. I don't really understand the setup, or if they were living there or what but like, definitely, like, I was taken, that was like my one experience of like, genuine like, English drinking with English people was, it happened in a field. (Laughter) Laura Gwilliams 8:28 Yes, interesting thing and there are certain fields, kind of earmarked for the drinking location. So, all of the youths know that this is where you need to meet to know 11am on a Saturday, though. Because this was, making myself sound old, but there was, this was pre phone era, like not everyone had phones, but everyone just knew you just need to go to this certain field at a certain time and you see people there. (Laughter) Stephen Wilson 8:59 That's wonderful. So, when you studied linguistics in college, did you get into it at that point? Like did you start to get into the subject matter? Or were you, how did that go for you? Laura Gwilliams 9:11 Yeah, this was kind of a transformative few years for me. I never really, as I said, put much kind of passion into, into schoolwork, and then everything just changed and I found myself just fascinated by what it was that I was doing and fell in love with the process of learning for like, the first time I think. And yeah, I also I was really fortunate that I had an undergraduate advisor, Lise Fontaine. We often think of her as the the angel in my life. And she really believed in me and really made me see for myself that I could do these things and that, I had some kind of academic talent. And yeah, I guess this kind of spurred me on to pursue the topics further. I think the thing that I really enjoyed about linguistics is that, it felt very systematic and I liked that I could sit down and create, say, a novel sentence and then use a set of tools in order to understand how it is that that sentence came to be and why it's structured in a certain way, and not in a another certain way. And I'd never thought about language in this systematic sense before, it was always just something that came out of my mouth without really considering it. So… Stephen Wilson 10:50 So, syntax is what kind of appealed to intellectually? Laura Gwilliams 10:55 Yeah, I also, I studied a particular, but I didn't study Chomskyan Linguistics. I studied functional grammar, which is a slightly different flavor. Actually, Halliday, I think he is Australian. 
Stephen Wilson 11:11 Yeah, and functional grammar, It's yeah, like you're talking to one of the few people in our, that’s had some exposure to that probably at our, in our field, because Sydney University where I did my undergrad, had a big functional grammar kind of wing. And there was this kind of like, unspoken tension between the, you know, kind of generative people who were, like I said, like, kind of, in the Stanford School of generative linguistics, and they're all focused on, you know, Australian Aboriginal languages, mostly and then there was the functional people. And yeah, so I took courses in that and I really enjoyed it actually. It’s like, it's just a different you know, it's just a different kind of grammar, right? It's, it's more semantic, it's definitely more semantic from the get go and it's more tied to pragmatics and information structure. Laura Gwilliams 12:00 Yeah, yeah, exactly. Stephen Wilson 12:01 So yeah. I appreciate it, like having that as part of my training. So that was big way you were? Laura Gwilliams 12:07 Yeah and we didn't have a generative wing. And it was all functional grammar. So, I wasn't exposed to, to generative linguistics until much, much later. Yeah, yeah. Which I think, is not so common. Stephen Wilson 12:26 Yeah. One of my favorite, like assignments that I remember from uni, was I did a functional grammar analysis of the Leonard Cohen song, Story of Isaac. I don't know if you… Do you know that by any chance? Laura Gwilliams 12:38 No, I don’t. Stephen Wilson 12:38 It's like, it's okay. It's kind of obscure. It's like, it's like the telling of the, the Abraham and Isaac story, but the point of view of Isaac. Instead of doing the killing, you're like getting killed. And then I compared it to like, the biblical version, and with a functional grammar analysis. And I put, like, so much work. Laura Gwilliams 12:59 Oh, cool. Stephen Wilson 12:59 And I put so much work into that, like, you know, I studied, like, the structure of every single sentence in like, the biblical telling of the story, and then the song, and just kind of showed how, like Leonard Cohen had, like, you know, just changed the narrative, and like, how he, you know, all the devices he'd used to like, flip the perspective. It was cool. Laura Gwilliams 13:15 It’s really cool. Yeah. Nice. Stephen Wilson 13:18 Okay, so, I know that you went to grad school at NYU. Did that kind of just follow? How did, how did that come to be? Laura Gwilliams 13:26 Yeah. So now, so then, after my undergrad, this undergrad advisor of mine, Lise, she asked me if I wanted to do a PhD with her in like theoretical linguistics. And I thought that would be a good idea. Honestly, I didn't have kind of a better plan. I really enjoyed what it was that I was doing with her. So, I thought that was a good idea. But, I didn't, I was too late in the process to actually apply and the UK system requires you to have a master’s before you can apply to a PhD program directly and I was also a bit too late in that process. So, I was working in a falafel shop at the time and my plan was to continue working in this falafel shop for a year to save up enough money, which, we can just pause there for a second. I'm not sure in what world I was living in that I was going to save all of this money working in my falafel shop, to save up enough money to be able to afford to do a master’s, such that I could then eventually do the PhD with Lise. But this, this was all my plan. 
And I signed a lease in Cardiff and I was all geared up to stay there for another year. And then, kind of out of the blue, Lise messaged me and was like, I saw that there's this master's program, and it's just one year, and they cover the tuition fees. So maybe you could do this and then you would be good in just a year's time to do the PhD. I was like, this sounds great. And she's like, there's just a couple of things. One, it's in the Basque Country, in northern Spain. (Laughter) And two, it's slightly different from what it is that you've been working on so far, in as much as it's about the cognitive neuroscience of language. At this point, I was like, okay, Lise, I love you, but I think that you've lost your mind. I don't know anything about the brain other than, like, roughly where it's located in my body. (Laughter) But, I was like, okay, whatever Lise says goes. So, I was like, okay, I'll apply to this thing. There is no way that anything is going to come out of this. There's just, there's just no way. But, I put my application together, and it was also partly in Spanish, which I was like, okay, what? The master's itself was going to be in English, but the application was partly in Spanish. Stephen Wilson 16:16 At least it wasn't in Basque. You should be grateful. Laura Gwilliams 16:18 Yeah, actually, I did have the option, and I didn't, I didn't even approach that side. Yeah, luckily, one of the people living in my house knew some Spanish, so she helped me with this. Like, yeah, I put my application together, and everyone was saying, what are you doing? I'm like, I don't know, but this will make Lise happy, so I'm going to do it. And I put my application in and honestly just forgot about it. I was like, okay, well, that's a week of my life I'm never gonna get back, but it's fine. I just continued working in my falafel shop. And then, sorry, I'm giving you the long version of this. But, we can maybe fast forward it. Stephen Wilson 16:59 I want the long version, yeah, it's good. Laura Gwilliams 17:02 So then, yeah, fast forward. Three months later, I'm going on a long bike ride with a friend of mine. And we're going down one of these beautiful Welsh hills, and she gets a puncture in a tire. But luckily, at the bottom of this hill, there was a pub. So we stopped at this pub to fix her puncture, and once the tire's fixed, she goes inside to, like, wash the oil off her hands, whatever. And what do you do in these idle moments but pull out your phone and check your emails. So I did that. And I see, okay, new email, and it's like, congratulations. And I'm like, wait, what? And I open it up, and it's from the BCBL, the Basque Center on Cognition, Brain and Language. I'm like, oh, I totally forgot that I applied. And then my friend comes out, and my face must have been a picture, because she was like, Laura, are you okay, what's going on? I'm like, I just got accepted into a master's program. She's like, you applied to a master's program? I'm like, yeah. And she's like, that's great. Where is it? Is it in London? Is it here in Cardiff? I'm like, no, no, it's in the Basque Country. What's the Basque Country? I'm like, yeah, exactly. So yeah, and I, I wasn't gonna go. I was like, I'm not gonna just rock up to, like, a country where I don't speak either of the two languages and study a thing that I know nothing about. But then, I Google-imaged San Sebastián, where the research center is, and I mean, to this day, it's the most beautiful place I've ever seen in my life.
So yeah, I decided to pack my bags and endeavor into this adventure with the idea that okay, if I failed miserably, which, at that point, I was like, This is much more likely than not that I'm just going to, this is going to be a catastrophe. But I'll be in the same position that I would be in if I didn't try, so I decided to do it. And in the first it was tough. It was like my, I've never never done statistics before. I didn't know anything about the brain before like it was all brand new. And I feel like I had a constant like brain pain for the first few months of just like growing all of these neurons because clearly that's how it works. But I just fell in love with it, completely. Stephen Wilson 19:52 The subject matter? Laura Gwilliams 19:53 The subject yeah, and yeah, I told Lise I won't be coming back to do a PhD with you in theoretical linguistics, I think I need to pursue the cognitive neuroscience of language instead. And yeah, I just, I just loved it. Stephen Wilson 20:15 And how did how did she feel about that? Was she as supportive as you would have hoped a mentor to be? Laura Gwilliams 20:23 Yeah, no, she's, she's amazing and it's amazing. I mean, she was disappointed, because I think she was excited about having me back to work with her, but, I had also corresponded with her as I was going through, and they were very clear. Yeah, exactly. So I think she was just really happy that I found something that I was really passionate about. Stephen Wilson 20:47 Yeah, I think that's really all you want for your students, right? At the end of the day, you want them to find something where they're going to really thrive? Laura Gwilliams 20:54 Yeah. Stephen Wilson 20:55 Even if it means letting go, which is hard to do sometimes. Okay, so did you do research there, or was it all coursework? Laura Gwilliams 21:02 I did a couple of behavioral studies. It was a lot of coursework, it was mainly coursework, but we had projects that was going on alongside. And yeah, I investigated morphological processing with like a auditory lexical decision task, which was a really nice introduction to doing experiments and I worked with Arthur Samuel and Phil Monaghan there. Both of whom are, I feel like I also owe a lot to them. They were really, I learned a lot from them about how to do science and how to think about problems and how to write papers. So, that was, yeah, that was a really great experience to…um… Stephen Wilson 21:53 And did you go to NYU after that? Laura Gwilliams 21:56 Right, okay, then (chuckle), then I knew at that point that I wanted to do a PhD in like neuro linguistic topics. And I decided that I wanted to do it in the States. Because I felt like you maybe had a bit more autonomy over the studies that you conduct that you have a little bit more time to do them. But I, I didn’t feel quite ready to just go straight into a PhD program. So, I decided to apply to like lab manager, research assistant positions instead. So, yeah, so looked at a different open positions, and NYU had an opening at the Abu Dhabi campus, Stephen Wilson 22:51 You’ve got a bit of a history of going to odd places. (Laughter) Laura Gwilliams 22:55 Yeah, I can't say that it was always really planned. But I mean, I yeah, definitely wouldn't change a thing and so, yeah, so, fast version I applied to this position and Alec Marantz and Liina Pylkkänen were the directors of the MEG language lab in, in Abu Dhabi. 
And, yeah, so then I packed my bags, and went over to Abu Dhabi for a couple of years and worked as a lab manager there. And also had the great opportunity to even do my own experiments as a research assistant. So, that's where I learned how to do MEG and kind of taught myself how to code and… Stephen Wilson 23:48 Yeah, because I know you have publications dating back to about then, so, I didn't realize that that was as an RA rather than a PhD student. Laura Gwilliams 23:56 Yeah. Yeah. Yeah, exactly. Yeah. That was, yeah, that was really great, and Alec and Liina were really supportive of that, of, like, me kind of coming up with my own experiments and testing different things. Yeah, and then from there, I applied to different PhD programs in the US and decided that NYU was the best fit, and then I continued working with Alec and then convinced David Poeppel to join my team as well. So, yeah. Stephen Wilson 24:34 I am sure he didn't take much convincing. Laura Gwilliams 24:41 Yeah, he was very enthusiastic about having me on board. So, yes, that was great. I had, like, two lab families, really, one in linguistics with Alec and his group and then one in psychology with David's group, and I just, literally just split my day in half, like I would spend the morning in one building and the afternoon in the other, and it worked really well. Stephen Wilson 25:08 Isn't that the story of our field? Right? I mean, it's just like, inherently, you have to bridge these, these worlds, right? Like, if you want to do language neuroscience research, it's always on the border of two things. Laura Gwilliams 25:24 Yeah, yeah, I think so. Yeah, I think that's part of what makes it challenging, but also exciting. Like, I think in order to really do language neuroscience properly, you do need to pull in expertise and ideas from all of these different disciplines. So, yeah, and I mean, that kind of gets reflected in the backgrounds of the different people who work in this field, like, someone could have a background in linguistics, or computer science, or neuroscience or biology, and everyone is equally as relevant and welcomed into the community, I think, and everyone has their piece to contribute. Stephen Wilson 26:18 Yeah, it's so true. And it's funny, I often don't know which field people came from when I first meet them, and, you know, on the podcast, I always like asking people where they came from, and sometimes I'm just surprised to learn that they came from, like, CS or, you know, philosophy or whatever, like, people just come from all these different backgrounds. Okay, so shall we get to talking about the paper that we decided to talk about today? Laura Gwilliams 26:45 Yeah. That would be great. Stephen Wilson 26:46 Is this dissertation related? Is this from your dissertation time, this paper? Laura Gwilliams 26:51 Yeah, exactly. I worked on it towards the end of my PhD, and it's one of the papers that made it into my dissertation. Stephen Wilson 27:02 Okay, so it's called Neural dynamics of phoneme sequences reveal position-invariant code for content and order. Just came out in Nature Communications. I looked at the received date and the accepted date. (Laughter) This is a two and a half year review process, which we'll talk a bit about as we go through. That's maybe one of the longest I've seen. I'm sure you have some, some more stories from that.
Laura Gwilliams 27:33 Yeah, yeah, it was quite a journey. Which Yeah, a number of people have commented on, on that duration of time. But it was also I mean, eighth of May 2020. Yeah, no one was in a good frame of mind in that month, either. So, I think there were a lot of historical things going on during that time too. Stephen Wilson 27:53 Yeah. Right. Yeah, this is the pandemic paper, or at least the review. You must have finished work before then. Okay, so yeah, I mean, well, you have, it's been out as a preprint for a few years and I know that it's a well cited preprint and it's now out in top journals. So congratulations. It's a new paper and it's about something which people don't really study enough, which is sequencing. So can you tell me why, why do you think sequence like, why do you think the study of sequential processing is so important for studying speech comprehension? Laura Gwilliams 28:29 Yeah, I mean, I think that sequences obviously play a crucial role in many different parts of language processing. Here, I'm looking at phoneme sequences, but it's also crucial for sequencing words together, sequencing phrases together. But looking at sequences at the phoneme level, I think is also nice, because, the closer you are to the sensory input, the more readily able you are to get clean signals non invasively. So yeah, so from that standpoint, it was a really nice kind of subjects level to look at. I also feel like the phoneme sequences in particular, I find it really fascinating because it's kind of at the moment that you're going from processing something sensory, to then connecting to something symbolic. So, yeah, you're going from something kind of in the outside world only to something that only exists in your own mind. And I think that that's one of the most exciting things about this paper, is it starting, we're starting to understand how the sensory pieces are actually used to then connect to stored representations in, in our mind, in this case, say lexical representations. Stephen Wilson 29:58 Yeah. So that, yeah crossing that that bridge between an acoustic input and an abstract linguistic representation. Laura Gwilliams 30:07 Right, exactly. Stephen Wilson 30:08 And so how did you decide to work on this question? Did you, was, did you kind of develop that interest in sequencing? Or Did somebody say, hey, why don't we work on sequencing? Laura Gwilliams 30:20 Yeah. So, the, I mean, the the lead into this was actually pretty straightforward. It was something I've been thinking about for a while, based on a previous study that I had done, where I basically found that the brain encodes the properties of a speech sound, for a super long period of time, for around about a second, after that sound has completely disappeared from the actual sensory signal. Which, to me just led to a conundrum. Like, your, how can you keep this information around for such a long period of time, while you're still hearing, other sounds come in at the same time as well. So, it suggests that there's a very high degree of parallel processing going on. But, up until now, I hadn't come across a study that really explained how it's possible that all of these different sounds get processed at the same time, without them interfering with one another, and actually, also keeping track of the order with which all of those sounds actually entered into the ear, which are two very important things that you want your system to do correctly in order to be able to correctly figure out what people are saying. 
Stephen Wilson 31:45 Right. Yeah, I'm just kind of thinking back to, like, what would have been, you know, before you did this work, what was the paper that showed us the most, and it was like Mesgarani et al., 2014, from the lab that you're now in, right? About how speech sounds are processed in the brain. I'm sure you were living it, I mean, I'm sure you knew the paper well. And I talked to Eddie about it on Episode Three, two years ago. But, you know, in that paper, you kind of see the brain respond to each phoneme as it comes in, and there's like a very distinctive signature, you know, that allows them to reconstruct what phoneme is being listened to at that moment in time. But it's all very, like, moment by moment in time, right? It's almost like, if you hear the word cat, there's a 'k' happening in the neural signal, and then an 'a', and then there's a 't', and there's never, in that paper, like, you know, any mechanism by which you would integrate those sequential phonemes into something larger. So you're saying that you noticed that, yeah, those representations don't actually just disappear the moment the phoneme finishes, they linger for maybe a second, and you're going to do something about it? Laura Gwilliams 32:58 Right, exactly. And with the Mesgarani paper, there you're looking at electrocorticography electrodes. So, you're recording from, say, tens of thousands of neurons, as opposed to in my work, where I primarily use magnetoencephalography. There, you're pooling over, like, hundreds of thousands of neurons together. And so I think that that's why that discrepancy kind of comes out. And that was part of the question of this paper as well, like, if you're looking at just a reduced set of neurons, is the reason this looks like a transient encoding because the actual information is moving across space? And when you're looking at the global whole-brain view that you get with MEG, you would be able to actually detect the responses as they are actually moving across a much larger neural population. So I think that all of this is kind of consistent with the different sensitivities that different recording techniques have as well. Stephen Wilson 34:14 Okay, that's interesting. So can you tell me about the study that this paper is based around, like, the participants and the stimuli that they listened to? Laura Gwilliams 34:26 So, um, sorry, you mean, what task people were performing in this experiment or in the previous experiment? Stephen Wilson 34:35 Oh, the current 2022 paper that just came out. Laura Gwilliams 34:39 Okay. Yeah, so, here the task is quite nice because the participants just need to listen to stories for a couple of hours while their head's in the MEG, and this naturalistic listening approach is actually something that I feel is a good way forward and something that I plan to continue doing in the future. So, yeah, so people listen to these stories, and I annotated the stories for precisely what phoneme occurred at what time in the story, and what phonetic features belong to each of those sounds. And then also, importantly, where that sound actually occurs relative to word boundaries. So, is it the first phoneme of the word or the second phoneme of the word or the third? Such that, then, we can investigate the sequencing of the sounds within the word.
Stephen Wilson 35:42 Okay, that's, that sequencing piece, that’s so novel here. So, you end up having 31, what you call linguistic features, and they're kind of, like you just said, they're kind of in a few different categories, right, so 14 of them acoustic phonetic, kind of capturing the properties like place, manner and voicing. Then you've got like, order related features that are encoding where the, where the phoneme is in the context of the word and syllable, and morpheme. That's, that you know, that, there's your like, original, you know, you did your first studies ever on morphology and so now you're like… Laura Gwilliams 36:19 Yeah, it's like that. Right, exactly. Stephen Wilson 36:24 And then you also model, you've also got coding for boundaries between words and then you've got these information theoretic measures like surprisal, and entropy and frequency, can you can you talk about why those are in the model and how you computed those? Laura Gwilliams 36:42 Yeah, so some of the properties here are included, essentially as covariates. So, representationally, I was primarily interested in those 14 phonetic features. But language has a very annoying habits, although I actually think it's intentional, of correlating with itself in terms of its, its features are correlated and so certain sounds tend to occur at certain positions in the word with a given likelihood. So, to make sure that I was truly investigating phonetic processing, and not other aspects of language that I didn't want to focus in on here, that’s why I included some of these other properties like, yeah, the, where the sound is in the word and some of the statistical properties. That being said, some of, some of the properties in here too, I actually, in later analyses, wanted to look at phonetic processing, as a function of the, for example, surprisal, to see whether phonetics gets processed the same in a highly predictable environment or in a low predictable environment. So, to make the model kind of complete, I put everything in the model to begin with, and then split things up by those different factors later on. And in terms of how I… Stephen Wilson 38:26 Yeah, it ends up being quite… Laura Gwilliams 38:28 I was just going to say, sorry… Stephen Wilson 38:29 I didn't mean to talk over you. Yeah, well, yeah, you can get Yeah, why don't you finish? Yep. Laura Gwilliams 38:36 Okay. Yeah, I am sorry, just in terms of how I computed these, I used the, a very large language corpus basically to, a spoken language corpus to determine how likely a sound is given all of the other possible sounds that could occur given the sequence of sounds in the word at that point. Stephen Wilson 39:02 Okay. So, originally, some of these are intended as covariates, but they ended up being quite important to the analysis. I mean, the, I mean, they kind of end up being, you know, but what the order, the positional stuff, was that always intended to be a central part of this study, or was that, did that kind of just come later? Laura Gwilliams 39:22 Yeah, no, this was, that was always intended to be a key part of the, of the analysis. And yeah, in the later analyses I break up the phonetics based on the location that it occurs in the word and things like that. But one of the, yeah, I actually quantify location in kind of three different ways. So, where the phoneme is in the syllable, where the phoneme is in the word and where the syllable is in the word. 
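To make the predictability measures Laura just described concrete, here is a minimal sketch of how phoneme surprisal and lexical (cohort) entropy can be computed from a frequency-weighted pronunciation lexicon. This illustrates the general idea only, not the paper's actual pipeline; the toy lexicon, frequencies, and function names are all invented for the example.

```python
# Minimal sketch (not the paper's pipeline): phoneme surprisal and cohort entropy
# from a toy pronunciation lexicon with made-up word frequencies.
import math

# hypothetical lexicon: word -> (phoneme sequence, corpus frequency)
lexicon = {
    "cat": (("k", "ae", "t"), 120),
    "cap": (("k", "ae", "p"), 40),
    "can": (("k", "ae", "n"), 200),
    "dog": (("d", "ao", "g"), 150),
}

def cohort(prefix):
    """All lexical entries consistent with the phoneme prefix heard so far."""
    return {w: (ph, f) for w, (ph, f) in lexicon.items() if ph[:len(prefix)] == tuple(prefix)}

def surprisal_and_entropy(prefix, next_phoneme):
    """Surprisal of the next phoneme given the prefix, and entropy over the
    remaining lexical cohort, both weighted by word frequency."""
    candidates = cohort(prefix)
    total = sum(f for _, f in candidates.values())
    # probability mass of continuations whose next phoneme matches
    match = sum(f for ph, f in candidates.values()
                if len(ph) > len(prefix) and ph[len(prefix)] == next_phoneme)
    surprisal = -math.log2(match / total) if match else float("inf")
    # uncertainty about which word is being said, given the prefix
    entropy = -sum((f / total) * math.log2(f / total) for _, f in candidates.values())
    return surprisal, entropy

# e.g. having heard /k ae/, how surprising is /t/, and how uncertain is the word?
print(surprisal_and_entropy(["k", "ae"], "t"))
```

In the study these quantities were estimated from a large spoken-language corpus; the sketch only shows the logic of conditioning on the phoneme sequence heard so far within the word.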
And it didn't make it into the final version of these analyses, but I think it's also an interesting question of what the kind of end goal of the sequencing is. Like, is the goal here to make words out of phonemes? Or to make syllables out of phonemes? Or maybe the phoneme isn't actually the right kind of basic unit to be looking at in the first place. So, yeah, part of the goal of including these different distance metrics was to try to adjudicate between them. Stephen Wilson 40:36 Okay, that didn't make it. And that's why, I was wondering why you kept on saying (sub)lexical, with 'sub' in parentheses throughout the paper, like, what are you talking about? And now I see, like, you're thinking that it might be phonemes to syllables, and not to words, right? Laura Gwilliams 40:53 Yeah, right. And here, I didn't really have the ability to discriminate whether phonemes were being made into morphemes or lexical items. So, the parenthesized 'sub' is to kind of indicate that maybe what actually is happening here is sequencing into morphological constituents, which, as you mentioned, morphemes are something very close to my heart. So, this is something that I would like to investigate in the future, precisely what the unit being connected to is, further downstream. Stephen Wilson 41:37 Yeah. Cool. So you code all these things in the dataset, like you have 21 people, they listen to two hours of stories, you go through phoneme by phoneme, coded by hand, I guess. And then you try to predict them, using neural data from the MEG and also using the spectrogram of the stimulus, kind of like as a control condition, I guess. So, the prediction model is really complicated. It's called back-to-back regression, and it was developed by your co-author Jean-Remi King and some of his colleagues in a previous paper in NeuroImage, that I definitely had to, like, read, or well, try to read, in order to understand your paper. And, as far as I understand it, what it's trying to do is respect the fact that the neural data and the featural data are both multidimensional, right? So the neural data, you've got, like, 208 channels of MEG recordings, and then you've got these 31 features, and it's multidimensional on both sides. So, it's not just a typical regression model. So, can you explain why you chose this back-to-back regression approach, and maybe try and give us the gist of how it works? Because it's really quite hard to understand. Laura Gwilliams 42:55 Yeah, so the main motivation for the back-to-back is to try to overcome the challenge that I mentioned, that if we take all of these different features of language, those features are very likely to be correlated with one another. And so, when you're trying to decode feature A, you want to be certain that you're decoding feature A and not that which is correlated with feature A, and the back-to-back regression essentially allows you to separate out the correlation between these different features. So you can be confident that you are actually just decoding the feature of interest. And I guess, briefly, the way that this works, and the reason it's called back-to-back regression, is because the first thing you do is fit your run-of-the-mill decoding regression model, in order to get a prediction for, let's say, all of those 31 features that you're interested in.
But then the way that you evaluate how good that decoding was, rather than just taking, say, the truth of feature A and correlating that with the predicted feature A, you're going to take the truth of all of your features, and compare that to the prediction of just one feature. And so you're just going to look at the variance accounted for above and beyond the variance that all of those correlated features account for as well. It ends up being a pretty harsh analysis, essentially, kind of in proportion to how correlated your features are. Fortunately, in this case, the features aren't too detrimentally correlated, so it ends up being possible. But yeah, the power of your analysis just ends up reducing with the strength of the correlation. Obviously, if your features are identical, a correlation of one, you won't be able to separate them. Stephen Wilson 45:17 Yeah. Okay, that helps me understand a bit. And there was another thing that I was a little unclear on. So when you're predicting, either from the neural data or from the spectrogram, are you using just a single slice of time? Or is it like a window of time over which the signal is evolving? Like, is there a moving window kind of approach? Laura Gwilliams 45:38 Yeah, I just use one single time stamp at a time. Stephen Wilson 45:45 It's like literally a 208-channel vector of MEG signal from one moment in time predicting the features, either at that moment or at a different moment. Laura Gwilliams 45:55 Right. Stephen Wilson 45:56 Okay, cool. So, you know, when the paper starts out, you kind of have this, I guess it's like a proof of principle in the first figure, where you just show that you're able to predict most, if not all, of the features from the MEG data. Is that a fair characterization of that first figure? Laura Gwilliams 46:14 Yeah, and also kind of showing that it is indeed the case that these features are decodable for a long period of time, they're not just instantaneous and then disappear, but they're around for, like, half a second or so. Yeah. Stephen Wilson 46:30 Exactly. Okay. So yeah, the key part of the figure shows time on the x-axis, and then the explainability of various features like nasal, vowel, voicing, approximant, fricative, as well as those other features like entropy and surprisal, and location features. So all of them are kind of predictable for about half a second, relative to the onset of the phoneme. And so that's what you told us before, about how these things are not just represented for a moment, they're represented over time. And you know, how long is a phoneme, right? So, I mean, I was calculating this from your methods section, you've got two hours of data, and you've got 50,000 phonemes, and so that's basically seven phonemes per second. So, therefore, if a phoneme is being instantiated in the signal for half a second, that means that you have three phonemes at a time, right? Laura Gwilliams 47:28 Yeah, yeah, exactly. Yeah, in these stories, each phoneme is about 80 milliseconds or so. Stephen Wilson 47:37 Yeah, so something of the order of three to a little more than three. Laura Gwilliams 47:42 Yeah. Right. Exactly. Stephen Wilson 47:43 So yeah, so what you're showing first is that you can decode this information from MEG, or M-E-G, Liina told me that you guys don't call it 'meg'. Laura Gwilliams 47:54 You can do whatever you want, Stephen. It's your podcast.
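As a rough illustration of the two-stage logic Laura just described (in the spirit of King and colleagues' back-to-back regression), the sketch below uses simulated, deliberately correlated features: a first ridge regression decodes all features from the sensors on one half of the trials, and a second ridge regression then asks how much each true feature uniquely explains its own decoded estimate on the other half. The shapes, simulated data, and regularization choices are assumptions for the example, not the paper's settings.

```python
# Minimal sketch of the back-to-back regression idea, with simulated data.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_trials, n_sensors, n_features = 2000, 208, 31

# simulated stimulus features (deliberately correlated) and MEG sensor data at one time point
Y = rng.standard_normal((n_trials, n_features))
Y[:, 1] = 0.7 * Y[:, 0] + 0.3 * Y[:, 1]            # make feature 1 correlate with feature 0
X = Y @ rng.standard_normal((n_features, n_sensors)) + rng.standard_normal((n_trials, n_sensors))

# split trials: fit the decoder on one half, evaluate on the other
train = np.arange(n_trials) < n_trials // 2
test = ~train

# step 1: the "back" regression: decode all features from the sensors
G = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X[train], Y[train])
Y_hat = G.predict(X[test])

# step 2: the "to-back" regression: regress each decoded feature on ALL true features;
# the diagonal coefficient is that feature's unique contribution beyond its correlates
H = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(Y[test], Y_hat)
unique_effect = np.diag(H.coef_)                    # clearly above 0 means uniquely decodable
print(unique_effect[:5])
```

The point of the second stage is exactly what Laura describes: a feature only gets credit for the variance it explains over and above everything it is correlated with, which is why the analysis gets harsher as the features become more correlated.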
Stephen Wilson 47:57 I guess so. Yeah. But you know, I need to get the terms right. And then, okay, so you're talking about this in the paper, the actual prediction performance is not high, right, it's only slightly above chance. Don't take this as any kind of critique, because I'm, like, you know, an fMRI guy, and we deal with signals that are, like, you know, a fraction of 1% and get really excited about it. So, I know that it's not about the size of the signal, it's about the, you know… Laura Gwilliams 48:32 What you can do with it. Stephen Wilson 48:33 What you can do with it, exactly. (Laughter) So, you know, chance for your binary features would be 50%, and you're mostly, like, hovering around 51, 52, but with super low p-values, because you've got two hours of data and 20 subjects. So, that's not important, right? Can you explain, like, you know, it's not really about how predictive it is, just the fact that it is predictive. Laura Gwilliams 49:01 Yeah. And, yeah, so I think it's expected that, especially with the continuous story listening, I mean, we've already kind of indirectly demonstrated here that a lot of the signal is being washed over from previous sounds and subsequent sounds, and so it's not surprising that the amount of variance you can explain is tiny, because there are so many things going on when you're doing something like listening to a story. I think you would expect the effect sizes to increase much more if you were just blasting someone with one syllable and then taking it away. But yeah, so the point here is the robustness of the effects, given how many repetitions we have of these different sounds and how many subjects we were able to record from. Stephen Wilson 49:59 All right. And then you have your control condition of predicting from the spectrograms. And it looks like from the spectrogram, you can probably predict better overall, but you don't really talk about that in the paper, it's all kind of relative. But it looks like you can predict the acoustic phonetic features, but you can't really predict the other features like surprisal and order, right? Is that a fair characterization of the findings there? Laura Gwilliams 50:22 Yeah, yeah, exactly. And, indeed, the same sounds that we can decode better from the spectrogram, we also decode better from the neural responses. But that doesn't hold true for the more higher-order properties, like the statistics, and like the order of those sounds. So it seems that those properties are not present in the spectrogram, and the fact we can decode them from the brain responses reflects that it's something that the brain is applying to the acoustic signal, it's not something present in the acoustic signal. Stephen Wilson 51:03 Right. Yes, that's clear. Okay. So then, that's just the sort of setup. Then you ask the question, you know, you're representing these phonemes for, like, half a second each, and you ask, well, how does the brain then do that without mixing up the phonetic features of the multiple sounds that it must be coding at once? And the first hypothesis that you test is what you call position-specific encoding. Can you explain what that is, and how you tested it and why you decided that's not the answer? Laura Gwilliams 51:37 Yeah.
So, one solution that you might think that the brain employs to be able to process multiple sounds at the same time without getting confused is, well, it's easy, you have one set of neurons that like, let's say, 'p' sounds when they occur at the beginning of a word, and a totally different set of neurons that like 'p' sounds when they're in the second position of the word, etc, etc. And so you'll have a different set of neurons for each sound, but also different depending on where the sound is in the word, and that would allow you to code that up, that would give you a reasonable solution to this kind of conundrum that we see, that these different sounds are processed in parallel. So, the way that I tested whether or not that seems to be the solution the brain implements, is by taking my decoding algorithm, and I train it on just the responses to the first sounds of the word. So let's say I'm trying to distinguish a fricative from a non-fricative sound, at first phoneme position. If the way that the brain solves this is by having a totally different set of neural populations when that same fricative is in second phoneme position, then my decoder should be completely useless at reading out that information at second phoneme position. And essentially, that doesn't seem to be true. I can take the decoder which I trained on phoneme one, and that generalizes very well to phoneme two, and I can do the same, train on phoneme two, and that generalizes very well to phoneme three. And you can do any kind of mix of positions you want, and you're still able to read out the information, which suggests that there is a set of shared neural populations which encode phonetic information, regardless of where that sound actually occurs in the word. Stephen Wilson 53:41 And I'm guessing you weren't surprised to rule out that theory, right? Because we kind of knew that the neural response was, at least in considerable part, related to the spectrotemporal receptive fields of the neurons, and, you know, the different phonemes would differ a lot on those dimensions, and that was going to be reflected in the neural signal. So, you were going to be able to do it across position. Right? Laura Gwilliams 54:09 Right. And position is, of itself, I don't know, like, I could make up a word which is 321 phonemes long, and you wouldn't expect to be able to arbitrarily have a neural population which can encode any position. Stephen Wilson 54:27 Only if you're a native Hawaiian. (Laughter) Laura Gwilliams 54:30 Right. Right. What is kind of cool, though, is that even just from a spectral standpoint, sounds which occur at word onset do tend to have very different spectral properties than those occurring, say, at the end of the word, or, like, before or after a vowel, and these kinds of things. So, I get into this a little bit deeper later in the paper, but this is one of the first indications that what we're looking at here is something slightly abstracted away from the actual acoustic signal, that is common across the different acoustic realizations of the same phonetic feature.
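The position-specific test Laura describes boils down to training a decoder on phonemes in one word position and evaluating it on phonemes in another. Below is a minimal sketch of that logic with simulated sensor data; the labels, the simulation, and the shared spatial pattern are illustrative assumptions, not the study's actual epochs or statistics.

```python
# Minimal sketch of cross-position generalization, using simulated MEG-like data.
# If the code were position-specific, a decoder trained at word position 1
# would be at chance when tested on position 2.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_sensors = 208

def simulate(n, shared_pattern, noise=2.0):
    """Simulate sensor responses where a binary label (e.g. fricative vs. not)
    is encoded by the same spatial pattern regardless of word position."""
    y = rng.integers(0, 2, n)
    X = np.outer(y - 0.5, shared_pattern) + noise * rng.standard_normal((n, n_sensors))
    return X, y

pattern = rng.standard_normal(n_sensors)        # one shared code across positions
X_pos1, y_pos1 = simulate(1500, pattern)        # phonemes in word position 1
X_pos2, y_pos2 = simulate(1500, pattern)        # phonemes in word position 2

# train on first-position phonemes, test on second-position phonemes
clf = LogisticRegression(max_iter=1000).fit(X_pos1, y_pos1)
auc = roc_auc_score(y_pos2, clf.decision_function(X_pos2))
print(f"cross-position AUC: {auc:.2f}")         # ~0.5 if position-specific, above 0.5 if shared
```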
Stephen Wilson 55:12 Right. So, if that's not the answer, let's talk about this other coding mechanism that you then investigate. And here, I'm going to read a quote from your paper, because I think you said it more clearly than I could. 'The idea is that each speech sound travels along a processing trajectory, whereby the neural population that encodes a speech sound evolves as a function of elapsed processing time.' So can you tell me, like, what would that look like? And how would that solve the sequential representation problem, if it works like that? Laura Gwilliams 55:47 Yeah, so one idea is, okay, you have a sound that enters the ear, it goes to cortex, and it's processed in that same set of neural populations for 500 milliseconds. However, this would lead to the problem that then you have the same neural population essentially trying to process multiple sounds at the same time. So, an alternative that I wanted to test here was that it's not just one neural population that processes a sound, but that you'll have population A process it and then pass it to population B, which processes it and passes it to population C. And so, as a function of time, the information gets passed between different neural populations. And one of the important things here is that it always gets passed along that same spatial trajectory. It always traces the same path as a function of time. And why would this solve the problem? It solves the problem because, by the time you're hearing the next sound of the word, those neural populations have already kind of passed the hot potato over to the next set of neural populations. And so they're kind of freed up, ready to receive the next packets of information. And then they, in turn, pass it over to the next set of workers. And so, this means that parallel processing can occur without ever having to burden the same neural populations to process more than one sound at the same time. Stephen Wilson 57:43 So, how do you test this in your paper? Laura Gwilliams 57:47 So, as you nicely set me up, as a function of time, I'm fitting a different decoder at each millisecond, and then figuring out the accuracy of each of those decoders as a function of time. So here, simply what I'm doing is asking, okay, the decoder that I trained at, let's say, 100 milliseconds after the sound began, is that decoder still informative for reading out the information at a later point in time? So, is the neural pattern that exists at 100 milliseconds the same at 200 milliseconds? That would suggest that the information is in that same set of neural populations for that amount of time. Or has the neural pattern actually changed, such that I wouldn't be able to use a decoder from earlier in time to read out information later in time? Stephen Wilson 59:01 And this approach is called temporal generalization. And it's laid out in a paper by, again, Jean-Remi King, with Stanislas Dehaene, from 2014. It's a really interesting paper, and the way you implement it here is really nice. So, it's basically expressed in figure three, which is like really the core of your paper, right? Is that a fair thing to say? Like, if you were to, you know, kind of make a postage stamp about this paper, it would just basically have figure three on it. So yeah, it's hard to describe figure three to the audience, because it's very complicated. I pored over it for a long time. It looks like a rainbow of fingers at a 45 degree angle, and I'm not really going to try and explain it beyond what you've just explained it really shows.
But the idea of the 45 degree angle is, it's kind of showing that you can only decode a certain moment in time past a phoneme by using that same moment of time to predict it. So in other words, what's going on in the brain, like, 100 milliseconds after you hear a 'Peh', is not the same as what's going on in the brain 300 milliseconds after a 'Peh'. So you can't make that prediction well. If you want to predict what's going on in the brain 300 milliseconds after hearing a 'Peh', you have to look at other instances of 300 milliseconds after hearing a 'Peh'. So in other words, these phonemes are consistent in the neural response, but instead of just being, like, a drawn-out 500 milliseconds of the same thing, it's like an evolving response that's predictable, but it's evolving. So I'm not saying anything other than what you just said, I'm just trying to say it more than once, because I think that's necessary. It's really complicated. I definitely encourage people to take a look at that figure. It's very nice. So, you know, by and large, these fingers don't really overlap in the figure, which is how you're showing that you're processing the phonemes in parallel, and simultaneously. But there's this one exception in the figure that I found really intriguing, and I think you know what I'm talking about. In the paper, you call it a left-sided appendage, which I think is funny, but you don't really explain it in the paper, you just mention it. Okay, so what it is, and I'm sorry if this is, like, getting really deep in the weeds, but I'm very curious. It seems to be that the first phoneme of the word has a different pattern to all the others. Like, with the first phoneme only, you can predict early responses with data from later. It's almost like there's some kind of echo. Does that… Laura Gwilliams 1:01:55 Yeah. Stephen Wilson 1:01:55 What's up with that? Do you have any idea what's going on with that? Laura Gwilliams 1:01:58 Yeah, I have some hypotheses. This is another thing, I mean, I guess this always happens when you work on a project, it raises all of these really interesting things to look at in the future, and this is certainly one of them. So yeah, here you can basically decode the properties of the first sound when you're at the offset of the previous word, and one of my hypotheses here is that this is a predictive effect. And maybe this only happens for the first phoneme of the word, because when you're listening to continuous speech, you're trying to anticipate what it is that these people are saying, or you just kind of naturally anticipate, let's say. Once you know what the first sound of the word is, knowing that reduces your uncertainty about the identity of the word more than any other sound in that sequence. And so I wonder if the way by which we kind of predict what word it is that someone is going to say is partly by just anchoring ourselves onto what that first sound is, and then allowing the rest of the process to unfold. So I think this might be a predictive mechanism which facilitates lexical recognition. But yeah, this is something that I really want to look at further.
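For readers who want the mechanics behind the 'rainbow of fingers', here is a minimal sketch of temporal generalization in the spirit of King and Dehaene (2014): a decoder is trained at each time point and tested at every other time point, giving a train-time by test-time score matrix. The simulated data deliberately use a spatial code that changes at every time point, so the matrix comes out diagonal rather than square; all sizes and data here are invented for the example, not taken from the study.

```python
# Minimal sketch of a temporal generalization matrix with simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 600, 208, 50
y = rng.integers(0, 2, n_trials)

# simulate a code that moves to a new spatial pattern at every time point
X = rng.standard_normal((n_trials, n_sensors, n_times))
for t in range(n_times):
    X[:, :, t] += np.outer(y - 0.5, rng.standard_normal(n_sensors))

half = n_trials // 2
scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    # train a decoder at one time point...
    clf = LogisticRegression(max_iter=1000).fit(X[:half, :, t_train], y[:half])
    for t_test in range(n_times):
        # ...and test it at every time point
        scores[t_train, t_test] = roc_auc_score(
            y[half:], clf.decision_function(X[half:, :, t_test]))

# a diagonal matrix (high only where t_train == t_test) indicates an evolving code;
# a square of high scores would indicate a sustained, static code
print("diagonal mean AUC:", scores.diagonal().mean().round(2))
print("off-diagonal mean AUC:", scores[~np.eye(n_times, dtype=bool)].mean().round(2))
```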
Stephen Wilson 1:03:35 Yeah, that's really interesting. I think I agree with you. Well, I mean, that it's something special about, like, the predictability of the word boundary. But I wonder if it might be, my sort of first guess is, it's almost the opposite of what you said, like that prediction of the next phoneme is actually much greater within the word than across the word boundary, like crossing the word boundary is a point of low predictability of the next phoneme, relative to being in a word. And I would interpret it as showing, not so much that you can predict, I mean, I think it looks like you can predict early from later, not later from earlier, but I don't know, it's very complicated. I think that it's something like what you said, like, there's a difference in the predictability. Like, the first phoneme is very special, and it's also like some kind of index into the lexicon that's privileged. Laura Gwilliams 1:04:34 Yeah, I think so. And I think that first phonemes are, in general, special in terms of how they narrow down the lexical items. So, yeah, it'd be really great if we could use such a simple analysis like this in order to be able to investigate lexical prediction and processing, that'd be great. Stephen Wilson 1:05:00 Yeah, I mean, I think that the future follow-ups are pretty clear, what you'd want to do there. Okay, so now I want to ask you something from the peer review process. So this is interesting. Um, so you know, this journal publishes the whole peer review document. Laura Gwilliams 1:05:21 Oh, wow! You read through that! Stephen Wilson 1:05:23 I didn't read it word for word. It's like 66 pages or something. I found it really interesting. I mean, I kind of want to ask you something that a reviewer asked, but I also want to talk about the whole process. So I feel like this paper, it's funny, because you're reading this document, right? And it's like a back and forth conversation. There are five reviewers here, by the way. Laura Gwilliams 1:05:46 Yeah. Stephen Wilson 1:05:47 Which I am sure you know, I'm just saying that for the listener, there are five reviewers. And, you know, it goes through three versions, I think, at least. And it's very interesting, thinking about, okay, is this valuable, seeing this peer review history, and it definitely is, from my perspective as the reader, but it's also kind of brutal, because you're reading about this paper that's morphed over the two and a half year period and it's changed quite a bit. Like, there's the trivial things, like, you know, figure numbers are different and page numbers are different. And, like, you know, they're talking about a figure that doesn't exist anymore, that's got a different number now. So there's a lot of detective work. But it's also, at the end of the day, rewarding, because you do get to see, at a deeper level, what people thought about different aspects of the paper. What do you think? Do you feel like the paper got better in peer review? Laura Gwilliams 1:06:39 Um, yeah, I mean, I think that it definitely helped to clarify a lot, and I think that some things were highlighted that a lot of readers might also have questioned or had uncertainties or confusion about. So I definitely think that it helped with that. There also was, I didn't have any MEG simulations in the paper beforehand, and I think that also helped a lot to really convince myself of what it was that I was seeing.
And I don't think that I had any of the spatial analysis on the… Stephen Wilson 1:07:24 No, you didn’t. Laura Gwilliams 1:07:27 And I think that also really helped. So, yeah, the reviewers, if you're listening, it was painful, but I think that the outcome was a positive one. Stephen Wilson 1:07:40 That was my impression. I mean, I didn't actually read the preprint separately, but just from seeing the final product and seeing the review, I could see what had changed. And so your reviewer two here, and I mean that in the pejorative sense, not in the actual numerical sense, was reviewer four. (Laughter) So reviewer four was your reviewer two, right? This is the trickiest reviewer, and their biggest critique was that it's no big deal that you're seeing this processing trajectory, because it's just going to reflect responses propagating through the auditory system. You end up rebutting that later, because reviewer four is like, I am done here, they don't come back, and the editor brings in reviewer five to deal with the reviewer four response, and then, it's just a long story. Listeners, you should go read this. (Laughter) But anyway, you end up rebutting it, not even immediately, but later on down the line, to reviewer five. So can you tell us how you addressed that critique? Laura Gwilliams 1:08:44 Yeah, so this is where the spatial analysis comes in. So one idea, which, I don't know, maybe you can consider it trivial, I don't think it would have been trivial even if it had been true, but I think it's interesting that it's not, is, okay sure, you just see that the information is moving because you're going from, like, primary auditory cortex to frontal cortex as a function of time. And so, what I did to see whether or not that was the case was look at the weights of the decoder, basically, just to see, at these different time points as you go through the trajectory, where the information is on those MEG sensors. And what I find is that all of the phonetic features actually end up kind of looking like a spaghetti mess. (Laughter) But the information, which, I don't know, is an interesting finding, the information seems to stay local, within auditory cortex. I don't have the ability to make super strong spatial claims, but if I was to guess where this would be, I would say it's hanging out in superior temporal gyrus. It seems to be the case that the information isn't just kind of moving, say, anteriorly as a function of time, but actually the information is just kind of being reconfigured in the same brain area as a function of time. And that actually is a really important finding for understanding how the system is actually using this information and why it gets configured this way. So, yeah, I think that's a really nice outcome of this work, to say that the information still remains local in one brain area, it's just the specific neurons within that brain area that are kind of passing this information around. Stephen Wilson 1:11:06 Right. So that's telling us something about the nature of this temporally extended trajectory in terms of its anatomy, cool. So yeah, I mean, I feel like the reviewer was harsh, but they ended up, you know, getting you to put some stuff in the paper that wasn't there before, and that made the paper better. So I suppose that's what's supposed to happen.
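As a rough illustration of the kind of check being described, the sketch below asks whether the decodable information moves across sensors over time or stays in the same place, by looking at the decoder's spatial patterns at each time point. It is only a generic sketch under the same hypothetical X and y as before, not the paper's exact analysis.

```python
# Fit a sliding decoder and extract one spatial "pattern" (the interpretable
# counterpart of the decoder weights; Haufe et al., 2014) per time point.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from mne.decoding import SlidingEstimator, LinearModel, get_coef

clf = make_pipeline(StandardScaler(),
                    LinearModel(LogisticRegression(max_iter=1000)))
sliding = SlidingEstimator(clf, scoring="roc_auc", n_jobs=-1)
sliding.fit(X, y)

patterns = get_coef(sliding, "patterns_", inverse_transform=True)
# patterns has shape (n_sensors, n_times). Correlating the sensor topography
# across time points asks whether the same (e.g. temporal-lobe) sensors stay
# informative, i.e. local reconfiguration, or whether the topography drifts,
# i.e. information propagating to new regions.
```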
Okay, so, you know, I know we talked before that we have a time cutoff coming up soon. I was like, oh, don't worry at all, we won't take too long to talk about this, and of course, we're coming up close to our time. But I want to ask you just a couple more things. So, the last major empirical aspect of the paper concerns the way that the timing of these representations depends on high level linguistic factors like surprisal and lexical entropy. Can you tell us about those findings? Laura Gwilliams 1:11:59 Yeah, yeah. This, I think, is really a very, very cool aspect of this work. If I'm allowed to say that. Stephen Wilson 1:12:07 Of course you are. Hey, you just got a job at Stanford, you can say anything. (Laughter) Laura Gwilliams 1:12:11 Right. But, you know, every so often your data just, I don't know, just surprises you. So yeah, the way this happened was I was looking at the rainbow fingers, as you describe them. And I noticed a couple of things. One, that the information about the first phoneme of the word seems to be maintained for a much longer period of time than, for example, the last phoneme of the word. And similarly, I also noticed that when I try to decode the properties of that first sound, I can't do it until later in time, as opposed to the last sound of the word, which I can decode much earlier. And I wasn't sure if this was really a word onset/offset response, or rather something to do with how well you can predict what it is you're about to hear, and ultimately, what word this person is actually saying. So I basically repeated the finger analysis, but I broke it up as a function of those two things I just said, so low and high surprisal would be the cases where you can predict the next sound really well versus where you can't predict it well. And I quite clearly see that when you can predict what the next sound is going to be, I can decode it from the neural responses around about 100 milliseconds earlier. Stephen Wilson 1:13:56 That’s a lot. Laura Gwilliams 1:13:58 Yeah, than in the cases where you can't predict it well. And so my interpretation of this is, I should say, it's not that I can decode it before it happens, and I think that is an important distinction. But once the sound has happened, you can reconstruct it faster from the neural responses if that thing was already anticipated, let's say. And so I think this maybe suggests that as the brain is processing all of these very rapidly incoming sounds, it kind of sets itself up in a certain processing state, and maybe kind of pre-activates the synaptic weights of certain sounds that it's expecting to hear. And if the sensory input then indeed matches those weights, the process can essentially just unfold much faster, as opposed to if they don't match. And it's not the case that the whole process breaks down entirely, it just takes a longer period of time to get to that same level of processing. And then, on the flip side, I again split those fingers up, but this time as a function of how certain the listener can be about what the identity of the word being said actually is. So, you can be very certain that I'm saying word X, or very uncertain that I'm saying word X. And this, I also think, is really, really cool.
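To make the surprisal split concrete, here is a toy sketch of how phoneme surprisal could be estimated from a pronunciation lexicon with word frequencies and then used to divide phoneme epochs into predictable versus unpredictable sets. The lexicon, frequencies, and function are hypothetical illustrations, not the paper's actual method.

```python
# Toy phoneme surprisal from a frequency-weighted cohort of words.
import numpy as np

lexicon = {            # word -> (phoneme sequence, frequency count) -- made up
    "cat": (("k", "ae", "t"), 120),
    "cap": (("k", "ae", "p"), 40),
    "cab": (("k", "ae", "b"), 15),
}

def phoneme_surprisal(prefix, next_phoneme):
    """-log2 P(next phoneme | phonemes heard so far in the word)."""
    cohort = {w: (seq, f) for w, (seq, f) in lexicon.items()
              if seq[:len(prefix)] == tuple(prefix)}
    total = sum(f for _, f in cohort.values())
    match = sum(f for seq, f in cohort.values()
                if len(seq) > len(prefix) and seq[len(prefix)] == next_phoneme)
    return -np.log2(match / total) if match else np.inf

print(phoneme_surprisal(["k", "ae"], "t"))  # low surprisal: frequent continuation
print(phoneme_surprisal(["k", "ae"], "b"))  # higher surprisal: rare continuation
# Epochs could then be median-split on this value and decoded separately,
# asking whether low-surprisal phonemes become decodable ~100 ms earlier.
```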
So in the cases where it is not very clear what word this is going to end up being, the actual phonetic information itself hangs around for a longer period of time. And conversely, if, okay, the word hasn't finished, but it doesn't really matter because I already know what word you're saying, then the phonetic information actually gets discarded faster. And so I think that both of these things, but especially the lexical entropy stuff, really highlight how flexible this processing system is. It's not just a blind process that always unfolds with the same kind of temporal latency and duration, but the system can kind of choose to keep information around for a longer period of time in precisely the circumstances where it might need that information in order to disambiguate the lexical identity, which, yeah, maybe you can consider kind of the point of the phonetic processing, to be able to link to the higher order structures. Stephen Wilson 1:17:03 Right. So do you think, at the end of the day, that that is the most important contribution of the paper? Because I think the paper makes several distinct contributions. One thing is kind of just showing again that phonemes are processed for much longer than their actual temporal duration. Secondly, it's showing how that takes place, i.e., it's not just an extended representation that's static over time, but an evolving representation that allows you to decode order as well as identity. And then third, there's this modulation by predictability and by processing needs. I mean, those are three distinct contributions here, which one is the most exciting to you at this point? Laura Gwilliams 1:17:51 I mean, I think honestly, all of them are really exciting. I think that they all are part of the kind of emerging understanding of how all this works. Like, there's a constant interaction between these different levels of representation and the information available to the system at any given time. So I'm really excited about the stuff I just talked about, the lexical entropy, because that, I think, is getting us closest to what we talked about in the beginning, linking the sensory signal to, making contact with, that special moment where you go to something kind of stored and symbolic and kind of step away from the sensory stuff. But I think that we need to look at it from kind of the whole perspective of how it is that you even get to that point. And I think that that's through the processing of these sequences. Stephen Wilson 1:18:54 Yeah. Laura Gwilliams 1:18:55 And another, I guess, final thing is that I've started to look at these fingers for, not just the phonetic properties of speech sounds, but also, say, lexical properties of words, and they seem to follow this similar kind of trajectory-type process, which I think is also really exciting because it suggests that maybe this is a processing motif that just gets kind of recycled at these different levels. So I think that's another reason why I think the whole package is really important for, yeah, understanding the system at kind of a broader level. Stephen Wilson 1:19:43 Yeah. It's a lovely paper and I hope everybody takes the time to, you know, take a look at the figures and understand it more deeply than we could get to in conversation.
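A companion toy sketch, under the same made-up lexicon idea as the surprisal example above, shows lexical (cohort) entropy: how uncertain the listener still is about the word's identity given the phonemes heard so far. Again, everything here is a hypothetical illustration rather than the paper's method.

```python
# Toy cohort entropy from a frequency-weighted lexicon.
import numpy as np

lexicon = {            # word -> (phoneme sequence, frequency count) -- made up
    "cat": (("k", "ae", "t"), 120),
    "cap": (("k", "ae", "p"), 40),
    "cab": (("k", "ae", "b"), 15),
}

def cohort_entropy(prefix):
    """Shannon entropy (bits) over the words still consistent with the prefix."""
    cohort = {w: f for w, (seq, f) in lexicon.items()
              if seq[:len(prefix)] == tuple(prefix)}
    total = sum(cohort.values())
    probs = np.array([f / total for f in cohort.values()])
    return float(-(probs * np.log2(probs)).sum())

print(cohort_entropy(["k"]))             # high entropy: several candidates remain
print(cohort_entropy(["k", "ae", "t"]))  # ~0 bits: the word is fully identified
# The pattern described above: when entropy stays high, phonetic information is
# held in the neural trajectory for longer; once entropy collapses, it can be
# discarded sooner.
```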
So I was gonna, of course, ask you about what you've been doing in Eddie’s lab and what you're gonna do in your new lab at Stanford, but we ran out of time. So I'll just have to invite you back some other time when you've done more stuff, and we can talk about it. I think that there's so much more, and I'm looking forward to seeing what you're going to do next with your work. Laura Gwilliams 1:20:13 Great. Thank you. Yeah, I'm ready to be invited back anytime. It's been a lot of fun. Stephen Wilson 1:20:18 Yeah. Great. Well, thanks so much for taking the time and I'll let you go, and we will hopefully catch up at, will you be at SNL? Laura Gwilliams 1:20:28 Yeah, yeah, in Marseille. Stephen Wilson 1:20:30 I’ll see you in Marseille. Right. Take care. Laura Gwilliams 1:20:34 Thanks, you too. Bye. Stephen Wilson 1:20:35 Bye. Okay, well, that's it for Episode 26. Thanks very much, Laura, for coming on the podcast. I've linked to the paper we discussed in the show notes and on the podcast website at langneurosci.org/podcast. I'd like to thank Marcia Petyt for transcribing this episode, and the journal Neurobiology of Language for generously supporting some of the cost of transcription. A brief plug for the journal, on whose editorial board I serve: I hope that you'll all consider submitting your work there. There are a lot of reasons to do so. It's open access, and it has very reasonable article processing charges, only $850 if the first and last authors are members of the society. That's probably a third or a quarter of the cost of what most journals are charging. It has really expert and fair editors and reviewers in my experience. I've submitted four papers there, two are published, and two are in the review process, and I've been really happy with how it's gone. And it's a new journal, so it doesn't have an impact factor yet, but it will have one in the near future. So, I hope you'll consider sending your work there. All right, well, thanks for listening, and I will see you next time. Bye for now.