Stephen Wilson 0:06 Welcome to the Language Neuroscience Podcast, a podcast about the scientific study of language and the brain. I'm Stephen Wilson and I'm a neuroscientist at Vanderbilt University Medical Center. In my lab, the Language Neuroscience Lab, our research focuses on neuroplasticity in recovery from post-stroke aphasia. In this podcast, I'm going to talk with leading and up-and-coming researchers about their work and ideas. This podcast is geared to an audience of scientists and future scientists who are interested in the neural substrates of language, from students, to postdocs, to faculty. We have many avenues of scientific communication--papers of course, conferences, talks, maybe even Twitter. This is an experiment to see whether podcasts might be another medium that we can use for scientific communication. This is Episode One, recorded on the 22nd of January, 2021. My inaugural guest is an outstanding researcher, Ev Fedorenko. Ev is an Associate Professor of Brain and Cognitive Sciences at MIT. I want to keep this show informal and not do the standard guest intros where you list all the awards and achievements. Instead, I'll just say this: Ev is, in my opinion, doing some of the most innovative and exciting work in our field. Her work addresses fundamental questions about the neural architecture of language and its relation to other cognitive processes. She's incredibly productive, with numerous lines of work, any one of which would be more than enough for most people. So today, we're going to talk about just a subset of her work. Okay. Let's get to it. Hi, Ev. How are you? Ev Fedorenko 1:35 I'm good, Stephen. How are you? Stephen Wilson 1:37 I'm good. So just to kind of set the scene for our listeners, I'm sitting in my front room, in Nashville, Tennessee. It's sunny and cold. About month 10, month 11 of lockdown. There might be kids bursting in at any moment. My dog might decide to join us. How about you? Where are you at right now? Ev Fedorenko 1:57 Very similar. On all the latter parts, except in Boston. It's also pretty cold and sunny, but yeah, dogs and kids may also join in, at some points. Stephen Wilson 2:09 So thank you very much for joining me for this podcast. You were certainly one of the first people that I thought of, and I thought that I would get things started by kind of asking you how you came to be... whatever it is you call yourself. So what would you call yourself? Like, if somebody asked you, like... Ev Fedorenko 2:28 I generally say, neuroscientist or cognitive neuroscientist? Stephen Wilson 2:33 Uh huh. So not many people sort of... you don't usually grow up, you know, as a six-year-old being like, "oh, when I grow up, I'm going to be a cognitive neuroscientist." So can you kind of tell me, like, how did that come about for you? Like, did you have any childhood interests that sort of pointed to this in retrospect? Or how did it come about for you? Ev Fedorenko 2:51 Yes. So um, that's a good question. I mean, so I always liked language, like, I always found myself very attuned to language. I loved doing these little exercises that I don't know if you guys do in the US in schools, where you, like, take a sentence apart and figure out which word plays different roles and all that stuff. And then I started learning foreign languages. So I was learning English and French. And then a little bit later, Polish.
I have Polish ancestors, so I was working on that as a potential backup plan for escaping the crazy country I was growing up in, which was the Soviet Union. And then I started learning Spanish as well, and German. And so I was learning a bunch of languages. And I found myself really excited about seeing some parallels, some different solutions that languages seem to have, for, you know, putting words together into complex meanings. And then at some point, after kind of getting enough familiarity with a few of these languages, I thought that I had a really great idea. And I still think it's actually worthwhile for someone to pursue. I thought I would start, like, a big international training program to teach kids foreign languages, in families, in language families. So instead of learning French, we would teach them all the Romance languages in parallel, due to the huge overlap in both structure and... and so that was my plan. I thought maybe I'll just start a company. And I knew that if I were to end up in some Western world, it would be super popular, because, you know, these days, many parents are obsessed with making their kids, you know, as versatile as possible and all that. And I didn't really know about cognitive science, or cognitive neuroscience, as a field. And then when I came to college--so I got a full scholarship to come to Harvard, that was in '98--I took a class with Alfonso Caramazza. And I just couldn't believe that people do that for a living, like, they just study language and how it works. And then I just knew I was gonna do that. Stephen Wilson 5:02 So was Alfonso Caramazza the first person that you met in this field who sort of studies like the psychology or the neuroscience of language? Ev Fedorenko 5:11 Yeah, yeah. I mean, prior to that, I had taken a couple of linguistics classes. And I actually really liked historical linguistics with Jay Jasanoff, who I think is still teaching there. And that was really fun. But that seemed like more kind of like, I don't want to say a hobby, because it kind of makes it sound like... I just didn't see myself doing that as a kind of full-time job. But I really enjoyed it. And then I think Alfonso's class was the first kind of exposure to, like, actually thinking about the psychological, computational, and neural mechanisms that give rise to this ability that we have. And, yeah, and then I ended up in his lab as an undergrad, and he was my academic grandfather. Stephen Wilson 6:01 Oh, that's great. What a way to start. It's kind of gonna be hard to live up to after that, isn't it? Ev Fedorenko 6:07 It's very true. Stephen Wilson 6:08 And what did you work on when you were an undergrad in his lab? Ev Fedorenko 6:13 Well, I worked on lexical access. And that was cool and fun. But ultimately, I just knew that I wanted to scale it up. Because of course, we don't speak in isolated sentences. And I was worrying about the paradigms that were in use at the time, which was all kind of, you know, picture-naming-style tasks. I was worried about scaling up what we learned from those to kind of more naturalistic conversation, where we plan things in much larger chunks. So even for production, I was not sure how far that would ultimately scale up. And so I started looking into classes that dealt with sentence-level understanding and production. And so I took a class with Ted Gibson at MIT as an undergrad. You can cross-register between MIT and Harvard. And I was like... That was really quite a revelation.
I just thought that that was just the coolest thing. Those were the coolest questions. And so then for grad school, I applied mostly to labs that dealt with sentence-level understanding. Some production, but mostly understanding. Stephen Wilson 7:22 Yeah, and that's really continued to be your focus, right? I mean, I'd say that you definitely have, like, you know, made syntax your primary domain of interest, as far as I can tell. Ev Fedorenko 7:33 And compositional semantics. Stephen Wilson 7:35 Is there really any difference? Ev Fedorenko 7:39 Some would say "yes". I think they're very strongly interconnected, as you know. But yeah. Stephen Wilson 7:47 So you went to MIT, Brain and Cognitive Sciences, and you worked with Ted and with Nancy Kanwisher, right? Ev Fedorenko 7:55 That's right. Stephen Wilson 7:56 So how did you kind of put together that mentoring team? Is that a fair way to kind of characterize it? Ev Fedorenko 8:03 Oh, yeah, yeah. It was, you know, it was love. So I fell in love with Ted. And we started a relationship. And that's just how things go sometimes. And people can make all sorts of rules about what's allowed and not allowed, but love transcends all rules. And... Stephen Wilson 8:04 Yeah, I couldn't agree more strongly. You gotta do what you gotta do. Ev Fedorenko 8:25 Exactly. And so then I needed a solution. And I knew that Ted couldn't advise me anymore. And so with Ted, in the first, like, two or three years of my grad program, I had gotten interested in the specificity of representations and computations that supported language. And I did a little bit of work on working memory systems that may or may not be shared between language and other things. And then somehow, in parallel, I had an interest in music. And with Josh McDermott, who was a grad student at the time, senior to me, and with Nancy Kanwisher, we started playing a little bit, looking at neural responses to music structure. And we were getting some interesting piloty results. But they were not, you know, kind of, knock-your-socks-off strong responses. And we were looking at pretty subtle manipulations. But then in parallel, I got involved with Ted. And then, you know, I came to Nancy, and I said "I need a new advisor." And she was very funny about it, in the Nancy kind of way. She said "This is not good, Ev. This is not good. What's gonna happen if things don't work out? It's gonna be really, really hard. Like, it's a hard path." And she was right. There were a lot of challenges later on, in terms of dealing with the two-body problem. But she was also of course very supportive. And, you know, she said she'd be delighted to work with me. And so then she and I, at some point, ran some language conditions as a control for music. And of course, we got beautifully strong responses in every individual we tried. And I was initially hesitant to go into the brain and language field, because it seemed a little bit like an old boys' club, perhaps due to the sociology of the field, it having started out in the clinic, where a lot of the people who got access to scanners were doctors, clinicians, not always trained in all the right ways. And it seemed like there was a lot of, quote unquote, foundational stuff that was not done in the most rigorous of ways, that was hard to build on. And so I knew it wasn't going to be an easy path if, you know, we were to start going in that direction.
But ultimately, the questions about the relationship between language and non-linguistic cognition, and the fact that we were just getting such beautiful, strong signals that seemed like they were begging to be investigated further, just kind of won me over. And then I guess I started doing that around 2007ish, maybe? Or so. Yeah, I guess I've been doing that ever since. Stephen Wilson 11:17 Yeah. Okay, great. So the first paper of yours that I read was your 2010 paper in the Journal of Neurophysiology. And obviously, that describes a line of work that you must have been working on for several years prior to that, where, you know, the central insight was that it would be better to look at language regions in terms of individual functional regions of interest, rather than the group analyses that were dominant at the time and still are, to a large extent. So can you tell us about how that work came to be? Like, how did you kind of narrow in on that approach in your work at that time? Ev Fedorenko 12:06 Yeah, that's a great question. I mean, so that was a big advantage of growing up in a vision lab. Nancy's lab was a vision lab. And there were two advantages, actually. One was that the methods in the field of vision research, I think, are just more advanced, further along. And I think there's interesting reasons for why that might be the case. I think the animal physiology grounding may be a big part of that. But also, it was really helpful to train with somebody who didn't come with priors from the field of brain and language. So I read the literature that was around at the time. I was familiar with things that people have done. But Nancy and I kind of took this approach, like, okay, we know what people have done, but let's just imagine that, you know, we've got access to this cool machine that can see inside the brain--how would we figure this out from the ground up? Let's try to build up the enterprise how we would, you know, pretending like the work that came before us didn't exist, and then we'll try to integrate it with other stuff later. And it would have been very hard to do this with any language researcher, because of course you come in with, you know, huge theoretical biases, huge methodological biases, and so on and so forth. Anyway, so, you know, Nancy was somebody who had pioneered these kinds of approaches for studying vision. And when you talk to her about this, she'll often say, "I didn't pioneer it, like, that's just how people did things." You know, that was just what you would do. Well, how would you do anything else? Like, of course, you want to find these effects in individual participants. That's what you do with monkeys. That's what you do in physiology work. Anyway, and so naturally, we just, you know, tried to extend this to language. And it took a little while of fiddling with different kinds of contrasts, auditory and visual. I was targeting higher-level linguistic representations, so I knew that it would have to be something that doesn't really care about whether you're reading language or listening to it. At that point, those should converge. And so we were playing around with a few contrasts. And initially, the approach that we took was trying to break apart the processing of word meanings, and the compositional structure and meaning building.
And my priors led me to think that we would find a subset of brain regions, or maybe a network, that responds more strongly to word meanings, and another subset that responds more strongly to structure. So the reason the original experiments had those four conditions in there--sentences, word lists, Jabberwocky sentences (structured but meaningless), and scrambled Jabberwocky non-word lists--was because I was thinking that that would be kind of a two-way localizer. It would find us both subsets. And after a few years of looking for that dissociation, I didn't see it. And so then I said "Okay, we'll just use the broader contrast to find that superset." And it seems like the other contrasts give you the very same sets of regions, it's just that they're a little bit weaker. And so that was how we chose the contrast. And then, like I said, it generalizes to all sorts of other modalities, to other languages, and the materials don't really matter, what you do in terms of a task doesn't really matter. And in fact, recent work from Randy Buckner's group, by Rod Braga and others, shows that with enough resting-state data within a person, the network that emerges from just looking at intrinsic fluctuations maps beautifully onto the kinds of contrasts we've been using, which I find really reassuring. And, to me, it suggests strongly that it's really a natural kind, and our localizer is just one way to quickly pull out that subset of the brain for studying. Stephen Wilson 15:47 Yeah, that's really cool. I mean, and it would have a lot of... If you could identify language areas from resting state, that would be very clinically useful, because it would make it a lot easier to do pre-surgical... Ev Fedorenko 15:57 Indeed. Stephen Wilson 15:58 But anyway, so Nancy was very well known at that time for the fusiform face area, right. So, I mean, I guess I'll try and kind of put it in context. And you can just tell me if I mis-state anything. But essentially, like, her general approach was, if you want to understand the function of an area, first you identify it with a functional localizer. And in this case, she would use, I think, faces versus scrambled faces, but it didn't... Ev Fedorenko 16:22 Versus objects. Stephen Wilson 16:23 Okay, faces versus objects. And then she would identify, in every individual, this region in sort of occipitotemporal cortex, in the fusiform gyrus specifically, that would respond selectively to faces. And then the research program was basically first to identify that in each individual, wherever it may be, and then you look at how it responds to different manipulations of faces, or how it responds to different classes of visual stimuli, and so on. So that was the approach that you brought over and applied in the language domain. Ev Fedorenko 16:56 That's exactly right. So I think the thing that you highlighted is the fact that you're not going to answer a question about the underlying computations of a brain region in a single study. You're bound to need to test multiple hypotheses, gradually kind of constraining the space of possibilities for what it is that this region could be doing--say, it cares about, you know, X, Y, and Z, but doesn't respond to these other manipulations, and so on. And that's the cornerstone of robust and replicable science.
Because different labs can be working on different questions, but if they use the same kind of anchor, the same way of finding the, quote unquote, same bit in the brain that they're studying, then we can relate findings straightforwardly to each other, and ask different questions and have some convergence of results from multiple approaches, multiple theoretical frameworks, whatever characterizes different labs. And the problem with doing that with the traditional approach--and, you know, just in case, again, people's backgrounds are different: the standard approach is just to take some manipulation, scan a bunch of people, align those activation maps in a common space, and then assume that now, at each point in that template space, you have functional correspondence across people. The problem with relating findings to each other in that approach is the following. So the output you get from such an analysis is a set of activation peaks. You'll say, "Okay, I found syntax sensitivity at XYZ coordinates such and such, at XYZ coordinates such and such." To interpret these things, people generally resort to anatomy. So they'll say, okay, this coordinate falls within the inferior frontal gyrus. Sometimes they'll use the Brodmann map estimates of where cytoarchitectonic boundaries would lie, which of course is incredibly crude. Those maps are hugely variable across people. But anyway, so people will say, "I found it in BA (Brodmann Area) 44", say. And then, the way that they would interpret this is by looking at other studies that have found activation in quote unquote BA 44. And they'll say, "Oh, you know, there was a study that found sensitivity to working memory in BA 44. And I found sensitivity to syntax in BA 44. Therefore, syntax must draw on working memory." And those inferences are prevalent, like, if you read the literature from, you know, the 90s, 2000s. And even in some papers out today, people make such inferences. And I just think that's too many degrees of freedom in interpretation. It's just ultimately not that meaningful, necessarily, because those are all big chunks of cortex. They're highly heterogeneous. Yeah, so basically, you would find papers that find two nearby peaks across different studies, and they have some prior to think that those functions are related. And they'll say, "Oh, look, those two peaks are nearby. It's probably, you know, the same function that's generating them," and make some inference like that. And then some other studies will find similarly spaced peaks. And they will have a different prior, that those are distinct functions. And they'll say, "Oh, look, these are nearby, but not exactly in the same spot. Therefore, maybe there's different functions nearby." And I just didn't want to build science on making these kinds of inferences. And so I wanted to make sure that from one individual to the next, from one study to the next, from one continent to the next, to the extent that the approach gets adopted by others, we're talking about the same regions. And that's the main kind of conceptual advantage of the approach. And of course, the technical advantage is that you gain huge amounts of sensitivity, you have vastly more power, and you're much better able to resolve between nearby distinct peaks. Because aligning maps in a common space, and smoothing them, just blurs the hell out of activations, and then you potentially lose the magical resolution we have, which is, you know, limited, but still really quite good.
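(A minimal sketch of the group-constrained, subject-specific idea Ev describes here, and that Stephen summarizes in the next exchange. This is an illustrative Python sketch, not the published SPM toolbox; the function and variable names and the top-10% selection criterion are hypothetical choices, made for clarity.)

```python
import numpy as np

# Hypothetical illustration: within a broad, pre-specified anatomical
# parcel, keep each participant's voxels that respond most strongly to
# the localizer contrast (e.g., sentences > non-word lists).

def define_froi(contrast_t, parcel_mask, top_fraction=0.10):
    """Return a boolean mask of one subject's language fROI: the top
    `top_fraction` of parcel voxels, ranked by localizer t-value.

    contrast_t  : 1-D float array, one localizer t-value per voxel
    parcel_mask : 1-D boolean array, True inside the anatomical parcel
    """
    t_in_parcel = contrast_t[parcel_mask]
    n_select = max(1, int(round(top_fraction * t_in_parcel.size)))
    # t-value of the n-th strongest voxel within the parcel
    cutoff = np.sort(t_in_parcel)[-n_select]
    return parcel_mask & (contrast_t >= cutoff)

# Responses in the fROI would then be estimated in data held out from
# the localizer (e.g., independent runs), keeping voxel selection and
# response estimation statistically independent.
```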
Stephen Wilson 20:56 Yeah. So I just want to see if I can summarize my understanding of your approach. And again, you can just correct me if I'm wrong. But basically, for each individual, there's a language condition and a control condition. And like you said, it doesn't matter exactly what they are. But basically, it's kind of syntactically structured real language versus, I think, words in, like, word lists? Ev Fedorenko 21:25 Or non-words. Stephen Wilson 21:26 Or pseudo-word lists. Okay. And then you kind of have, like, a number of pre-specified regions of interest that are fairly large--like, say, for instance, IFG (inferior frontal gyrus) pars opercularis, IFG pars triangularis, posterior temporal; I don't know exactly what they are, it's not critical for our discussion. And for each individual, you find the subset of voxels within that larger region of interest that are responsive to the language versus non-language contrast. And then you define that as that individual's IFG pars opercularis language region, right? And when you compare across individuals, you're then comparing, you know, for each individual, the subset of voxels within their pars opercularis that are responsive to the language contrast. Is that all fair? Ev Fedorenko 22:14 That's exactly right. Stephen Wilson 22:15 Okay. And how satisfied are you with the uptake of your ideas in the field? Like, I mean, obviously your paper is well-known and highly cited. Do you feel like you've made the impact, methodologically, that you hoped to? Ev Fedorenko 22:30 Um, I think so. I mean, it could be better, but I think the field is getting stronger. I think over the years, I've had many labs get in touch. And I've helped people adapt these tools for their needs. We developed an SPM toolbox, which makes these kinds of analyses really straightforward; it just basically takes a few minutes on top of whatever preprocessing and analysis is routinely done. I mean, ultimately, like, I'd love for people to use our tools, because I want people to do better science. Like, I'm not after the most citations for my paper. Like, I want to understand how language works. And I think if we work on this jointly, using approaches that allow us to compare and build on each other's findings, that's great. And some labs are doing this, and that makes me very happy. And I think there are now a lot of useful collaborative efforts going on that build on this methodological foundation. But, you know, ultimately, if people want to keep doing things in a way that I think is vastly suboptimal, I mean, it's not my place to tell them how to do their work. If they think that the inferences they draw are satisfactory to them, you know, that's fine. Like, I'm not going to take those findings and use them, because I just don't know how. I just don't think they're ultimately interpretable. But... Stephen Wilson 23:54 Yep. Ev Fedorenko 23:56 We only have one life. Stephen Wilson 23:57 Yeah, no, I mean, I think I'm a lot like you. I mean, I primarily believe mostly the things that we've done in our lab. Ev Fedorenko 24:03 Exactly, and it shouldn't be like that, right? Stephen Wilson 24:06 I mean, there's definitely some... I mean, there's plenty of other people in the field whose work I, by and large, believe. Ev Fedorenko 24:13 Same. Stephen Wilson 24:13 But it's not the default. I'll put it that way. So, okay, so, you know, like you said earlier, the approach kind of was birthed in the field of vision. Right.
And so in vision, we understand very well from, you know, primate neurophysiology, that there are discrete visual areas. They have, like, well-defined borders, where you have, for instance, things like, you know, changes in retinotopic organization at the border from, like, V1 to V2, to wherever you're gonna go after that. So you've kind of got this discrete mosaic of visual regions that are the same in every individual. When you translate that into the language domain, I don't feel like we're quite there yet in being able to say that there is a distinct mosaic of language regions that are the same in every individual. Do you agree that we're not there yet? Or...? Ev Fedorenko 25:11 What do you mean by "the same in every individual"? Maybe I'm not...? Stephen Wilson 25:14 Well, okay, so, you know, like, every individual has a V1 and a V2 and a V4, that have... You know, I'm talking here about, like, you know, visual regions, obviously. Ev Fedorenko 25:26 Yeah. Stephen Wilson 25:26 And they're more or less anatomically... you know, obviously, like, you know, the specific anatomical location with respect to gyri and sulci is gonna differ a little bit, but they're gonna be laid out pretty much the same. And more importantly, like, kind of, from your perspective, functionally, they're the same, right? So, you know, V4 is gonna have a similar function in every individual. Ev Fedorenko 25:45 Exactly. Stephen Wilson 25:46 And we have a way of identifying those, based on their function. Now, in language, we're not at that level, right? I mean, like, so your approach, sort of, it generates a set of language regions. But they're not functionally distinct in the way that the set of visual regions that are generated by similar... So, you know, what do you do with that? Like, what's the next step? Ev Fedorenko 25:54 I mean, that's an interesting question, right? I mean, so there's some reality out there that we're trying to understand. Now, importantly, we do know that the perceptual regions that support early stages of speech perception, speech acoustic processing, and reading are totally distinct from these high-level regions. Stephen Wilson 26:33 Yes. Ev Fedorenko 26:33 There's no question about that. So they have different functional profiles. And presumably, they serve as the input regions to the higher-level language regions. Similarly, on the production side, there's a set of regions that support articulation, that are totally distinct from these higher-level, what we call higher-level language regions, that do everything prior to sending the articulatory commands down to the motor, premotor areas. Now, the question is whether there are meaningful functional distinctions within this frontotemporal network that we and many others, and you and many others, have been looking at. And I don't know. Like I said, I came in with a prior that there would be a clear separation between syntax-supporting regions and lexical-meaning-supporting regions. That just doesn't seem to be the right cut. Now, whether there are other distinctions hiding in there, and we just haven't looked for them because, again, we've brought in certain priors from certain theorizing, and so on--there may well be. Um, and I think there's a host of approaches that are being developed, including with artificial neural networks, which have gotten really good, and which may help us, in a perhaps data-driven way, discover different features of language that may preferentially drive some parts of this network compared to others.
Now, if such meaningful distinctions are found and they're replicable, then we can ask questions like, "Okay, do these distinctions show up with a consistent topography across people? Is it always, you know, chopped up in the same way, divided up in the same way, or not?" But it's also worth saying that many people, you know, back from Marsel Mesulam, have been talking about complex functions like language, like mapping forms to meanings: there may not be a mosaic of little functionally distinct bits. It may well be this distributed network. We know it's highly interconnected. So maybe the structure that exists there is highly redundant across these areas, and is designed in this way to kind of be most protective against damage or whatnot. I mean, it's hard to say, but I'll certainly keep looking for structure within that network. That's not... you know, I don't think the door's closed on that. It's just that some of the distinctions that had been argued for, I think, are not the right cuts. Stephen Wilson 28:54 Yeah, it's just fascinating to me, like, so... So you're kind of, like, basically, at this point, you're holding on to the null hypothesis that all of the frontotemporal language regions are functionally equivalent. Ev Fedorenko 29:08 Until somebody shows me convincingly otherwise. Stephen Wilson 29:11 Mm hmm. Yeah. I mean, I don't share your intuition that that null hypothesis will hold up. But, you know, obviously, the data are the data. And so this is kind of like just a, you know, kind of maybe a slightly silly philosophical thing, but, like, you know, you're at MIT, the birthplace of modularity. And, in a sense, you know, there's aspects of your work that are very modular. Like, you've, you know, really focused on the distinction, like the sharp distinction, between the language network and other cognitive networks, and hopefully we'll have time to talk about that more in a moment. But then, within the language system, you're basically arguing against modularity. Is that...? Ev Fedorenko 29:58 I'm a very strong empiricist. I'm just a very strong empiricist. I came in with priors that there would be dissociations within that network. When I don't see them over and over again, across dozens of studies now, across different methodologies... You know, recently, trying single-cell recordings. Like, if you try so many different things, and you just don't find it, ultimately I think, okay, based on the data so far, I just don't see strong evidence for functional dissociations. You know, it may be that in a few months, we'll discover something that really clearly splits that system into, you know, either frontal and temporal components, or, you know, anterior temporal... you know, some, whatever, split, and they'll be functionally really clearly different--then I'll change my mind. I think that's a big part of what we should be able to do in science. If we learn otherwise, we don't just keep telling the same story. And I think that's also something that's held back our field in some ways. Because I think some of the early generation scientists--earlier generation scientists--have been trained in a way, as like, "you've made your name with that hypothesis, hold on to it, you know, through everything, just keep arguing for that, because that's what you'll be famous for." And that's just not how I was ever trained. Like, I want the truth. Stephen Wilson 31:12 No, me neither.
I've been like, I've been totally wrong about some things. Ev Fedorenko 31:17 Yeah. Stephen Wilson 31:17 And that's fine. Ev Fedorenko 31:19 It's the fun of science. Stephen Wilson 31:21 Has anybody given you any grief about your kind of distributionalist bent, which is not very traditionally MIT, or are people well past that? Ev Fedorenko 31:31 At MIT? I mean, in the field, people are still very resistant to the notion that there's no little blob that just does syntax, and nothing else. Like, that still seems like... people are coming around. Some are coming around, some are not. But no, at MIT, I think most people just want to figure out how it works, and, whatever it is, just let the data tell you the story, you know. But in general, yeah, it's been challenging. I don't know if we want to go there, but in general, if you challenge old dogmas, it's tricky. And if you're a woman doing so, it's trickier. And, just anecdotally, there have been many cases where, earlier on in my career, we often went and gave invited talks together with Ted Gibson, my husband. And he would just be amazed at the tone that people take, like, in some question periods. And he's like, "Nobody ever talks to me like that." And it took me a really long time to learn to walk this very fine line between being able to talk, like, calmly and confidently about your work, and being called a bitch. Because for a man, a tall white guy, to talk with confidence, it's "Oh, gosh, he really knows what he's talking about. This is awesome." And if a woman takes very much the same tone, it's like, "Oh, she's really bitchy." And, you know, for a few years, I really struggled with this. I mean, I don't want to say I almost left the field over it, but it was just really... like, I didn't want to do science within that kind of an environment. But, you know, eventually it got better. And, you know, hopefully I can mentor young women to be able to withstand that kind of stuff. Stephen Wilson 33:08 That's... I mean, that's just so hard to hear, you know. I mean, I'm really glad that you have found your way to, like, you know, a way of presenting your work that is, like you said, clear, confident, and making no apologies, like, for having a position that is, you know... You're challenging, you know, some old ideas, many of which are probably wrong and need to be challenged. And... Ev Fedorenko 33:33 Yeah, it's like, it seems like it should be okay. And we should be able to debate and talk productively and not put each other down, and... yeah. But I do think it's getting better. I think the field is moving in good directions, both with respect to treatment of women, and also URMs (underrepresented minorities). I mean, I think that's a little behind. But I do think there's positive changes. So let's hope it all keeps moving in that way. Stephen Wilson 33:56 Yeah, I noticed that you guys are taking, like, concrete steps towards increasing underrepresented minorities in your lab and creating opportunities for people. Ev Fedorenko 34:04 We're trying, yeah. Stephen Wilson 34:06 Yeah, that's really great. So after that 2010 paper, where you kind of, you know, laid out your methodological approach, you have this series of papers over the next few years, where you describe what you call the multiple demand network, and then show how sharply distinct it is from the language network. Could you kind of talk about that series of papers and those discoveries and, you know, what...
Were you surprised by them? Did you expect them by that point? Like, how did that line of work develop? Ev Fedorenko 34:38 Yeah, so there was a question that initially drew me to cognitive neuroscience methods and kind of convinced me to go into that area: I was deeply interested in how language relates to other things. And just when I started grad school--that was 2002--the Hauser, Chomsky, and Fitch paper came out in Science, you know, identifying recursion as maybe the key feature of language, making it unique among other communication systems, and hinting that perhaps it's the combinatorial capacity, so abstract that it would also support things like math and music and other human-unique abilities, and so on. And I thought that that idea was very interesting. And of course, there were also kind of a host of other claims. Some totally unrelated, like, from the mirror neuron literature; some somewhat related, making parallels between language and music, for example. And it just seemed like a fundamental question: to what extent is the combinatorial machinery that language relies on shared with other capacities? And so, you know, the most straightforward way to do that was to identify these language-responsive regions in individual people, and then ask, across a series of studies, do these regions work hard when you have to solve logic problems? Do they work hard when you do math? When you solve a little programming task? When you think about others' thoughts? And so on, and so forth. And again, at the time when we were doing that, there were so many claims about the overlap in the frontal cortex between language and other things, I was kind of expecting that maybe in the temporal cortex we'll find language specificity, but maybe in the front, it will be much more multimodal, and maybe there really is this abstract hierarchical processor. What a beautiful idea--that, you know, maybe it emerged in humans specifically and supports language and other human-specific abilities. And that just wasn't the case. And so, yeah, I was surprised! I was surprised, and we kept, you know, running more and more conditions. And, you know, when you try establishing specificity, it's tricky, because people always say, "Well, you've tested these tasks, but maybe if you test this other one..." And again, you know, at some point, you just have to say, okay, if I've tested ten, fifteen different tasks, and none of them elicit responses, and they've kind of spanned all the main proposals that have been made about why language might share machinery with other things, then, okay, at this point I'm pretty convinced it's a network that, at least in the adult brain, is highly specialized for dealing with linguistic input. Developmentally, we don't know. Because it hasn't been looked at carefully enough. Stephen Wilson 37:15 I mean, yeah, you must have a pretty strong prior there, though. But, um... Ev Fedorenko 37:19 Oh well, like, my prior is that it develops with age. I think it starts out as a very general social processor. So that's the current working hypothesis. Stephen Wilson 37:26 That what does? Ev Fedorenko 37:27 The language system. I do not think that... Stephen Wilson 37:29 Oh right, right, right. Yeah, there's that. Okay. That is that thing. I heard you say that in a talk a few months ago. Ev Fedorenko 37:34 Yeah. Yeah.
Stephen Wilson 37:35 Yeah, no, that's very interesting. Ev Fedorenko 37:36 Because, like, we don't know language when we're born. Stephen Wilson 37:39 Right, right. Ev Fedorenko 37:39 And I think what the system does is basically store these form-meaning mappings, and it takes time to acquire them, and then store them. We use that system. And so... Stephen Wilson 37:48 Surely. Yeah, but the distinction between the language system and the multiple demand network, I think you... I'm sure you hypothesized is very discrete from the get-go? Ev Fedorenko 38:00 Oh, yeah, I think so. I think that's right. Stephen Wilson 38:02 So just to kind of back up, for people that haven't read all those papers. So, like, I think the gist of it is, you know, that you identify the language regions in the way that we discussed already. And then you look at how those regions respond to... I think it's mostly focused around, like, difficult and easy versions of tasks, right? Is it always about that difficulty manipulation? Ev Fedorenko 38:22 That's one. So that's with respect to the multiple demand system in particular. But then there's other studies looking at more social things. There's other studies looking at music, where, you know, it's not demanding tasks that would drive the MD net, multiple demand network. But it's showing that those things also don't seem to be... Stephen Wilson 38:40 Oh okay. Ev Fedorenko 38:41 But for the multiple demand system, it's mostly these goal-directed behaviors, like working memory tasks, cognitive control tasks, logic puzzles, math problems, these kinds of things. Stephen Wilson 38:49 Yeah. And is it mostly about the difficulty manipulation? Or do you care, like, what it does relative to some baseline? Ev Fedorenko 38:56 We always look at stuff relative to baseline. The difficulty manipulation comes into play because we want to make sure that it elicits the right responses somewhere, and difficulty manipulations for those tasks are what's been used to identify regions that, say, care about working memory, or are sensitive to cognitive control, or sensitive to math, you know, whatever demands, and so... Stephen Wilson 39:17 So you find that in language regions, you not only don't see a response relative to baseline for these sort of cognitively demanding tasks, like arithmetic or working memory, but you also don't see an effect of task difficulty either? Ev Fedorenko 39:30 Right, I mean, I would phrase it backwards. I would say, not only do you not see a sensitivity to the difficulty, even relative to, like, a low-level baseline, you just see that these language regions are doing as much when you're solving a math problem as when there's a blank screen and you're doing nothing. Okay? They're just not much engaged during math. Stephen Wilson 39:49 Right. Yeah. No, I agree. I would put it that way, too. I kind of think that, like, the difficulty manipulation, to me, is a tighter contrast. Whereas whenever you're comparing something to, like, a blank-screen baseline, I'm always just like, well, what? What is that? Ev Fedorenko 40:04 Yeah. Stephen Wilson 40:04 What is that condition? Ev Fedorenko 40:06 Right. Stephen Wilson 40:06 Whereas the difficulty manipulation, it's like, kind of, both ends of it are controlled, so I like those better. Ev Fedorenko 40:11 Yeah. Stephen Wilson 40:13 Okay, so. So you've kept working on the MD network over the years since then. Are you really interested in the MD network?
Or is it just sort of like a control condition for you relative to the language network? Ev Fedorenko 40:24 That's a great question. I mean, it's a network that supports cognitive capacities that have historically been very tightly linked to language. And I still think that there may be some relationship between these two networks that we haven't uncovered. So what we know right now is that during typical language processing, that network just doesn't seem to be doing much. In fact, passively listening to language, with no extraneous demands, elicits hugely strong responses in the language regions. And, you know, no response in the multiple demand network. However, there may be some aspects of language, perhaps especially so in production, and/or it may be that there are conditions under which that kind of general problem-solving capacity may be helpful for understanding language. Like, some people have reported effects in the multiple demand regions for listening to speech under adverse conditions, or listening to speech that has an accent, where basically the perceptual information coming in is suboptimal in various ways. And so whether these effects are causal, or just reflect, okay, "I'm struggling here"--and it's just some signal of, you know, "help, help, I'm having a hard time"--or whether it's actually that that system is doing something to help you solve that difficult speech perception problem, is not clear yet, I think. And then, of course, as you know, there's this other hypothesis that maybe this network can help us reboot the language system after adult-onset damage. And you're not finding evidence for this in your very careful review of the literature from the last bunch of years. But I think, again, some of those studies may have been done in ways that were underpowered. Or not. I don't know, at this point... Stephen Wilson 42:20 I think that everything that's been done in aphasia neuroplasticity could all be revisited. Like, you know, there's nothing... there's no final word on anything in that field. Ev Fedorenko 42:31 So, with Swathi Kiran, we're trying to see if there's something to that. Stephen Wilson 42:35 Yeah. So you guys, I know you have NIH funding to address that question of, you know, whether there might be plasticity of the MD network in post-stroke aphasia. And I know that work with people with aphasia takes many years. And so it's probably... I'm sure you don't know the answers yet. But do you have anything that you can share with us yet about, like, what you've learned so far? Ev Fedorenko 43:00 Yeah, I mean, we've had pilot data for a while that shows some responses, language-like responses, in some MD areas, in individuals with aphasia. But I will not be convinced of the causal role of these regions until we can do something like a longitudinal investigation, where we look at early responses in the MD areas during some, you know, acute or post-acute stage, and then are able to predict the degree of language recovery later. Like, I think without that kind of evidence, it's going to always be quite challenging to interpret the responses in the MD regions to language. I think it's not impossible. Like, I think you can make some inferences. But I think the strongest inferences--about, you know, the system maybe being partially repurposed to now help a domain which has suffered due to damage--I think those are going to be challenging. And now, of course, everything is on hold.
And we're prioritizing just collecting large amounts of behavioral data on the target population, but we're hoping to restart sometime later this year, once vaccinations are in place. But we don't have the answers yet. Stephen Wilson 44:11 Yeah, for sure. Yeah, no, I think it's just inherently really difficult, for, like, the reasons that you said. I mean, with people with aphasia, you know, their experience of language is so different from that of those of us who do not have aphasia. And so, you know, if you did see increased multiple demand activity in a person with aphasia doing a language task, you know, the first hypothesis always has to be, well, this task is much harder for them. Like, you know, it's gonna have more cognitive demand. And does that really reflect reorganization? Or is it just a reflection of, you know, the experimental situation? And obviously, you've thought about all these things, but it's just a very hard question, isn't it? Ev Fedorenko 44:53 That's right. Well, so one thing that might help is seeing whether the parts of the MD network that respond to language eventually, or at some point, even cross-sectionally, become somewhat specialized for language. Like, if you see a lower response to, say, a working memory task relative to what you would expect in that part of the brain, then maybe there is some repurposing going on, and, like, suddenly this little bit is starting to do language. And this system is so incredibly flexible. We know that these cells in monkeys' MD systems will basically attune to whatever the current task demand is, and maybe in humans, they have the capacity to do this over longer timescales. Like, if you constantly need extra resources to solve language, maybe you'll just repurpose some part of it to help you. I don't know, it's fascinating to think about how that kind of reorganization could happen. But, yeah, I think we just don't know yet. Stephen Wilson 45:48 Yeah, I know, for sure. Well, I'm glad you guys are working on it. I think that you and Swathi make a great team, with your, you know, complementary perspectives. Ev Fedorenko 45:56 Yeah, we like working together. Stephen Wilson 45:58 Yeah. Cool. And so you mentioned a few minutes ago something which I heard you say before and I thought was really interesting, which was, you know, you kind of learned from your work with John Duncan and Nancy Kanwisher that, essentially, there's, like, this really sharp distinction between the language network and the MD network. And so that kind of took you away from that, you know, Chomskyan view, or how you interpreted that Chomskyan view, that language is built on top of that kind of thinking. And then you suggested that maybe language is built on top of social thinking. So can you kind of flesh that out a bit? Because I thought that was a really interesting idea. Ev Fedorenko 46:39 Yeah, I mean, so that's an emerging hypothesis, and it seems to be emerging in a couple of different labs independently. So that's always encouraging--maybe there's something on the right track there. So it seems like, if you look at the brain regions that support social perception and cognition, a lot of them span lateral frontal and temporal cortices, in some ways, in the adult brain. And now, historically, people who have studied these different capacities have been in different fields.
So there's people who study face perception, there's people who study voice perception, there's people who study speech acoustics, there's people who study language, people who study theory of mind, and so on, and so forth. And I think this kind of fractionation of social cognition and perception across fields may have prevented people from seeing some generalizations and organizing principles of the mind and brain that may underlie social and linguistic development. Now, of course, there's a lot of reasons to think that language and social cognition are very tightly linked: from development, from some developmental disorders, from just generally the fact that a lot of language relies on kind of pragmatic reasoning about others' minds, and a lot of the signal is not literally the meaning of the words that you're saying. Stephen Wilson 47:59 And, you know, at a more basic level, just, you know, some people think that language is for communication. Ev Fedorenko 48:04 That's right. Stephen Wilson 48:06 Although this is not a universally held position, but anyway... Ev Fedorenko 48:08 That's not uncontroversial. That's right. But I certainly think it is designed for communication; I think that's the optimization function. Anyway. So one possibility is that maybe in development, and perhaps even evolutionarily, we start out with a bunch of cortex that's just attuned to social agents. Given that doing so is pretty critical to infant survival, because we're born very, very useless. Human babies are born incredibly useless. Cute, but not useful. And eventually, as we get more and more experience with social signals, which of course have the richness of auditory and visual and tactile information in them, maybe that socially attuned cortex fractionates into sub-regions, sub-networks, that specialize, you know, for processing different aspects of the social signal. And so, of course, you know, the way to answer it would be to look over development. And the problem is, of course, that with methods like fMRI, it's really hard to scan kids at the ages when, you know, language development kind of happens the most, which is, like, between six months and two years--that's kind of your hot spot. And it's not impossible; you know, people in Nancy Kanwisher's group and a few other groups are now scanning awake infants on task paradigms, mostly with naturalistic-like stimuli, of course, because of attention. But maybe we'll be able to tackle something like this. But for now, one thing we're trying in our lab is collecting neural responses to a very diverse set of socially relevant, communicatively relevant signals, and trying approaches like those that Sam Norman-Haignere developed in the study of the auditory cortex, of trying to discover the underlying components. That maybe, you know, there are some axes, organizing axes, of this whole socially responsive cortex that people just haven't noticed, because they've been focusing on different parts of it, because it's just not all studied kind of in tandem. So we'll know more in the years to come. But that's one effort that's currently ongoing. Stephen Wilson Yeah, that's really cool. I like that way of thinking about it. And I guess the lateralization might end up being a big part of that story, right? I mean, if it sort of starts out as a fairly symmetrical social network, and then one of the two hemispheres becomes more specialized for linguistic... Ev Fedorenko And one more for social. Stephen Wilson 50:30 Other aspects, maybe?
Yeah, like different, sort of, channels. Yeah, well, very cool. Okay, so we've kind of used up most of our time. And I've maybe got to, like, a third of the things that I wanted to talk to you about, because you just have so many different lines of work. It's really quite something. But maybe we can chat again sometime, and we could talk about some of those other lines of work? Ev Fedorenko 50:55 Yeah, you're one of my favorite language researchers. So I'll chat with you anytime. Stephen Wilson 50:59 Well, thank you. And yeah, I'm glad to report that no one's children or dogs interrupted, and... Ev Fedorenko 51:09 A dog attempt. Stephen Wilson 51:11 You had a dog attempt? Ev Fedorenko 51:13 Yes, blocked by someone, so. Stephen Wilson 51:17 That's great. Well, thank you so much. I really appreciate your time. And it's great to talk to you. Yeah. All right. Take care. And I'll see you again sometime soon. Ev Fedorenko 51:27 Sounds good. Thanks so much, Stephen. That was fun. Stephen Wilson 51:29 All right, you too. Bye. Ev Fedorenko 51:31 Bye. Stephen Wilson 51:32 Okay, well, that's it for our first episode. If you'd like to learn more about Ev's work, I've put some relevant links and notes on the podcast website, which is langneurosci.org/podcast. I'd be grateful for any feedback. You can reach me at smwilsonau@gmail.com or smwilsonau on Twitter. Okay, bye for now.