Stephen Wilson 0:06 Welcome to Episode 28 of the Language Neuroscience Podcast. I'm Stephen Wilson. My guest today is Rob Cavanaugh. Rob is a research data analyst at the Observational Health Data Sciences and Informatics Center (OHDSI) at Northeastern University. He's been doing some really high quality, methodologically rigorous work on aphasia, and I got in touch with him to talk about his dissertation, which he just completed at the University of Pittsburgh under the mentorship of Will Evans. It's called ‘Determinants of Multilevel Discourse Outcomes in Anomia Treatment for Aphasia’. Okay, let's get to it. Hi, Rob. How are you today? Rob Cavanaugh 0:40 I'm doing great. Thanks. Stephen Wilson 0:42 So, what time is it there, over on the east coast of the US? Rob Cavanaugh 0:46 It is a little after 7pm. I've finished up dinner with my parents actually, which is quite nice. Stephen Wilson 0:53 Okay. And you've got, like, family too, though, right? Rob Cavanaugh 0:56 Yeah, yeah. My wife was also there, and our dogs, yes. Stephen Wilson 1:01 Okay. So I like to ask people at the start how they became the kind of scientists that they are. So can you tell me a bit about what you were interested in when you were a kid? And what led you to this place? Rob Cavanaugh 1:11 Yeah. Well, I suppose I should have gone back and listened to a few of your podcasts to prep for this question. But I don't know that I have any, like, sort of foundational memories as a kid that led to this place currently. But a lot of how I think right now, I think I was always the kid who asked ‘why’ too many times. Annoyingly so. And sort of picking at the premise of everything, even if I didn't know that I was doing that at the time. And that has sort of percolated throughout my academic experience, and certainly is a big part of what I enjoy doing now. Which is asking, like, why? Like, why do we do things like this?
Why did we do this in the first place? Does this make sense? Stephen Wilson 2:04 Yeah. Were you interested in language or neurological disorders or anything like that early on, or did that come later? Rob Cavanaugh 2:12 I was interested in rehabilitation and sort of helping people recover from injuries. And I was actually more thinking about physical therapy or occupational therapy as an undergrad. But I took a fantastic class with Mike Dickey, who you know, and became sort of fascinated, maybe intellectually at first, with the idea of what aphasia is: that you can have somebody who knew what they wanted to say, but couldn't find the sort of, whatever, linguistic aspects of communication to say it. And just that that could be a phenomenon that occurred was so interesting to me. That sort of spurred my interest in aphasia. Stephen Wilson 2:16 So you were an undergrad at the time that you took that class? Rob Cavanaugh 3:03 Yep. That would have been, I believe that class was actually a phonetics class, if I remember correctly, but Mike is obviously very aphasia focused, and I think that's where that started. Stephen Wilson 3:14 Okay. So, you were doing an undergrad degree in what subject at that time? Rob Cavanaugh 3:19 Yep. Yeah. So that was at the University of Pittsburgh, in the Department of Communication Science and Disorders. The details are fuzzy, but my distinct memory is just becoming very fascinated by aphasia, and then deciding that I wanted to be a speech pathologist, and that was what I was going to do. Stephen Wilson 3:44 Okay. And so, did you become a speech pathologist at Pitt as well? Rob Cavanaugh 3:49 Yes. No, so I left Pitt for the University of North Carolina Chapel Hill, where I finished my master's, and worked in research with Katarina Haley and Adam Jacks, who are still there. Stephen Wilson 4:04 Yep. Great people.
Rob Cavanaugh 4:05 And the intersection, absolutely, of apraxia of speech and aphasia. Stephen Wilson 4:12 I didn’t know you had worked with them. Rob Cavanaugh 4:14 Yep, yeah. Yeah, I had a fantastic research experience, especially with Katarina and her lab. She is just this person where, like, you have this idea, and she will say, great, let's refine it, and why don't you run with that? That sounds fantastic. And so I did a Master’s thesis with her on very mild aphasia. We eventually published the paper, which was just a very, very small qualitative study on the experiences of people who were scoring sort of above the cutoff on the Western Aphasia Battery, so 93.8, but still identified as somebody who had aphasia. Stephen Wilson 4:55 Yeah. Rob Cavanaugh 4:55 What were they struggling with? And maybe what were the service gaps? What were we not addressing for them? Stephen Wilson 5:02 Yeah, that's really interesting. I talk a lot with people who have very mild aphasia, and it's always interesting just to think how high they would score on a, you know, on an assessment, and yet how much this is still a big part of their life and sometimes unaddressed. Rob Cavanaugh 5:17 Yep. And that was in, I don't know how familiar you are with Chapel Hill or Carrboro, North Carolina, but probably one of the highest rates of Master’s and PhD degrees in the state of North Carolina is in Chapel Hill and Carrboro. And so, you know, I was interviewing people like somebody who had a doctorate in pharmacy or somebody who had a PhD, right, and they're essentially saying, you know, nobody can tell that I have this. But, like, I don't feel like I can do my work at the level I was doing. Stephen Wilson 5:50 Yeah. Rob Cavanaugh 5:51 We don't have a test for, like, language at the level of a PhD scientist. Stephen Wilson 5:58 No, we don't. Yeah, that's really interesting.
And so after you finished your master’s, did you work as a speech pathologist for a while before coming back for your PhD? Rob Cavanaugh 6:08 Yep. Yeah, I did. I worked for three years, mostly in outpatient rehab, focusing on TBI more than stroke, but I had a sizable stroke caseload. And then, you know, floated where I was needed in the hospital, to inpatient rehab. We had a specialty clinic for ALS, and so I did that, which involved a lot of AAC and counseling. Stephen Wilson 6:09 Yeah. Rob Cavanaugh 6:10 And then, yep. Yeah. Stephen Wilson 6:21 And then, were you always planning to come back for a PhD? Or did that just kind of, like, become apparent to you, that's what you needed to do? Rob Cavanaugh 6:46 I think I always was thinking about it. I think if you asked Katarina, she would say, yeah, he was always thinking about it. I didn't know how long I was gonna want to work. And of course, once you start making a salary, the salary of a PhD student becomes less and less appealing over time. Stephen Wilson 7:02 Where were you working in that time? Like what city? Rob Cavanaugh 7:05 The hospital was Carolinas Rehab. Stephen Wilson 7:07 Okay. So you were in North Carolina. Rob Cavanaugh 7:09 Charlotte, North Carolina. Yeah. Yep. Which is a great sort of, you know, the hospital's a level one trauma center, and I was sort of nested in the main rehabilitation center. So we were seeing sort of the complex cases, which was intellectually interesting, and then also quite challenging. And then also, we were one of the few clinics in the area to take Medicaid. And so there's a whole other sort of aspect of thinking about access to care and disparities in care laid on top. Stephen Wilson 7:43 Yeah. Are you from North Carolina, or is that just where you kind of wound up because you'd done your master’s there? Rob Cavanaugh 7:48 No, I grew up in Durham, North Carolina for most of my childhood.
So yeah, that was sort of going home for me, although my parents had left by then and had come up here to Portland, Maine. Stephen Wilson 8:00 Yeah. Rob Cavanaugh 8:01 I sort of always identify with, you know, bluegrass music and, you know, summer nights on the porch. Stephen Wilson 8:08 Very cool. Yeah. Rob Cavanaugh 8:10 Yeah. Stephen Wilson 8:10 I mean, the part of North Carolina that I know is basically the Great Smokies. That's mostly as far as I got into it, or Asheville. Rob Cavanaugh 8:18 Yeah. Stephen Wilson 8:19 Right, a nice place to visit from Nashville. Rob Cavanaugh 8:22 Definitely. Yep. Stephen Wilson 8:24 Okay, so when you decided to do a PhD, did you know you were gonna go back to Pitt? Or did you kind of, like, have to think about what you wanted to do next? Rob Cavanaugh 8:33 You know, I wasn't planning on going back to Pitt. I mean, my wife, who went down to North Carolina with me, I met at Pitt, and we both just really loved Pittsburgh, the city. At the time I left Pitt, I think Connie Tompkins was at Pitt and running the PhD program. And, you know, Mike Dickey does a lot of very sort of linguistically, theoretically motivated research, which as a clinician was not terribly interesting to me. I really wanted to do translational work. And so I think I might have reached out to, like, Melissa Duff at Vanderbilt. I'm pretty sure I had a conversation with Swathi Kiran at BU, and I had kind of put feelers out around. And then somebody introduced me to Will Evans, who was maybe in his second year at Pitt at the time. And Will and I just have extremely well aligned interests in terms of what we want out of research. And I think, like, personality wise, we just meshed from the get go, and so that was kind of the obvious choice. Stephen Wilson 9:52 Okay, that's great. I think it turned out to be a really good fit for you, based on what came out of it.
Yeah, so that was what we were gonna focus our conversation on today, your dissertation. Rob Cavanaugh 10:04 Yep. Stephen Wilson 10:04 Which I found really interesting. It's called ‘Determinants of Multilevel Discourse Outcomes in Anomia Treatment for Aphasia’, as you probably know. And yeah, like, you're in a unique position, that somebody's actually read your dissertation, i.e., me. (Laughter) Because I don't think that anybody would ever have read mine. And I think that, you know, the modal number of readers for a dissertation, not counting the committee, is probably zero. (Laughter) Like… Rob Cavanaugh 10:16 I'm a little surprised you're still talking to me, but you know, that's okay. Stephen Wilson 10:37 Yeah, no, I mean, obviously, our dissertations get read when they get turned into papers. But I did read yours, because that's what I wanted to talk to you about. So can you kind of tell me and our listeners the premise of your dissertation, without any spoilers as to the outcome? Rob Cavanaugh 10:56 Sure. So, um, maybe I can start just briefly with, like, sort of the underlying motivation, which was, as a clinician, I was doing lots of treatment with people with aphasia, and questioning whether what I was doing was actually helping them when they left the clinic. And then this idea really was sort of formalized in thinking about what generalization was, in my first couple of years at Pitt, diving into some of the old school generalization literature from the behavior modification folks. So there are some sort of seminal works by Stokes and Baer from the 70s and 80s, which Cindy Thompson I think sort of took and crafted into a paper for aphasia in the late 80s. And so part of this paper came out of… Stephen Wilson 11:52 Sorry, one sec. Um, were there any particular experiences with clients that really led you to question whether you were getting that translation to real world change?
Rob Cavanaugh 12:05 Yeah, I mean, it's interesting, I can think of people where I really, like, more so stuck to the book for what the treatment was. Like, maybe I was doing ORLA, which is Oral Reading for Language in Aphasia. I apologize to Leora Cherney, who I think is responsible for that, as I might be butchering that acronym. But it's a treatment that I really liked. And so I remember having a gentleman come in, and he was a former pastor, and I think he had retired. But he really wanted to read scripture, and he wanted to be able to read it out loud to his kids. And so, based on his ability level and his sort of language profile, that treatment was just a really nice fit for him. And so we did it. And I kind of, you know, I tailored the materials, and I tried to connect it to the goals that he had, you know, sort of trying to make those steps that we do as clinicians to personalize what comes out of research. But, you know, he would tell me, like, oh yes, it's helping, but he was also this very nice guy, and I don't think he would ever tell me that something wasn't helping. And so I just never knew, like, when he went home, was this thing he practiced actually helpful when he pulled out the Bible and wanted to read to his kids? And then, you know, I can think of other cases where I didn't really stick to the book at all. Like, I had someone come in, and they were working as an usher at a theater. They were retired and had had a stroke and had mild-moderate aphasia. But they needed to be able to read somebody's ticket, understand where the seat was, and then, you know, greet them, right, and then guide them to their seat. And so we just drilled. You know, I basically made a bunch of fake tickets, and we, like, drilled reading them, right? Through numbers and things, which was totally un-evidence-based.
Like, there's not a research paper that says, good, do that. It just seemed logical. And maybe you can think about effortful retrieval, like we can, you know, in a backwards view, look and find a treatment that I might have been doing inadvertently. But that seemed really successful. That was basically doing what they wanted to do, right? I was just providing the scaffolding and the cueing. And so I think these two contrasting experiences were making me think about what is in our research literature, and, like, how well does that generalize to the goals people have and, like, their everyday communication contexts? Stephen Wilson 14:54 Yep. Rob Cavanaugh 14:55 And so then you think about sort of the really popular treatments in aphasia, semantic feature analysis probably being the most popular, if not in the top three. If I drill semantic feature analysis over and over and over again, how is that going to help someone in their everyday context? Stephen Wilson 15:18 Yeah. Rob Cavanaugh 15:19 At Pitt and at the VA, VA Pittsburgh, there have been a number of studies focused on semantic feature analysis. Audrey Holland, right, came through the University of Pittsburgh, with an early study in the 90s using SFA. And Mary Boyle took off with this. And SFA is all around me at the University of Pittsburgh. Stephen Wilson 15:19 Yeah, let's talk about SFA. Because most of the people who listen to this podcast are not speech pathologists. So, I would say that the actual mechanics of it will be unfamiliar to almost all of our listeners. And I'd love for you to just kind of lay out, like, what does the session involve? Like, what are you doing with the participant, and what are you hoping to accomplish by it, in principle? Rob Cavanaugh 16:04 Right. Yeah. So, maybe we can just touch on the sort of, I might call it the original version of SFA, that sort of gained popularity in the 90s.
So, we have a list of trained words, which we have decided are important for treating. And I guess maybe the actual mechanics, from a clinician's point of view: I have a list of words, and we're going to practice these words. And so if I have a word, let's say something simple, like ‘knife’, a kitchen knife, I might present somebody with a picture of a knife, and I would ask them to name it. And then we're going to go around this sort of hierarchy, and I'm going to ask them to produce all sorts of different semantic features for knife. So, what is it used for? What does it look like? There's a personal association feature, like, can you tell me something about how you use this, right? Or it might not necessarily be in that use category. But, so we generate semantic features around our target word. After generating five semantic features, we'll name it again. And that's usually it; we'll move on then to the next word. Stephen Wilson 17:22 Do you generate features even if they named it the first time through, or only if they failed to name it? Rob Cavanaugh 17:28 Usually, at least in the research literature, you would generate features regardless. It would not matter. Yep. And we also might be treating multiple words within a semantic category. So we might be training multiple kitchen utensils, for example. The idea being, sort of in the classic version, if we train multiple items that fall in the semantic category, you know, knife, and plate, and so on, and we generate all these features in the semantic space, that is, like, things that you find or use in your kitchen, we're doing some version of spreading activation throughout this semantic space, or strengthening connections between, like, I'm thinking about the Dell two-step model, although I guess, in my latest iteration of conversations, this hasn't really been laid out in the literature.
But you know, there are these connections that you might be strengthening, right, when you generate all these features. So we might train multiple words within the category. And the idea is that, oh, we've strengthened the semantic space for kitchen utensils. And so even though we didn't train fork, we worked in that semantic space, and now you're also better at naming fork, in addition to knife, right? That's sort of the classic account of what SFA is doing. And so we might not need to train, like, all these words that somebody has lost. Maybe we could train, like, a subset of them, strengthen semantics, and we get a bunch of other words for free. Stephen Wilson 19:04 Yep. So yeah. So it would ideally generalize within category because of training those shared representations between semantic features. And, I guess, I mean, only the links to certain lexical items are trained, but you've got to hit a lot of other lexical items in the process of generating the features, would be the idea? Rob Cavanaugh 19:23 Right. Yep. Yeah. Stephen Wilson 19:25 Okay. Rob Cavanaugh 19:26 Yeah. So in SFA, we might do this, you know, over and over and over again. At VA Pittsburgh, the ongoing and the past clinical trial have about 60 hours of SFA. If I recall correctly, the trial that came out of the University of Washington a couple years ago also was on the order of 50 to 60 hours. They were comparing SFA and Phonomotor Treatment, a phonological treatment. The original SFA studies weren't quite so high in their dose or intensity. That is the treatment. That is it. People have kind of added to it. At VA Pittsburgh actually, I've skipped a step, which is, after the second time you try to name the word, we actually ask participants to put the word in a sentence with some semantically rich content, perhaps from features they've generated. The idea is that this might help generalization. But that was certainly not in the original paradigm.
Stephen Wilson 20:27 Okay, so for a knife, if they've generated things like sharp, and, you know, cutting edge, used for cutting vegetables, they might say something like, in my kitchen, I like to keep my knives sharp, so I can cut vegetables. Rob Cavanaugh 20:44 Right. You would be a stellar patient. Yeah. Stephen Wilson 20:46 Yep. It helps to not have aphasia. It makes it easier. Rob Cavanaugh 20:49 Yep. Stephen Wilson 20:50 Okay, so that's the, you know, as you say, you're saying that's a very popular treatment. I mean, I think I would agree with you. It’s probably the most popular treatment these days. And very…. Rob Cavanaugh 21:01 Stacie Raymer just came out with, I can't remember exactly what the design was, but she asked a bunch of clinicians what they do, and I believe SFA came out at the top of that list. Stephen Wilson 21:13 Yeah, it doesn't surprise me. Rob Cavanaugh 21:15 It's not just research where SFA is quite popular, it’s also in the clinic. Stephen Wilson 21:19 Yeah. So, awkward question, but, like, does it work? Before we even get to your question of how it's going to translate to any kind of real life language production, does it work for its intended goal of improving naming? Rob Cavanaugh 21:41 Yeah, so the evidence, if you look at, there have been a couple of different systematic reviews, and Yina Quique published a meta-analysis looking at all the different single subject design SFA studies: we see robust improvements in naming for treated items. So these are the actual words that we're practicing. We see some evidence for generalization to semantically related but untrained items, right? So these would be the words we think we might get for free. It certainly seems to be person dependent, but on average, there does seem to be some effect. It is smaller than for the treated items.
And I will also maybe plug, there's a great paper by, I believe, Lyndsey Nickels in 2002, who I think provided some compelling arguments pushing back against the generalization account to untrained items in SFA. So I don't think everybody is totally on board with ‘it generalizes to untrained words’, but I think … Stephen Wilson 22:50 But you're talking about untrained semantically related words right now, yeah? Rob Cavanaugh 22:54 Correct. Yeah. Yeah. Stephen Wilson 22:55 And so what was Lyndsey's argument? Can you recap her argument? Rob Cavanaugh 23:01 Not as well as she wrote it, perhaps. But I believe that, as we're doing a lot of probing of these trained and untrained words in these single subject design studies, right, it’s pretty hard to disentangle the effects of repeated probing for words that we're not training from the generalization effects we might see in our treatment, particularly because these generalization effects are quite small, and are probably similar in order of magnitude to what probing effects might be. Stephen Wilson 23:33 Okay. So just to kind of make it concrete: you're training knife constantly, you're 60 hours in, you're going to talk about knives a lot. And perhaps it's not that surprising that there is a demonstrable improvement in people's ability to name knives. But you're not training forks, but you're asking about forks constantly in your research design, and, like, that's actually giving you more practice with forks than you might have otherwise had. And so your fork naming might also improve, and it's not really because of the SFA. It's not really a knock-on effect from training knife, it’s just because you've probed fork a bunch of times. Is that the gist of it? Rob Cavanaugh 24:07 That is my recollection of the argument. And I think in certain study designs, we don't have a way of necessarily statistically isolating those effects very well, for a number of reasons.
That is a whole other podcast and paper. Stephen Wilson 24:26 Yeah. And does it generalize? Okay, so we're talking about semantically related; does it generalize also to unrelated words, and would you expect it to? Rob Cavanaugh 24:37 We would not expect it to. So that is, classically, the control condition. We have three sets of words in SFA, in our research paradigm: we're going to probe treated words; semantically related untreated words, that we hope we'll see generalization to; and then we should have, provided we're controlling for all sorts of great psycholinguistic variables across our sets, unrelated untrained words, and we should see no movement in those words. Or perhaps the argument is, if we're seeing improvements in our related untrained words, it should be greater than the improvements in our unrelated untrained words. Stephen Wilson 25:19 Right. Because some people, I think, Sharon, how do you say her name? Rob Cavanaugh 25:26 Antonucci. Stephen Wilson 25:27 Antonucci has argued that the mechanism behind SFA is more that patients learn to self-cue, right? So, you talk about this in your dissertation, right? So if I'm having trouble naming knife, but I've practiced generating semantic cues as part of my SFA treatment, then I might start thinking, okay, it cuts, it's sharp, and that actually kicks off my ability to actually name knife, right? So, under that explanation, it ought to generalize to semantically unrelated untrained words as well, right, because it's the self-cueing strategy that you've learned. Like, what do you think about that? Rob Cavanaugh 26:04 Yeah, I think that's plausible, and probably, in my clinical experience, a more likely source of generalization.
In my clinical experience, where I've tweaked and tailored and, you know, turned SFA into something like SFA plus, that has been my experience, that that is probably what's happening. It is something about strategic communication, right? Somebody realizing how to use their system really well, to navigate to their intended target, or perhaps an equally useful target when they can't find what they want. Stephen Wilson 26:45 Right. Because maybe you don't need to say knife. Maybe if you just say cut, sharp, your listener understands you and you can move on, right? Rob Cavanaugh 26:52 Right. Yep. Yeah, I think that's, and we laid that out in the dissertation as this competing mechanism, right? Stephen Wilson 27:00 Yep. Rob Cavanaugh 27:00 Perhaps it's not a thing that's, you know, quote, unquote, restoring the underlying semantic impairment. But it's system calibration, right? It's people figuring out how to use the system that they have in the most efficient way possible. Stephen Wilson 27:13 Uhum. So, what do you think, how much evidence is there that, at a group level, it generalizes to untrained, unrelated words? Or would you question the empirical claim there? Rob Cavanaugh 27:28 I don't know of any evidence that SFA generalizes to untrained, semantically unrelated words, when we're doing confrontation picture naming. I am not aware that that's the case. I could be wrong. This is a good question for Yina. But, by and large, I think we expect that effect to be null, and I think it has been. Stephen Wilson 27:52 Okay. Okay. So you'd really have to, to have real world impact, you'd have to train a lot of different semantic domains, wouldn't you? Rob Cavanaugh 28:04 Right?
I mean, again, you know, in the research context, we get our hands tied behind our back a little bit in terms of needing to establish semantic control, or sorry, experimental control, not semantic control. But yeah, in the clinic, clinicians might be a little all over the place, picking all kinds of targets that might be useful. Maybe there's a breadth versus depth, you know, within semantic category versus spreading across them. But clinicians also don't have anywhere near the hours we have in clinical research to do the treatment. So, that's a whole… Stephen Wilson 28:43 That’s a whole other thing we can talk about. Okay. So in your dissertation, you've got people who are being treated with SFA as part of some ongoing clinical trials. Can you tell me about the patients who are taking part in those trials and the nature of the data that you had to work with? Rob Cavanaugh 29:02 Right. Yep. So, my dissertation focuses on 60 people with aphasia, across two clinical trials at VA Pittsburgh. These are those intensive SFA trials, where people are getting roughly 60 hours of SFA over three to four weeks. It's a lot of SFA. If you've ever done SFA, I mean, you can just imagine doing this drill over and over again for 60 hours across three weeks. Like, it's a lot. Stephen Wilson 29:34 Yeah. Rob Cavanaugh 29:38 The inclusion criteria for people are pretty generous. So this is Will Hula and Mike Dickey, I should give them credit, because this is not my study. This is their study. And Pat Doyle, before he retired. Anyone with a T-score above, I think, 40 on the auditory comprehension sub-tests of the Comprehensive Aphasia Test. So, we're basically looking for some evidence of comprehension, right? We need them to understand the treatment. But the bar isn't all that high. We need some naming. Like, just a little bit, right?
So if somebody can't name anything, they're probably not going to be enrolled in this study. And then the studies are also fairly open to including people. I don't remember what the upper cutoff is. But, so the Comprehensive Aphasia Test has, like, a T-score mean of 50 and standard deviation of 10, and I want to say that they might have needed to be under 70 to qualify, but don't quote me on that. Because I wrote this, and it's a blur, (Laughter), as you know. But I think the fair takeaway is, the two studies cast a wide net, including people of all different aphasia severities. Stephen Wilson 30:56 Yep. Okay. And I know that they weren't really designed to, they didn't compare SFA to a control condition, right? They were more looking at the effects of, like, how many features were generated and stuff like that, in terms of the efficacy? So do you have efficacy data on naming from those trials? Or was that not really the purpose of them? Rob Cavanaugh 31:17 Yeah. So, the original purpose of the trials, well, there are sort of multiple pieces. There's a sort of neuroscience-oriented piece that is more focused on imaging. You're right, there's this practice-related predictors piece, at least in the first trial, where there's this question about, sort of, what is it about SFA that’s associated with improvements? And we do see, in these trials, very, very robust improvements in trained items. At least in the first trial; I don't think that they have published any results from the second trial. In fact, I don't think that they can even look, because the study authors right now are blinded to condition in that trial. But yes, we do see robust improvements in trained items. And I believe we do see some improvements in semantically related untrained items, though, again, smaller in magnitude.
Stephen Wilson 32:21 Okay, and then, so it's got that kind of near effect on the thing that's directly trained. Then, obviously, your big question is, does it generalize to more real world, meaningful outcomes? So, can you tell us about that, and what you decided to look at to investigate that? Rob Cavanaugh 32:42 Yep. Yeah. So the dissertation study came out of looking at, so this study, like many, many other SFA studies, in fact, I think, many other anomia studies in general, collected monologue discourse samples from a set of discourse stimuli, which people now often refer to as the Nicholas and Brookshire protocol. So, these are from the Nicholas and Brookshire articles from 1993 and 1994. And so both of these studies, the first trial that had 44 people and the ongoing trial, collected monologue discourse data. These discourse stimuli, so some are sort of, like, open questions, like, ‘Can you tell me what you do on Sundays?’, and others are responses to picture stimuli, like a cookie theft picture, sort of this hodgepodge of different discourse stimuli that, I think, if I understand right, Nicholas and Brookshire sort of assembled into these sets that can be used for research. And so both of the sets happened to be used in the study. And I don't know what the timing was, I think, if I recall correctly, it was pre-pandemic, but not by much. We were looking at the discourse outcomes for the first study, and seeing that this measure that they were already scoring, called discourse informativeness, was improving to a small extent, but at least statistically reliably so, after the first SFA trial. And I went, oh, that's interesting. I wonder why that's happening? Stephen Wilson 34:38 Did it surprise you, to see that preliminary observation that it might have an effect on discourse? Rob Cavanaugh 34:46 Well, I'm a skeptical person. So I think I was surprised.
Stephen Wilson 34:49 Yeah, I would have been surprised too. (Laughter) Rob Cavanaugh 34:52 As a clinician, I think it would have led me to be a little surprised. But… Stephen Wilson 34:57 What is discourse informativeness, for those of us that are not measuring that all the time? Rob Cavanaugh 35:02 Right. Exactly. Yes. Good question. Yeah. So discourse informativeness is the percentage of correct and relevant words relative to the total number of words produced in a discourse sample. So you can imagine, sort of, if I'm just going to, like, try to de-jargonize this, it's like all of the words that make sense in the context and seem right, out of all of the words produced in the sample. Stephen Wilson 35:27 So it's like what percentage of the words that you produce are correct and informative and on target, on topic. Rob Cavanaugh 35:33 Right. Stephen Wilson 35:34 Yep. It's a pretty widely used measure of discourse integrity in aphasia. Rob Cavanaugh 35:41 Yeah. And I think the argument that I've made in my dissertation is that it is sort of the status quo discourse outcome measure in anomia treatment studies. If there's going to be a discourse outcome measure collected, it's probably going to be monologue discourse, and it's probably going to be discourse informativeness. I think that's changing, but historically, I think that's the case. Stephen Wilson 36:05 So this preliminary data that you had, how many patients was that based off? Rob Cavanaugh 36:09 That was the 44 people who completed the first trial at VA Pittsburgh. Stephen Wilson 36:16 Okay. Rob Cavanaugh 36:17 And it wasn't huge. It was about a five percentage point improvement in discourse informativeness. If I remember off the top of my head, something like, on average, the group was 42% informative pre-SFA, and 47% informative post-SFA, something like that. Stephen Wilson 36:33 Okay. 
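As a rough illustration, the measure described here reduces to simple arithmetic once a sample has been coded: count the correct, relevant words (often called correct information units, or CIUs) and divide by the total words produced. A minimal sketch in Python; the function name and the example counts are hypothetical, chosen only to reproduce the roughly 42% to 47% group change mentioned above:

```python
def discourse_informativeness(num_cius: int, num_words: int) -> float:
    """Percent of correct, relevant words (CIUs) out of all words produced.

    Illustrative helper for the measure as described in the conversation:
    the percentage of correct and relevant words relative to the total
    number of words in a discourse sample.
    """
    if num_words == 0:
        raise ValueError("sample contains no words")
    return 100 * num_cius / num_words

# Hypothetical counts matching the ~5-point group change described:
pre = discourse_informativeness(num_cius=84, num_words=200)   # 42.0
post = discourse_informativeness(num_cius=94, num_words=200)  # 47.0
```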
Rob Cavanaugh 36:34 So, not a huge change on average, but this is an average of a very heterogeneous group. So, it's not nothing. Stephen Wilson 36:45 Yeah. I mean, if it's going to be statistically significant then it's worth, you know, it's not nothing. So, okay, so then in your study, you got to look at discourse informativeness in a larger sample of patients, and also some other discourse measures, too, right? Rob Cavanaugh 37:04 Yep. Yeah. So the idea was, this is interesting, we should poke at it. Let's do it in a theoretically motivated way. Where, you know, we don't want to go fishing, right? So, let's think about how we could dive into why this might be happening in a way that, you know, sort of answers maybe questions about what SFA is doing. Is SFA generalizing, and how, right? We've talked about two different potential mechanisms for SFA. This sort of restorative mechanism that's focused on semantics, and this compensatory mechanism, where we think about compensatory communication use. And so, the idea for the study was, well, discourse informativeness is a little bit tricky, because it can conflate, like, relevance and correctness, which is a little bit challenging, right? But it's also only really one level of, it's sort of a hybrid discourse measure, it's combining aspects of lexical retrieval and discourse and pragmatics. And so, the idea is, well, why don't we pick a couple of different discourse measures at different levels of discourse, to try to understand how SFA might be improving monologue discourse and these outcome measures. And this was motivated, my credit goes to one of my committee members, Davida Fromm, for turning me on to this literature, this idea of multilevel discourse analysis. So, this is sort of fundamental to the dissertation. So, this is the idea that we might have different measures of discourse, from the word level to the sentence level, to pragmatics. 
But these levels are all connected. They're not really isolated. And there's a study published by Marini in 2011, which kind of suggested that, well, you know, if somebody has this underlying linguistic impairment, and we can improve the underlying impairment, that will kind of trickle up through the system, right? We fix the problem, and the remainder, the other aspects of discourse, are kind of the symptoms. And so these improvements will sort of percolate throughout discourse. Or, maybe our treatments are compensatory, and so we should really focus them on these aspects of discourse, maybe like informativeness, where we can get people to be more in tune with pragmatics. But that doesn't fix the underlying system. It just helps people use their system better. And so, we wanted to pick measures that let us see patterns of improvements across multiple levels of discourse, to see if SFA is doing something at this sort of core linguistic, semantic level, and maybe it's trickling up to discourse informativeness. And that sort of would be, like, the core motivation for anomia treatments in aphasia, right? If I can fix naming, I can fix everything else. Maybe that's the straw man. That's it. Stephen Wilson 40:26 The strongest. Yep. Rob Cavanaugh 40:25 We don't quite think about it like that. But that is, like, kind of the idea, right? So, that could be a possibility. Maybe that's why we see improvements in informativeness. Or, as you said, there's this compensatory idea. So maybe people just, they are not off topic as much, right? They're not producing inaccurate words as much. Maybe they're taking their time more, maybe they're able to navigate to what they want more, they're self cueing more. So we're seeing some improvements in informativeness. It's not really related to doing something to the semantic system. So, this was the idea. 
And you know, in hindsight, and we can talk about, there are sort of some important limitations to how this actually played out. But this is where we started. Stephen Wilson 41:13 Okay, so you've got a bunch of, yeah, a bunch of different discourse measures. You're hoping to see that overall effect on discourse informativeness. But then you have specific hypotheses about which of the other measures would also move along with it, depending on the mechanism, whether it's a fundamental, like, strengthening of those spreading activation links versus the learning of a compensatory self cueing strategy? Rob Cavanaugh 41:38 Yep. Right. So myself and another research speech pathologist, Jen Golovin at VA Pittsburgh, went through and scored four time points of discourse samples for 60 people. It took, it took some time. Yep. I would not recommend doing that, although it was the pandemic. So, it's not like I really had a whole lot of other things to do, but it was a lot. Stephen Wilson 42:08 Yeah. How long did that take? I mean, I think a lot of us have been through this in our career at some point, where we've just done some brutal legwork through data. Like, how long were you at it for, how many hours a day? Like, how did you do it? Rob Cavanaugh 42:22 Yeah, you know, I'd have to go back, like, look at the files to see when the time stamps started moving. I would wager that it was at least six months of work at 10 to 20 hours a week. That would be my off the top of my head guess. I was so fortunate that I had a dissertation fellowship from the NIH. So, you know, and Will Evans at Pitt was great. And basically said, great, you've bought yourself out of the lab, you know, stick around for the stuff you're interested in, but otherwise, you know, go off and do the thing you proposed. Stephen Wilson 42:59 Yeah, huh. 
Rob Cavanaugh 43:00 But yeah, it was great, because I got, you know, this sort of firsthand experience, thinking about all of these discourse samples as you go through them one by one. You know, for the time that that's interesting, which is, you know, not all of it. (Laughter) Stephen Wilson 43:16 Right, no. But you do learn a lot when you're faced with the data up close and personal like that, don't you? Rob Cavanaugh 43:23 Yep. Yeah. Right, and I think that informed a lot of how I interpreted the results. I don't think I would have the same interpretations if I had just shot that out to somebody else. Stephen Wilson 43:35 Yeah. So, are we ready to tell our listeners what you found? Rob Cavanaugh 43:41 Yeah. Well, I feel like we're missing a small key piece of information. Stephen Wilson 43:45 Okay. Rob Cavanaugh 43:46 Which is, which is, what are the measures that we looked at? Stephen Wilson 43:51 Sure. Yes. Rob Cavanaugh 43:51 So, so, we, we… Stephen Wilson 43:53 Well, we've talked about one of them, but we didn't talk about the others. Yes. But yeah… Rob Cavanaugh 43:56 So, and, for full disclosure, we sort of proposed slightly different measures initially, and had to shift for graduating on time reasons, and some other concerns I had about psychometric properties of some of the measures. But we counted up all of the semantic errors in every transcript. So, we wanted the rate of, like, how often do people produce semantic errors, sort of controlling for the sample length, right? So, this is really getting at the idea of lexical semantics. If we think we're improving the semantic system, people should be making, like, frank semantic errors in discourse less often. We also looked at lexical diversity. So, were people producing a more diverse set of words? We used a measure called the moving average type token ratio, which is essentially a type token ratio where we're again trying to account for sample length. 
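To make the two length-controlled measures concrete, here is a minimal sketch in Python. The function names and the 10-token window are illustrative only (published MATTR work typically uses windows of 50 or more tokens): the semantic error rate is just errors per words, and the moving-average type-token ratio averages the type/token ratio over every fixed-length window, so it does not shrink simply because a sample is long, the way a raw type-token ratio does.

```python
def semantic_error_rate(num_semantic_errors: int, num_words: int) -> float:
    """Semantic errors per 100 words: a rate, so sample length is controlled."""
    if num_words == 0:
        raise ValueError("sample contains no words")
    return 100 * num_semantic_errors / num_words

def mattr(tokens, window=10):
    """Moving-average type-token ratio: the mean type/token ratio over
    every contiguous window of fixed length."""
    if len(tokens) < window:
        raise ValueError("sample shorter than the window")
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

# A maximally repetitive sample scores low, a fully varied one high:
repetitive = ["ball"] * 20
varied = ["the", "boy", "steals", "a", "cookie", "while", "his",
          "mother", "dries", "dishes"] * 2
# mattr(repetitive) == 0.1, mattr(varied) == 1.0
```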
Sample length is a very variable thing in discourse that is really orthogonal to what you are interested in. I also wanted to look at grammatical complexity. And initially this was going to be focused on predicate argument structure. But it turns out that that was not going to be a feasible measure for us to finish, if I wanted to finish my dissertation within a reasonable number of years. And also, sort of the outcome measure you get from predicate argument structure, I think, was going to be statistically challenging to look at. And so the compromise was mean length of utterance, which I think is a pretty good proxy for grammatical complexity. It is not perfect, but it is certainly close. Stephen Wilson 45:37 Oh, yeah. I think it's a good measure. I think it's a very robust measure. Rob Cavanaugh 45:41 Right. And it's faster, because if you mark utterance boundaries, it's quite easy to calculate. Stephen Wilson 45:50 Yeah, I mean, you know, utterance boundaries, as I'm sure you know, are not the easiest thing in the world. I mean, there are some icky questions that come up when you're trying to decide what's an utterance and what's not. But if you have some principles, then, you know, most of the time you can get to an answer. Rob Cavanaugh 46:05 Yep. Yeah. And, you know, we started off with a codebook. And we had a set of rules, which we sort of refined, and, you know, you find all of the gray areas, right, as you go. Stephen Wilson 46:16 Yep. Rob Cavanaugh 46:17 And you say, well, we don't have a rule for this, right? Okay, let's write it. Stephen Wilson 46:17 Your codebook gets longer and longer. Rob Cavanaugh 46:22 Yep. Stephen Wilson 46:22 Yeah, it was funny. I've been teaching assessment to students here at UQ recently and like, they keep on asking me questions like, 'What would you do when the patient does this? 
What would you do if the patient does this?' I'm like, it's not defined in the, you know, in the manual. Like, you either just have to make it up. Like, for clinic, you just make it up; for research, you'll end up with like a 300 page addendum that tells you how to actually score it. But, you know, the point is that whatever you do, the patient is always going to do something that's not in your rulebook. Rob Cavanaugh 46:52 Right. Yeah, and I think, right. And as long as you're consistent, right? If it happens again, you treat it the same way. And you're transparent about what you did, then, you know, what else are you gonna do? Stephen Wilson 47:02 Yeah. Rob Cavanaugh 47:03 And so, right. And so we kept discourse informativeness, it being the measure that changed. We were adding an additional 16 people from this ongoing trial, so we have a total of 60. So, we wanted to keep that measure, in case it changed. And then there was going to be this fifth, final measure of global coherence. It's interesting, every person I talked to about global coherence just sort of shook their head, and was like, oh, I'm sorry. And global coherence being, like, how well does the sentence at hand relate to the topic? And I guess, on the sort of several global coherence scales, these are four point or five point scales, the reliability of how well people can distinguish between the middle, sort of, the twos and threes, or the twos, threes and fours on that scale is poor. That is a nice way of putting it. And I was essentially told, it's probably not worth going down that path. Stephen Wilson 48:03 Yeah. Rob Cavanaugh 48:03 So, but in full transparency, we set out with predicate argument structure and global coherence. We did not do those. We dropped global coherence. We looked at MLU. We didn't make that change after seeing the data, right? That change was made before looking at the data. 
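As noted in the discussion of mean length of utterance above, the measure itself is trivial arithmetic once the hard, codebook-driven part, marking utterance boundaries, is done. A minimal sketch in Python; the function name and the example utterances are hypothetical:

```python
def mean_length_of_utterance(utterances):
    """Mean number of words per utterance, given a transcript that has
    already been segmented into utterances. Deciding where the boundaries
    go is the hard part; the arithmetic is easy."""
    if not utterances:
        raise ValueError("no utterances in sample")
    word_counts = [len(u.split()) for u in utterances]
    return sum(word_counts) / len(word_counts)

sample = ["the boy is stealing cookies",
          "the sink is overflowing",
          "mother dries dishes"]
# (5 + 4 + 3) / 3 = 4.0 words per utterance
```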
Stephen Wilson 48:18 Right. It's very reasonable. Rob Cavanaugh 48:22 Yeah. These are the trials and tribulations of a PhD student. Right? Stephen Wilson 48:27 Oh, any researcher, actually. Right? I don't think… Rob Cavanaugh 48:30 Yeah. Stephen Wilson 48:31 I mean, I don't think it's specific to any training stage. Rob Cavanaugh 48:36 Yeah, fair enough. Fair enough. Right, so, what did we find? By and large, we found that people did not change on these measures. I've left out a small detail, which is, we have our treatment enrollment and treatment entry. So, we have two observations pre-treatment. We have an observation right after they finished, and the one month follow up. And in the case of these four measures, the change from immediately before the intervention to after the intervention is not large, or statistically reliable. Stephen Wilson 48:35 Yeah. Rob Cavanaugh 49:17 And does not appear to, like, even marginally exceed what we're seeing in sort of the enrollment to entry change, right? Stephen Wilson 49:27 Okay. Rob Cavanaugh 49:27 So, we sort of see that variability without treatment. Stephen Wilson 49:31 Yep. Yeah. So yeah, so you've got these four time points, two before treatment, one straight after, and then one to kind of look at retention or maintenance. And you're saying, like, any slight gains that you might have seen during treatment were of the same order of magnitude as what you were seeing before treatment even started. So just kind of like capturing natural variability. Rob Cavanaugh 49:52 And if I recall, I think people might have been allowed to have, you know, whatever clinical treatment they had going on between enrollment and entry. So it's not a perfect control. And the time, the distance, like the number of days, between those time points is hugely variable. Stephen Wilson 50:09 Yeah. Rob Cavanaugh 50:11 But still, it is there for comparison. 
Stephen Wilson 50:13 So, okay, so were you disappointed that you didn't see effects on discourse in any of your measures? Or was it just kind of like, coming back to reality? Rob Cavanaugh 50:29 I mean, I guess I wasn't surprised. I was disappointed. Because I think, because this is sort of the idea, like, oh, we're doing SFA. And in this study, we're doing a ton of SFA. Right? Like, we should see these effects if they exist, right? And especially, take the line of argument, if the effects of 60 hours of SFA are big enough for us to care about, they should be popping up here, I think. Stephen Wilson 50:58 Yeah. And what? Rob Cavanaugh 51:00 And they are not. Stephen Wilson 51:01 Yeah, what happened to the effect from the prelim data? Was it just the extra 16 patients that made it go away? Had it been a little bit fragile? Or was there something different you did in your analysis that was more careful than the prelim analysis? Rob Cavanaugh 51:17 Yeah, a little of both. I think the statistical models we used the second time around were a little bit more robust. And maybe slightly more conservative, perhaps. But I think it also is pretty reasonable to think that there is some regression to the mean going on here. There are some intricacies of aggregated binomial mixed effects models. (Laughter) Stephen Wilson 51:45 Yeah, that we're not going to talk about. Rob Cavanaugh 51:50 But I think it's a little bit of, the modeling approach was a little more careful. And also, there's some regression to the mean. We are pulling in 16 people from another study; the studies are very, very similar in their paradigm, but they're slightly different. And I can't tell you that that isn't part of the reason. We might be getting different people in this enrollment cycle. Who knows? But, you know, there are different clinicians who are involved, right? There are all kinds of things that could contribute to this. 
But, to me, the two likely things are that the statistical approach is a little more rigorous, and there may be some regression to the mean, right? We're seeing an effect that, by and large, we haven't seen before, so it makes sense for it to come back to where we started. Stephen Wilson 52:42 Right. So, you believe that if there's an effect here, it's very small. At this point, you feel confident that your confidence intervals are narrow? If there is an effect at all, it's small. Rob Cavanaugh 52:57 Yeah, I think so. The effect for the group, so, you know, like, we might distinguish, like, how much individuals change being meaningful for them. And, of course, that being difficult to distinguish from sort of, like, just variability, right? Between taking, taking the test. But on a group level, and I don't know what the credible interval is off the top of my head, for the, like, difference in performance from post-treatment to pre-treatment. But the bulk of that credible interval is within a range that you would probably not care about as a clinician. You would say, even if that effect is greater than zero, most of it is not big enough that I really care. Stephen Wilson 53:48 Right. Yeah. Okay, so you talk about this, you have this nice turn of phrase, train and hope. You say we shouldn't just train and hope. Your results show that we shouldn't just kind of expect generalization to real world outcomes without, you know, some sort of mechanism for it. Can you talk about the train and hope idea and getting past it? Rob Cavanaugh 54:21 Yeah. And, like, there's so much I want to talk about just nested within that question. 
Like, if I didn't say something like, monologue discourse isn't what real world communication looks like, I would probably receive an email from Marion Leaman, as soon as she listens to this, assuming she listens to it, who is a fantastic researcher and discourse expert, reminding me that monologue discourse is not the same as, like, dialogue, for example, and the conversations we have every day. So, in this context, we're looking at an outcome measure that is sort of a proxy for the thing we want to improve, which is how people communicate in their everyday lives. The phrase train and hope came from those seminal papers by Stokes and Baer, these papers from the behavior modification literature, which were essentially criticizing, you know, this literature that came out of, like, you know, B. F. Skinner, like, way back when, basically saying, you can't do a task and get somebody to get really good at a task in your controlled research setting, and then just expect that they're then going to, like, take that out of your room and use it. That's not a reasonable expectation. But that is the dominant paradigm for how people look at generalization. And I think if you look at the aphasia treatment literature, at least the anomia literature, you will see the same thing, which is, we can get people to name words pretty well, right? Those effects are pretty large. Stephen Wilson 56:09 Trained words. Rob Cavanaugh 56:11 Yeah, trained words, right? And in fact, you know, at Pitt, right now, we have this study where we're training people on a couple hundred words, and we're hoping for effect sizes north of 100 words gained that they can name. But that's not enough. You need a theoretical mechanism for how you're going to get the use of those trained words into the contexts that the person cares about. 
So, I think, you know, in my comprehensive reviews as a PhD student, I went back and looked through some of this literature from the 70s, and 80s, and 90s, in aphasia. Like, Pat Doyle, at VA Pittsburgh, was doing stuff where he was testing to see if the things his patients were doing, if they could use them with new clinicians or new listeners, right? Like, this idea has been around for a while, that we should be making this sort of thing a part of our treatment. And so the way to combat train and hope is to think, okay, if I can, you know, produce these effects in the tasks that I'm going to, you know, rote train, how am I going to make this sort of ability useful to someone in the context they care about? Are they just going to practice it in that context? Maybe we scaffold the environment that they're doing training in. Maybe I really need to shake up my treatment, keep them on their toes. Right? There's a great research project out of the UK right now called Luna, where the treatment is sort of working backwards from a personal narrative. So, let's start with a story you want to tell. Okay, we're gonna work on some linguistic aspects of the story using SFA. And then we're going to bring words into the sentence level, and we're going to address how we're using those words we've practiced in SFA, that we've already established that you care about, at the sentence level, and then we're going to practice the storytelling. That sort of scaffold that's really oriented towards this sort of end result is kind of what I would like to see, I think, all of us doing, and I suspect is what clinicians do a lot informally, even if it's not so much what you see in the research literature. Stephen Wilson 58:36 Right. It's like script training with other forms of treatment to bolster the steps that go into script training. 
Rob Cavanaugh 58:44 Right. It's like if you took script training, but then you asked people to go practice the scripts with all different kinds of people, so that, you know, the people you practiced with said the wrong thing. Right? They didn't give you the right next step in the script, so you had to adjust. Stephen Wilson 59:00 Yeah. Rob Cavanaugh 59:01 Right? That's not training and hoping. That's, you know, sort of training for the use case, right? Stephen Wilson 59:10 Yeah. I mean, what did Pat Doyle find when he looked at the ability to generalize, beyond even trivial things? Did he find that it made a difference, that people, you know, underperformed with a different clinician, for instance? Rob Cavanaugh 59:21 You'd have to go back and read that study, because I could not tell you off the top of my head, and I don't want to make you get… Stephen Wilson 59:30 Okay. No worries. Yeah, I mean, I'd love to read it. Because, I mean, I've always sort of wondered, like, how much of the treatment effects that are reported in the literature are kind of interlocutor familiarity, you know, just that comfort with the person that you know, the room that you know, the situation that you know. Like, how much of the gains that are reported are just things like that, I mean… Rob Cavanaugh 59:53 Oh, and it can be even more specific than that. Like, when we train words for our anomia studies, we often have desired targets, right? So, you show a picture of a rock, and somebody says 'stone', and you're like, that's not right. And then when you get to treatment, you say, that's a rock. And they're like, oh, okay, rock. And we're like, great. You've learned something, right? You're better. But you know, that's not really a… Stephen Wilson 1:00:19 Oh, yeah. Rob Cavanaugh 1:00:20 …an endpoint we care about. 
Stephen Wilson 1:00:21 Oh, yeah, it's a good point. So where do you go from here? I mean, did this make you not want to do SFA anymore? Or do you just think that it needs to be part of a bigger enterprise? Rob Cavanaugh 1:00:36 Yeah, well, I think there's a time and a place for SFA nested within a, you know, holistic treatment program, where it makes sense. Maybe you are focused on that strategic communication, right? And it might be a great tool for teaching someone that. I think I'm a little less convinced that, you know, if you really wanted to, like, train a few words and get generalization to lots of other semantically related words, I'm not sure. I think you'd be better off just training as many of the words you cared about directly. I think you'd be more likely to get the benefits you were looking for. But even still, I hope you have a plan for how you're going to get those words to be useful. Stephen Wilson 1:01:23 Right. Rob Cavanaugh 1:01:24 I think the Luna study and that idea, that sort of multilevel approach to treatment, not just the analysis of outcomes, I think that will be pretty fruitful. You know, we also don't have to train only nouns, right? Like, there might be many good reasons to start thinking about verbs as a better target from which to build a treatment. I don't know. There's also, I think, a question of measurement paradigm. We haven't really touched on it here, but there are a lot of challenges in measuring discourse outcomes after our treatments. Stephen Wilson 1:02:01 Yeah, you did talk about that a lot in your dissertation. And it's super interesting to me. I mean, yeah, let's talk about it. But I will say, before we talk about it, that at the same time, I don't think it's the fundamental, you know, explanatory thing here, because I do think that, you know, discourse informativeness was a pretty decent measure. And MLU is also a really good measure that's really easy to measure. 
So I kind of think that you did demonstrate, robustly, that there wasn't a meaningful change in the discourse. But all that said, let's talk about why it's hard to measure discourse. Rob Cavanaugh 1:02:37 Right. Well, and I think a good push back on, like, the hard interpretation of the null result here is: SFA, at least in the restorative sense, trains a discrete set of items, and, you know, potentially related ones. Or if you're an effortful retrieval person, we're doing a lot of effortful retrieval of lots of semantic features. So perhaps that's also useful, right? But if the discourse stimuli don't actually elicit the things that we're training, we might not see benefits. Right? So if we're just thinking about, how does what we're training generalize to monologue discourse? This isn't really a strict test of that. Right? Because we haven't sort of confirmed that, oh yeah, like, Nicholas and Brookshire is really eliciting what everybody was being trained on. Stephen Wilson 1:03:31 Right. I mean, in fact, it wasn't, right, because it was explicitly, completely separate. I mean, you'd have to basically ask people, hey, tell me about, you know, like, getting back to our kitchen utensils example, your discourse sample would be, tell me about cooking a really complicated meal in your kitchen, you know? Rob Cavanaugh 1:03:50 Right. So… Stephen Wilson 1:03:50 And then you might see an effect. Rob Cavanaugh 1:03:52 Yeah, we are actually doing this at Pitt. When I say we, I've now left, but we are sort of getting, like, a core lexicon measure out of all kinds of stimuli, developing lists for training words out of those stimuli. We will train them using sort of a classic anomia paradigm, and now we can look to see if discourse is improving, because we have sort of established a priori that the discourse tasks are eliciting what we're training. 
So, it's a little bit of a stricter test. But, you know, the other side of this is, like, well, we did 60 hours of SFA. If it really did enough to boost the system, it shouldn't really matter. Right? Like, if you spent 60 hours really working hard on something, and we think it's made a, like, restorative improvement to the language system, it should pop up on a general discourse measure, right? Stephen Wilson 1:04:52 Yeah, I would think so. Although, I guess, like, at the same time, like, you know, 60 hours, it sounds like a lot, right? I'm sure it feels like a lot. But think about it this way. In your PhD, let's say you're working, let's say, 40 hours a week. I'm sure it's more than that. But let's say it's 40 hours a week. 60 hours corresponds to the first one and a half weeks of your PhD. Like, is relearning language easier than doing a PhD? Right? I mean, not really, probably. So maybe, at the end of the day, 60 hours is a tiny amount. Rob Cavanaugh 1:05:29 Sure, it might be a tiny amount in a perfect world where we can deliver hundreds and hundreds and hundreds of hours of treatment. But 60 hours is like the 99th percentile of what people get in outpatient once they go home. Like, if somebody gets 60 hours of outpatient treatment right now, at least in the US, we've done really well; they are a VIP in our outpatient clinic if they got 60 hours. Stephen Wilson 1:05:58 Yeah, no, I'm not suggesting that the solution would be to do more hours in the study. I'm wondering whether, you know, by analogy, whether deciding between 10 hours, or 60, or 300, perhaps that's like saying, you've got to clear all the sand from this beach, and you're either going to use this small shovel or this big shovel. You know, like, I'm just a little worried that that might be why it's not working. Rob Cavanaugh 1:06:25 Because, because we should be doing it. Maybe if we did 120, it would work? 
Stephen Wilson 1:06:30 No, it wouldn't, we'd need to do 120,000 or something, you know. I mean, like, the numbers are not important. But, you know, like, 60 is a lot more than 10. But like, what if the real number that's needed is so vastly more that, like, it doesn't actually matter that much if it's 60? Or if it's 10? Rob Cavanaugh 1:06:49 Right? Because we're orders of magnitude off? Stephen Wilson 1:06:51 Yeah, um, that's what I'm a little concerned about. Rob Cavanaugh 1:06:55 Right. Well, and this is not my, this is your area more than it is mine, but when you think about looking at, you know, imaging, pre and post treatment, right, which I think in aphasia has largely not seen huge changes after interventions. Stephen Wilson 1:07:09 I mean, this is something that we've studied, Sarah and I, Sarah Schneck and myself. I can say, with confidence, that there hasn't to date been a convincing demonstration of any neural changes following treatment. There have been many studies that have investigated that and that have reported changes, but none of them have been statistically robust. In my, in my very a…(laughter) Rob Cavanaugh 1:07:32 Yeah, if you had 120,000 hours of SFA, you know, between your imaging sessions, would you see a change? Stephen Wilson 1:07:43 120,000 hours? (Laughter) Well, yeah, I don't know. I mean, (Laughter) you probably would. But it wouldn't matter if it was with SFA or not; you'd also have an effect if there was 120,000 hours of conversation. So yeah, I don't know. I'm just, you know, I'm just speculating here. Rob Cavanaugh 1:08:08 I will, I will happily speculate with you anytime. Yeah. Stephen Wilson 1:08:12 So I know that this is, I mean, just to kind of wrap up our talk. I mean, like, this is something that you're working on now. Right? 
So you're really interested in dosage and, and the disconnect between research dosage and actual clinical practice. Do you want to talk about what you've been doing since you finished your dissertation along those lines? Rob Cavanaugh 1:08:30 Yeah, that would be great. So I've shifted gears a little bit out of psycholinguistics, more into health services, which is a little bit sort of back towards the motivation of why I left to do research, which was sort of thinking about, like, how do I do the research as a clinician, like, how well can I do this? And as the sort of PhD process went on, and as I keep thinking about it, I'm more and more coming back to, like, I don't think this system is set up well for chronic conditions like aphasia in the US, for us to be successful implementing our research. And the first steps in this are sort of characterizing what usual care looks like, right? Like, how far off are we? Not that this is, like, the fault of anyone, especially clinicians. There are system-level factors, and there are clinician-level factors, and there are clinic-level factors, there are patient-level factors, right? Do people have transportation to the clinic? Do they have insurance? Stephen Wilson 1:09:28 Yep. Rob Cavanaugh 1:09:29 And so, where, where I'm hoping to take this work is sort of characterizing what usual care looks like in the clinic. And thinking about, well, what, what could we do differently? Or what sorts of data or studies can we use to advocate for care that actually aligns with the chronic condition that is aphasia? Because the way this system is set up now, I think, you know, we can have conversations about SFA all day long, but the reality is, people are getting seven to 10 hours of treatment in outpatient once they go home in the first year. It doesn't matter what treatment you pick; in seven to 10 hours, the effects are not going to be huge.
Stephen Wilson 1:10:12 So even if you had shown an effect on discourse with 60 hours, you would still have a big hurdle in front of you in terms of, okay, then, well, how do we get people 60 hours? I mean, that's a whole other question. Rob Cavanaugh 1:10:24 Right. Yeah, I mean, you know, you hope, like, apps, right? Like, at this point, if you want 60 hours, apps are probably our best bet. We as clinicians have to give home practice that is, you know, hopefully, true enough to the treatment, that it's doing what we're doing in the clinic. Not to mention all of the counseling, and the goal setting, and this holistic treatment, right, that we should be providing. Like, if we were just doing SFA in the clinic, we'd be doing our patients a huge disservice. Stephen Wilson 1:10:50 Yeah. Rob Cavanaugh 1:10:50 So, yeah, that's, that's where, that's where I would like to take things. And so, like, my research right now, I'm thinking about sort of the fundamental problems of this, which are as simple as, well, how do I identify somebody with aphasia in a health record or claims database? How can I even be confident that the people I think I want to study, that, you know, these are the people in my cohort? And because our field doesn't do a whole lot of health services research, I think this is where we're going to have to start. Stephen Wilson 1:11:23 No, we really don't. I mean, whenever I've seen medical records research related to aphasia, like, the first hurdle is, I mean, a bunch of them have PPA, right? Like, you're just as likely to see aphasia in the medical record in somebody with a neurodegenerative condition as a stroke. And a lot of the times, they'll just use different words, or they won't mention it at all, because they are so taken by the motor impairment. Or, yeah, it's not easy to extract from medical records. Rob Cavanaugh 1:11:23 Right. Yeah.
The best, the best I can do right now is something like, well, this person has no ICD records of a stroke for six months, and then all of a sudden, they have some ICD codes pop up for stroke, and also two different providers punched in an aphasia diagnosis. And so I'm gonna take a good guess that this is probably somebody with aphasia. Stephen Wilson 1:12:12 Yeah, that's pretty, pretty decent. Although you wonder, like, then, did they get better? You know, like, we know people get a lot better in their first year. Like, how are they doing after, you know? Rob Cavanaugh 1:12:23 And, and who put in that code? Right? Because I think that matters a lot. Stephen Wilson 1:12:28 Hmm. Yeah. So you're enjoying that new direction? Rob Cavanaugh 1:12:33 Yeah, it's, it's a learning experience to really shift to health services, and, you know, the center I'm in is really this intersection between health services, epidemiology, biostatistics. It's sort of where these, these different things come together. And these are not areas that I've been classically trained in. I'm learning by doing, which is always fun. Stephen Wilson 1:12:57 Yes. It's, it's fun. And it's fun to keep on taking on new things and things you don't know how to do. Rob Cavanaugh 1:13:03 Yep. Stephen Wilson 1:13:03 Yeah. And so I guess for the dissertation, you're working on a paper or papers to report those results? Rob Cavanaugh 1:13:11 Yep. I'm hoping to submit this in the next, you know, six months. (Laughter) It is, it is getting there. Yeah, my goal was to submit, you know, a single paper with the results. There's an aim, there is a whole aim two about cognitive predictors of… Stephen Wilson 1:13:27 I know. Yeah, I read it. It was too complicated to talk about in the podcast, I thought. Rob Cavanaugh 1:13:36 I think it's, you know, it's predictable based on our conversation. But, yeah, I think I will bundle those as a single, as a single paper.
Hopefully, maybe taking it to ARCS, down your way, next summer. Stephen Wilson 1:13:51 Oh, that'd be great. Yeah. In Brisbane. Rob Cavanaugh 1:13:55 Yeah. Stephen Wilson 1:13:56 Yeah, that'd be fantastic. Yeah, well, best of luck with getting it written up. I'm looking forward to seeing the publication. I honestly don't think you have that much work to do. I mean, it's like, it's, you know, just cut off the acknowledgments, and you're good to go. Rob Cavanaugh 1:14:11 I feel for the reviewer who attempts to read the supplemental materials that I've carved out for them at this point. Stephen Wilson 1:14:19 Yeah, it's all good. All right. Well, thank you very much, again, for taking your Sunday night to chat with me. I really enjoyed reading your work and talking to you about it. Rob Cavanaugh 1:14:31 Right. Well, thanks for having me. Stephen Wilson 1:14:32 All right. Well, I hope to see you in Brisbane mid next year. Rob Cavanaugh 1:14:36 Yep. I hope so. Stephen Wilson 1:14:39 All right. Take care. Okay, well, that's it for Episode 28. Thanks very much to Rob for coming on the podcast. I've linked Rob's dissertation in the show notes and on the podcast website at langneurosci.org/podcast. Thanks a lot to Marcia Petyt for transcribing this episode, and to the Journal of Neurobiology of Language for supporting some of the costs of transcription. Bye for now, and I hope to see some of you in Marseille next month if you can make it to the conference. Take care.