By
Juliette Price:
My rule is always you have to start small because if it goes wrong, I want you to take all of those students out for lunch. You can’t take 14,000 students out for lunch, but if you start small and it’s 5 or 10, if it goes terribly wrong, you just take them out to lunch, say, “So sorry about that. We were trying something new. Hope I didn’t bother your year too much. Lunch is on me.” You shrink risk, and that’s really important in testing.
Alec Patton:
This is High Tech High Unboxed. I’m Alec Patton, and that was the voice of Juliette Price. We recorded this interview during the 2025 Spring Improvement Summit in San Diego. For the past three years, Juliette has been the Senior Improvement Science Coach at the National Association of Higher Education Systems, or NASH for short. At NASH, Juliette facilitates the transfer success Networked Improvement Community. Because Networked Improvement Community is a mouthful, we’ll be calling it a NIC from here on out. The transfer success NIC focuses on students transferring from two-year colleges to four-year colleges, and it has been a hard decade for transfer students. Nationally, transfer student enrollment has dropped by 16% over the past nine years, but nearly 90% of the universities in the transfer success NIC have increased transfer student enrollment over the past two years. Here are two specific examples. In Kentucky, Western Kentucky University has increased transfer student enrollment by 57%, and in Pennsylvania, Kutztown University has increased transfer student enrollment by 72%. Now, here’s my interview with Juliette. We start by talking about the transfer success NIC’s problem of practice.
Juliette Price:
So our problem of practice that we are really focused on is what we generally refer to as transfer student success. So transfer students, these are students who start at a two-year community college and they’re trying to transfer to a four-year to complete a baccalaureate degree. Broadly speaking, in America, 80% of community college students want to get a four-year degree, only about 25% of them will ever end up at a four-year institution, and only 13% of them will ultimately complete that four-year degree six years after they got to the four-year institution. So we’re talking about crazy amounts of waste and inefficiency in the system. To give you a sense of the scale of the problem, there’s anywhere from 650,000 to 750,000 students every year that fall through this crack.
Alec Patton:
Wow. Because the whole thing is that you’ve been going to school for two years?
Juliette Price:
Right. Right. The whole thing is you’ve been going to school for two years, this has been your goal, you were maybe advised or self-selected into attending a two-year institution. People do this for a variety of reasons, but one of the first drivers is cost. A two-year institution brings down the cost of college in an environment today where the cost of college is very high, we’re worried about debt. This is a great choice for students. And so they go into their two-year college experience with these hopes and dreams, and then ultimately, we are setting up the system that is failing them at quite… 750,000 students a year, that’s a huge gamble that our country is making by not supporting these students.
Alec Patton:
Yeah. So at that point, a lot of us would say, “Oh, that seems bad,” and yeah, lots of systems are broken and kind of like, I don’t know, flip to the next article on our phone. And clearly, you all are tackling that. Tell me the story of this. How did that start?
Juliette Price:
Yeah, great question. So transfer and transfer issues have been around since the dawn of time. The dinosaurs were around and we were not nailing transfer. So this has been a little bit of a white whale in public higher education because simply put, there’s just a lot of research going on, but there’s not a lot of action research. So I can point you to 10 articles and research papers that say, “We’re failing transfer students,” but when you go to Google, or a book, or whatever to say, “Okay, I’m a campus president, I want to do something about this,” there’s no playbook. And so at NASH, because we’re an association of systems of higher education, there was the vision of could we bring in improvement science to tackle this problem from a systems perspective? Because so many of the challenges that the research points to come back to this: it isn’t the student’s fault, it’s that the systems have not been set up for transfer student success.
And so that was a little bit of the inspiration. Nancy Zimpher, who’s been a major leader in public higher education for many years, is a deep believer in improvement science, and so I will fully credit her for saying, “Hey, maybe this is the idea where improvement science can come in and help.” So that’s what got us started down this road.
Alec Patton:
I think I can see the challenge here. So we have a long-standing understanding that transfer from two-year to four-year institutions is a problem, has been a problem. My guess is that pretty commonly, you are going from a state community college to a state four-year institution. So on paper, it’s a small step, but in reality, the different systems might or might not be aligned.
Juliette Price:
It’s a great point that you just raised, because I think one of the things about taking a systems view or systems approach, it’s all about mapping the systems. Systems mapping, this is a tool in our toolkit of improvement science. When you zoom out and look at the whole US, is it conceivable that a student is starting their journey at Hawaii Community College and trying to transfer into New York City? It’s possible. Does it happen? Very rarely. What we see is about 70% of the incoming transfer students to any four-year institution are coming from the local regional community college. Sometimes, it’s two or three, but for the most part, there’s one or two feeders that are providing the bulk of those students.
So I’d put this challenge out there too, there’s a lot of talk about the use of AI, and yes, there are huge problems, like hypothetically, should we be able to do course articulations from University of Hawaii Community College System to SUNY? Yeah, but when you think about all those combinations, it becomes overwhelming. I do think AI is going to help reduce some of that, but we can start smaller than that. If we know that 70% of your students are just coming from the institution down the road, you don’t need an AI tool, you need to go down the street, introduce yourselves, grab a team, and start improving.
Alec Patton:
Yeah. I also think there’s a thing that if somebody’s moving from Hawaii to New York and it’s tricky, it’s fair enough, if somebody’s moving from down the street, that’s embarrassing.
Juliette Price:
Absolutely. So we’ve had teams in our collaborative, and they have shared the story themselves. They came to the collaborative as two teams, a community college and the four-year institution, and they are less than a mile down the street from each other. When I tell you that I walked into this workshop, they were not talking to each other, they had their arms crossed. I was like, “This is it. This is never going to work.” Six months later, they were buying each other matching socks. They are such a team. We can’t get them to invite other people into the team because they’re so tightly-knit. But if they didn’t have this sort of sandbox to do that kind of relationship building, they would’ve just stayed in that standoff.
Alec Patton:
All right. I want to zoom in on that for a moment, because you said the sandbox for the… what did that look like? What happened between arms crossed and buying matching socks?
Juliette Price:
Yeah, it’s a great question. So the structure that we have been using is an improvement community. We use the IHI Breakthrough model, so it’s a very standard process. So we worked through our systems, so we went to, in this case, the state of Kentucky, and we said, “Would you like to join this effort?” They said yes. We helped them pick some campuses to work with using the data, using also just some knowledge of the space. And so they picked these two campuses. We like to have a dyad, so we like to have the four-year institution and their feeder community college. And then they come to a workshop and they come in blind, there’s not a lot of prep work. And then what we do is in a day and a half, we go through a very intensive improvement 101 process. And I would say, one of the places where the calcification started to crack with that team was when we were doing process mapping.
So we told our teams all about process mapping, and then we said, “You’ve got an hour to build a process map. And you don’t have to map the whole thing, but just show me what it’s like for a student at the community college to get over and enroll at the four-year.” And what they started to see was that they had mental models: “You’re the person making it hard.” “No, it’s you making it hard.” But when we broke it down for them and they had to write down every step in the process, I think a lot of assumptions fell away. And I think a lot of shared responsibility was built: “Yeah, you’re right. We at the community college make it really hard to get a copy of your transcript. That’s making your life at the four-year institution pretty dang hard to give our students a response. But also,” at the four-year, “you’re not telling us when you change your curriculum. So we’re still advising our students to take Math 405, and you’ve actually taken that out of your degree.”
So they started to realize that there was shared accountability, and that this was a space where no one was going to be mad about things, we were just going to identify opportunities for improvement and get started.
Alec Patton:
Yeah. Why aren’t people getting mad about things?
Juliette Price:
Why aren’t people getting mad about things? I think there’s… well, I will say, we try to infuse an element of joy, which is one of the themes of this convening. We try and have a lot of fun. We don’t try and overcomplicate the work of improvement, we make it really easy. We don’t focus early on the philosophical… Deming, we don’t make people read that stuff, we say, “Come, and in a day and a half, we’re going to give you a set of tools. You’re going to walk out the door with three action plans that you are going to implement in your institutions in 45 days.” And I think there’s a little bit of just the action-focused nature of the work just gets people excited. I always say, “I don’t tell anyone what to do,” all of these teams are there and I’m there to coach them, but they’re the ones deciding, “You know what? This has been a problem in my institution for 100 years, and you know what? But darn it, I’m going to go fix it, and I’m just going to try.”
I think that’s the other thing about improvement science, I will say, it gives people a sense of agency. When you’re starting and you’re starting small, you don’t have to ask for permission, this is your department, this is what you’re in charge of, just try something. You start small. My rule is always you have to start small because if it goes wrong, I want you to take all of those students out for lunch. You can’t take 14,000 students out for lunch, but if you start small and it’s 5 or 10, if it goes terribly wrong, you just take them out to lunch, say, “So sorry about that. We were trying something new. Hope I didn’t bother your year too much. Lunch is on me.” You shrink risk, and that’s really important in testing.
Alec Patton:
That’s a really interesting heuristic to use.
Juliette Price:
Yeah.
Alec Patton:
Have you had that not go great… Have you had groups where… I’m not saying total failure, but at least for the first one, it was like, “Oh, they didn’t really gel.”
Juliette Price:
Sure. And there’s also… this is another big part of building the sandbox, you have to be clear that failure is expected, and when it happens, there won’t be consequences. So the first 45-day cycle of testing, yes, some groups, actually this group that I’m talking about right now, came in, arms crossed, they had phenomenal results almost within a week of going home from the workshop. So of course, that’s a little dopamine hit. They were in, they were caught up, they loved it.
Alec Patton:
How do you get phenomenal results in a week? What do they do?
Juliette Price:
Okay, this is a great story. Okay. So our workshop was in July. So that’s a transition point, students have accepted their offer of transfer, they’re trying to get enrolled for fall courses. When you transfer, you are an upperclassman, because you’re coming in as a junior status. In most four-year institutions, the undergraduates are advised by what we call professional advisors. These are not faculty members, they’re just helping people get through general ed. When you become an upperclassman, you are advised by faculty, because you are now in usually a very discrete program, you want to go into engineering. Now, we do that faculty advisement because it should align to a career, it should align to… “I want you to be in a course, like a field study,” whatever. So it’s faculty. Well, let me tell you what, faculty are not hanging around their office in the middle of July, they’re out on a sailboat somewhere.
And so this team said, “Hey, that’s a pain point in our process.” Over the summer, the students get accepted, we say, “Welcome, come on in,” and the first task we give students is meet with your advisor. So what does a student do? They email their advisor. No response. They follow up. No response. That person’s sailing out in Antarctica, doing whatever. So we had actually been giving students these prompts to do a thing, and the system was not set up to respond to that. So this team said, “Okay, that’s the first test of change we’re going to do. We’re just going to pick some people, pick some non-faculty who are around the campus, and we’re going to have them call the transfer students who are currently not registered for classes.”
Remember my rule? They picked 10. They said, “Sure, we’ll try it with 10 students.” They went back that next Monday, they said to some of their professional staff, “Can you call these 10 students and just walk them through, ‘You need to get registered for class.’” Guess what happened? The professional staff came back to the team and said, “Can we do another 10? That was so great to connect with a student, solve a problem for them, can we do more?” So even though I said start small, very quickly, they ended up doing 80 or 100 calls because their staff was requesting that. Now, when’s the last time your own staff says, “I want to do more work”?
Alec Patton:
In July.
Juliette Price:
In July, while these other guys are off on a sailboat. But it was so effective. And I think that is part of the magic of this process, it gets people jazzed, because people are not professional advisors because it’s the most glamorous job in the industry, they’re doing it because they want to help students.
Alec Patton:
That’s super cool. So that was an example of that going super well.
Juliette Price:
Yes. When does it not go well?
Alec Patton:
Yeah.
Juliette Price:
Okay. So one of the things we often run into with teams is that with whatever process that they’re choosing to improve, they have what I refer to as the crusty dean on campus, and somehow, this always ends up being the dean of education or the dean of engineering. It’s like the person who’s been there for 100 years and it’s like, “No, this is the way I like to do things. This is how I want the process to run.” They are not your early adopters. Unfortunately, there’s this weird thing in our heads of like, “No, I’m going to get this guy. I’m going to make him change.” And so sometimes, they have a great change idea and they will try and test it on the crustiest dean. They’ll go to the dean of engineering and say, “No, no, no, you really need to change your process and do this.” And sure enough, it doesn’t work, the person refuses to change. And we’ve definitely seen a lot of that. And what these very short PDSA cycles help us do is 45 days is not a long time. So you just come back, “Hey, that didn’t work.
My test didn’t even really go off because this guy stood in front of me and said, ‘Never.’ ” This was the case of one of our campuses in Pennsylvania. I told them not to do this, they did it anyways, the guy said, “No, no, no, not going to change.” They came back, and in the next PDSA cycle, they started with a very friendly dean, and then they engaged in what we call sequential testing because it worked, the intervention worked. So then they went out and they recruited two more deans, and then two more deans, and then two more deans. And by the time… this is a campus that has 28 programs, by the time they got back to this last holdout, they were ready, they had their battle armor, they had a 65-slide deck, they were like, “We’re going to get this guy.”
The guy didn’t even need all that. He had started to hear organically from all the other deans that the intervention was working, was making their lives easier, and was enrolling their students. And so you don’t have to win over the crusty dean, because you’ve created this container of everyone else is doing it. The guy was like, “Oh, yeah, I’ve been waiting for your call. Sure, I’ll do it that way. Not a problem.”
Alec Patton:
Let’s talk through your model.
Juliette Price:
Yeah. So I think as I talk about this container, sometimes, I call it a container, a sandbox, I think part of why improvement resonates with people is we’re creating a container that feels very different from their day-to-day job, where it’s like you come to work, in an ideal scenario, you have a job description, you do those things, you’re not always really sure if it’s working. And I think this is actually what creates a lot of burnout both in the healthcare industry and also in education, is the work is hard already, and then to not know if it’s getting you to where you’re going… And so what we try and do is create a sandbox where it’s highly structured. And so as I mentioned, the teams come in, we recruit them, we work with the systems to identify who is a good fit here, and then they come to this first workshop, two days, very intense.
They leave with three action plans. These are time-bound, easy to do, this gets back to the small group of students, and then they go back to their home campuses, and by the way, they realize, “Oh, that thing I thought we do, we don’t.” And so they immediately have to decide, “How do I change my action plan?” And that already is different. I talk about improvement science sometimes, it’s a technology, and I think sometimes, that gets lost. The technology of change today in higher education is what we like to call, and this is in big air quotes, “strategic planning.” It’s like the president decides that transfer is a problem, “Okay, great, perfect. We’re going to put together a committee.” And on the committee are his four… his or her, four AVPs, associate vice presidents. These people do not work in transfer. They may not actually know how transfer processes work, but they’re going to go to a meeting, lunch is going to be catered, and we’re going to walk out of there with a 10-point plan.
And then the 10-point plan gets distributed. It’s like, okay, campus, everyone’s doing this differently, that’s what we’re used to in higher education. And then guess what happens? The people closest to the work were not consulted. So the ideas that these folks have are not the right ones, because they didn’t ask people closest to the problem, including transfer students themselves. Like, “Hey, where are you getting stuck in all of this? Why aren’t you registering for classes?” So you put this out into the universe, this is the opposite of starting small, they go big. And by the way, when you go big, especially if you’re a president, you can never say that it failed, because by the way, in the interim, someone’s written a press release. So press release has gone out, President X, Y, Z’s 10-point plan to fix transfer.
Alec Patton:
It’s the exact opposite of being able to take everyone out to lunch.
Juliette Price:
It’s the exact opposite. And so you can’t say it failed because are you telling me you’re smarter than the president? It creates this dynamic of like, “Well, whatever’s in that 10-point plan, it must be the greatest thing on earth.” The other thing that happens a lot in higher education is people… I’m a big yoga fanatic. In yoga, we say, “Keep your eyes on your own mat. Don’t look around so much. You don’t know what that person’s ability is. And so why are you getting all bent out of shape literally about what this person can do, and that you can’t keep your eyes on your own mat?” And so in higher ed, there’s a lot of this, people will look over and be like, “Well, this is what they’re doing,” and they’ll grab the idea without the context.
Alec Patton:
And then you’re trying to do a handstand?
Juliette Price:
And then you’re trying to do a handstand and that doesn’t work well. So we have so much of that. And so the 10-point plan, it’ll be like, “Well, George is over there doing that. Let’s do this. And Alaska, I heard about it at a conference, and I didn’t follow up, and I didn’t get the slides, but I think it was…” You end up with this mishmash that gets “implemented,” put that in quotes, implemented on the best day, and then no one can speak out against it. And what happens? There’s failure that you don’t understand. There’s user feedback that you’re not capturing. There’s all this stuff that’s happening in the ecosystem, but again, you have this friction of you can’t say it’s not working. Ultimately, and this is that big McKinsey study, 70% of change initiatives fail. Well, of course, it was going to fail, I could have told you that. But I think the more important part in all that, honestly, is it’s what you’re doing to your workforce.
Because guess what? You start hearing stuff from your staff like, “Well, that’s a bad idea.” Was it a bad idea, or did it not get implemented? There’s no opportunity to have a nuanced conversation. You throw the baby out with the bath water. You mostly also hear this terrible phrase, which is, “I’ve outlived this bad idea, and I will outlive the next one.” And what you’re doing is you’re burning out your staff and you’re contributing to the calcification of an organization. So this technology of continuous improvement puts that on its head. It says, “Okay, here’s a problem, transfer. Let’s get everybody who does transfer 24/7.” Because by the way, those people know where the pain points are. And suddenly, you empower them, “No, you tell me what we should do. You give me the change idea and then we’re going to go test it.” You try it on five students, you could take them out to lunch if it goes terrible, but if it goes well, we don’t jump to full scale. We engage in something called sequential testing.
So let’s say you had an intervention and we’re going to try it with the school of business. It works well. Okay. But the school of business is inherently different from the school of architecture. So if you just take something to scale, it’s going to break. In sequential testing, we go, “Okay, that was interesting. I saw some success. Now, let me add the school of architecture. Let me see if the reliability of this intervention holds.” And you keep going, and you keep going, and you keep going to the point that when you’re done with sequential testing, you’re at scale, you’re at scale. And now, it’s just how the organization functions. There’s some work to be done about holding that and creating policy, but you’ve created the tsunami of change under people’s noses. And there’s no 10-point plan, and you haven’t pissed anybody off because you did it with them.
And the most important thing back to that person who says, “I’ll outlive it,” you get a whole team of people on your campus who says, “That was awesome. I want to try. Can I lead the next team? I know of a problem that’s been hiding over here for a long time.” You get this flywheel of people who become converts. So I tell you a good story about how this happened. So Texas A&M system is in our NIC and they’ve been in it for the longest time, and we were out to dinner together with some of the leaders of the campus teams and the system lead. And we’re sitting at dinner and the campus team starts telling this story, and I nudged the system lead and I said, “Listen.” This person was saying that she had walked into the physical advising center on the campus… Now, Texas A&M College Station, 80,000 students, this thing’s big, and she saw this huge line that went out the door. People were waiting on advising appointments.
She went to the student at the end of the line and she said, “How long have you been waiting in line?” And the student said, “One hour.” And she decided that was unacceptable. And she’s telling the story at the dinner table, she’s like, “Yeah, I brought a group of people together. I put together an aim statement. I drew a driver diagram. I did a process map. And we’ve been testing, and the average wait time in our advising center today is seven minutes.” And I turned to the system lead and I said, “Did you know this?” And he said, “No, I had no idea that was happening.” That’s how I know it’s spreading. I always tell my teams, “I might get hit by a bus tomorrow.” It doesn’t matter that I know this stuff, it matters that we’re building improvers who are so confident in their skills that they just do this now.
And they don’t even bother telling us about it, which is fine, because they’re just fixing shit… oop, they’re just fixing things. And that’s exciting. It’s exciting for them. Now, it means my system lead doesn’t have to be chasing people around. That’s the ultimate goal of this model. And that’s why we start with the workshop, we give them 45 days. We come back together in a virtual workshop, we unpack what we learned. The teams express their failure, they express their joy, we’re in community so everyone can say like, “Oh, my God, that happened to me too. I have a crusty dean.” We have that cathartic moment, and then we go back into planning, three more action plans, 45 days, same thing. At the end of 45 days, we look at the data, we come together at the mid-point in the year, that’s a physical convening, we found that that’s really important for people to be able to spend time with the other people at other campuses doing this work. We learn a lot, and then we go right back to testing.
So we do four cycles in 12 months, minimum of three action plans, a lot of people end up doing four or five because they just got so many ideas that they want to test. And what that gives us is over two years of running this cycle, we’ve had 252 tests of change. In higher ed, this is a sector that just moves at a glacial pace. I had someone tell me 45… you can’t mow the lawn in 45 days, because you got to go through the lawn cutting committee. It just doesn’t move that fast. And so for us to be on the other side of two years and say, “Yeah, we tested 252 times, we now have this change package. We now know the interventions that worked at a Research 1 university with 80,000 students, and it worked at a small regional college with 7,000 students.”
What that tells us is that the intervention has a high degree of reliability, and we feel confident saying to the field, “Try this. It’s probably going to work there.” Instead of this, I went to a conference, I heard the guy in Alaska tell me, it’s like, “No, this has been tried in lots of different contexts.”
Alec Patton:
Yeah. Okay. I’m always really interested in this, the change package thing, because a tension that I always feel is that part of what makes improvement powerful is that experience of testing things out and trying things out. And so I’m always like when someone says, “Oh, yeah, and now, we’ve learned all these things, so here’s a change package,” I’m like, “Yeah, well, but would you have implemented that if somebody else gave it to you?”
Juliette Price:
It’s a great question, and it’s a real tension. Where we have moved and we’re testing, we’ll see if this works, is we’re saying to newer teams who come into the NIC, “Of your three action plans, pick two from the list,” because that gives you a little sense of, “I’m going to get something out of this. I’m going to get a little hit.” But we really want to continue innovating, and we don’t… I think this is a tension, is continuous improvement is about improving, and sometimes people think, “Well, where’s the innovation in that?” They are two things that need to live harmoniously. Our change package is a start. These are four concepts, we’re probably going to add another two very shortly. It’s not like if you do these six things, the whole world is… but it gets that flywheel.
And I will say, different teams react differently. Some teams are like, “Thank God, I didn’t know where to start. I’m feeling really nervous. I want to go to the change package immediately.” Other people, not so much. And we coach people, but ultimately, to what you’re saying, is we want people to become improvers. If you don’t want to use the change package, I’m not going to rake you over the coals, I’m going to tell you that you’re likely to see results. But maybe you looked at the change package and you say, “I already do these things.” Great. Move on, innovate, keep learning.
Alec Patton:
You can also fail three times and then say, “Maybe we should take another look at that change package.”
Juliette Price:
No, totally. And we have had teams where they just believe that the intervention is eventually going to work and they’ve tested and tested, and at some point… humans aren’t dumb, we just go, “Okay, I thought that was a good idea. It is so clearly not.” And so yeah, everyone learns differently.
Alec Patton:
Yeah. Yeah. And I think that point, the thing I think about a lot, and that I say a lot, when I’m teaching, is this is an invitation, not a requirement.
Juliette Price:
Yeah. Yeah.
Alec Patton:
You have it… if you want this, if this is useful to you, use it. I think it might be useful to you. If it’s not useful to you, don’t use it.
Juliette Price:
Yeah. I think that’s a really good point. In the work that I did in K-12 schools, it was a time when there was so… that phrase got thrown around so much like education reform, even just saying the word gave people… they’re like, “Oh, I need to be reformed.” There was this general vibe of we will tell teachers what they’re doing wrong. And I think that some of that work in K-12, the predecessor was the district being very close to receivership. And that was a tension, because I do think it’s hard to start from a place of like, “I’m going to be told,” but you’re right, it’s an invitation. And I think if you create the… again, I’m back to my sandbox, but if you create the sandbox as a fun place to be, a safe place for learning, “I’m not going to make you do anything you don’t want to do,” I think you catch more flies with honey than vinegar.
Alec Patton:
Yeah. I wanted to come back to that. I’m glad you brought up the sandbox because you said this really great paradoxical phrase, which was that it’s a sandbox that’s highly structured, and those are not two things that you normally associate with each other. But I also think, I feel like I get it, I think, but I’m curious, how do you create a space that feels open and creative and also is highly structured?
Juliette Price:
It’s a great question, and it’s something I struggle with. Part of the reason I believe in highly structured sandboxes, I’ve been around the improvement world for a long time, and I think we’ve overcorrected in both directions. I have seen models where it is so strict, so… you can’t leave any blanks on your action plan or it would negate the whole plan. There are some models that became too strict, there are also some models that became too philosophical. We were expecting people to read Deming, I don’t believe in that. I think you can start with PDSA, and if people catch the improvement bug, they will ask you, “What can I read? Where can I go?” And then you can guide them back to some of the philosophical literature. I’ve seen it go too far in the strict direction, I’ve also seen it go too far in the other direction. A PDSA without a measurement plan is useless, useless. The whole point of improvement is you have to answer the question, is this change an improvement?
And if you can’t answer that because you didn’t write a measurement plan, you just wasted everybody’s time. So that’s where I’ve landed. And I said this in my presentation, “This is not the school of anyone but Juliette Price”: I have seen both models, too strict and too lackadaisical, and I’ve landed at, this is where I think most teams shine, and some people need a little adjustment to either side, and that’s fine. My goal in what we’re doing at NASH is to provide an on-ramp. I can always layer on. This year, we are offering what we call mastery level. It’s a different track in the NIC, where you’re in the NIC, you’re testing, but you’re also spending time with me learning advanced tools, advanced measurement options, we’re getting into the statistics, we’re getting into that… but not everybody can come in at that level. And so can we build an on-ramp? And then as people express interest and want to get to mastery, we’ll get there.
Alec Patton:
Yeah. One thing I think we’ve talked about from various angles is that when you’re starting out, some people will find the change package really helpful and reassuring, some people will want to reject it, some people will take to the structures, some people will find them off-putting. What’s the thing that in that first sandbox session, that no matter what, everybody should be walking away with?
Juliette Price:
Everything… Okay. Yeah. I thought you were going to ask a different question, was like, “What wins people over?” So I’ll answer that question.
Alec Patton:
That too, it’s both. Yeah.
Juliette Price:
The thing that wins everyone over is, again, we try to infuse joy and play. I learned that from my early childhood folks: play is the focus of early childhood, that’s how you build those skills, and then we just stop. Adults don’t play enough, and I love that. There are two things that we always do at the workshop. We will play the Mr. Potato Head game, which, if you don’t know it, comes from David Williams, who is here today at the summit. You can just Google “David Williams Mr. Potato Head game.” It’s an amazing tool to learn testing, and it’s a way to engage adults in learning why testing is important while getting them out of their own sector. We’re not going to build a test about transfer students; we’re going to take Mr. Potato Head, and the goal is, in the least amount of time possible and the most accurate configuration, to put him together.
And how do you go through that? There’s just something about Mr. Potato Head. Between David and a couple of us improvement coaches, we’ve probably bought the most Potato Heads of anyone, and Hasbro must just be wondering what’s going on. But it’s a way to engage with people in a playful way, and they start understanding, “Oh, this is why Juliette wants me to put a hypothesis down,” because you’re right, if I just do testing and I don’t come back to why I was doing it, I’m not learning. And so it builds that muscle. The other game that we play is called 246. That’s another one easily available online for people who are looking for improvement games; it’s another game about testing that just teaches a lot. So we really focus… and I will say, I’ve seen the crustiest people come to these, and they love it. They open up. Play just opens up their mind in a different way, and I think that allows them to clue in and engage in a different way.
But the thing I would say that people walk away with: at the end of our workshops, we do a single-word checkout, where we ask, “How are you feeling?” And the two most common words that people check out with are “exhausted” and “hopeful.” “This has been so much work in two days, a very short time period, but I feel so hopeful.” Because I think one of the things, at least in higher ed, is we’ve sucked a lot of the hope out, we’ve sucked a lot of the decision-making away from individual practitioners. So unless you are the AVP, you’re not really allowed to make decisions. And I think some of this is just about telling people, “You have the power to do something differently. Start small. Don’t set the university on fire, but just do something.” And I think people don’t hear that. Just do something. Just try something different. See what happens. And I think that sense of hope brings people through the harder moments.
Alec Patton:
Yeah. I got one final question for you. You touched on sustainability and policy and then moved off it, I want to move back onto it.
Juliette Price:
Yeah. Sure, sure.
Alec Patton:
I think that’s the great fear, is that you do this, and you set things up, and then it just goes?
Juliette Price:
Yes. Yes. I think that is something that our field, collectively, as improvement, has not spent enough time thinking about. One of the biggest differences between healthcare and education, and I say this to my teams all the time, is in healthcare, if I run a session and you come up with an action plan and you go back to your hospital or clinical setting, when something goes wrong, there is a unit at a hospital or a clinical setting called the quality unit. There is a chief quality officer in every clinical setting in America, and that person is usually a very advanced quality improvement practitioner, and there’s probably three or four of them depending on how big the setting is, or sometimes just one, who can help you. You bang your head up against the wall, just go talk to them. In education, we don’t have that. And I am very concerned about that, because… and this is why working through systems is our hypothesis: it’s probably unlikely that every campus in America, in five years, will have a quality department.
Could it be potentially possible that every system office has a quality department? In an era of not a lot of resources, these are public universities, state and federal funding is getting cut, it probably doesn’t make a lot of sense to fund these centers at colleges individually… so can we fund it at a system level? Can we have a team? While the campus teams are learning and running tests, they can come back and get some senior coaching. Right now in our model, we provide that coaching. And if you ask anyone in the model, the most valuable thing is the coaching. That is what they want, that’s what they want more of. They’re coached at the workshops, they’re coached in between, and they also know that they can just call or text me and get answers. It’s that just-in-time coaching that I have seen lead to jumps in capacity.
So one of the things we are taking on, and I’ll just be honest, we’re not sure it’s going to work, but we need to make sure there is sustainability. I love all my teams, I love my team leads, I love my system leads, but they could all take other jobs, and what will happen to the work? And so to me, I look at healthcare and I say, “Aha. There’s capacity at every clinical site, we’ve got to replicate that.” There’s also credentialing; we were talking about this with Lloyd Provost. When you are hired at a clinical site, they will ask you what kind of continuous improvement training you’ve had, if any. Nurses and physicians get it through their professional programming, others don’t. And so it’s like, “Okay, not a problem, we’ll put you through our own. We’ll get you into that four-week online class,” whatever. We’ve got to start doing that in higher education.
As you hire people: “What’s your exposure to this?” It’s okay if it’s zero, that just lets me know, “This needs to be part of your onboarding.” The next thing I’ll say is accreditation. Continuous improvement in healthcare is tied to accreditation. We need to start having a better conversation in higher education. Institutions get accredited, so how do we align that more, and how do we make that part of accreditation more robust? The last thing I’ll say is incentive structures. Part of the reason this matters so much in healthcare, and I’ll just take sepsis… or readmission rate, let’s take that. A readmission is: you came into the hospital, they fixed you up, sent you home, and you’re back within seven days. That’s a national measure that we all measure ourselves on, and CMS at the federal level will institute a fine at the hospital if your readmission rate is too high.
That is one of many financial incentives that the healthcare industry reacts to. I always say, in higher education, you get paid for getting a student in the door. Imagine if you got $0 when the student walked in the door and all of their tuition funds only hit the bank account of the college when they graduated. Wouldn’t we have a different system? That’s obviously radical, but how do we start thinking about getting ourselves there? How do we start looking at models where you’re not getting that financial incentive to just intake, intake, intake, you’re getting incentives to retain, to graduate, and do all the things you need? So I like to look at healthcare alongside education. It’s not an equal comparison, like, “Oh, we’ll just pull stuff over,” but I do think that there’s a lot of inspiration we can find.
Alec Patton:
Yeah. All right. I’ve got a final bonus question here; this just came up for me, and it may just be too big. When you mentioned the quality departments: on the one hand, there are lots of different ideas about how healthcare should happen, but we’re all fairly clear on the characteristics of a sick person and a healthy person, and on what we want to get to, in a way that we aren’t as clear about an uneducated and an educated person. I think a lot of people who are listening to this would be with you up to the point where you say, “Quality,” and they’d be like, “Well, wait a minute, quality according to who?” How do you think about that?
Juliette Price:
No, great question. We closed a restaurant last night talking about this exact question, because it is so big. Two of the other structures I point to oftentimes in healthcare are, one, we have a unified measure set. These are called HEDIS measures. HEDIS measures are a set of measures that have operational definitions in every clinical setting… everyone knows them, because this is how the whole system runs. And when I say system, I mean health insurance gets measured and paid based on how they score against these measures. So that drives the health insurer to push on the provider: “Hey, provider A, B, C, D, last year your readmission rate was really high. You’re bringing down our average. What can I do? What happened? And in the worst cases, we’re going to penalize you.” But it creates this very clear narrative where everyone’s on the same page. I advise some health startups, and they’re like, “Well, this delivers an outcome over…” And I go, “If it doesn’t map to a HEDIS measure, I’m not interested,” because that’s how the industry runs.
We are very clear that bringing down A1C rates in diabetics is what we’re here to do. If you’ve got diabetes, that’s what we’re worried about. And by the way, that cut point is very clear. High blood pressure is a number on a scale; you don’t have to say, “Well, I’m not sure that’s high blood pressure.” It’s just there. We don’t argue about that as much. Now, there are movements to improve the measures, and I’m not downplaying that, but today, you can just get started. You can just say, “Okay, these are the measures everyone cares about. We’re going to work on these.” So there’s that. I think in higher education and in K-12, we’ve got to get better at that. And one of the things that we’re going to do at NASH is put out what we’re going to call NASH measure sets.
We’re going to start with transfer, because what we’ve learned in all of our testing is it’s useless to just look at retention rate, because that’s a once-a-year lagging indicator. But I do really care about, halfway through the semester, what’s the drop, withdrawal, fail rate in every single class? Because that’s going to make me go, “Why has Professor Juliette got all these drops and fails? I’m going to go over there and see what’s not right. Bright spots: I’m going to go see what’s going on over here, because this person’s got a very low rate.” So we need those measure sets. And so we’re going to take a lot of the work that we’ve done in the NIC. We have a set of measures, they’re probably not perfect, some of them are probably wrong, but we’re going to start putting them out to the field and saying, “Hey, this is what we think works. Will you be brave enough to look at your numbers and tell us how you’re doing?”
Lloyd Provost shared this yesterday: when he was working, all those years ago, on primary care access, what we now call first available appointment, he had pulled together a NIC. This is in the ’90s. Most everybody brought their data: “Well, my first appointment is three months out.” Then they talked to some other people: “Well, I’ve got one in 10 days.” Oh, wow, that’s an improvement. Everyone said, “Well, let’s get to 10 days.” And then one guy in the audience raised his hand and said, “Mine is zero. I have what is called open scheduling. I always have same-day appointments for my patients.” The point of improvement, and of convening for improvement, is to highlight variation. Imagine if just that one person was out there with three months; they would’ve said, “Well, this is just how it is. This is just how the industry does it.”
But because you have that other person who said 10 days, and then that one guy who said zero, that accelerates a field to get to change faster. But you can only do that if you’re clear on the outcome, that measure of care delivery, next available appointment, as we now call it; that’s the measure that you’re looking at. So we’re going to go bold at NASH. We’re going to release these measures, probably wrong, and we’re looking forward to it, we’re looking forward to the debate, and we’re looking forward to seeing who’s got better results. One of our systems, again, Texas A&M, they were working on the number of days between an admission application being submitted and a response back to the student. Some of these campuses were at six, nine weeks. Okay. They worked on improvement, they got down to three weeks.
Huge improvement, half the time, let’s celebrate. Then we added another campus to that team: 72 hours. All right, we’ve got some work on our hands. So again, it’s all about bringing more people in to say, “What are we testing again?” The measures are super important. The other thing I would say about the healthcare piece of this is there is a culture of this, but this is where I remain very optimistic. Again, back to Lloyd Provost’s talk yesterday: he said it takes 25 years to build a movement. Okay. So where are we now? We’re 10 years in, we’re 12… however you want to measure this, we still have a ways to go, but I’m heavily optimistic that we can build that same culture in education.
Alec Patton:
Yeah. And I think a really important point here is that when you talk about education, people think about what the students are discussing in class. And yeah, that can be contentious, there’s a lot of different… but whether kids can successfully transfer from a two-year to a four-year college, that is not controversial.
Juliette Price:
No, no. I think that… this is hard to say in this moment, but I actually think most things are not controversial. Who’s going to say, “I don’t think third grade readers should read”? You’ve probably got a couple on the crazy end, but most people are completely convinced that’s a benefit to our society. And I think we have more in common than not, around all of these topics. But again, this is what makes me come back to the measure set: when the measure set is set, when it’s built from the industry, you don’t have to have all these discussions. Mostly, we’re just heads down, working on improvement.
Alec Patton:
Awesome. I think that’s a perfect spot to end it. Juliette, thank you so much.
Juliette Price:
Thank you. Very excited to be here.
Alec Patton:
High Tech High Unboxed is hosted and edited by me, Alec Patton, with additional editing by Katie McMurrin. Huge thanks to Juliette Price for this conversation. Check out the show notes to find out more about NASH and the transfer success NIC and more about continuous improvement in general, including David Williams’ Mr. Potato Head game. Thanks for listening.