In this fireside chat, hosted by Kuldeep Kelkar, Senior Partner, UXReactor, Kristin Sjo-Gaber, the leader of the Design Research and Strategy team at athenahealth, discusses the democratization of research and its impact on organizations. She highlights the benefits of involving designers in research and the challenges of maintaining quality and volume. Kristin emphasizes the importance of aligning research with business goals and using metrics like the Standard Feature Perception Instrument (SFPI) to measure impact. She recommends hiring research coordinators and involving researchers in early discussions to ensure strategic decision-making. Kristin also provides tips for new leaders in the field of research and design.

——————— ——————— ———————

Full Transcript

Kuldeep Kelkar 

Today we have a special guest, Kristin Sjo-Gaber. Welcome, Kristin.

Kristin Sjo-Gaber

Thanks, it’s great to be here.

Kuldeep Kelkar 

Well, thank you for being on the show. Why don’t we get started? Tell us a little bit about yourself. Introduce yourself for the audience, please.

Kristin Sjo-Gaber

Sure, yeah. So I’m Kristin Sjo-Gaber. I currently lead the Design Research and Strategy team at athenahealth, which is a health tech company. We design all the different kinds of software that you can expect to find in your doctor’s office. People who are checking you in at the front, all the electronic medical records your doctor keeps, the billing software, anything related to sort of the successful running of a practice is really what our company focuses on. And what else can I tell you?

Kuldeep Kelkar

Well, tell us a little bit about your role. What do you do within the world of user experience?

Kristin Sjo-Gaber

Sure. So my background actually goes back to product design. That’s really where I started my career. I went to school for industrial design. When I graduated, I joined a consulting firm called Radius Product Development, and there I was a product designer. So I worked on paper shredders. I designed a military-grade phone. I did mascara wands. I just did all kinds of different products. And the really interesting part to me in the work was that upfront research, design strategy, kind of what problems are we really solving. So I had the opportunity to go from there to join a company called Continuum, which is an innovation consultancy in Boston. And I spent about 9-10 years there. And that was a really formative part of my career. I was traveling all over the world. I was on a team with design researchers, design strategists, business backgrounds, design backgrounds. We worked across all kinds of industries. It was just a really wonderful experience for me to understand how to think about design strategy and how to approach problem solving from a business perspective. And then I made the shift over to Athena about seven years ago, after the birth of my son; it’ll be seven years this year. I thought consulting life was too crazy with a little one running around. So I decided to come over to Athena. And at that time, I joined what was then a strategic design group. So we were a small team, operating in a similar model to what I was doing in consulting. We were really just taking on these ambiguous projects. And over time, my role has really evolved, where I’m doing much more research enablement today in addition to those strategic projects. So today I lead the Design Research and Strategy team. We are a small centralized team within the R&D org. And we really have two responsibilities.
We both lead strategic research projects, and we also enable research for designers and product managers who are leading their own research. And that enablement comes in very many forms. Happy to talk about any of them, all of them, but that’s kind of the role. 

Kuldeep Kelkar

Yeah, this is a very, very interesting journey from consulting to in-house and now almost seven years. And so your current role, as you described, is executing and delivering high quality research, as well as enabling the organization to conduct research. So this is talked about a lot in the industry around democratization of research or scaling research. So tell us a little bit about why this role? How did this role come to be? How long have you been doing this? And then we’ll certainly get into the pros, cons, what works, what doesn’t work. But how did you get started or how did the organization get started into this?

Kristin Sjo-Gaber

Yeah, well, to be honest with you, the sort of decision to shift into a more democratized, like decentralized version of research happened even before I joined. But in many ways, our team is a response to the way that the company has decided to integrate research. And so it used to be that there were dedicated researchers whose responsibility was to conduct the research and the designers were really squarely focused on design. Then there was a decision and like I said, I wasn’t there for sort of the ins and outs of it, but to really generalize the designer’s role so that they could do the research for their own work. And I have to say, I know there’s a lot of sort of debates about how this goes, but I really do think that there is a real benefit to having continuity from the person who is doing the research and learning about the topic to being the same person that’s going to execute on solving the problem, and just your ability to internalize what you are learning and then apply it to your own designs, I think is really important. So that was the decision. So the way that our team functions is, when you see designers come in, because they are generalists, there is a wide range of experience with research. Some people have a lot of research experience, some people are much newer to research. And so our team is really there to do things like, you know, some of the sort of obvious stuff, like provide coaching, have templates, provide materials about best practices. But we’re also doing things about just the throughput and the volume of research that’s going out because, you know, of course, there’s like tons of research happening all the time. So we’re doing things to really streamline those processes, you know, like setting up survey templates, accelerating the way data is connected to surveys, thinking even about sort of the quality of our surveys and making sure that they’re gathering the data that teams really need to make decisions. 
So there’s just lots of ways that we’re plugging in to make all the research that’s happening continue to be efficient, to be high quality, and to have an impact.

Kuldeep Kelkar

Yeah, so that’s interesting. I mean, I have plenty of questions, having been involved with lots of democratization efforts across the industry in my consulting role. So that’s great to hear. Tell us a little bit more. As an example, you mentioned surveys and templates. But to back up a little bit: some designers have a research background, as you said, and some don’t. How much training is required? Assuming that training is required, what’s the cadence, what’s the frequency? How do you set this all up for the larger organization? And are there enough resources to be able to support a wider organization?

Kristin Sjo-Gaber

Yeah. So actually, when we talk about the decentralization of research, part of what is decentralized is the individual upskilling of each of those designers. So we play a part of that role. We are there to help; for example, we just started to really expand our upskilling programming. So more talks we’re giving on research, more live project coaching we’re doing on research. But we are a very tiny team, so we cannot take on the responsibility to upskill every single designer on research. And that’s really where their manager comes in. Their managers are also people who have experience doing research. And so a lot of that on-the-ground training is the responsibility of their manager, and we are there to supplement. So there isn’t a single cadence for the way that designers are trained up on research at Athena. So yeah, I think that’s just part of the decentralized model.

Kuldeep Kelkar

Yeah, I know a lot has been written, a lot has been talked about, about democratization, but there aren’t always those many actual use cases of this working in the industry that people have talked about. So in your experience, in the last, since you’ve been doing this for a few years now, what works? What are some of the challenges? Let’s start with what works. When this really works, how does it benefit the organization?

Kristin Sjo-Gaber

Well, I mean, I think that my perspective on research is that I tend to take much more of an applied business perspective on research as opposed to sort of a more academic approach to research. And so to me, the most important part of getting designers to do their own research is the impact of doing it. And so I am less concerned if it is sort of a perfect textbook example of what research should look like, because learning is learning. Learning opportunities are really important. And if that designer is increasing their understanding of the problems that they’re going to solve or getting really great engagement on some of the ideas they’re coming up with and really good feedback, and that’s helping them to make better decisions about the products that we’re putting out, I think that’s a success. Do I think it’s a perfect example of research every time? No. But I am much more inclined to say I want them to be impacted by the research that they’re doing in a positive way. And in that respect, yes, I think we are seeing the results of, you know, the teams being able to kind of learn for themselves and gather that information.

Kuldeep Kelkar

Got it. So and then on the challenges side, so I’m sure there are some challenges but what are some of those?

Kristin Sjo-Gaber

Well, I mean, I just sort of alluded to this, you know, we do see times when designers are newer to research or less experienced and the research is not conducted perfectly. And we see it. Our team sees a lot of that. We have a more macro view of the research that’s happening. And so, you know, it’s disappointing to see research go out that’s not at the level we would like to see. And so we’re doing a lot of quality reviews. One of the things that we really started to kick up in the last quarter of last year, and we’re continuing to do this year, are more reviews before research goes out. So reviews of discussion guides, reviews of surveys. We’re looking through and saying, hey, you know, you might want to go in and reword this question, or what is the goal of this question? Do you need to ask this question? So, you know, we’re doing a lot to raise the quality, but I think that is for sure a challenge. And, you know, I think another challenge is regarding the volume of research. When you don’t have it centralized, it’s a little bit harder to contain how much is going out. And so, you know, we’re trying to really get a grasp on making sure that the research that we’re conducting is a good use of time. It’s a good use of our R&D time, where we’re getting high quality answers to make decisions with, and it’s a good use of our clients’ time, because they are a finite resource. And so making sure that we’re really going to them with valuable questions is something that our team does a lot of work to protect.

Kuldeep Kelkar

Yeah, yeah, yes. Absolutely. Now I’ll use this term, which I’ve heard other people use, which is guardrails. Some people don’t like this analogy, but I think at least the researchers that I’ve spoken with that are involved with the research ops function, which is what you’re describing, both design ops and research ops, some training, some templates, best practices, reviews. In general, what guardrails would you recommend organizations that are embarking on this democratization journey should look at so that high quality research comes out the other side? So any thoughts on those guardrails?

Kristin Sjo-Gaber

Well, two things come to mind. One, we’re actually working on something as part of… so we have something that we’re referring to as our research building blocks, which is a part of an upskilling program that our team is putting out. And with the building blocks, the idea is to not just share best practices, but to put those best practices in action through things like templates, you know, resources, like tactical strategies. One of the things I think that can be really, really hard to do is to go to a talk or go to a workshop. And then here’s somebody who is an expert on a topic and speaks with ease about what it looks like, what good looks like. And then you turn around and you go back to your own desk and you go to try to do it yourself. And it’s really hard, like, where do I go from here? Why am I not getting the same results? And my take is that there’s a delta between speaking about a topic from a high perspective and then literally tactically doing it. And so the discovery building blocks, the idea is that we’re trying to bridge that gap. We’re saying, hey, this is what good looks like. And also here are some very concrete things you can do to push your own thinking forward. So I don’t know if those are guardrails, but that’s one way that we’re trying to sort of do that work.

Kuldeep Kelkar 

No, no, that’s a great tactic, which is the whole “learn by doing”. So you can arm the designers with tools and techniques and training, but be available when they have a real-life use case and a real need. That is when the rubber meets the road. With all that training, all those templates, there’s always customization needed. And I assume that the designers can lean on the research organization for those reviews, for helping them improve the research quality before research gets executed.

Kristin Sjo-Gaber

Yeah, and I think one of the things with that is that it’s not just… I see a lot of templates that are focused on like the final execution of the work, like here’s your research plan, like plug everything in, but where I think it’s actually hard is the thinking parts, like thinking through all the elements of your plan is where it gets hard. How big should the sample be? Which segments should we go after? What are our research goals? You know, what is the business need driving this work? So like a lot of the sort of preliminary thinking you have to do to get your work into a research plan. That’s where we’re trying to create more tools for thinking and tools for working with your team to align on. That’s really the focus of those research building blocks. The other guardrail that I was thinking of is something that my team is actually working on right now, which is around quality checklists. So a survey quality checklist, a moderation guide quality checklist. Like, you know, how can we dimensionalize what good looks like? So that way, when we go to a designer and we’re talking to them and we’re saying, hey, you know, we really want to get the quality of your survey up to a higher degree. We need a way to break it down so that it’s easier to see where to focus. And it’s also easier for them to see what we’re reviewing. Because right now, if we say, hey, there’s more work to be done here to get it to a higher level of quality. It’s sort of ambiguous about where we may suggest that they focus. So if we have these checklists, we can now say with much more sort of direct communication, I would focus here and here, go back in and take another look. And we’re all looking at the same things and have an alignment on what those quality markers are.

Kuldeep Kelkar 

Yeah, absolutely makes sense to me, which brings me to this larger topic. And it’s a two-fold question around impact. There’s a lot of talk, and a lot of questions that researchers have, even designers, around demonstrating impact, or their ability to demonstrate impact given the current conditions in the market. Just in general, everyone that does high quality work wants to demonstrate that impact, and researchers are no different. So the two parts of the question are: how does a small centralized design and research operations team demonstrate impact? So that’s essentially for you and your team. But then, how do designers that are working on a whole lot of different things outside of research demonstrate their impact through research as well? So let’s first start with your team, given that it’s a small team that’s supporting a broad range of designers in a decentralized fashion. How do you measure impact? How do you demonstrate impact for the team and the world?

Kristin Sjo-Gaber

Well, I mean, I think that’s a big question and I can answer it a number of different ways. The first thing I’ll say is, I think that we determine our impact based on the business needs, right? So, many times whenever we’re taking on strategic research efforts and we’re partnering with a team in the org, a big part of our initial discussions is around their OKRs or it’s around their business goals, because we’re going to align our research work to help them to achieve those goals. And then those goals become the goals for the research. And so I can’t really predetermine what those will be until we’re working with our partners. But this is an area where I think

For researchers, it’s really, really important to be savvy about the business, because we don’t want to just randomly target research, we want the research to aim directly at addressing open questions that we need to solve as a business. 

And so from that perspective, that’s a really big part of how I think about impact for the team. Another thing that we think about is something that we refer to as our SFPI survey. SFPI stands for Standard Feature Perception Instrument. It’s a mouthful; in some ways, I think we could probably simplify the name. But really, if you’re familiar with the Jobs To Be Done philosophy, this is a job-to-be-done satisfaction survey, very short. And we can ask about any feature in our product. We can say to the user, how satisfied are you with your ability to accomplish a particular job in our product?

And the way that we use SFPIs at Athena is that we do basically a baseline, where we’ll say, before we introduce an enhancement, what is your satisfaction with your ability to achieve that job? And then after we introduce the enhancement: now how satisfied are you? And we’re really looking for that rate of improvement from before to after. And we run that “after” multiple times. We can run it in alpha, beta, when it’s released, in GA. And we’re really trying to look for that arc. And the thing that I think is really powerful about the SFPI is our ability to quantify the user experience by focusing on satisfaction with their ability to do that job. So this is a measure we really like. It’s something that’s been widely adopted by the product teams. It’s something that we use when we’re having discussions around release readiness: are we driving the improvements that we want to see? So it’s a really simple survey, but in the way that it’s used at Athena, it has become an important part of making sure that we’re delivering the improvements to our users that they expect.
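The before/after comparison Kristin describes can be sketched in a few lines of code. This is purely an illustration, not athenahealth’s actual instrument: the scores, scale, and function names below are assumptions, and a real SFPI analysis would track individual respondents and checkpoints over time.

```python
# Hypothetical sketch of a baseline vs. post-enhancement satisfaction
# comparison, in the spirit of the SFPI approach described above.
# Scores are illustrative 1-5 satisfaction ratings.

def mean(scores):
    return sum(scores) / len(scores)

def sfpi_improvement(baseline, after):
    """Percent change in mean satisfaction from baseline to a later checkpoint."""
    b, a = mean(baseline), mean(after)
    return (a - b) / b * 100

baseline = [2, 3, 3, 2, 3]          # before the enhancement ships
checkpoints = {                      # the repeated "after" measurements
    "alpha": [3, 3, 4, 3, 3],
    "beta":  [4, 3, 4, 4, 3],
    "ga":    [4, 4, 5, 4, 4],
}

# The "arc" is the improvement at each stage relative to the same baseline.
for stage, scores in checkpoints.items():
    print(f"{stage}: {sfpi_improvement(baseline, scores):+.1f}%")
```

The design choice worth noting is that every checkpoint is compared against the same pre-enhancement baseline, so the trajectory across alpha, beta, and GA is directly comparable.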

Kuldeep Kelkar

Yeah, I mean, I’ve heard a lot of different examples. This is absolutely an amazing one because it quantifies the satisfaction with the job to be done. And there are, as you very well know, multiple KPIs and metrics. But what I loved about your answer is that, even before you get to the perception of satisfaction with the job, you ask: what is the business KPI? What is this project or engagement or business unit trying to achieve? And then you connect the dots from there to the research world. Any thoughts around demonstrating impact through just volume of research or velocity of research?

Kristin Sjo-Gaber

Yeah, yeah, no, that’s a great question. So we definitely do a lot to monitor volume. I think volume is funny. On the one hand, we want to see a volume of research, right? Because we want to see that our teams are, you know, making sure they have a very nuanced, sophisticated understanding of the problems we’re solving. On the flip side, we don’t want a high volume of research that’s not at the quality we want to see. And we also don’t want to see a high volume of research if it is not necessary. And the reason I say that is, to go back to what I was saying earlier, we have a finite resource in our clients. And so we really do want to make sure we’re asking smart, thoughtful questions that really do move our understanding forward.

And we have a lot of expertise in house, right? Like Athena has people who have been hired from clinical positions. We have people, product managers, designers. If you have been in the industry long enough, you have built up a lot of resident knowledge to just really inform your gut instinct. Now, gut instinct has a lot of blind spots. When you start to overly rely on your instinct, that’s where you might start to make assumptions that may not be true. We wanna test those. So I think when we’re looking at volume, it’s not just a question of how much, it’s also a question of the quality and the sort of expertise that we already have. And it makes it really complicated, it makes it a complicated metric to look at. Because

Good research isn’t one-dimensional. More isn’t always better.

And so I think we have more to do for our team to really kind of get a handle on monitoring sort of like those different dimensions of quality and quantity and all that. 

Kuldeep Kelkar

Understood, understood. I have an article called Five Vs to Demonstrate Impact. What you’re talking about is the volume and velocity, but connected with validity. Was this research valid? Was it required? Could we have found this answer another way? Is the sample size appropriate? The examples you gave through the quality aspect essentially fall into this validity bucket. So this is great. Let me ask you: let’s imagine an organization that is maturing in its user research practice, and they might not have a dedicated operations person at this point in time. What recommendation would you have for an organization that is in the early days of developing research operations? This might have been many years prior for you, given that you have been doing this long enough. Any recommendations for organizations that are getting started?

Kristin Sjo-Gaber

Yeah, I mean, the very first thing that comes to mind is to hire research coordinators, like hire people who are specifically dedicated to finding the right users to participate in your research. You know, it’s such a gem of our team and our ability to support research for the organization. And we consistently get good feedback from the designers and the product managers. Of course I want to see high quality. Of course I want to see, you know, like teams doing the right thing, but they have to be able to find their users easily. And that has made our ability to scale up research at Athena just so much faster. So I would definitely say, invest in people who can play that role. And I guess the other thing I would say is, you know, having researchers as much as possible in the early discussions around sort of where you’re shaping the work. Even if the researcher is the designer who’s doing the research, having a research perspective involved early when we’re having discussions around the OKRs and the business goals, as early as you can, is going to be really important because they will bring a perspective on the questions we want to be asking and answering so that we can know if we are aligning with our users in the right way or driving the type of improvements on their experience that we want to see. They’re just going to bring that perspective. So I would say just involve a researcher as early as you can in those kinds of discussions and hire yourself some research coordinators.

Kuldeep Kelkar

That’s fantastic and any recommendations or tips or tricks for designers or researchers who are new leaders or who are starting to become managers or became a manager within the last year or two. If you can remember what your journey was like in those days.

Kristin Sjo-Gaber

Yeah. Well, I mean, it’s funny because I think about it a lot, because I do have a team reporting to me. So I think about this for them. I think the most important thing is that you are not a researcher in a vacuum, or a designer conducting research in a vacuum. Come at it from a strategic perspective, right? Like come at it from the perspective of: I am trying to answer questions so that we can make smart decisions as a business. So, you know, it’s not enough to just do the research. You want to take that research and apply it.

I think the most important thing is that you are not a researcher in a vacuum or you’re not a designer conducting research in a vacuum… So from a leadership perspective or a manager perspective, really be a strategist and take that consultative approach where it’s not research for research sake, it’s research to make smart decisions.

Kuldeep Kelkar

Yeah, well said. Perfect. Thank you, Kristin. Thank you for being on the show. Always wonderful to hear from a range of people and your practical tips, your on-the-ground experiences. Very helpful. I’m sure the listeners would appreciate all that feedback. And so thank you to all the listeners. Please connect with us on all social media platforms and stay tuned. Talk to you soon. Bye.

Check out all #uxignite episodes