Here's an experiment for you. Take passionate experts in human resource technology. Invite cross-industry experts from inside and outside HR. Mix in what's happening in people analytics today. Give them the technology to connect, hit record on their discussions, and pour it all into a beaker. Mix thoroughly. And voila, you get the HR Data Labs podcast, where we explore the impact of data and analytics on your business. We may get passionate and even irreverent, but count on each episode challenging and enhancing your understanding of the way people data can be used to solve real world problems. Now, here's your host, David Turetsky.
David Turetsky: 0:46
Hello, and welcome to the HR Data Labs podcast. I'm your host, David Turetsky. Like always, we try and find fascinating people to talk to inside and outside the world of human resources to bring you up to speed on the latest in HR data, analytics, technology, and process. Today, like always, we have with us Dwight Brown, our co-host. Hey, Dwight, how are you?
Dwight Brown: 1:07
Hey, David, I'm good. How you doing?
David Turetsky: 1:09
Very good. Thank you, sir. And our old friend Kevin Campbell, employee experience scientist from Qualtrics. Hey, Kevin, how are you?
Kevin Campbell: 1:17
I'm doing great. Thanks for having me back on.
David Turetsky: 1:21
It is a pleasure to have you on again, sir. It was fun the first time; I think we'll probably repeat that this time. But for people who don't remember you, can you give us a little bit about your background?
Kevin Campbell: 1:34
So I'm an organizational psychologist by training, and I'm a recovering consultant. I worked as a consultant with the Gallup organization and Deloitte, and I had my own practice for a period of time. Now I've gotten more into the tech world. I worked as a people scientist for Culture Amp, and I currently serve as an employee experience scientist for Qualtrics, working on our internal advisory team, helping our customers go from where they are to where they want to be with regard to their employee experiences.
David Turetsky: 2:08
Fascinating stuff. And that's the reason why we love talking to you, because you've got an incredible background, and you have an opinion, and we like it. So Kevin, one fun thing that no one knows about you until now.
Kevin Campbell: 2:22
I already gave my one fun thing last time and I don't know if I have two!
David Turetsky: 2:28
You must! Come on.
Kevin Campbell: 2:30
You know, it's interesting, because this is something that everybody knows about me and nobody knows about me at the same time. We've only met virtually; we haven't had the pleasure of meeting in person yet. But if we had, you might be able to see that I have a scar on the back of my head. A good amount of the time, my hair covers it up. And as I've gotten older, fewer people ask about it, because as I start to lose some of that hair, it just sort of blends in with everything else back there. But when I was a kid, and people would ask me how I got the scar, I would always say it was from a surfing accident, because that sounded much cooler than the truth. Which was that my mom was a dog groomer when I was a kid, and sometimes those dogs had nasty things like ringworm.
David Turetsky: 3:16
Oh my gosh. Oh.
Kevin Campbell: 3:19
But as I've grown older and wiser, I tend to care less about what people think. So if it comes up, I just tell people the truth.
David Turetsky: 3:27
So it isn't like 666, or like a lightning bolt. So you're not Damien, and you're not Harry.
Kevin Campbell: 3:35
That would be a great story, by the way. But I might lose some friends in the process of telling it.
David Turetsky: 3:41
Yes, or you might get a new television show about it! You never know. So, one of the cool fun things we get to do, especially with people like Kevin, who are geeks like us and really think about the intersection of data and analytics in HR, is talk about how we should think about HR programs. Today, our topic is going to be program evaluation and program effectiveness. And because we're heading into what might be a recession, certainly some kind of slowdown, we're going to be talking about how lean budgets are going to be on the rise for HR, and how you can evaluate your programs and program effectiveness. So Kevin, how have we actually gotten to where we are in HR with program spend? What led us to where we are today?
Kevin Campbell: 4:39
I think the biggest thing is we have a tendency to measure the data and the metrics that are easy to collect, and those tend to be output metrics. How many people attended a program? That tends to be the be-all and end-all in many organizations, right? Rather than the outcomes: what are the changes that you see, either immediate, proximal outcomes in terms of mindset or intended behavior, or long term outcomes and impact, in changes to the perception of the workforce or real business metrics? So there tends to be a reluctance to measure the softer elements, or the harder-to-measure elements, and they're oftentimes not harder to measure, right? It just takes a little bit more thoughtfulness, and a little bit more upfront work, in terms of understanding what you even want the program to produce in the first place.
David Turetsky: 5:46
When I was part of corporate HR, and especially part of corporate compensation, when I wanted to do training classes, I would be asked how many people are going to attend. And then at the end, what they asked is how many people actually attended, and that was the ROI. It was all about, to your point, did it happen, and how many people actually took it? It had nothing to do with the effectiveness of the training itself; it was always, did you actually get bums in seats to absorb the training? And I was always wondering why we didn't look at other things, like: were performance evaluations done better? Did people feel more comfortable about their compensation packages now that their manager sat through training? Is that where you're going with this? That our measures should be better even on the front end, so we can hypothesize: we're hoping to get a return on investment in this way?
Kevin Campbell: 6:43
Absolutely. And a lot of my thinking around this comes from my background in social and government program evaluation. Because when you only measure the outputs, you could be having unintended side effects of your program that you don't even realize. A lot of work has been done in humanitarian aid, as an example. There are flaws around measuring whether or not the food or supplies were delivered, rather than measuring the true outcome, which is whether people go to bed hungry or go to bed satiated. And when you fail to measure the true outcome, and you only measure the widgets produced or the bags of rice delivered, you could be having things where there's an unequal distribution of those supplies you're offering, or it ends up benefiting some families more than others, and then those families become ostracized by their communities. And then you've created a whole other problem that you didn't even intend to create through your program. The same thing happens in the world of people and culture, all the time. You mentioned program evaluation and the degree to which people improve their appraisals, or performance ratings. That's one program that often doesn't get evaluated. Because if you ask 10 people involved in the performance appraisal or performance evaluation process what the outcome of that program or process is supposed to be, you'll often get 20 different answers, not just 10.
David Turetsky: 8:29
Usually very strong opinions on it as well. And to get back to your other analogy, those social programs usually get judged on things like fertility rates. Did you save people's lives? Did you enable people to feel comfortable enough with their lives to actually have children? We don't have such dire metrics in HR, right? On the performance evaluation side, it's: did performance evaluations actually get completed? Which is one of the craziest metrics. Of course they're gonna get completed; they have to get completed, they should get completed. Why? Because we want the manager and the employee to sit down and talk about it, and make sure that the employee understands the expectations that the manager has. We've had to get into a cadence, at least annually, where we're trying to make sure that happens, which sounds awful, right? Shouldn't that be something that's built into what a manager does? Unfortunately, we have to train it in, and measure to make sure it's actually happening. Are there other examples we have in HR where we ask, why are we measuring this? It should be happening anyways, like hygiene. I guess it's measuring hygiene. Are you really brushing your teeth at night? I have to ask my son every night.
Kevin Campbell: 9:51
I mean, you can go back to things as simple as the metrics associated with the employee lifecycle. That can go back to your employee lifecycle surveys, but even just the data that you collect along the way. Are you measuring whether or not somebody was issued a laptop? Or are you measuring the degree to which they were able to actually access all the systems on that laptop and use them effectively on day one? A lot of these things are objective metrics, and in some ways, the quote unquote objective metrics are actually less accurate than the subjective experience. To use another social program comparison: sometimes, when you're measuring things like the safety of women and children, the official crime statistics aren't necessarily going to be an accurate reflection of how people feel. A better measure is usually, do you feel safe walking around by yourself at night? Right? So to use that same analogy, rather than measuring whether or not people received a laptop, it's: do you have what you need in order to be successful in this role? That subjective measure is a more accurate reflection of what you're really trying to get at.
Dwight Brown: 11:22
Well, and we so often default to the easiest thing: how do we count the widgets? Whereas we forget a lot of those qualitative pieces you're touching on right there. How do you dig deeper to ask the questions?
Kevin Campbell: 11:36
Well, I'm gonna say it's a little bit like organizational hydraulics, or Whack-a-Mole, to use another analogy. Because, going back to the performance management piece, what works for one aspect or one outcome might not work for another outcome. If your goal is to get a really good measurement for the purposes of paying people according to their performance, and having a justification for promotion and compensation decisions, the things that work really well for that intended outcome might not be the same things that work really well for increasing people's motivation and drive to develop and improve over time. So that upfront work of saying, hey, what do we really want to accomplish here, not only dictates the design of the program, but also what defines a successful program and how you measure its success. So yeah, there's the tendency to default to easy, but there's also the tendency to just go through the motions of doing things without necessarily thinking about the why behind them.
Dwight Brown: 12:44
Yeah, it's a good point.
Like what you hear so far, make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now, back to the show.
David Turetsky: 12:59
Some of the organizations that are probably listening today have gone from yearly check-ins, or yearly performance evaluations, to weekly check-ins, where managers and employees are encouraged, and in fact prompted, to check in once a week to say: am I providing you everything you need? What's your feeling right now? What's your temperature? What's your stress level? And the manager will typically look at it; sometimes they don't. But they're supposed to have that encouragement for that weekly check-in, and to me, it's kind of going too far. But I wanted to get your opinion on it, especially when we're talking about measuring program effectiveness. Is there too much feedback? Are there too many check-ins? Or is it just all about employee-manager culture in those organizations?
Kevin Campbell: 13:53
I think it depends. And that's always the right answer, right?
David Turetsky: 13:56
It is! It's a very good failsafe. Yeah.
Kevin Campbell: 14:00
But, you know, I've been in organizations, and I've consulted with organizations, where that ongoing check-in was measured by clicking a button that indicated whether or not the check-in happened. But all that's really measuring is whether or not someone clicked the button.
Kevin Campbell: 14:19
So you can measure whether or not weekly check-ins are happening, but do that four times a year, right? The tool you're using to measure the thing doesn't necessarily have to happen at the same frequency as the thing you're measuring. And it's more important that whatever you're doing to gather that feedback feels relevant, and is in the flow of what you're doing. I use the travel analogy. Whenever I go on a business trip, which is happening less nowadays than it has at other points in my career, I'll take five or six surveys on that trip and not even think about the fact that I've taken a survey. There's usually a survey or some sort of feedback that's delivered at the airport upon arrival or check-in. I'll listen to an audiobook, and I'll usually rate the audiobook at the end. I'll have an Uber or a Lyft or a taxi ride to and from each location, and I'll usually rate that. And then I'll order Uber Eats or DoorDash or Grubhub and rate the quality of that. It doesn't feel like these things are surveys, but they're feedback mechanisms that are a very easy lift, because they're very relevant to the thing I happen to be doing at the moment. And I know that information is going to be used. I think that's the big thing. If you don't think the information is going to be used, it's going to feel a lot more taxing to provide, and it will be too much. So it really goes back to: how quickly and how meaningfully are you willing to act on this information? And what are the internal mechanisms that might make it so that people are just going through the motions, and maybe not giving you the honest, candid response when you're collecting it?
David Turetsky: 16:07
But Kevin, there's a big difference between the CSAT surveys you're getting from DoorDash, Uber, the audiobook place, and the satisfaction survey or the employee survey, which feels a lot more intrusive, because you know your boss is going to get it. And especially if you're a small group, it's attributed to you, and it affects your day to day. These other things are much more anonymous, with millions of transactions happening. You're right, they may be read, they may be looked into. But at the end of the day, those are microtransactions, whereas the one you have at work you have to live with. And I guess my question back to you is: if it's not in the manager's psyche to take feedback with the intent of learning from it and getting personalized, you know what I mean, then that feedback is going to be very different than a CSAT-type survey you're getting as a text message after you've had a transaction on the consumer side.
Kevin Campbell: 17:18
I love that question. And I want to tease out a couple of different things in it. One is the component of microtransactions, and how oftentimes the employee experience or HR world doesn't collect data on those microtransactions. Even though Gallup has amazing research that shows that 70% of the differences, or variance, in employee engagement scores, according to the way they measure employee engagement, are attributable to the frontline manager, well, that's great, but there are also a lot of things that happen in your day to day experience that don't necessarily link back to the manager, that are actually an IT issue, a legal issue, an HR issue. So all of those microtransactions that happen should have owners, and should have a component that's not just operational but also experiential. And I think you're right, the stakes for those are a lot less personal. And I think this is not a bug but a feature. I think that's a good thing, right? Because it's really just a matter of how do we improve these things so that people are more effective and can have less stress throughout their day. And then the part about the more intrusive elements around managers, I think that also goes back to this idea of program effectiveness. Employee engagement or employee experience programs are often thought of as the metrics we're using to measure the effectiveness of things, but there's also a meta-evaluation that oftentimes needs to happen, which is: to what degree are we effectively thinking about and using these metrics? Are we using them as a means of accountability, or are we using them as a means of learning? And there's some really interesting stuff out there around the different mindsets.
There's a really interesting book that came out recently by Dan Heath; he and his brother have written some really great things, but this one is called Upstream. One of the things he talks about is the way data gets used in organizations that move upstream when it comes to problem solving, rather than trying to solve the problem downstream. How can you get to the source of the problem and fix it at its root? One of the ways of doing that is to regularly think about the data as a learning opportunity rather than a monitoring and accountability mechanism. He talks about a huge change that happened in a school district that started really dialing in on its first year freshman students. Rather than reporting that information in aggregate and using it to say we're doing good or we're not doing good, after the fact, maybe even doing some finger pointing along the way, they would regularly sit down as a cross functional team and say: how is this cohort of students doing? How is David doing? How is Dwight doing? Looking at the metrics in a more ongoing, developmental way. And if we can shift our mindset in that way, and we can measure the degree to which our mindset has shifted that way, then we can get closer to a place of having programs that are really effective.
Dwight Brown: 21:16
So hard to get out of the task orientation mode that goes with those things. I mean, I personally feel it, and have throughout my career. It's like, okay, I gotta check this box, without really thinking about the behavior. And looking at a different way of doing things, like you've called out just now, it's hard. It is hard to get away from that.
David Turetsky: 21:40
It's also hard, from a qualitative perspective, to sit down and talk about how all these hires we've brought in over the last year are doing, and what the lessons learned are. Because organizations are so disparate, so decentralized in many cases, and even the people sitting in the room probably have no idea who the others are. So the qualitative part of it needs to be measured through those surveys that I'm sure you're more of an expert in than either Dwight or myself. But the thing I wanted to ask you, Kevin: is this a case where we need to start measuring, and I know you're gonna say I'm overdoing it, but we need to start measuring, like the consumer side, every one of those microtransactions? What was your experience as a candidate? What was your onboarding experience? Did you get your laptop? Did you get your laptop effectively, and get training on how to log in, or how to actually connect to the systems you need, and get the links you need with the usernames and passwords if we're not on SSO? While I would love to live in a world, in business, where we can get together and have the qualitative discussion, wouldn't an alternative be to actually just measure the crap out of our HR processes?
Kevin Campbell: 23:07
I think so. I think so. And that could be controversial to say, but I'm willing to say there are a lot of lessons that HR can learn from your customer experience department, and from your marketing department. The way those experiences are measured in the customer experience world is, in many ways, light years ahead of where we are in employee experience. It's a conversational process. I was looking at retail data for a large retailer, and when you look at their survey, it's so responsive that it doesn't feel like a survey. They have QR codes and kiosks set up everywhere, and it's very conversational. It's only one question, asking about the degree to which somebody would recommend, and then in response to that question, it has a tailored follow-up that narrows in on what did we do well, what can be done to improve. And then the text analytics within the tool picks up on keywords in that open text response and will randomly assign two or three questions as a deeper dive into the free text response. If you think about it, it's not a survey in the sense that there's nothing being pushed out to the consumer to prompt that feedback. But there are different channels and portals and listening posts available, so that if somebody wants to provide feedback in that moment, they can. If you think about feedback as a conversation, it'd be really weird, in a relationship, if there was only one person always prompting that conversation. But in the EX world, it's often just a complete push. What about all the passive listening, non-survey opportunities that are available through the things that are happening on Glassdoor and Indeed and Yelp and your internal message boards and chat logs, and conversations with internal customer care and IT and HR?
David Turetsky: 25:32
So much of people's employee experience is now being shared outside the four walls, because of the internet, because of Glassdoor, because of LinkedIn. LinkedIn probably not as much; I don't think there's as much kvetching or complaining on LinkedIn as there is on Glassdoor, because Glassdoor is much more anonymous than LinkedIn, and LinkedIn has all your peers, so they know it's you. Though even on Glassdoor, I think you can reverse engineer who it is. But that's not my point. My point is that the experiences employees share on these platforms almost never see the light of day inside the four walls, unless someone in HR looks at them on an ad hoc basis. Is there a way we could take that, measure it kind of like RSS, bring it in, do the unstructured data analysis on the text, try to parse out what's happening where, and figure out how we address it?
Kevin Campbell: 26:38
Yeah, the technology already exists to do that. It's just a matter of applying it to this use case, and of structuring it in a way that gives you deeper insight into the information you already have, and also allows you to generate new hypotheses about things you didn't even know. One of the problems with a traditional survey is that you're already defining the universe of responses and questions whenever you have a Likert-scale-type question, so those unknown unknowns aren't even in your field of awareness. But when you're looking at it from that passive, qualitative point of view, there can be concerns that you hadn't even considered asking about, and then you can feed that back into the more structured data set, and vice versa. Sometimes you'll see a weird relationship between data points and have no idea where it's coming from or why it's happening, and the passive qualitative component can really add a lot of color to that. But yeah, it already exists. And it's funny, because before the technology was here, we used to tell our customers, basically, you're doing it wrong when you collect the qualitative data.
Dwight Brown: 28:00
How'd that go over?
Kevin Campbell: 28:02
People like me, you know, the survey consultants, right? We would say, okay, well, you're not being specific enough with your open ended question. Or maybe you should use this open ended question only sparingly, or maybe you need to ask more scaled questions. But with the technology, you can be really bad at survey design, in some instances, and still collect really rich data with the sentiment analysis and text analytics.
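As a rough illustration of the lexicon-style text analytics discussed here, the sketch below scores free-text responses and surfaces recurring themes. Everything in it is invented for the example: the word lists, the sample reviews, and the function names are hypothetical placeholders, and a real pipeline would use a trained sentiment model and a proper topic engine rather than hand-picked keywords.

```python
from collections import Counter

# Tiny illustrative lexicons -- hypothetical placeholders, not a real model.
POSITIVE = {"great", "helpful", "supportive", "flexible"}
NEGATIVE = {"slow", "broken", "confusing", "burnout"}

def score_review(text):
    """Crude lexicon-based sentiment: (# positive tokens) - (# negative tokens)."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in tokens) - sum(w in NEGATIVE for w in tokens)

def top_themes(reviews, k=3):
    """Most frequent non-trivial words across reviews, as rough 'themes'."""
    stop = {"the", "a", "an", "is", "are", "was", "and", "to", "of", "my", "it"}
    words = Counter(
        w.strip(".,!?").lower()
        for r in reviews
        for w in r.split()
        if w.strip(".,!?").lower() not in stop and len(w) > 3
    )
    return [w for w, _ in words.most_common(k)]
```

Run over a feed of Glassdoor-style snippets, `score_review` gives a per-response sentiment signal and `top_themes` flags words like "onboarding" recurring across complaints, which is the "add color to structured data" idea in miniature.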
David Turetsky: 28:28
I think we're getting more mature in the way we consume data now. That's what data science has kind of grown up to be, which is: don't just kick out what we did before, where, to your point, it had to be structured or it wasn't good enough. Now we're taking so much unstructured data and making decisions on it that we can open the floodgates, use rigor, build our hypotheses, take the unstructured and structured data and attribute it as such, and put our assumptions in. And we're making the right decisions, or at least documenting how we're getting to our decisions based on that data. Right, Kevin? I mean, you can be sloppier, but still get better answers.
Kevin Campbell: 29:19
Coming back full circle to how you use the data and what decisions you make: I think too often we fall into that cycle of, you ask a question of your data, you have a structured data collection process to answer that question, you learn something, and then you act on it. And that's the normal cycle. But sometimes you can also create a program, create an action, and then evaluate not only the intended consequences but also the unintended consequences of that action or program, and then use that to create listening and understanding. I love to give the example of, you find out that employee recognition has a relationship with customer experience, or you find out that employee well being has a really strong relationship with customers' likelihood to recommend and refer other people. Maybe what you do with that information is create a whole new well being initiative across your whole organization. I think that might be a little too much, because you're taking one piece of information and making a really big bet on it. And that's another thing, going back to my social science background: no matter how structured or rigorous your data science is, in most instances it's going to be cross sectional. Even if you're using really sophisticated analyses, like hierarchical linear modeling, and regression, and all of that great stuff, it's still cross sectional, correlational data. So you can't make a causal claim about what you're seeing within that data, unless you have a randomized trial, an A/B test. So maybe the answer for a lot of this stuff isn't launch an enterprise wide program; it's how do we launch a really smart pilot with an A/B test, and evaluate the effectiveness of that, so that we have something that's really good to go for the next iteration.
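A pilot-versus-control comparison of the kind Kevin suggests can be evaluated with a standard two-proportion z-test. This is a generic statistical sketch, not anything tied to Qualtrics or the episode; the counts below (how many people in each arm said the program helped them) are made-up illustration values.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions,
    e.g. pilot group vs. control group on a yes/no outcome."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # std. error of difference
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))                   # two-sided normal p-value
    return z, p_value

# Hypothetical pilot: 78 of 120 pilot participants vs. 66 of 130 controls
# said the program helped them.
z, p = two_proportion_ztest(78, 120, 66, 130)
```

With these invented numbers the difference clears the usual 5% threshold; the point is that a cheap, pre-registered comparison like this, rather than an enterprise-wide rollout, is what supports a causal claim.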
David Turetsky: 31:37
And again, coming full circle to our topic today, which was: we're probably heading into a recession, so how do we justify our programs? I think what you just said is the key. If you want to support your hypothesis with data, run a good A/B test, run a good pilot, and that pilot shows your organization which way to go. It happened before; we used to do this all the time. Then, while we were in a lot of growth mode, we kind of got away from it. So we're getting back to the stage where we have to prove ourselves, right?
Kevin Campbell: 32:21
Absolutely. And I think it's healthy. The cells in our body go through a process called apoptosis, where, if we're healthy and we go through some period of adversity, the cells that could become cancerous die off. And that helps us in the long run, because it can stop us from getting things like cancer. And I think we have to cut out the cancerous programs in organizations. They're well intended; they don't know that they're cancerous. But there's sometimes a lot of fat, a lot of extra things being done. And I'm of the opinion that it's better for program owners to start working on this stuff now, before the mandate comes to you, so that you can proactively say: where can we be leaner? Where can we be more effective? Where can we be more efficient? There was a great natural A/B test that happened for a lot of organizations that had certain employee populations go into and out of lockdown, and into and out of the office, and had to experiment with asynchronous learning and virtual learning and in person learning. Those are natural experiments, natural A/B tests, that you can use to find out: do we have to spend all this money on live presenters or travel? And if it is less effective to do it one way versus the other, how much less effective is it, and for what outcomes, so that you can make more informed decisions. Now, the big caveat is that in most budgets, if you don't spend your budget, you don't get that same budget next year. But if and when the time comes that you have to make cuts, knowing where to make those cuts surgically, rather than with a hacksaw, will be really, really helpful.
David Turetsky: 34:22
I don't know if I could possibly add to that. I just want to say one thing. In an era where layoffs are the way most organizations tend to trim, it's better to have your program evaluated for a cut than to have you evaluated for a cut. Proactively doing this work could potentially save not only your job, but those of your colleagues as well. So it's probably better to do this proactively than to wait for that knock on the door Kevin was talking about. Kevin, it was a fascinating conversation. Wonderful conversation. Very timely conversation. And we really appreciate you being here. Dwight, thank you for being here as well.
Dwight Brown: 35:17
Thank you. Appreciate it. And thanks for being with us today, Kevin.
Kevin Campbell: 35:20
Glad to be here.
David Turetsky: 35:21
And Kevin, we're obviously going to have to have you back for yet another scintillating wonderful conversation again.
Kevin Campbell: 35:27
Gosh, that was sort of doom and gloom this time!
David Turetsky: 35:30
That's the times, dude. That's just the times. And again, it's better to talk about reevaluating your programs than having the programs be so expensive that they evaluate you for being the sponsor of the programs.
David Turetsky: 35:49
That's called throwing the baby out with the bathwater.
Kevin Campbell: 35:52
Yeah, and also not making dumb cuts, either, right? Whether it's a person or a program, you want to cut the fat out. You don't want to cut the bone out, and you don't want to just cut the meat out.
David Turetsky: 36:03
Exactly. Cool. Well said, sir. Thank you very much. Take care, and stay safe.
That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.