“Three Critical Dimensions and Seven Levels that Turn Diversity & Inclusion Evaluation into Results”
Evaluation is a task that every Diversity Practitioner will face at one time or another. Whether your role is Trainer, Consultant, Chief Diversity Officer (CDO), Council Member, ERG/BRG Leader, or something else, conducting an evaluation to assess key aspects of your Diversity and Inclusion initiatives is inevitable.
Two Definitions of Evaluation
People do not always agree on one definition of evaluation. Following are statements that reflect two different definitions:
- “Evaluation is the systematic process of collecting and analyzing data in order to determine whether and to what degree objectives have been or are being achieved.”
- “Evaluation is the systematic process of collecting and analyzing data in order to make a decision.”
Notice that the first ten words in each definition are the same. However, the reasons (the “Why”) for collecting and analyzing the data reflect a notable difference in the philosophies behind each definition. The first reflects a philosophy that, as an evaluator, you are interested in knowing only whether something worked, whether it was effective in doing what it was supposed to do. The second reflects the philosophy that evaluation makes claims about the value of something in relation to the overall operation of a Diversity program, project, or event. Many experts agree that an evaluation should not only assess program results but also identify ways to improve the program being evaluated. A Diversity program or initiative may be effective but of limited value to the client or sponsor. You can imagine, however, using an evaluation to make a decision (the second definition) even when a program has reached its objectives (the first definition). For nonprofits, for example, Federal grants are based on the first definition, that is, whether the program has achieved its objectives, but the harder decision to downsize or change the program may be a consequence of the second definition of evaluation.
For some, endorsing Diversity Evaluation is a lot like endorsing regular visits to the dentist. People are quick to endorse both activities, but when it comes to doing either one, many Diversity Practitioners are very uncomfortable. In this blog, I want to reduce that discomfort by demystifying some important aspects of designing and conducting a Diversity ROI evaluation and by introducing you to a few Diversity metrics processes that matter in evaluation design.
Both for-profit and nonprofit organizations possess data (and information) that could help to evaluate a Diversity program or project. These data are the one thing that all evaluations have in common, regardless of the particular definition of evaluation you embrace: evaluation is the systematic process of collecting data that help identify the strengths and weaknesses of a program or project. The data may be as simple as records of attendance at training sessions or as complex as test scores showing the impact of a new educational program on students’ knowledge across an entire school system.
Evaluating Efficiency, Effectiveness, and Impact
We can define Diversity evaluation even more closely as a process. The process is guided by the reason for doing the evaluation in the first place. An evaluation might be a process of examining a Diversity training program, in light of values or standards, for the purpose of making certain decisions about the efficiency, effectiveness, or impact of the program. To carry out this task, you need to understand the concepts of efficiency, effectiveness, and impact. Think of these three terms as the dimensions of a program evaluation.
“Efficiency” relates to an analysis of the costs (dollars, people, time, facilities, materials, and so forth) that are expended as part of a program in comparison to either their benefits or effectiveness. How is efficiency, or the competence with which a program is carried out, measured in a program? The term itself gives clues to what this is about. Diversity Practitioners would look at the efficiency with which details are carried out in a program. Diversity programs and initiatives often begin with recruiting, gathering materials, providing for space, setting up fiscal procedures, and so forth. In other words, the relationship between the costs and end products becomes the focus of an efficiency evaluation. Although very important, these aspects of efficiency have no bearing on the program’s effectiveness. If the investment in the program or project exceeds the returns, there may be little or no efficiency.
For example, let’s consider an assembly line facility that houses a rather substantial training and staff development department. As part of this department, ten instructors are responsible for ensuring that five hundred employees are cycled through some type of Diversity training every six months, for a minimum of twenty hours of training each cycle. The training revisits the employees’ basic knowledge of their jobs and introduces new Diversity concepts that build additional competencies since the last training. The staff development department might work very efficiently by making sure that all employees cycle through in a timely fashion, in small enough groups to utilize the best of what we know about how adults learn. The employees’ time on task is often not enough, however, and many of them do not retain much of what was covered in the training. Thus the program is not effective.
The department may be efficient in that it fully utilized the time of each of the available trainers, it stayed within the parameters of the staff development budget, it kept employee “down time” to a minimum, it used materials and equipment that were available, and it completed the Diversity and Inclusion training agenda for the company. Yet there may be an increase in cultural miscommunication incidents and generational conflict across levels because employees make simple, basic mistakes and assumptions about others who are different than themselves. The department’s training was efficient but not necessarily effective.
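To make this cost-versus-output comparison concrete, here is a minimal sketch in Python. The headcounts and training hours come from the example above; the per-cycle cost figure is hypothetical, added only to illustrate a cost-per-learner-hour calculation.

```python
# Minimal sketch: simple efficiency measures for the training example above.
# Headcounts and hours come from the example; the cycle cost is hypothetical.

instructors = 10
employees = 500
hours_per_employee = 20        # minimum training hours per six-month cycle
cycle_cost = 250_000           # hypothetical fully loaded cost per cycle (USD)

learner_hours = employees * hours_per_employee                # 10,000 learner-hours per cycle
learner_hours_per_instructor = learner_hours / instructors    # 1,000 per instructor
cost_per_learner_hour = cycle_cost / learner_hours            # $25.00 per learner-hour

print(f"Learner-hours per cycle:      {learner_hours:,}")
print(f"Learner-hours per instructor: {learner_hours_per_instructor:,.0f}")
print(f"Cost per learner-hour:        ${cost_per_learner_hour:.2f}")
```

An efficiency review would compare figures like these against budget and capacity targets, but, as the example shows, they say nothing about whether anyone actually learned anything.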
When you examine the “effectiveness” of your Diversity and Inclusion initiatives, you are asking this question: “Did the activities do what they were supposed to do?” Simply put, an initiative’s effectiveness is measured in terms of substantive changes in knowledge, attitudes, or skills on the part of a program’s or initiative’s participants. Although the right number of participants may have been recruited and the best possible site may have been secured, the effectiveness test is this: Did the activities provide the skills to effectively handle Diversity and Inclusion-related situations? Did the participants gain the knowledge they need to work across generational differences? In another example, the same Diversity and Inclusion department staff may conduct a training session on a new approach to deescalate cultural conflict situations. The trainer may pretest all the employees as they begin their training session. Upon completion, the employees are post-tested and the results compared to determine whether their knowledge increased, decreased, or stayed the same. An increase in their knowledge would be an indication that the training was effective: it did what it was supposed to do. Yet two weeks after the training, when one of the employees was back at her job location, a situation arose in which she engaged in a serious altercation with another employee and failed to use the skills taught. She used her older, more comfortable procedure for addressing communication differences across cultures and caused a problem that put her and her coworkers at risk of suspension. Here is an example of training that was effective (the worker passed all the posttests) but had little impact on changing the behavior of the employee.
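Here is a minimal sketch, in Python, of the pretest/posttest comparison just described; the participant names and scores are invented purely for illustration.

```python
# Minimal sketch: effectiveness measured as pretest-to-posttest knowledge gain.
# Names and scores are hypothetical, for illustration only.

pre_scores  = {"Participant A": 55, "Participant B": 62, "Participant C": 48, "Participant D": 70}
post_scores = {"Participant A": 78, "Participant B": 81, "Participant C": 66, "Participant D": 74}

gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print(gains)                                       # per-participant knowledge gain
print(f"Average gain: {average_gain:.1f} points")  # a positive gain suggests the training was effective
```

A positive average gain supports an effectiveness claim, but, as the altercation example shows, it says nothing yet about impact back on the job.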
Thus the impact that the program has had on the people or organization for which it was planned becomes an important evaluation consideration. “Impact” evaluation examines whether and to what extent there are long-term and sustained changes in a target population. Has the program or initiative brought about the desired changes? Are employees using the new procedures? Do your employees have more job satisfaction?
Evaluators frequently pay too little attention to assessing impact. One reason is that impacts often manifest themselves over time, and Diversity Practitioners have already turned their attention elsewhere before this aspect of the evaluation can be completed. The actual impact that training in new procedures might have on people’s everyday lives often needs time to percolate and evolve. An attempt to collect impact data after allowing for this delay may run into a number of roadblocks, such as learner turnover (you cannot find them), job or circumstance change (they are no longer in situations that demand heavy use of the skills taught), or a lack of time or resources for the evaluator to conduct these follow-up activities.
Still, program and project sponsors are most interested in impacts. Whether a learner feels satisfied with the training or the training results in knowledge gain means little to a sponsor or employer if the learning doesn’t help the organization.
Evaluating Alternatives
The second philosophical statement that defines evaluation presents it as the process of delineating, obtaining, and providing useful information for the purpose of selecting among alternatives. Thus, it may not matter whether the program was efficiently conducted, effective, or had an impact on behavior or functions. Instead, the value of the evaluation is in its ability to compare one activity to another, one initiative or program to another, or one employee to another, so that decisions can be made in the presence of empirically collected data. Diversity Search Committees perform this kind of evaluation. In the course of their work, they describe job candidates’ strengths, outline previous experiences, and acquire other useful information that makes it possible to choose among a number of candidates. A company planning to adopt and purchase a computer system will perform this kind of evaluation on all the systems it is considering. It will select the one that performs best given the needs and resources of the company.
Identifying Areas to Improve
Finally, there is a third way of defining evaluation: evaluation is the identification of discrepancies between where a program or initiative is currently and where it would like to be. For example, the multicultural marketing department of an organization (call it XYZ Corporation) may have as one of its goals at least one face-to-face focus group with emerging market customers per year. Currently, its sales force sees fewer than half of the multicultural customers in a year. Records of face-to-face calls indicate the discrepancy between where XYZ Corporation is currently and where the organization wants to be.
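A minimal sketch of this kind of discrepancy (gap) analysis appears below, in Python; the metric names, targets, and actuals are hypothetical.

```python
# Minimal sketch: discrepancy (gap) analysis between target and actual activity.
# Metric names, targets, and actuals are hypothetical, for illustration only.

targets = {"face_to_face_focus_groups": 12, "multicultural_customer_visits": 200}
actuals = {"face_to_face_focus_groups": 5,  "multicultural_customer_visits": 88}

for metric, target in targets.items():
    actual = actuals[metric]
    gap = target - actual
    pct_of_target = actual / target * 100
    print(f"{metric}: target={target}, actual={actual}, gap={gap} ({pct_of_target:.0f}% of target)")
```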
Personnel evaluations often take on this definition as well. A new employee’s first evaluation may be an example of the first definition, that is, an evaluation against some minimal standard of performance. After this initial evaluation, certain performance goals are set for the employee (either mutually or by the supervisor or team). The next and all subsequent evaluations of the employee are compared with those performance goals or standards. The discrepancies are identified and remediation strategies are developed.
The Critical 7 Levels — Don’t Perform Your Diversity Initiative or Diversity Program Evaluation without Them!
Other levels of evaluation, as defined by the Hubbard Diversity ROI Methodology, refer to the eventual use of the evaluation data and who might make use of the results.
Diversity Return on Expectations (DROEx®), for example, must be based on evidence and impact results. I have found it useful first to distinguish the “evidence-based, outcome-focused” measures from other types of “activity only” measures. Anyone responsible for implementing Diversity initiatives is also responsible for evaluation. Whether you calculate the impact or not, from management’s and/or the stakeholders’ point of view, “you will always own the ROI of the initiatives you implement”. So the amount of evaluation you provide to meet expectations depends on the types of decisions your organization must make and the information needed to make those decisions. There are seven levels in the Hubbard Diversity Return-on-Investment (DROI®) evaluation methodology that you can use to demonstrate your ROI impact and show an evidence-based, credible “chain of impact” that meets customer and stakeholder expectations:
- Level 0: Business and Performer Needs Analysis
- Level 1: Reaction, Satisfaction, and Planned Actions
- Level 2: Learning
- Level 3: Application and Behavioral Transfer
- Level 4: Business Impact
- Level 5: Diversity Return-on-Investment (DROI®), Benefit-to-Cost Ratio (BCR)
- Level 6: Intangibles
For example, if your only requirement is to ensure that participants have positive attitudes toward the initiative, then a Level 1 evaluation is sufficient. But if your goal is to determine whether your diversity initiative is having a positive effect on job performance, then you will have to perform a Level 3 evaluation. This means you will also have to conduct Level 1 and Level 2 evaluations, because they provide the basis for determining whether participants learned the skills and attitudes and then put them to use on the job, as verified by the Level 3 evaluation (an example of the “DROI® Chain of Impact”).
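To make the Level 5 calculations concrete, here is a minimal sketch in Python using the standard benefit-to-cost ratio and ROI formulas (BCR = program benefits ÷ program costs; ROI % = net program benefits ÷ program costs × 100). The dollar figures are hypothetical, and the function names are my own shorthand, not part of the Hubbard methodology itself.

```python
# Minimal sketch: Level 5 calculations (BCR and DROI) for a diversity initiative.
# Dollar figures are hypothetical, for illustration only.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """BCR = monetized program benefits / fully loaded program costs."""
    return total_benefits / total_costs

def droi_percent(total_benefits: float, total_costs: float) -> float:
    """DROI (%) = (net program benefits / fully loaded program costs) * 100."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical example: a mentoring initiative whose isolated, monetized
# Level 4 impact (e.g., reduced turnover costs) is $450,000, against
# fully loaded program costs of $300,000.
benefits = 450_000
costs = 300_000

print(f"BCR:  {benefit_cost_ratio(benefits, costs):.2f}")  # 1.50
print(f"DROI: {droi_percent(benefits, costs):.0f}%")       # 50%
```

Under these assumed figures, every dollar invested returns $1.50 in monetized benefits, a 50% return after costs are recovered.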
There’s no doubt that we must communicate effectively and demonstrate our value to the bottom line. Diversity ROI metrics and performance improvement processes help us focus first on tangible outcomes, then on interventions to meet expectations. When Diversity Practitioners focus primarily on the intervention, such as the Diversity content, the method, or the technology, it’s easy to be led astray by current fads, thus wasting valuable time and money. Instead, focus first on the desired outcomes and DROI® analytics to determine what kind of measurable diversity intervention, if any, is necessary to meet customer and key stakeholder expectations.
Conducting glitzy Diversity training or other Diversity activities and implementing fad-based interventions can distract decision makers from what truly counts. The glitz may make things more fun, louder, and more interactive, but not necessarily better. Without a clear, data-based front-end analysis of organizational performance gaps, any intervention, including Diversity training and the like, is a guess. Add in sophisticated Diversity intervention technologies without an adequate front-end analysis, including metrics, and it becomes an expensive and often complex guess. Systematic Diversity training design procedures, for example, must include DROI® analytics and metrics, needs assessments, objectives, targeted competency-based design, and multi-level evaluation processes. That framework provides a method to get the coveted seat at the C-Suite table. Why? Because when used properly, that knowledge base can help companies increase revenue and decrease costs using Diversity and Inclusion practices that impact organizational performance outcomes. In other words, you can earn your seat at the executive table by applying what you already know as a Diversity ROI-focused professional. This approach succeeds because the DROI® metrics and processes you use are solidly based on behavioral science research, which provides strategies detailing how diverse people interact and what drives their behavior to produce successful organizational outcomes.
If we examine any other professional discipline or field of study, such as Medicine, Engineering, Accountancy, or Science, we expect its practitioners to be able to prove the efficiency, effectiveness, and impact of the solutions, programs, and initiatives they want us to support. Why should Diversity and Inclusion be any different if we want our processes to have credibility and support?
Conducting a comprehensive Diversity Evaluation is the only true way to know whether your Diversity and Inclusion programs or initiatives are delivering the outcome results expected by key stakeholders and meeting the needs of the organization. It is essential that Diversity Practitioners master critical Diversity and Inclusion evaluation methods using technologies that are rooted in Diversity ROI science. Why? Because the perceived value and credibility of what we do, and our standing as true Business Partners and Professionals, depend on it! Are you evaluating your Diversity and Inclusion initiatives and programs at this level? What’s your department’s Brand and Credibility Image in your organization?