If I had to place myself on a scale from 1 to 6 as an evaluator, I would situate myself at a 3 – developing, but not yet proficient. I bring strong foundational skills in reflection, communication, and collaboration, but I have not yet fully engaged in the core technical work of evaluation, such as designing, managing, or planning comprehensive evaluations. What stood out most in my self-assessment was a clear pattern: I am stronger in the competencies that reflect who I am as a professional – reflective, collaborative, and growth-oriented – than in the competencies that reflect what I do as an evaluator, particularly in evaluation design and planning. While these results largely confirmed what I already suspected, they also revealed important nuances, especially around the idea that evaluation must be intentionally designed for use, not just conducted at the end of a process.
One of my strongest competencies is reflective practice,
particularly the ability to examine my work and make meaningful adjustments.
This aligns with both AEA competency 1.5 (reflecting on evaluation to improve
practice) and Stevahn et al.’s (2005) emphasis on reflective practice as a core
professional competency. For example, in my work in Applied Behavior Analysis
(ABA), I became increasingly frustrated that students were being referred for
services without reliable baseline data. Initially, this frustration remained
internal, but through reflection, I realized that I had a role in addressing
the issue. Instead of continuing to work within an inconsistent system, I
developed a shared template for baseline data collection and introduced it to
my team. This is where reflection moved beyond thought into action. What
distinguishes this as a professional competency is not simply noticing a
problem, but using reflection to change practice in a measurable way.
Closely connected to this is my second strength: interpersonal
communication and perspective-taking, reflected in AEA competencies 5.2 and
5.3. Implementing the baseline data template required more than a good idea – it
required collaboration. Rather than imposing the change, I brought the template
to colleagues, invited feedback, and worked toward shared agreement. This
aligns with Stevahn et al.’s (2005) interpersonal competency, which emphasizes
communication, facilitation, and collaboration. This experience highlighted an
important insight: competencies do not operate in isolation. Reflection helped
me identify the problem, but interpersonal skills made the solution possible.
Together, they allowed for meaningful change within my team.
At the same time, my self-assessment revealed clear growth
areas, particularly in evaluation design and methodology (AEA 2.3 and 2.4).
While I have experience analyzing data and assessing outcomes, I have not yet
designed full evaluation plans. This distinction between assessment and
evaluation became more apparent through this process. Assessment often focuses
on individual-level performance, whereas evaluation is broader and more
systematic, considering context, implementation, outcomes, and how findings will
be used. According to Stevahn et al. (2005), this falls under the domain of
systematic inquiry – designing and conducting evaluations using appropriate
methods. Currently, I am more comfortable working within evaluation structures
created by others than creating those structures myself.
Another significant growth area is planning and management,
particularly planning for evaluation use (AEA 4.1 and 4.4). This was the
competency that surprised me most. Before this course, I viewed evaluation as
something
that occurred at the end of a process – a way to determine whether something
worked. However, I now understand that evaluation must be designed with its use
in mind from the very beginning. This shift in thinking was reinforced by both
AEA principles and Stevahn et al.’s (2005) framework, which emphasize that
evaluation should inform decision-making, not just measure outcomes.
A concrete example of this gap comes from my current work. A
lead teacher recently changed a communication intervention based on
professional judgment rather than data. While the decision may have been
appropriate, there were no predefined success criteria to evaluate whether the
change was effective. In this situation, two issues were present: a lack of
clearly defined evaluation criteria and a lack of clarity about who was
responsible for defining them. This reflects gaps in both planning and
stakeholder involvement (AEA 2.8). More importantly, it reinforced the
realization that evaluation is not just about collecting data – it is about
asking the right questions before data collection even begins. I have seen
firsthand how data can be collected but never meaningfully used, turning
evaluation into a performative task rather than a tool for improvement.
A second example comes from my experience in graduate
coursework, where I designed an online learning module intended to increase
engagement. While I incorporated interactive elements and tracked
participation, I did not establish clear evaluation criteria to determine
whether the design actually improved learning outcomes. In hindsight, I was
designing instruction without integrating evaluation. This directly connects to
IBSTPI competencies, which emphasize alignment between evaluation questions,
methods, and outcomes. Moving forward, I recognize the importance of embedding
evaluation into the design process rather than treating it as an afterthought.
Based on these insights, I have identified several actions
to continue developing my competence as an evaluator. Beyond this course, I
plan to seek opportunities to design small-scale evaluations within my current
organization, such as evaluating the effectiveness of a specific intervention
or training protocol. This will allow me to practice aligning evaluation
questions with methods, defining success criteria, and managing evaluation
processes in a real-world context. This approach directly addresses my growth
areas in Domains 2 and 4 and provides hands-on experience that cannot be fully
developed through theory alone.
Next Step
One key growth area I will focus on is planning and
management, specifically defining evaluation questions and success metrics
(Domain 4). In Module 2, this will show up in how I structure my evaluation
plan assignment. I will intentionally begin by clearly defining evaluation
questions and identifying measurable success criteria before selecting methods.
Evidence of improvement will include evaluation questions that are specific,
measurable, and directly aligned with intended outcomes. This addresses a gap I
observed in my ABA work, where interventions were modified without clearly
defined criteria for success. Moving forward, I am committing to
starting with clearly defined questions and metrics so that evaluation is
purposeful, actionable, and aligned with decision-making.
References
American Evaluation Association. (2018). Guiding principles for evaluators.
International Board of Standards for Training, Performance and Instruction (IBSTPI). (2013). Evaluator competencies.
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59.
