
Growing Into Evaluation: From Reflection to Practice



If I had to place myself on a scale from 1 to 6 as an evaluator, I would situate myself at a 3 – developing, but not yet proficient. I bring strong foundational skills in reflection, communication, and collaboration, but I have not yet fully engaged in the core technical work of evaluation, such as planning, designing, and managing comprehensive evaluations. What stood out most in my self-assessment was a clear pattern: I am stronger in the competencies that reflect who I am as a professional – reflective, collaborative, and growth-oriented – than in those that reflect what I do as an evaluator, particularly evaluation design and planning. While these results largely confirmed what I already suspected, they also revealed important nuances, especially the idea that evaluation must be intentionally designed for use, not just conducted at the end of a process.

One of my strongest competencies is reflective practice, particularly the ability to examine my work and make meaningful adjustments. This aligns with both AEA competency 1.5 (reflecting on evaluation to improve practice) and Stevahn et al.’s (2005) emphasis on reflective practice as a core professional competency. For example, in my work in Applied Behavior Analysis (ABA), I became increasingly frustrated that students were being referred for services without reliable baseline data. Initially, this frustration remained internal, but through reflection, I realized that I had a role in addressing the issue. Instead of continuing to work within an inconsistent system, I developed a shared template for baseline data collection and introduced it to my team. This is where reflection moved beyond thought into action. What distinguishes this as a professional competency is not simply noticing a problem, but using reflection to change practice in a measurable way.

Closely connected to this is my second strength: interpersonal communication and perspective-taking, reflected in AEA competencies 5.2 and 5.3. Implementing the baseline data template required more than a good idea – it required collaboration. Rather than imposing the change, I brought the template to colleagues, invited feedback, and worked toward shared agreement. This aligns with Stevahn et al.’s (2005) interpersonal competency, which emphasizes communication, facilitation, and collaboration. This experience highlighted an important insight: competencies do not operate in isolation. Reflection helped me identify the problem, but interpersonal skills made the solution possible. Together, they allowed for meaningful change within my team.

At the same time, my self-assessment revealed clear growth areas, particularly in evaluation design and methodology (AEA 2.3 and 2.4). While I have experience analyzing data and assessing outcomes, I have not yet designed full evaluation plans. This distinction between assessment and evaluation became more apparent through this process. Assessment often focuses on individual-level performance, whereas evaluation is broader and more systematic, considering context, implementation, outcomes, and how findings will be used. According to Stevahn et al. (2005), this falls under the domain of systematic inquiry – designing and conducting evaluations using appropriate methods. Currently, I am more comfortable working within evaluation structures created by others than creating those structures myself.

Another significant growth area is planning and management, particularly planning for evaluation use (AEA 4.1 and 4.4). This competency was the most surprising to me. Before this course, I viewed evaluation as something that occurred at the end of a process – a way to determine whether something worked. However, I now understand that evaluation must be designed with its use in mind from the very beginning. This shift in thinking was reinforced by both AEA principles and Stevahn et al.’s (2005) framework, which emphasize that evaluation should inform decision-making, not just measure outcomes.

A concrete example of this gap comes from my current work. A lead teacher recently changed a communication intervention based on professional judgment rather than data. While the decision may have been appropriate, there were no predefined success criteria to evaluate whether the change was effective. In this situation, two issues were present: a lack of clearly defined evaluation criteria and a lack of clarity about who was responsible for defining them. This reflects gaps in both planning and stakeholder involvement (AEA 2.8). More importantly, it reinforced the realization that evaluation is not just about collecting data – it is about asking the right questions before data collection even begins. I have seen firsthand how data can be collected but never meaningfully used, turning evaluation into a performative task rather than a tool for improvement.

A second example comes from my experience in graduate coursework, where I designed an online learning module intended to increase engagement. While I incorporated interactive elements and tracked participation, I did not establish clear evaluation criteria to determine whether the design actually improved learning outcomes. In hindsight, I was designing instruction without integrating evaluation. This directly connects to IBSTPI competencies, which emphasize alignment between evaluation questions, methods, and outcomes. Moving forward, I recognize the importance of embedding evaluation into the design process rather than treating it as an afterthought.

Based on these insights, I have identified several actions to continue developing my competence as an evaluator. Beyond this course, I plan to seek opportunities to design small-scale evaluations within my current organization, such as evaluating the effectiveness of a specific intervention or training protocol. This will allow me to practice aligning evaluation questions with methods, defining success criteria, and managing evaluation processes in a real-world context. This approach directly addresses my growth areas in Domains 2 and 4 and provides the hands-on experience that theory alone cannot supply.

Next Step

One key growth area I will focus on is planning and management, specifically defining evaluation questions and success metrics (Domain 4). In Module 2, this will show up in how I structure my evaluation plan assignment: I will begin by defining clear evaluation questions and identifying measurable success criteria before selecting methods. Evidence of improvement will include evaluation questions that are specific, measurable, and directly aligned with intended outcomes. This addresses a gap I observed in my ABA work, where interventions were modified without clearly defined criteria for success. Moving forward, I am committing to starting with well-defined questions and metrics so that evaluation is purposeful, actionable, and aligned with decision-making.


References

American Evaluation Association. (2018). AEA evaluator competencies.

American Evaluation Association. (2018). Guiding principles for evaluators.

International Board of Standards for Training, Performance and Instruction (IBSTPI). (2013). Evaluator competencies.

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59.
