
I'm Reevaluating the Curve

 

My attitude toward emerging technologies continues to evolve, shifting from curiosity and excitement to a more intentional and ethically informed mindset. Early in my professional and academic experiences, I viewed new technology as inherently positive – something to adopt quickly because it represented progress, efficiency, or modernization. Like many early adopters described in innovation diffusion theory, I used to believe that implementing new tools demonstrated forward thinking and adaptability. However, experience has reshaped that perspective. As artificial intelligence (AI), machine learning, automation, and large-scale data systems entered education and behavioral fields, I began questioning not only what technology could do but also whether it should do certain things. Today, instead of adopting technology simply because it is innovative, I evaluate whether it aligns with meaningful goals, human-centered learning values, and long-term societal implications (Davenport & Ronanki, 2018). My willingness to adopt new tools has not disappeared, but it has become more measured, strategic, and grounded in purpose.

Much of this shift has been influenced by learning about the ethical and social impacts of emerging technologies such as AI, blockchain, and big data. Understanding how these systems reshape power, decision-making, and human relationships has made technology adoption feel less neutral and more like a deliberate ethical stance. Scholars have emphasized that AI systems are not passive; they embed assumptions, biases, and values shaped by the data and designers behind them (Chklovski, 2019; Goksel & Bozkurt, 2019). Learning about instances of algorithmic bias, digital surveillance, and inequitable decision pathways has encouraged me to think critically before integrating technology into instructional or clinical practice (Dennis, 2018). This awareness has shifted my position on the innovation curve. Rather than rushing toward the newest AI-enabled tool, I now consider questions of transparency, governance, privacy, and equity before endorsing or using emerging systems.

One ethical dilemma that deeply resonated with me is the concern over algorithmic bias, specifically when flawed or incomplete datasets inform automated decisions affecting vulnerable populations. This issue is especially relevant within education and behavioral sciences, where data often informs intervention pathways, access to support, and evaluation of student behavior. According to Fournier (2018), AI systems trained on narrow or biased datasets can unintentionally reinforce inequity rather than reduce it. Understanding this has made me more cautious about integrating AI systems without strong human oversight, responsible data governance, and evidence of equity-focused design. Instead of being an unquestioning early adopter, I now fall closer to the early majority group – still open to innovation, but guided by ethical discernment.

This ethical lens becomes even more important when examining the rise of AI and its implications for the global workforce. Research predicts that automation and intelligent systems will reshape – not just support – labor sectors across industries (Dillon, 2019). While AI may create new career pathways, it may also displace workers lacking digital access, training, or economic flexibility (Hogle, 2017). Key figures in technology and policy have expressed concerns about widening inequality, accelerated labor shifts, and the existential uncertainty surrounding AI-driven systems (Ciolacu et al., 2018). Simultaneously, others argue that AI can serve as a powerful augmentation tool that enhances, rather than replaces, human labor by removing repetitive work and enabling higher-level tasks (Edwards & Cheok, 2018). These contrasting perspectives reflect broader social tension: AI may democratize access or deepen inequity depending on how systems are implemented and governed.

Within the profession of Applied Behavior Analysis (ABA), AI presents both potential benefits and challenges. On the positive side, AI-assisted analytics may help clinicians streamline data collection, identify patterns, and support treatment decisions grounded in behavioral science principles (Dillon, 2019). Automation may also reduce administrative burden, allowing behavior analysts to spend more time building therapeutic rapport and delivering meaningful interventions. AI-supported instructional systems could expand learner access, especially for individuals in remote or underserved communities (Ciolacu et al., 2018).

However, the risks cannot be ignored. AI systems may oversimplify complex behavioral variables or generate recommendations without true contextual understanding. The nuance of human communication, rapport, developmental variability, and environmental influence could be lost if AI models are treated as authoritative rather than supportive (Dennis, 2018). Additionally, the profession must address critical questions around data privacy, informed consent, and the ethical use of sensitive behavioral information (Chklovski, 2019). As behavior analysis intersects more directly with AI, the field must establish strong ethical frameworks to ensure technology enhances, rather than compromises, ethical care.

Ultimately, my stance on emerging technologies continues to evolve as I develop a deeper understanding of their power, limitations, and consequences. While I still value innovation, I now believe it must be pursued with transparency, critical reflection, and an unwavering commitment to human dignity. Technology can support meaningful progress—but only when implemented thoughtfully, ethically, and equitably.

 

References

Chklovski, T. (2019, January 28). 4 ways AI education and ethics will disrupt society in 2019—EdSurge news. EdSurge. https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

Ciolacu, M., Tehrani, A. F., Binder, L., & Svasta, P. M. (2018). Education 4.0—Artificial intelligence assisted higher education: Early recognition system with machine learning to support students’ success. 2018 IEEE 24th International Symposium for Design and Technology in Electronic Packaging (SIITME), 23–30.

Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.

Dennis, M. J. (2018). Artificial intelligence and higher education. Enrollment Management Report, 22(8), 1–3.

Dillon, J. (2019, February 19). In real life: How will AI impact workplace learning? Learning Solutions Magazine. https://learningsolutionsmag.com/articles/in-real-life-how-will-ai-impact-workplace-learning

Edwards, B. I., & Cheok, A. D. (2018). Why not robot teachers: Artificial intelligence for addressing teacher shortage. Applied Artificial Intelligence, 32(4), 345–360.

Fournier, J. (2018, May 17). Getting your head around artificial intelligence. Learning Solutions Magazine. https://learningsolutionsmag.com/articles/getting-your-head-around-artificial-intelligence

Goksel, N., & Bozkurt, A. (2019). Artificial intelligence in education: Current insights and future perspectives. In Handbook of research on learning in the age of transhumanism (pp. 224–236). IGI Global.

Hogle, P. (2017, March 28). AI is everywhere, but what is AI? Learning Solutions Magazine. https://learningsolutionsmag.com/articles/2271/ai-is-everywhere-but-what-is-ai
