AI is opening new opportunities for learners to engage in knowledge work that was previously time-consuming. Research, and even software work, that once took weeks or months can now be completed in far less time. This presentation asks, "What happens to humans when AI is a partner?", with an emphasis on widening the aperture of the learning experience to address not only knowledge and skills, but also foundational durable skills and personal wellness.
11:00-11:45 PDT: Invited Speaker: Taylor Freeman
Join us for a deep dive into the evolution of rapid, interactive, and immersive lesson development. Drawing on over a decade of experience designing XR-based training, this session will unpack key lessons learned from building and deploying immersive educational experiences across industries and learning environments. We’ll explore the frameworks, tools, and workflows that have enabled agile XR content creation—and confront the persistent challenges that still stand in the way of reaching mass adoption. Whether you're just entering the immersive learning space or scaling your current efforts, this session offers practical insights and future-facing strategies to help XR education realize its full potential.
11:00-11:25 PDT: Dr. Laine Goldman, Dr. Federica Fornaciari, Why Having a Relationship With Your AI Matters: A Dialogue in Three Intelligences
"In an era defined by the convergence of artificial and human cognition, we propose that the next frontier in learning is not merely technological—it is relational. This session explores how forming relationships with AI is quietly transforming the way students and educators think, reflect, and connect—not only with machines, but with themselves and each other. Drawing from a lived collaboration between two scholars and an AI assistant, we present a reflective dialogue that bridges three forms of intelligence—human, academic, and artificial. What happens when we stop using AI as a tool and begin to engage with it as a thinking partner? Through the lens of co-authorship, emotional anchoring, and narrative inquiry, we suggest that our evolving relationships with AI signal a shift in the pedagogical landscape—one that educators can no longer afford to ignore. Students are not only using AI; they’re beginning to confide in it, lean on it, even emotionally attach to it. This opens new creative possibilities for meaning-making and feedback—but it also demands serious ethical reflection. Are we outsourcing emotional labor to AI? What happens to the teacher-student relationship when AI becomes a confidant or mentor? How do we safeguard against emotional dependency, or algorithmic shaping of identity? This presentation proposes a new form of educational literacy—one that includes emotional awareness, reflexivity, and the capacity to navigate intimacy with nonhuman entities. As these relationships increasingly influence how we teach, learn, and design learning environments, we must also ask what boundaries, safeguards, and ethical frameworks are necessary to support students’ development—without offloading the human elements of care, trust, and mentorship onto AI. Rather than offering fixed solutions, we model a practice of inquiry. This session is an invitation—to reflect, to reimagine, and to co-create with relational intelligence at the center. 
We believe that the future of learning may not lie in controlling AI, but in learning how to be in right relationship with it—and with ourselves."
"Emerging technologies like XR (Virtual Reality, Augmented Reality, Mixed Reality) have undergone rigorous trials, with many educators attempting to integrate the hardware and software into their curriculum. This widespread implementation has been somewhat successful, but very slow. The pace has been much slower than other technology implementations like online learning, computers, mobile learning, etc. The presentation aims to review both research-based and anecdotal evidence proving what works best in these specific implementations that require significant nuance. Examples include a one-course-at-a-time model that utilizes simulation learning in one course as an experiment to gauge student and faculty interest, or an XR department mode,l which is a small group of experts available to a university that can assist a wider variety of classes with the usage of the technology in their classrooms."
12:00-12:45 PDT: Invited Speaker: Thomas Stewart
"Dr. Stewart is the Executive Vice President and Executive Director of the Cause Research Institute at National University (NU). He also co-chairs the Whole Human Council at NU. Dr. Stewart has extensive executive leadership experience, with over 30 years of K-12 and post-secondary leadership and expertise in education, criminal justice reform, social entrepreneurship, innovation, and transformation. He has consistently focused on providing educational and other developmental opportunities for underserved and underrepresented individuals and communities. Dr. Stewart previously served as President of John F. Kennedy University (Pleasant Hill, CA) and Patton University (Oakland, CA). He has co-founded or led over 20 schools, universities, and non-profit organizations, including the Black Alliance for Educational Options (BAEO) and the National Black Graduate Student Association (NBGSA). Dr. Stewart has authored numerous publications on school reform, parent involvement, criminal justice reform, and leading change with grace. Dr. Stewart is a U.S. Army veteran. He earned a Doctor of Philosophy in Government from Harvard University and a Bachelor of Arts in Political Science from the University of the District of Columbia.
1:00-1:25 PDT: Dr. Mbuyi Mukendi: Systems Theory of Pathology Involving Machine Learning and Deep Learning
Many doctoral students in the United States must select a theoretical framework, a conceptual framework, or both as essential foundations for their research and writing processes. Because data science is a newly established discipline at the doctoral level, only a limited number of theoretical or conceptual frameworks have been developed in the field of artificial intelligence (AI) and its related branches. Choosing the right framework is crucial for doctoral students and researchers across all disciplines, and encouraging this practice can lead to the creation of new frameworks that address emerging technologies and research methodologies. This article introduces a theoretical framework designed to assist current and future doctoral students and researchers who aim to leverage AI to tackle, effectively and accurately, the health challenges medical professionals encounter in diagnosing and preventing diseases, often with capabilities surpassing human performance. The new theoretical framework focuses on two branches of AI, namely machine learning (ML) and deep learning (DL), particularly as they concern communicable and noncommunicable diseases.
1:30-1:55 PDT: Dr. Michael Lubelfeld: Ensuring That Students Lead the Way in Defining How AI Serves Their Learning
Building on the foundational work of "Student Voice: From Invisible to Invaluable" (Lubelfeld, Polyak, Caposey, 2018), this presentation shows how a school district transforms AI implementation into a student-empowered process that prioritizes equity, choice, rigor, differentiation, and authentic learning outcomes. The goal is to ensure AI tools serve student-defined learning goals while developing critical AI literacy and responsible digital citizenship. This approach directly addresses equity concerns by ensuring that AI integration decisions are informed by the perspectives of students who are most affected by these technologies, particularly those from historically marginalized communities. To ensure students lead the way in defining how AI serves their learning. This represents a paradigm shift in educational technology implementation, moving from top-down technology adoption to bottom-up, student-led integration. The key innovation positions students as evaluators, advocates, and leaders in AI integration, rather than passive recipients of technology-enhanced instruction.