The views and opinions expressed or implied in WBY are those of the authors and should not be construed as carrying the official sanction of the Department of Defense, Air Force, Air Education and Training Command, Air University, or other agencies or departments of the US government or their international equivalents.

AI Wingman in the Classroom: An Iteratively Adversarial Approach to Level-Up DAF PME

By Maj. Matthew A. Cooper, USSF

Finalist in the Spring AI and Military Education Innovation Essay Contest

The U.S. Department of the Air Force (DAF) finds itself at a pivotal juncture, where the dynamic swirl of emerging technologies demands new approaches to educating the next generation of leaders. Rather than treating artificial intelligence (AI) as a novelty or a threat, we can position it as a “wingman,” an ever-present ally that augments the teacher–student relationship. By weaving AI into the classroom, conceptualized loosely as a three-body problem coupled with the idea of Generative Adversarial Networks (GANs), the DAF can enrich officer education while preserving the human spark so integral to military development. This essay argues that AI-driven simulations, virtual tutors, and adaptive teaching systems, calibrated under an instructor’s oversight, can reshape learning experiences and strengthen competencies such as decision-making, operational analysis, and strategic thinking. Yet it will also show why human mentorship remains indispensable, and how a GAN-like method can unify the best of human and machine in ways that promote self-improvement, trust, and readiness.

In many corners of higher education, attempts to ban or restrict AI tools have proved unworkable. Students already embrace AI for its convenience, real-time feedback, and the anonymity it grants. One study revealed that 88 percent of college learners relied on AI multiple times per week, an indication that any approach ignoring this shift risks irrelevance.[1] At the same time, a near-peer competitor now leads the world in AI research output.[2] The DAF, therefore, must pioneer responsible integration as an enterprise rather than lag behind, ensuring that professional military education (PME) aligns with the demands of modern warfare. The central premise is that an AI “wingman” can stand alongside the instructor and student, forging a creative tension akin to a GAN, in which each node learns through iterative feedback.

PME in the DAF historically pivots on curated readings, seminars, lectures, and wargaming. Yet large language models and advanced simulations can transform these practices. AI-driven scenarios let students face reactive adversaries or crises that evolve in real time. In a standard wargame, the red team may follow a scenario script, be “white-carded,” or be played by a single human. By contrast, an AI-driven opponent can update its tactics moment by moment based on known doctrine and historical practice, forcing the learner to adapt. This environment fosters critical thinking under pressure and prepares officers for the complexities of Combined Joint All-Domain engagements. Old, static war stories become living, breathing challenges. With AI in the loop, no two runs of an exercise are exactly alike, mirroring the fog and friction of the real world.

AI-powered tutors also promise to personalize learning for every member. Large language models such as ChatGPT, Claude, Gemini, and LLaMA can respond to inquiries, review drafts, or generate advanced practice problems. Students at some universities using digital “faculty twins” report higher grades and deeper engagement, as the model stands ready at all hours.[3] Instead of waiting for the next in-person seminar, a student can refine an operational plan with immediate AI assistance. This real-time loop increases the volume and speed of feedback. When the AI’s knowledge is focused on specialized content such as joint doctrine, history, and mission analysis, learners receive relevant practice aligned with curriculum goals. Meanwhile, the human instructor steps in as the ultimate judge, discerning how well the student’s new skills match official expectations.

Studies suggest that the best AI tutors do more than dish out answers. They ask guiding questions, prompting the student to think.[4] This method harnesses inquiry-based learning, which fosters the kind of deep comprehension each military member needs. In a staff planning course, the AI might nudge a student to articulate the second- and third-order effects of a proposed air interdiction rather than simply confirming the plan is viable. Later, the instructor can validate or contest that reasoning in a group discussion, ensuring no aspect of professional rigor is lost. The synergy between the AI’s broad, immediate feedback and the instructor’s authoritative evaluation brings out the best in both.

Personalization, however, must not undermine core standards. Air Command and Staff College and Squadron Officer School each set competencies an officer must master. AI can tailor how a learner gets there, but the destination remains fixed. If a student is strong in theoretical analysis but weak in writing, the AI may assign extra exercises in staff-paper composition. Another student might need more immersion in logistics scenarios. Each path can differ without eroding shared outcomes. Instructors still decide whether every student has fully grasped the required material. The advantage is that AI identifies weaknesses more quickly, thus raising the baseline across the board. Everyone ends at the same threshold of readiness, but faster learners can move ahead while less confident ones receive remedial help. The schoolhouse stops being a one-size-fits-all model and becomes a place where each student’s specific needs and deficits are addressed, strengthening the force as a whole.

Although AI tools are powerful, something tangible is at risk if we let them dominate. DAF education hinges on trust, mentorship, and the intangible quality an experienced leader imparts in face-to-face exchanges. War stories laden with personal emotion or moral complexity cannot be fully captured by an algorithm. People learn from the physical cues and convictions of a mentor who has faced the realities of battle. If too many interactions move to the AI environment, we risk losing the ephemeral spark that shapes moral reasoning and group cohesion. A digital twin of a professor offers convenience but does not replicate the unspoken rapport and personal influence that can inspire a flight commander’s evolution. The classroom also fosters peer interaction and debate, critical in forging cohesive teams. AI may be able to simulate a contrarian stance, but it lacks genuine conviction or emotional investment in the argument. Officers need those lively challenges from classmates who see the world differently. That friction sharpens leadership skills, negotiation, and the empathy required to rally diverse teams. Thus, even as the proposed model leverages AI, the central flame of teacher-student contact must still be able to burn independently.

The better question is not whether to adopt AI, but how to do so in a structured, collaborative way. We find an elegant model by invoking the analogy of a Generative Adversarial Network, adapted for the student–instructor–AI trifecta. Traditionally, a GAN pits a generator network, which attempts to produce images that pass as real, against a discriminator network that tries to tell them apart. The interplay spurs rapid improvement in both, and the same dynamic can animate our classrooms. Here, the student becomes the “generator,” producing solutions or ideas to meet a standard, while the instructor stands as the “discriminator,” evaluating correctness and appropriateness. The third, complementary element is the AI, in a hybrid role, simultaneously generating challenges and offering preliminary assessments that sharpen the student’s output.
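For readers who have not encountered the underlying machinery, the sketch below shows the adversarial loop in miniature: a PyTorch generator learns to produce samples from a toy one-dimensional distribution while a discriminator learns to reject them. Every element here, from the network sizes to the target distribution, is an illustrative assumption chosen for brevity, not a representation of any DAF system.

```python
# A toy GAN in PyTorch: the generator learns to mimic a simple target
# distribution while the discriminator learns to reject its output.
# Network sizes, learning rates, and the 1-D task are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to candidate samples (the "student" role).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores samples as real or fake (the "instructor" role).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" samples drawn from the target distribution, N(3.0, 0.5).
    real = torch.randn(64, 1) * 0.5 + 3.0

    # Step 1: train the discriminator to separate real from generated data.
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Step 2: train the generator to produce samples the discriminator accepts.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, 8))),
                     torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(256, 8)).mean().item())  # drifts toward 3.0
```

The essential point survives the simplification: neither network improves in isolation; each is forced forward by the other’s progress.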

As the student refines each piece of work, the AI can critique it, suggest improvements, or escalate complexity. The instructor’s feedback remains the final authority and shapes the AI’s future responses. Just as a GAN’s generator updates its parameters after each interaction, an appropriately designed AI assistant can refine its approach based on how well the student’s final submission fared under the instructor’s judgment. The instructor learns which misconceptions appear most often, helping adjust the next lesson as the “adversarial” tension between the parties drives growth. The student strives to produce consistently higher-quality work, since no one wants to lose face in a wargame simulation, even a virtual one. The AI is tuned to keep the student in a productive challenge zone, neither too easy nor too crushingly difficult. The instructor ensures the game remains constructive and properly aligned with doctrinal truths. In effect, every node in this triad learns from the others.
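To make the triad concrete, consider the hedged sketch below. Names such as TriadState, run_iteration, and the grading callables are hypothetical placeholders for whatever tooling a schoolhouse might actually field; the logic merely mirrors the loop just described, with the instructor’s grade steering how hard the AI pushes next.

```python
# A hedged sketch of one pass through the student-AI-instructor loop.
# All names here are hypothetical; no existing DAF tooling is implied.
from dataclasses import dataclass, field

@dataclass
class TriadState:
    difficulty: float = 0.5                          # AI's "challenge zone" dial
    misconceptions: list = field(default_factory=list)

def run_iteration(state, student_attempt, ai_critique, instructor_grade):
    """One pass of the loop: generate, critique, grade, adapt."""
    draft = student_attempt(state.difficulty)        # student "generates" a solution
    draft = ai_critique(draft)                       # AI offers preliminary feedback
    score, notes = instructor_grade(draft)           # instructor remains final authority
    # Like a generator updating its parameters after each round, the AI
    # raises difficulty after success and eases off after a struggle.
    state.difficulty += 0.1 if score >= 0.8 else -0.1
    state.difficulty = min(max(state.difficulty, 0.1), 1.0)
    state.misconceptions.extend(notes)               # instructor sees recurring errors
    return state

# Hypothetical stand-ins so the sketch runs end to end.
state = TriadState()
state = run_iteration(
    state,
    student_attempt=lambda d: f"course of action drafted at difficulty {d:.1f}",
    ai_critique=lambda draft: draft + " [AI: articulate second-order effects]",
    instructor_grade=lambda draft: (0.9, ["needs doctrinal citation"]),
)
print(state.difficulty)  # 0.6 -> challenge increased after a strong grade
```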

Such an interplay implies several conditions. First, the AI must be continuously updated or fine-tuned with new data, so it does not stagnate. If the model is static, the advantage of iterative improvement decays. Second, instructors must remain involved, bridging the AI’s suggestions with real-world expertise. If they hand off all evaluations to the AI, the approach collapses. Third, both the culture and the policies within the Air Force must adapt. This model demands trusting the AI enough to handle daily drills, quiz creation, or scenario expansions, while maintaining security and confidentiality. The potential for “hallucinated” knowledge from an LLM remains real despite many recent advances.[5] As such, our educational approaches cannot tolerate spurious claims about key facts going uncorrected. Routine checks are vital, along with an internal system for promptly incorporating doctrinal changes so the AI never lags behind new Tactics, Techniques, and Procedures (TTPs). Over time, an “AI wingman” requires a dedicated pipeline of validated data, a robust cloud environment, and an ethic of continuous improvement.
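A minimal illustration of the “routine checks” condition might look like the gate below, which pairs each AI answer with a validated doctrinal reference before it reaches a student and routes unverifiable claims to the instructor. The doctrine store, its single entry, and the function names are assumptions made for the sketch only.

```python
# Illustrative only: a validated store of doctrinal statements, kept
# current as TTPs change by the data pipeline described above.
VALIDATED_DOCTRINE = {
    "mission command": "Centralized command, distributed control, "
                       "and decentralized execution.",
}

def check_answer(topic: str, ai_answer: str):
    """Release an AI answer only when a validated reference can accompany it."""
    reference = VALIDATED_DOCTRINE.get(topic.lower())
    if reference is None:
        # No authoritative source on file: do not let the claim stand alone.
        return None, "No validated source; route to instructor review."
    # Pair the model's text with the authoritative reference so student and
    # instructor can spot divergence instead of trusting the model blindly.
    return (ai_answer, reference), None

result, warning = check_answer("Mission Command", "AI-generated explanation ...")
print(warning or result)
```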

Instructors also must train to harness AI’s capabilities skillfully, so they do not drown in analytics or spurious suggestions. A professor with a digital twin can experiment in real time, letting the AI handle preliminary reading quizzes or content reviews while stepping in for deeper mentorship. They can watch for signs the model is giving partial truths or skipping vital context. Once a pattern emerges (for instance, the AI incorrectly explaining mission command principles), the instructor can rectify it by feeding corrections back into the system, improving future performance.

Another risk is students tailoring their work to please the AI rather than truly broadening their minds. In standard GANs, the generator sometimes finds shortcuts that fool the discriminator but deviate from the target distribution. Students might learn to exploit an AI’s feedback loop by peppering submissions with key terms or by seeking quick fixes. If this becomes an arms race, the student’s deeper understanding may suffer. The antidote is the human teacher’s vigilance. The instructor can design offline tasks or culminating events that require genuine understanding. Face-to-face articulation of a plan or a graded group exercise reveals whether the student has learned or merely parroted.

When balanced properly, the result can be a healthy synergy that elevates the entire learning process and keeps the human dimension intact. The day-to-day friction between student and AI fosters iterative improvement, while the instructor stands above this swirl, guiding both. The student ultimately gains resilience by wrestling with adaptive challenges. The AI refines its tutoring approach by seeing what works. The instructor gains real-time insights into student misconceptions. If done right, education might accelerate beyond the glacial speeds of typical coursework, producing sharper, more versatile officers in a compressed timeframe.

Such transformations demand policy attention. The DAF has not yet issued a comprehensive directive on generative AI in PME. The best strategy would be guidelines that encourage experimentation under ethical constraints. Blanket bans, as some schools tried, only push the technology underground.[6] Instead, we can highlight academic honesty guidelines, clarifying that AI is permissible for brainstorming or practice, but final assessments must reflect personal effort. Courses might allow AI assistance for preliminary drafts, while requiring an attestation that final submissions are the student’s own. The guiding principle is to harness AI’s tutoring power without letting it overshadow genuine mastery. The institution’s curriculum designers must also reconfigure syllabi to embed AI-based tasks, calibrate each lesson to a triad approach, and ensure a fair measure of time remains for in-person debate and problem-solving.

The real return on investment is readiness. Military professionals educated through such a process will not only absorb doctrine but also develop an internal familiarity with human–machine teaming, a concept that will only become more ingrained in daily life as technology advances. They will evaluate AI suggestions in the field with a practiced eye. Whether analyzing sensor data from an unmanned system or coordinating logistics, they will approach the AI as a collaborator, aware of how to cross-check or override it. By weaving AI into the intellectual DNA of professional military education, the DAF fosters a new generation of leaders fluent in balancing digital input with human judgment. This advantage may prove decisive if near-peer adversaries rely on brute-force AI or a top-down approach that stifles creativity. An American military shaped by this proposed process can adapt.

Looking more closely at a challenge alluded to earlier: AI sometimes introduces errors or misinformation, especially if training data are incomplete. Constantly retraining or refining the model to correct this requires dedicated teams of engineers. Some topics, like certain ethics lessons or sensitive tactics, might be ill-suited to a broad-based AI tool. The community must keep a watchful eye on classification boundaries, ensuring that no privileged data inadvertently slip into a cloud model. There is also the moral dimension. Human leadership is often about intangible inspiration that technology cannot capture. Even the best digital twin cannot replicate a living presence shaped by sacrifice, personal relationships, and real battlefield experience.

We can mitigate these limitations through blended approaches. For example, an AI might be used for practice quizzes, one-on-one tutoring, or real-time scenario escalations, while the concluding conversation or reflection remains human-centered. If the day’s scenario involved a questionable moral trade-off, the instructor facilitates a debate on its implications with the class, bridging the gap between the impersonal logic of an algorithm and the haunting reality of moral choice in warfare. This synergy preserves the intangible unity of a war college classroom while letting AI handle the routine tasks that once consumed hours.

The proposed model, inspired by the beneficial tension in GANs, offers a practical roadmap for integrating AI as a constructive force in education. The student, as the creative engine, tries to reach or exceed the standard. The AI, always present, feeds the student challenges, offers immediate feedback, and adjusts its difficulty and approach even as the student explores wildly diverse possibilities. The instructor stands watch, validating everything, ensuring standards remain high, and infusing the intangible qualities of leadership. Each node learns from the others: the student grows faster, the AI refines its teaching, and the instructor gains a real-time lens on student progress. The result can be more students achieving mastery in less time, with less busywork, and with a more robust understanding that emerges from iterative tests.

This synergy becomes especially compelling when we consider near-peer competition. In an era when China and others break new ground in critical research every day, we cannot afford to train our future leaders with dated TTPs, models, and methods.[7] The need is as much strategic as academic. We can treat AI integration in PME as a laboratory for the future of warfighting synergy, giving officers the opportunity to become accustomed to learning alongside an AI assistant and adept at leveraging advanced machine outputs under stress. They will carry those habits into operations, able to question, refine, and direct AI-based systems with nuance.

Yet certain intangible aspects such as mentorship, emotional intelligence, and leadership presence remain strictly human. By preserving them, we avoid losing the soul of PME, that intangible meeting of minds. The final answer is not to default to AI as a perfect teacher, but to harness it as an ever-ready wingman that complements the deeper, human-led mission.

Ultimately, the future of DAF PME can be envisioned as an arena where high-tech personalization meets the guiding presence of faculty mentors, where advanced simulations run in parallel with lively in-person debates, and where an AI wingman helps each officer reach a higher potential without supplanting the leadership lessons gleaned from human experience. By embedding AI not as a novelty but as a fundamental tool, we can produce graduates primed for an era defined by human–machine collaboration. The challenge is to keep that synergy under discipline, maintaining accountability and moral depth. If we do so properly, the collaboration becomes an engine of creativity, forging agile thinkers adept at integrating technology into the art of command.

Additional Suggested Resources

  1. Harshal Akolekar et al., “The Role of Generative AI Tools in Shaping Mechanical Engineering Education from an Undergraduate Perspective,” Scientific Reports 15 (2025): Article 9214.
  2. Shahida Rehman, “Digital Faculty Twins: Beyond Virtual Instructors – A Bliss or A Boomerang?” LinkedIn, March 24, 2025.
  3. Rachel Slama, Nelson Lim, and Douglas Yeung, eds., Leading with Artificial Intelligence: Insights for U.S. Civilian and Military Leaders on Strengthening the AI Workforce (Santa Monica, CA: RAND Corporation, 2024).
  4. Dan Hawkins, “Tailoring Generative AI: A Secure Sandbox and The Need for Role-Specific Training and Resources,” Air Force Research Laboratory – WIN the FUTURE (news article), April 1, 2025.

Maj. Cooper is the Deputy Commander for US Space Forces Korea and is currently enrolled in Air Command and Staff College (ACSC).

 

[2] Hodan Omaar, “How Innovative is China in AI?” Information Technology & Innovation Foundation, August 26, 2024.

[4] Lixiang Yan et al., “Promises and Challenges of Generative Artificial Intelligence for Human Learning,” Nature Human Behaviour 8, no. 10 (2024): 1839–1850.

[5] Yan et al, “Promises and Challenges.”

[6] Junfeng Jiao, Saleh Afroogh, Kevin Chen, David Atkinson, and Amit Dhurandhar, “The Global Landscape of Academic Guidelines for Generative AI and LLMs,” Nature Human Behaviour 9, no. 4 (April 2025): 638–642.

[7] Omaar, “How Innovative is China in AI?”
