
Written by Page Stites, Upper School Director
There’s no lack of discussion these days on the role of Artificial Intelligence in the world of business and its impact on the economy. In just the past few weeks, we’ve seen a senior researcher at OpenAI publicly resign her job in protest over “the erosion of OpenAI’s principles to maximize engagement” and the advent of advertising on AI platforms. We’ve seen an AI startup founder and investor write an essay that went viral, in which he sounds the alarm over the pace of development of AI systems as they begin to program their next iterations themselves:
“This isn’t a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.”
There’s much more to choose from, but I was particularly struck by a recent essay by the CEO of Anthropic, Dario Amodei, titled “The Adolescence of Technology,” in which he writes the following:
“I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
Closer to home, we’ve continued our efforts to have open and transparent conversations with students and faculty about AI and the role it should and shouldn’t play at Field, all with an eye toward leveraging the power of AI while educating students about responsible and ethical use. Given the broad societal impact of these tools and their pace of development, we haven’t “solved” the problem of AI in education, and that’s not really the point. Instead, we’re working hard to engage with the topic, to remain healthily skeptical but also open-minded, and to take the same inquiry-based approach we ask of our students in the classroom.
One faculty member wrote to me recently to share the ways she was experimenting with developing an interactive AI interface that might help students “use generative AI as a research tutor to test the viability, scope, and evidentiary basis of a historical research question without allowing the AI tool to write the thesis, argument, or research question itself.” She shared that she specifically designed the trial so that “it demonstrates boundary-setting, refinement through questioning, and responsible academic use.” She struck a note of optimism, recognizing the potential to help students, and also of concern, wondering about placing this tool in front of students without further experimentation and testing.
In another conversation, a teacher shared his story of using an AI tool to create multiple explanations in different formats for a difficult concept and presenting those—with full transparency about the fact that he had used AI to create them, albeit with careful vetting and review—to his students as alternate entry points to the topic. He was excited to share his innovative approach and the novel ways he felt it helped him expand his reach and impact as a teacher. At the same time, he wanted to discuss his students’ reactions to it: while some found the explanations helpful, others took a stand and refused to engage with them because they had been AI-generated.
All of us, students and faculty, are grappling with what this all means.
Last fall, I wrote about the launch of our student “AI Working Group” and previewed the group’s trip to a national conference in early December titled “From Classrooms to Careers: How AI is Shaping Our Paths.” Alongside the six students from the AI Working Group, three faculty members and I attended a parallel conference, the National Association of Independent Schools’ “NAIS Symposium on AI and the Future of Learning.”
At the conference, the students worked intensively to learn and practice “deliberative democracy,” in which they discussed a range of AI-related issues with their peers to dig deeply into the topic. This short video, produced by the organizers of the conference, captures many of the elements of the experience for our students: Conference Video
The group returned to Field full of enthusiasm both for the topic of AI in school and for the deliberative democracy process they learned at the conference. While the students are excitedly planning to facilitate discussions in the community, we also recognize that we want to gather more information from students and faculty before diving into that work.
To that end, the group is currently developing a survey for faculty and students that asks not only about the nature and extent of each group’s AI usage, but also about each group’s awareness of, perceptions of, and feelings about the other group’s usage. Our hope is that the survey will give us valuable insights into the patterns of AI usage at Field. Beyond this, we believe that comparing the usage data with the information we collect about perceptions will open up necessary and important conversations in the community about AI. We hope these conversations will serve as the foundation for shaping the next direction of our school policies around responsible and ethical AI use at Field.
At Field, we think and talk a lot about what it means to be a community of learners. For our faculty and our students, this means being willing to grapple with complex questions, to accept and even embrace ambiguity, and to be willing to adapt and change as the world changes around us. This work, which brings students and faculty together around AI—one of the most complex, ambiguous, dynamic, and critical topics we face as a society today—is a perfect example of why we believe this approach is beneficial to our entire community.
