The conference room buzzed with quiet conversation as educators shuffled in, some carrying steaming cups of coffee, others clutching notebooks and laptops, ready to wrestle with the questions AI has forced onto the education system. Hosted by the National Organization for Student Success, the summit on AI in education attracted teachers, administrators, and industry professionals alike. I was there representing Thea Study, an AI-powered study tool, to get a sense of how educators felt about AI’s growing presence in their field. The energy in the room was a mix of curiosity and hesitation—an unspoken divide between those eager to embrace AI’s potential and those who feared what it might mean for the future of teaching.
The first session, led by Dr. Clifford Poole from Appalachian State University, focused on an area I hadn’t thought much about before: student support centers. AI is already creeping into classrooms, but what about the people who help students outside of them? Dr. Poole wasted no time diving into the ways AI is being used in academic advising, career counseling, and mental health support. He told a story about how he’d directed one of his more demanding students—who needed immediate responses to a steady stream of questions—to an AI-powered chatbot, giving them access to a 24/7 support system. The way he framed it, AI wasn’t replacing human advisors; it was just giving students another avenue of support.
I could see how it would take some of the pressure off overworked advisors and help students feel like they always had somewhere to turn. But this approach raises real concerns. Where do we draw the line? If a student relies too heavily on AI for emotional support, does that ultimately isolate them rather than help them? Dr. Poole acknowledged these concerns but stood firm: AI should never replace human advisors, just assist them. He made it clear that there were certain areas AI had no business touching—situations involving suicidal thoughts, interpersonal violence, or any scenario where a human’s ability to read between the lines is essential.
The ethical considerations came up next: data privacy, bias, and transparency about sources. Dr. Poole stressed that students should never share personal health information or home addresses with AI tools. He also raised an interesting point about bias, explaining that while AI can make recommendations, it must be programmed to avoid reinforcing racial, gender, or socioeconomic stereotypes. As he spoke, I found myself wondering how the creators of AI are addressing these concerns, and whether some of them can ever be fully solved.
By the end of the session, it was clear that AI has a role in student support, but it’s one that requires careful thought. I left thinking about how much of this conversation boiled down to trust—trusting students to use AI responsibly, trusting institutions to create ethical policies, trusting the technology itself to be a helpful tool rather than an unchecked force shaping students’ lives in ways we aren’t prepared for.
The next session took a sharp turn into a different, but equally complicated, realm: students’ use of AI. Dr. Emily Suh, who works with students in remedial literacy and math courses, opened with a question: “What are your biggest concerns about AI?”
The room came alive with responses. Cost. Accreditation. Plagiarism. Curriculum design. Someone near the front, with a skeptical frown, said, “I’ve caught more students using AI to cheat this year than I have in the last five years combined.” A murmur of agreement spread through the room. Another teacher chimed in with a story about a student who had asked ChatGPT to generate discussion answers in real-time during class. “It’s not just homework anymore,” she said. “They’re using it mid-conversation.” Heads nodded.
There was a clear sense of frustration, but also a recognition that AI wasn’t going anywhere. Dr. Suh shifted the discussion, telling us about an experiment she ran with her students. Instead of banning AI, she had them use it in various writing assignments. In one, students asked AI to generate five different versions of their own thesis statement and then analyzed which was the strongest. In another assignment, students had to take an AI-generated essay and reverse-engineer the outline. In Dr. Suh’s course, reflection made up 25% of each assignment’s grade: students were graded not just on their final output, but on how they used AI as a tool.
This approach fascinated me. Rather than treating AI as a threat, she was treating it as an opportunity to teach critical thinking. Still, I could tell that many in the room weren’t convinced. “But how do we make sure they’re learning?” someone asked. Dr. Suh’s answer was simple: “Scaffolding.” She talked about how she had introduced more frequent check-ins throughout assignments so that students couldn’t just submit an AI-generated final product. The general consensus in the room was that answers alone were no longer enough—teachers now had to focus on how students arrived at those answers, because they couldn’t always trust that the final product was truly the student's work.
By the time we reached the next session, I had heard plenty of concerns about students using AI, but what about faculty? The educators from Valencia College tackled that question head-on. They had spent the summer running a professional development course on AI for teachers, and their findings were revealing.
To start, they had the audience share their institutions’ AI policies. The results shocked me. Not a single school in attendance had a formal AI policy. Many had committees that had been working on guidelines for months, even years. A K-12 teacher in the room shook her head and said, “Our school has no stance at all. It’s entirely up to the teachers.” More nods. A representative from McGraw Hill explained that their AI policy prohibited authors from including AI-generated content in textbooks. It was clear that, across the board, there was no consensus.
Another takeaway from this session stood out to me: most of the AI training for faculty was not intended to revolutionize their lectures or assignments, as I had expected; it was meant to build AI literacy so faculty could save time on administrative tasks. And honestly, I get it. If AI can draft a lesson plan or letter of recommendation in seconds, why wouldn’t you take advantage of that? At one point, they had us play a game called “Spot the Bot,” where we had to guess which of four short texts was AI-generated. Fewer than a quarter of the room got it right. I certainly didn’t. And neither did ChatGPT.
This activity was illuminating. I wasn’t able to catch the AI, and it convinced me that detecting AI-generated text, if it isn’t impossible now, soon will be. Some attendees insisted they could always tell, citing repetitive phrasing or off-topic tangents. Personally, I think they’re wrong. AI has advanced enormously in just the past few months. One suggested solution was to encourage students to use personal stories and their own “voice” more. But I don’t think that works either. Students could simply upload their past papers or personal creative writing to AI, train it on their style, and have it produce an essay in their voice. This isn’t an arms race between generative AI and AI detectors. There’s a clear winner, and it’s already here.
The AI Summit left me with plenty to think about, but one comment has stuck with me more than any other. A veteran teacher, who had been in the field for decades, compared the rise of AI in education to the introduction of graphing calculators in the ‘90s. At the time, people panicked—would students stop learning math? Would calculators make them lazy? Now, no one blinks at a TI-84.
It made me wonder: how will we look back on the introduction of AI in a decade? Will today’s fears seem just as overblown? Or will AI’s impact on education be something far greater—something we should have prepared for more carefully?
One thing is clear: institutions and educators can’t afford to wait. AI isn’t a distant possibility; it’s already reshaping education. The challenge isn’t stopping it—it’s deciding how to guide it. Schools have to define policies that work with this new technology rather than against it. If they don’t, they won’t stop students from using AI. They’ll just lose the opportunity to shape how it’s used.