Artificial intelligence is rapidly weaving its way into campus life, and our community is learning to navigate it in real time.
In a multi-week research project, our Advanced Decision Making with Analytics class set out to understand how generative AI is being used and perceived by students, faculty and staff. Our research began with a simple question: How are people on campus using AI? The answers weren’t so simple. We uncovered differences in use, trust and confidence through interviews, training sessions and a campus-wide survey that drew 362 responses from students, staff and faculty. Understanding these differences is crucial: they highlight where support is most needed as we prepare for the inevitable integration of AI into our academic and professional future.
Students were the most likely group to report regular use. Many faculty and staff expressed hesitation, not out of disapproval but out of unfamiliarity. This divide is rooted in access and understanding. ChatGPT emerged as the most used tool, and across all roles, AI was applied to tasks like writing, content creation and coding. Students especially leaned on it for writing support, learning assistance and idea generation.
Although generative AI is in use across campus, adoption has not been consistent across disciplines. About half of business majors are regular users, likely due to exposure through courses such as Business Analytics. In contrast, students from the School of Music, Theatre, and Dance reported minimal engagement. As one student noted: “AI cannot sing in the 1:1 studio class of a voice professor … it has the potential to create some inequities in learning conditions.” Still, these concerns highlight opportunities: AI can’t replace creativity or hands-on learning, but with guidance, it can support both artistic and analytical growth. By embracing AI thoughtfully, we can enhance, not diminish, human-centered disciplines and better equip students for evolving creative industries.
Some voiced fears about AI’s impact on originality and job security. As one participant put it, “People become overly reliant and can’t do things themselves.” Another added, “We are training ourselves out of independent thought.” These sentiments reflect a deeper concern that overreliance on AI may erode human skills. It isn’t just about tools — it’s about what kind of thinkers we want to become.
For many, the biggest barriers weren’t technical but educational. Respondents struggled to craft effective prompts and reported limited access to advanced tools. Nearly half cited the need for clearer guidance as their top challenge.
Privacy concerns also surfaced. One user shared, “I always must catch myself [redacting] personal information before prompting AI.” Hesitation like this underscores the importance of ethical considerations. Addressing these concerns now ensures that future classrooms can adopt AI in a way that protects personal data and maintains trust.
One participant summed up the unease: “I’m fed up with how much AI has co-opted higher education and [the] business world … it’s [making] young people [unable] to think critically.” Disillusionment like this can’t be dismissed; it emphasizes the need for education and more institutional guidance. There’s also a clear demand for support. One faculty member said, “I’d [teach it]. But I’m not well-versed enough in it.” That isn’t disinterest; it’s a call for help.
Our training sessions showed that even small improvements in prompting build trust and boost usage. One trainee said, “I am confident that I will use ChatGPT much more frequently to draft clearer emails, run more user-friendly reports that others can benefit from, etc. I have already started using it. I find that it is like having my own personal assistant! So, thank you for spearheading this project. I know it will become a huge help.”
In our survey, 50% of faculty and 66% of students said they want stronger ethical guidance.
AI is already part of education; we must empower our community to use it wisely.
Here’s how we can move forward:
- Deepen education through tailored training.
Campus-wide AI workshops are a start, but discipline-specific training and one-on-one support are more relevant and effective. Training can be customized to reflect the specific language, tools and tasks of a given discipline, whether that means analyzing data, drafting content or exploring creative ideas. AI can also assist in one-on-one support by acting as a live demonstration tool, helping users test prompts, receive feedback and explore discipline-specific applications in real time.
- Clarify policies to empower ethical use.
Clear, accessible policies create transparency for everyone. For faculty, additional resources can make it easier to update course content and communicate expectations. For students, knowing where, when and how AI is permitted builds trust and minimizes confusion. When both faculty and students have shared guidance, it fosters a more consistent and ethical learning environment.
- Improve access to helpful tools.
OpenAI’s free access to ChatGPT+ for students through May is helpful but temporary. We’ve seen firsthand how full access to tools like ChatGPT+ significantly enhances learning, creativity and research. Ensuring broader availability, especially in academic settings, can empower students and faculty to explore AI’s full potential.
The conversation around AI is just beginning. How we respond today will shape our classrooms and workplaces tomorrow.
This research shows that AI is already influencing how students learn, how faculty teach and how staff support both. If we ignore the gaps in confidence, access and policy, we risk leaving parts of our community behind. But if we adapt — with clear guidance, relevant training and ethical support — we can build a more inclusive, innovative academic environment. The choices we make now will define not only how AI is used in education, but also how we prepare our graduates to use it responsibly in their careers.