EDTECH: Where do you anticipate AI tools will be practically applied on your campus in the next year?
MCINTOSH: I’ve recently tasked one of our team members with downloading the OpenAI version of the large language model and putting it inside the Richmond ecosystem, and then we’re putting together our own data based on our content over the years. It’s a small start, but we have a prototype up, so in the new year I’m going to be asking some of the vice presidents to think about some use cases where they could query this. This could help us shape a new framework for how we want to market our institution using our core content.
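What McIntosh describes is essentially a retrieval-style prototype: index the institution’s own content, then let a locally hosted model answer questions against it. The sketch below is a minimal illustration under that assumption; the model, library choices, sample documents and helper names (embedder, retrieve, answer) are placeholders for this article, not details of Richmond’s actual deployment.

```python
# Minimal sketch of an internal "query our own content" prototype: embed
# institutional documents, retrieve the most relevant ones for a question,
# and hand them to a locally hosted model as context. All names and data
# here are illustrative assumptions.

from sentence_transformers import SentenceTransformer
import numpy as np

# 1. Embed the institution's content (brochures, web copy, strategic plans).
embedder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Admissions brochure text ...",
    "Strategic plan excerpts ...",
    "Marketing copy from past campaigns ...",
]
doc_vectors = embedder.encode(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question])[0]
    scores = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# 2. Build a prompt that grounds the model in the retrieved passages.
def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only this institutional content:\n"
        f"{context}\n\nQ: {question}\nA:"
    )
    # The actual completion call depends on whatever model the campus hosts,
    # so this sketch stops at the prompt rather than inventing that API.
    return prompt

print(answer("How should we describe our core strengths to prospective students?"))
```

The vice presidents’ use cases McIntosh mentions would map onto the question passed to retrieve and answer; the marketing framework comes from the institutional content the index is built on.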
ANDRIOLA: There’s going to be a battle between pedagogical comfort as these tools are introduced and what students can do on their own, outside of what the university can control. That tension’s going to be there for a while. Longtime faculty say it’s the same thing they saw with the calculator: “That’s cheating if you use that calculator to work on your calculus homework.” So, I think we’re going to have that.
In terms of our students going out into the world, we have to think about what we want to see in the classrooms. Because, when they go out there, they’re going to be expected to use these tools, and using them could be a competitive advantage. One of my favorite phrases these days is, “AI is not taking your job. The person who knows how to use AI is going to be the one to take your job.” Because they’re going to be 10 times more productive and deliver higher quality, or do something three times as fast because they’re using a better tool. If I’m digging a foundation with a shovel, and you bring in a backhoe, guess who’s getting the contract? We can throw around all the buzzwords we want, but it’s a better tool that makes me a more productive, more valuable employee.
EDTECH: Cybersecurity is always a major priority for higher ed IT leaders. Where are you on the adoption of zero trust? Can it be implemented effectively in a university setting?
ANDRIOLA: Zero trust has tremendous promise for us, but I think the implementation is going to take time for the products to mature and for the implementation methodologies to really be figured out by organizations before we’ll realize some of those benefits. I also think that the horizon is long: If I can compare it to the AI journey, the headwinds you’re up against are partly technical and partly about mindsets. Mindsets have to change before you can start to change toolsets. With zero trust, the technologies will come. We’ll figure out implementation. But there’s a really different headwind in this situation: It’s called the bad actor community. We’re talking about a set of people who get up every day thinking, “I want to extract something from you that’s going to hurt you.” And that headwind is going to be really strong. So, with every step we take forward with zero trust, you’ve got this community out there finding the weak spots in whatever you deploy that you think is better. And they have these AI tools and are using them at a much faster pace than we are. So, I like the philosophy, but I think the horizon is going to be longer because we have a different headwind here than we traditionally do.
MCINTOSH: My outstanding director of information security has wooed me over to zero trust. I don’t know if I want to lead with zero trust as the language around the campus community, but the concepts behind it, the principles behind it, are guiding our steps now. I think we have some more rudimentary intermediate steps that we need to take to bolster our security posture, but I would say, long-term, I see the benefits of it. Anything that can help protect our data from the adversaries and bad actors that are out there, I’m all for it. We’re at the very beginning of that conversation, but zero trust is definitely part of our vernacular, and we’re planning for it.
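Both answers come back to the same zero-trust principle: never trust by default, verify every request. The sketch below illustrates that idea in a few lines of Python; the request fields and policy rules are illustrative assumptions, not any particular campus’s or vendor’s implementation.

```python
# Illustrative sketch of per-request zero-trust evaluation: access is granted
# only when identity and device checks pass for this specific request, and
# network location alone never confers trust. All fields and rules here are
# hypothetical examples of the principle, not a product's policy engine.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool            # did the user pass multifactor authentication?
    device_compliant: bool        # is the device managed and patched?
    recently_reauthenticated: bool  # step-up check for sensitive resources
    resource_sensitivity: str     # "low", "moderate", or "high"

def evaluate(request: AccessRequest) -> bool:
    """Verify identity and device posture on every request; no implicit trust."""
    if not (request.mfa_verified and request.device_compliant):
        return False
    if request.resource_sensitivity == "high" and not request.recently_reauthenticated:
        return False  # sensitive data demands step-up verification
    return True

# Example: a managed, MFA-verified session reaching moderate-sensitivity data.
print(evaluate(AccessRequest("jdoe", True, True, False, "moderate")))  # True
```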