For example, SDSU has built an AI assistant that allows students to ask questions about their instructors’ course content, he says.
“We’re taking some lessons back to us, so it forms a great partnership and makes us all stronger together,” Suchy says.
Because TritonGPT operates in a closed, secure environment, the university allows employees to use P3-level data, such as student education records and employee personnel records, but not P4 data, which includes personally identifiable information, he says.
The metadata is built into an in-house AI assistant, so instead of searching for data manually, users can simply ask the assistant for what they need, Pollak says.
RELATED: Effective AI requires effective data governance.
“Analysts can ask the AI assistant, ‘What student count field should I use if I want to build a report to show third-week enrollment?’ and it directs the analysts to the correct field and lists the protection level of the data and other characteristics,” he says.
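Neither university has published the internals of its assistant, but the workflow Pollak describes, a natural-language question resolved against a field-level metadata catalog, can be sketched roughly as follows. Every field name, protection label, table name and the find_field helper here are invented for illustration; a production assistant would use an LLM with retrieval over the catalog rather than keyword matching.

```python
# Hypothetical sketch of a field-level metadata catalog like the one
# Pollak describes; all field names, labels and tables are invented.
FIELD_CATALOG = {
    "third_week_enrolled_count": {
        "description": "Official student count at the third-week enrollment census",
        "protection_level": "P3",
        "source_table": "enrollment_census",
    },
    "daily_enrolled_count": {
        "description": "Unofficial rolling daily headcount",
        "protection_level": "P2",
        "source_table": "enrollment_daily",
    },
}

def find_field(question: str) -> list[dict]:
    """Return catalog entries whose text matches terms in the question.

    A real assistant would answer with an LLM retrieving over the
    catalog; plain keyword matching stands in for that here.
    """
    terms = question.lower().split()
    matches = []
    for name, meta in FIELD_CATALOG.items():
        text = (name + " " + meta["description"]).lower()
        if any(term in text for term in terms):
            matches.append({"field": name, **meta})
    return matches

# find_field("third-week enrollment") returns the
# third_week_enrolled_count entry along with its P3 protection level.
```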
While the university encourages employees to use TritonGPT, it also makes OpenAI’s ChatGPT and Microsoft Copilot available. The university’s commercial agreements with the two vendors protect university data, so the IT department allows staff and faculty to use data up to the P3 level with those tools.
“They have safe handling practices with our data. It’s all covered under our security agreement with those vendors,” Suchy says.
However, if employees use the free version of ChatGPT, they should use only publicly available data, never internal university data that must be protected, he says.
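The tiered policy Suchy outlines, with enterprise tools cleared for data up to P3 and the free ChatGPT tier limited to public data, amounts to a simple lookup from tool to maximum protection level. The sketch below is only an illustration: it assumes a P1 (public) through P4 (PII) scale consistent with the P3 and P4 levels named above, and the per-tool ceilings are inferred from this article, not published policy.

```python
# Illustrative sketch of the tiered access rules described above.
# Per-tool ceilings are assumptions based on the article, not policy.
MAX_LEVEL = {
    "tritongpt": 3,           # closed, secure environment: up to P3
    "chatgpt_enterprise": 3,  # covered by the university's vendor agreement
    "copilot": 3,             # covered by the university's vendor agreement
    "chatgpt_free": 1,        # no agreement: publicly available data only
}

def may_use(tool: str, data_level: int) -> bool:
    """True if data at this protection level may be used with the tool."""
    return data_level <= MAX_LEVEL.get(tool, 0)

assert may_use("tritongpt", 3)      # P3 student records: allowed
assert not may_use("tritongpt", 4)  # P4 personally identifiable info: blocked
assert not may_use("chatgpt_free", 2)
```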
DISCOVER: Universities provide resources to create and use proprietary chatbots.