
Sep 19 2024
Security

Are Your AI Chatbots Giving Away More Information Than They Should?

Higher education institutions have been embracing artificial intelligence, but new implementations bring security risks — including from an overly chatty bot.

One of the appeals of welcoming an AI-trained chatbot to campus is the ability to answer any question, at any time, on any day. It can be a lifesaver for overworked staff burned out from answering the same questions over and over while more important projects fall by the wayside.

It’s no wonder, then, that with dramatic advancements in large language models — like the one famously powering ChatGPT — chatbots are popping up at universities all over the country to handle all types of questions and come to the rescue in all kinds of situations:

  • A professor grading papers on the weekend gets locked out of his account: Ask the chatbot.
  • A student realizes in the middle of the night that she missed the deadline to register for classes: Ask the chatbot.
  • A hungry student wants to know when the nearest dining hall opens: Ask the chatbot.
  • A prospective student wants to check on the status of his application while at his family’s Thanksgiving dinner: Ask the chatbot.
  • A remote worker wants to know how to amend her benefit enrollment declarations: Ask the chatbot.
  • A cybercriminal with hacked credentials decides it’s time to step up an attack and access personally identifiable information, medical records and financial information for the student he’s impersonating: Ask the chatbot?
  • A bad actor posing as a researcher wants to know how many students have a specific medical condition, or are accessing mental health services, or are receiving federal aid, or are behind on their tuition: Ask the chatbot — and, if you’re working in IT or compliance, cross your fingers and hope the bot is going to say “no.”

Segmenting sensitive data and other protected information on a college campus has long been a security best practice. But introduce a bot into the ecosystem, and suddenly your immediate priority becomes putting up reinforced guardrails that even AI can’t breach.

How Much Do Chatbots Need to Know?

If higher education networks are set up carefully and correctly, the risk of a nosy chatbot spilling sensitive information should be low. Just like any other user on the network, the chatbot should have access only to the information it needs and should be restricted from everything else. It’s a version of zero trust, but for chatbots — which shouldn’t be trusted to go anywhere on the network without a human administrator’s permission.
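
As a minimal sketch of what that can look like in practice, assuming a retrieval-style chatbot and using made-up roles and a toy document store, access can be deny-by-default: the bot searches only the collections explicitly granted to the asking user’s role and refuses everything else.

    # Minimal sketch of deny-by-default retrieval for a campus chatbot.
    # The roles, collection names and toy document store are illustrative only.
    DOCUMENT_STORE = {
        "dining_hours": ["The main dining hall opens at 7 a.m. on weekdays."],
        "hr_benefits_faq": ["Benefit elections can be amended during open enrollment."],
        "student_records": ["(sensitive - should never be reachable through the bot)"],
    }

    # Collections each role is explicitly granted; anything not listed is denied.
    ALLOWED_COLLECTIONS = {
        "student": {"dining_hours"},
        "staff": {"dining_hours", "hr_benefits_faq"},
    }

    def retrieve_for_chatbot(user_role: str, collection: str) -> list[str]:
        granted = ALLOWED_COLLECTIONS.get(user_role, set())
        if collection not in granted:
            # Refuse rather than answer, and surface the denial for auditing.
            raise PermissionError(f"chatbot denied '{collection}' for role '{user_role}'")
        return DOCUMENT_STORE[collection]

    # Asking about dining hours succeeds; the same bot asked for student_records
    # raises PermissionError instead of answering.
    print(retrieve_for_chatbot("student", "dining_hours"))

In a real deployment, the same check would sit between the bot and every data source it is allowed to query, and every denial would be logged for review.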

Setting up a chatbot correctly, of course, is easier said than done, and the consequences of a mistake could be disastrous. Spilling PII is not just a nightmare for the person whose information is stolen; it can also lead to lasting reputational damage for the institution and potential compliance penalties from the federal government.

When configuring a chatbot’s access permissions, it’s useful to remember that nothing about chatbots makes them immune to the data privacy challenges plaguing the rest of the internet. From the very beginning, when a chatbot is trained on real-world examples to build its underlying model, to the moment it is released into the world and uses new queries from users to continue learning, data is being ingested. That potentially includes personal data.

Worse still, data handed over to publicly available chatbots such as ChatGPT disappears into an opaque database used for machine learning, where it can be matched against information gleaned from other sources, allowing the chatbot to build a profile of a user when those data sets merge.

Jennifer King and Caroline Meinhardt, researchers at the Stanford University Institute for Human-Centered Artificial Intelligence, noted in an article on the Stanford HAI website that “generative AI tools trained with data scraped from the internet may memorize personal information about people, as well as relational data about their family and friends.”

There have also been reports of large language models being tricked into revealing things they shouldn’t, such as internal system information and instructions for committing criminal acts.

The thing to remember is that any time a chatbot learns something, that’s data that could potentially be shared with the wrong people. The key is to make sure administrators know exactly where that data is going and to keep it tightly secured.
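
One hedged illustration of keeping that flow visible, assuming a simple pattern-based filter (the example regexes catch only obvious SSN-like numbers and email addresses, nothing more), is to screen each drafted answer for personal data and record every exchange so administrators can see exactly what went out:

    # Illustrative output-side screening and audit logging for a chatbot response.
    import logging
    import re

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("chatbot.audit")

    # Deliberately simple example patterns, not a complete PII detector.
    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like numbers
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    ]

    def release_response(user_id: str, draft: str) -> str:
        """Redact obvious personal data from a drafted answer and record the exchange."""
        cleaned = draft
        for pattern in PII_PATTERNS:
            cleaned = pattern.sub("[REDACTED]", cleaned)
        audit_log.info("user=%s redacted=%s", user_id, cleaned != draft)
        return cleaned

    print(release_response("u123", "Your advisor is jane.doe@example.edu, SSN 123-45-6789."))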

RELATED: How zero trust can protect against evolving cybersecurity threats in higher ed.

Building a Chatbot That Behaves the Way You Want It To

The only way to guarantee that data remains secure and doesn’t commingle with data a public AI has already ingested is for colleges and universities to build proprietary chatbots unique to their institutions. There are other benefits to this approach as well: a chatbot trained on a single institution’s own data can offer more personalized, specific answers and is better able to direct users to the right information.

The process of building a chatbot can seem daunting, and it is a time- and resource-intensive project, but the benefits outweigh the risks of using a prebuilt, third-party option. Trusted partners like CDW can help universities build custom chatbots. CDW has the experience and expertise to ensure data stays segregated and stowed far enough away from the AI that requests for personal data won’t be answered — at least, not without another layer of security on top.

Best practices for segmenting data from a chatbot include role-based access controls — a key component of zero-trust security — and privileged access management that protects the most personal and sensitive information colleges possess.
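
As a rough sketch of how those controls can combine (the category names, roles and the idea of a "privileged session" flag below are purely illustrative), role-based rules can govern everyday questions while anything touching a sensitive category requires a separately verified, elevated session before the bot proceeds:

    # Category names, roles and the privileged-session flag are hypothetical.
    SENSITIVE_CATEGORIES = {"medical_records", "financial_aid", "disciplinary_records"}

    def authorize_request(category: str, user_role: str, has_privileged_session: bool) -> bool:
        """Allow routine questions; require an elevated, audited session for sensitive ones."""
        if category in SENSITIVE_CATEGORIES:
            # Role alone is not enough for protected data.
            return user_role == "registrar" and has_privileged_session
        return True  # non-sensitive categories follow the normal role-based rules

    print(authorize_request("dining_hours", "student", False))      # True
    print(authorize_request("medical_records", "student", False))   # False
    print(authorize_request("medical_records", "registrar", True))  # True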

This article is part of EdTech: Focus on Higher Education’s UniversITy blog series.
