Mar 22 2021

AI-Powered Content Filtering Adds New Protection Powers

Recent innovations make content filtering more effective and broaden its role in improving student learning and engagement across schools.

Schools are responsible for protecting students from harmful internet content, which requires K–12 IT teams to block harmful material while letting through only content that is safe and educational. But past efforts have often fallen short. The internet is constantly evolving and adding content, and students are notoriously ingenious at subverting content filtering measures.

However, the newest content filters might be able to outsmart the smartest would-be student hackers, thanks to artificial intelligence (AI) and machine learning (ML). Lightspeed Systems has been working in the student online safety space for more than 20 years. Recently, the company announced several patents related to the technology that powers its enterprise-level filtering solution, Lightspeed Filter (formerly Relay).

The first patent covers an agent that installs a proxy on the student's device, filtering all content locally.

“When the proxy is on the device, everything is faster, and there’s far less latency,” says Carson McMillan, Lightspeed CTO. “The information doesn’t need to be sent to a server, and there’s more power and control on what’s happening right on the device. It enables schools to scale and be on more devices without the user incurring any slowness.”
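
In rough terms, the on-device approach works like a local proxy that makes the allow-or-block decision before any request leaves the machine. The following is a minimal sketch in Python, using only the standard library; the port, blocklist domains, and policy logic are illustrative assumptions, not Lightspeed's actual implementation, and it handles only plain HTTP GET requests.

```python
import http.server
import socketserver
import urllib.parse
import urllib.request

# Hypothetical blocklist; a real agent would sync policy from the district.
BLOCKED_DOMAINS = {"example-gambling.test", "example-videogames.test"}

class FilteringProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        host = urllib.parse.urlsplit(self.path).hostname or ""
        # The allow/block decision happens locally, with no round trip
        # to a filtering server, which is the latency advantage
        # McMillan describes.
        if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"Blocked by the on-device content filter\n")
            return
        # Allowed: fetch the page and relay it back to the browser.
        with urllib.request.urlopen(self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            for name, value in upstream.getheaders():
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Point the browser's HTTP proxy setting at 127.0.0.1:8899 to route
    # traffic through the local filter.
    with socketserver.ThreadingTCPServer(("127.0.0.1", 8899), FilteringProxy) as server:
        server.serve_forever()
```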

The second patent is for what Lightspeed calls “proxy injection technology,” which inserts functionality into websites and apps on the device to collect information on what users are doing.
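
The patent's specifics aren't public, but the general idea of rewriting a page as it passes through a proxy can be sketched briefly. In this hypothetical Python example, a monitoring script tag (the URL is a stand-in) is spliced into the HTML before it reaches the browser:

```python
# Hypothetical monitoring snippet; the URL is a placeholder,
# not a real agent endpoint.
MONITOR_SNIPPET = b'<script src="https://agent.local/monitor.js"></script>'

def inject_monitor(html: bytes) -> bytes:
    """Insert the monitoring snippet just before </body>, if present."""
    marker = b"</body>"
    if marker in html:
        return html.replace(marker, MONITOR_SNIPPET + marker, 1)
    return html + MONITOR_SNIPPET  # fall back to appending at the end

page = b"<html><body><h1>Lesson</h1></body></html>"
print(inject_monitor(page).decode())
```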

How AI and ML Help Filter Content

“One challenge with the internet is that it’s dynamic,” says Brett Baldwin, vice president of sales for Lightspeed. “AI and ML add context to the analysis so the algorithms can be changed on the fly.” One of AI’s most significant contributions to content filtering is its ability to categorize the websites and videos in Lightspeed’s database.

“We have hundreds of web crawlers automatically going out and millions of points of student data coming in,” says McMillan. “We know what the most popular websites are. AI helps us check and recheck all that content, categorize it, and make sure it’s accurate.”
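
Production categorization relies on trained ML models over crawled content; as a rough illustration of the step itself, here is a toy keyword-scoring classifier in Python. The categories and keyword lists are invented for the example:

```python
import re
from collections import Counter

# Hypothetical categories and keywords; real systems learn these from data.
CATEGORY_KEYWORDS = {
    "education": {"homework", "lesson", "history", "science", "teacher"},
    "games": {"play", "score", "multiplayer", "level", "arcade"},
    "gambling": {"casino", "betting", "jackpot"},
}

def categorize(page_text: str) -> str:
    """Score a page against each category and return the best match."""
    words = Counter(re.findall(r"[a-z']+", page_text.lower()))
    scores = {cat: sum(words[w] for w in kws)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("Today's lesson: a science homework guide"))  # -> education
```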


AI and ML also help determine safety by analyzing and updating contextual information.

For example, a student might search for why a historical figure committed suicide, with “suicide” being a keyword the AI has flagged. Because it’s within an educational context, the content would not be blocked, but administrators would be alerted to see if this type of information should be filtered in the future. The ML then adjusts the algorithm to block or allow similar content going forward.

However, if the context is more concerning, as when a student drafts a Google Doc that suggests a potential for self-harm, administrators are alerted and the appropriate school team, such as counseling, can intervene.
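
One way to picture this decision logic is as a small triage function: the same flagged term leads to different actions depending on the words around it. The term lists and action names below are hypothetical stand-ins, not Lightspeed's actual rules:

```python
# All term lists here are illustrative; a real system would use trained
# models and far richer context than bag-of-words matching.
FLAGGED_TERMS = {"suicide", "self-harm"}
EDUCATIONAL_CUES = {"history", "historical", "essay", "research", "biography"}
CONCERNING_CUES = {"myself", "tonight", "goodbye", "plan"}

def review(text: str) -> str:
    words = set(text.lower().split())
    if not words & FLAGGED_TERMS:
        return "allow"
    if words & CONCERNING_CUES:
        return "alert_counseling_team"  # human intervention, content held
    if words & EDUCATIONAL_CUES:
        return "allow_and_notify"       # admins review for future tuning
    return "block_pending_review"

print(review("why did the historical figure commit suicide essay"))
# -> allow_and_notify
```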

Data Analysis Adds Value to Content Filtering

AI-powered content filtering can do much more than analyze websites for appropriateness or pick up on red flags requiring action. It can also provide meaningful data for decision-making.

Most schools aren’t using these administrative analysis tools yet, but McMillan hopes more will in the future.

“We’re not just building a data platform for our customers, we are building much more,” he says. “One key component is for customers to benchmark their results compared with other schools and districts. When data analysis tools are adopted, we’re seeing great results.”
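
At its simplest, that kind of benchmarking means placing one district's metrics within a peer distribution. A toy Python example with invented numbers:

```python
import statistics

# Hypothetical blocked-request rates for six peer districts.
peer_block_rates = [0.042, 0.051, 0.038, 0.060, 0.047, 0.055]
our_rate = 0.049

below = sum(rate < our_rate for rate in peer_block_rates)
percentile = 100 * below / len(peer_block_rates)
print(f"Peer median block rate: {statistics.median(peer_block_rates):.3f}")
print(f"Our district sits near the {percentile:.0f}th percentile of peers")
```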
