Nov 23 2021

What Does the Future of Content Filtering Look Like for K–12 Education?

Content filtering programs are evolving with the help of artificial intelligence, looking for signs of self-harm and patterns of behavior.

All K–12 school districts have some variation of content filtering in place. They must, in order to comply with the Children’s Internet Protection Act (CIPA). Since its passage in 2000, CIPA has required schools to protect students from inappropriate content online, including content or images that are obscene, contain child pornography or are harmful to minors.

Until recent years, this was where content filtering’s capabilities ended. The school prevented students from accessing harmful content by blocking certain sites, keywords and search terms while students were in the building.

The pandemic added another layer to content filtering needs. Suddenly, students were using school-provided devices — and doing schoolwork — at home. To continue filtering content, IT teams moved to cloud-based content filtering. Many of these programs are tied to students’ accounts, meaning that no matter where they log in or on what device, they can’t access blocked content.

Content filtering is getting smarter still, and although many districts have been slow to adopt more advanced filtering options, these programs are working to keep students safe and make IT administrators’ lives easier.


Incorporating Social-Emotional Health into Content Filtering

With traditional content filtering, IT teams could run a report to see what sites students have tried to access and how many attempts they have made. This works well at its most basic level.

Now, however, there are content filtering platforms that take student well-being into account.

With the shift to remote learning, educators lost visibility into student mental health and well-being. When teaching in person, educators could easily see if a student was upset, but it has been difficult for educators to know how students are feeling when they aren’t on camera or if they are absent from remote classes.

To address this, content filtering companies such as GoGuardian have programs that specifically work to protect students from self-harm and suicide. Instead of IT teams running diagnostics, GoGuardian’s Beacon sends an alert when it detects activity suggesting a student is considering self-harm or suicide.

READ MORE: Software keeps students’ mental health at the forefront.

Some districts have been slow to adopt this technology because of liability concerns. Who is responsible for acting on an alert that may come through at 3 a.m.? If they don’t act on an alert and something happens, will the school be held responsible?

Other districts have partnered with local emergency services, allowing first responders to receive alerts and perform a check on students, particularly if a notification comes through outside of school hours.
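How such alerts are routed varies by district and vendor; the sketch below is purely illustrative. It uses an invented `route_alert` function and placeholder contact lists, not any actual GoGuardian Beacon API, and shows one way a district might send after-hours alerts to a partnered emergency contact while in-hours alerts go to school staff.

```python
from datetime import datetime, time

# Hypothetical contact lists; a real district would define these in policy.
SCHOOL_STAFF = ["counselor@district.example", "principal@district.example"]
AFTER_HOURS_CONTACT = ["dispatch@local-ems.example"]

SCHOOL_DAY_START = time(7, 30)
SCHOOL_DAY_END = time(16, 0)


def route_alert(alert_time: datetime, student_id: str, reason: str) -> list[str]:
    """Return who should receive a self-harm alert, based on when it arrives.

    During school hours the alert goes to school staff; outside those hours
    it goes to the partnered emergency contact, mirroring the arrangement
    some districts have made with local first responders.
    """
    in_school_hours = SCHOOL_DAY_START <= alert_time.time() <= SCHOOL_DAY_END
    recipients = SCHOOL_STAFF if in_school_hours else AFTER_HOURS_CONTACT
    print(f"Alert for student {student_id} ({reason}) -> {recipients}")
    return recipients


# Example: a 3 a.m. alert is routed to the after-hours contact.
route_alert(datetime(2021, 11, 23, 3, 0), "student-123", "self-harm indicators")
```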

These signs of students’ social-emotional distress are detected with the help of artificial intelligence. AI’s use is growing throughout K–12 education, but it’s especially useful in content filtering.

Advanced Content Filtering Programs Learn and Look for Patterns

AI in content filtering also helps to make the programs smarter in other ways. For example, biology students frequently run into difficulties accessing materials they need for class because of general keywords blocked by the content filter. Rather than IT teams manually going into the system to grant access each time a student is blocked, AI-driven programs can learn to distinguish harmful content from material needed for, say, a biology research paper.
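A minimal sketch of that idea is below. It assumes a hypothetical page classifier (`classify_page`) and an invented allowlist of academic categories; it is not how any particular vendor’s product works. The point is simply that the decision weighs the page’s context, not just whether a blocked keyword appears on it.

```python
# Hypothetical categories a trained classifier might assign to a page.
ACADEMIC_CATEGORIES = {"biology", "anatomy", "health-education", "history"}
BLOCKED_KEYWORDS = {"blocked-term-1", "blocked-term-2"}  # placeholder terms


def classify_page(url: str, text: str) -> tuple[str, float]:
    """Stand-in for a machine learning model that labels a page with a
    category and a confidence score. A real system would call a trained
    classifier here; this stub always answers 'biology'."""
    return "biology", 0.92


def allow_page(url: str, text: str) -> bool:
    """Allow a page when the classifier is confident it is academic content,
    even if a blocked keyword appears on it; otherwise fall back to the
    traditional keyword rule."""
    category, confidence = classify_page(url, text)
    if category in ACADEMIC_CATEGORIES and confidence > 0.8:
        return True
    return not any(word in text.lower() for word in BLOCKED_KEYWORDS)
```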

Machine learning and AI could also be used to help identify and prevent school shootings. Research shows that school shooters frequently talk about their plans with others and conduct searches online while planning an attack (one common search is for information on the Columbine shooting). AI-powered content filtering could help administrators distinguish harmful behavior from a student researching a term paper by finding patterns or other red flags in students’ searches.

DISCOVER: Artificial intelligence helps keep K–12 districts safe from cyberattacks.

AI-powered content filtering can take a more holistic look at what students are searching to determine whether to send IT teams a low- or high-priority alert. The programs can also be set up to change permissions as students grow. Because students are frequently tied to an account throughout their school career, content filtering can set more restrictive parameters for younger students and relax them as students get older.
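The sketch below shows what grade-tiered policies and tiered alerting could look like in configuration terms, with invented category names and alert levels; actual products expose these settings through their own dashboards rather than code.

```python
# Hypothetical policy tiers keyed by grade band. Younger students get the
# most restrictive set of blocked categories; the list shrinks as they age.
POLICY_TIERS = {
    "K-5": {"social-media", "streaming", "games", "mature-content"},
    "6-8": {"social-media", "mature-content"},
    "9-12": {"mature-content"},
}

# Hypothetical mapping from flagged activity to how urgently staff are notified.
ALERT_LEVELS = {
    "blocked-site-attempt": "low",      # logged for routine reports
    "repeated-violations": "medium",    # flagged to IT the same day
    "self-harm-indicators": "high",     # immediate notification
}


def blocked_categories_for(grade: int) -> set[str]:
    """Return the blocked categories for a student's current grade, so the
    same account becomes less restricted as the student moves up."""
    if grade <= 5:
        return POLICY_TIERS["K-5"]
    if grade <= 8:
        return POLICY_TIERS["6-8"]
    return POLICY_TIERS["9-12"]
```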

With the power of AI, and with IT teams that care about student well-being, content filtering will continue to advance. Schools can make life easier for their IT departments while keeping students safe by incorporating this technology into their districts.

This article is part of the “ConnectIT: Bridging the Gap Between Education and Technology” series. Please join the discussion on Twitter by using the #ConnectIT hashtag.
