Incorporating Social-Emotional Health into Content Filtering
With traditional content filtering, IT teams could run a report to see which sites students had tried to access and how many attempts they had made. At its most basic level, that approach works well.
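To picture what such a report looks like, here is a minimal sketch in Python that tallies blocked attempts per site. The log fields are hypothetical and do not come from any specific filtering product.

```python
# Illustrative sketch of a traditional blocked-site report: count how many
# times each blocked site was requested. Field names are hypothetical.
from collections import Counter

blocked_requests = [
    {"student": "s1042", "site": "games.example.com"},
    {"student": "s1042", "site": "games.example.com"},
    {"student": "s2088", "site": "video.example.com"},
]

attempts_per_site = Counter(r["site"] for r in blocked_requests)
for site, attempts in attempts_per_site.most_common():
    print(f"{site}: {attempts} blocked attempts")
```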
Now, however, there are content filtering platforms that take student well-being into account.
With the shift to remote learning, educators lost visibility into student mental health and well-being. When teaching in person, they could easily see if a student was upset, but it is difficult to know how students are feeling when they aren’t on camera or are absent from remote classes.
To address this, content filtering companies such as GoGuardian offer programs specifically designed to protect students from self-harm and suicide. Instead of requiring IT teams to run diagnostics, GoGuardian’s Beacon sends an alert when it detects activity suggesting a student may be considering self-harm or suicide.
READ MORE: Software keeps students’ mental health at the forefront.
Some districts have been slow to adopt this technology because of liability concerns. Who is responsible for acting on an alert that comes through at 3 a.m.? If no one acts on an alert and something happens, will the school be held responsible?
Other districts have partnered with local emergency services, allowing first responders to receive alerts and perform a check on students, particularly if a notification comes through outside of school hours.
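To make the routing question concrete, here is a minimal sketch, assuming a hypothetical district policy, of how alerts might go to counselors during school hours and be escalated to an emergency-services partner overnight. It is illustrative only and does not reflect how GoGuardian Beacon or any specific platform actually routes alerts.

```python
from datetime import datetime, time

# Hypothetical routing rule: during school hours, alerts go to district
# counselors; after hours, only the most serious alerts go to the local
# emergency-services partner. This is an illustrative sketch, not any
# vendor's actual routing configuration.
SCHOOL_DAY_START = time(7, 30)
SCHOOL_DAY_END = time(16, 0)

def route_alert(severity: str, timestamp: datetime) -> str:
    """Return the team that should receive an alert."""
    during_school_hours = SCHOOL_DAY_START <= timestamp.time() <= SCHOOL_DAY_END
    if during_school_hours:
        return "district_counselors"
    # After hours, escalate only high-severity alerts to first responders.
    if severity == "active_risk":
        return "emergency_services_partner"
    return "next_morning_review_queue"

# Example: a high-severity alert at 3 a.m. goes to the emergency partner.
print(route_alert("active_risk", datetime(2021, 3, 10, 3, 0)))
```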
These insights into students’ social-emotional health are made possible by artificial intelligence. AI’s use is growing throughout K–12 education, and it is especially useful in content filtering.
Advanced Content Filtering Programs Learn and Look for Patterns
AI also makes content filtering programs smarter in other ways. For example, biology students frequently run into trouble accessing materials they need for class because general keywords are blocked by the content filter. Rather than IT teams manually going into the system to grant access each time a student is blocked, AI-driven programs can learn to distinguish genuinely harmful content from material needed for, say, a biology research paper.
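As a rough illustration of the idea (not any vendor’s actual model), a filter might weigh the educational context around a flagged keyword before deciding to block a page. The keyword and context lists below are hypothetical stand-ins for what a trained model would learn.

```python
# Illustrative sketch only: a context-aware check that avoids blocking
# curricular material over a single flagged keyword. Real AI-driven
# filters use trained models, not hand-written word lists like these.
FLAGGED_TERMS = {"reproduction", "anatomy"}          # hypothetical keyword list
EDUCATIONAL_CONTEXT = {"biology", "cell", "textbook", "curriculum", "lab"}

def should_block(page_text: str) -> bool:
    words = set(page_text.lower().split())
    if not words & FLAGGED_TERMS:
        return False  # nothing flagged on the page
    # If flagged terms appear alongside strong educational context,
    # allow the page instead of forcing a manual IT override.
    educational_hits = len(words & EDUCATIONAL_CONTEXT)
    return educational_hits < 2

print(should_block("cell reproduction biology lab worksheet"))        # False: allowed
print(should_block("reproduction content with no school context"))    # True: blocked
```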
Machine learning and AI could also be used to help identify and prevent school shootings. Research shows that school shooters frequently talk about their plans with others and conduct searches online while planning an attack (one common search is for information on the Columbine shooting). By finding patterns or other red flags in students’ searches, AI-powered content filtering could help administrators distinguish harmful behavior from a student researching a term paper.
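A minimal sketch of that kind of pattern-finding, assuming hypothetical topics and thresholds, might escalate only when concerning searches recur across multiple topics rather than appearing once in a research query.

```python
# Illustrative sketch: flag a pattern of concerning searches over time
# rather than a single query, to separate a term-paper search from
# sustained planning behavior. Topics and thresholds are hypothetical.
from collections import Counter

CONCERNING_TOPICS = {"columbine", "weapons", "school attack"}

def risk_level(search_history: list[str], threshold: int = 3) -> str:
    hits = Counter()
    for query in search_history:
        for topic in CONCERNING_TOPICS:
            if topic in query.lower():
                hits[topic] += 1
    total = sum(hits.values())
    if total == 0:
        return "none"
    # One matching search could be a research paper; repeated searches
    # across several topics are escalated for a human to review.
    return "high" if total >= threshold and len(hits) > 1 else "low"

print(risk_level(["columbine history essay sources"]))                  # low
print(risk_level(["columbine shooting details", "buy weapons online",
                  "school attack planning", "columbine timeline"]))     # high
```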
DISCOVER: Artificial intelligence helps keep K–12 districts safe from cyberattacks.
AI-powered content filtering can take a more holistic look at what students are searching for to determine whether to send IT teams a low- or high-priority alert. The programs can also be set up to change permissions as students grow: because students are frequently tied to a single account throughout their school career, the platform can apply more restrictive parameters for younger students and relax them as they get older.
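A simple way to picture those tiered permissions is a grade-based policy table. The categories and grade cutoffs below are hypothetical, not any vendor’s actual policy schema.

```python
# Illustrative sketch of grade-based filtering tiers tied to a single
# student account. Categories and cutoffs are hypothetical.
POLICY_TIERS = {
    "elementary": {"blocked": {"social_media", "mature_content", "gaming"}},
    "middle":     {"blocked": {"mature_content", "gaming"}},
    "high":       {"blocked": {"mature_content"}},
}

def tier_for_grade(grade: int) -> str:
    if grade <= 5:
        return "elementary"
    if grade <= 8:
        return "middle"
    return "high"

def is_blocked(grade: int, category: str) -> bool:
    return category in POLICY_TIERS[tier_for_grade(grade)]["blocked"]

# The same account is filtered less restrictively as the student advances.
print(is_blocked(3, "social_media"))   # True for a 3rd grader
print(is_blocked(10, "social_media"))  # False for a 10th grader
```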
With the power of AI, and with IT teams that care about student well-being, content filtering will continue to advance. By incorporating this technology, districts can make life easier for their IT departments while keeping students safe.
This article is part of the “ConnectIT: Bridging the Gap Between Education and Technology” series. Please join the discussion on Twitter by using the #ConnectIT hashtag.