When students email each other to discuss the adolescent hot topics of who likes whom, or to gripe about teachers, more people than they realize may see the messages.
That’s also true when the emails, chats or other online activity lean toward harm — either self-inflicted or against other students — including instances of cyberbullying.
Technology allows school and district leaders to keep closer tabs on it all and intervene if necessary.
Content filtering and web monitoring software aim to prevent students from accessing inappropriate materials online, and many options also include features to alert school leaders to cyberbullying and other instances in which students’ digital communications or web activity might indicate troubling behavior.
In Alabama’s Marshall County Schools, Securly, the web filtering service the district began using last year, alerted officials to a man inappropriately communicating with students through their district email addresses.
What’s scary, Technology Director Trent Jordan says, is that this type of communication had happened before without school officials knowing.
“Now we know it’s happening,” he says, “but how much more is out there that we’re not catching?”
Taking a Proactive Approach to Student Safety
That’s the sort of question driving administrators and IT professionals to find proactive ways to protect students, providing physical security as well as safety online.
Cyberbullying in particular is a significant problem for K–12 schools. A majority of U.S. teenagers, 59 percent, say they have experienced cyberbullying, according to a 2018 report from Pew Research Center.
Of teens who say they are online “almost constantly,” 67 percent say they have experienced cyberbullying, according to Pew.
Research ties bullying to depression, suicide attempts, feeling unsafe at school, absenteeism and other troubles.
Some companies that produce monitoring software report sending tens of thousands of alerts about cyberbullying, violence, sexual content and other concerns.
The companies also tout their technology as sophisticated enough to tailor access to different user groups and to pick up the context of the content, pinpointing when something is not appropriate.
For example, SonicWall’s Content Filtering Service can “allow or deny access to sites based on individual or group identity — students, faculty, visitors — or by time of day, for more than 50 predefined categories,” Brian Patch of SonicWall told EdTech earlier this year.
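The kind of policy Patch describes can be pictured in code. The sketch below is a hypothetical illustration only, not SonicWall’s actual API or configuration format: the group names, categories and hours are invented to show how an allow/deny decision might combine user group, content category and time of day.

```python
from datetime import time

# Hypothetical policy table (illustrative only, not a real product's config):
# which content categories each user group may access, and during which hours.
POLICY = {
    "students": {"blocked_categories": {"social-media", "gaming"},
                 "allowed_hours": (time(7, 0), time(16, 0))},
    "faculty":  {"blocked_categories": {"gaming"},
                 "allowed_hours": (time(0, 0), time(23, 59))},
}

def is_allowed(group: str, category: str, now: time) -> bool:
    """Return True if a web request passes the group's filtering policy."""
    rules = POLICY.get(group)
    if rules is None:
        return False  # unknown groups (e.g., visitors) are denied by default
    start, end = rules["allowed_hours"]
    if not (start <= now <= end):
        return False  # outside the group's permitted hours
    return category not in rules["blocked_categories"]
```

In this toy model, a student browsing a news site at 10 a.m. would be allowed, while the same student opening a gaming site, or browsing anything at 8 p.m., would be blocked; real filtering services apply the same idea across dozens of predefined categories.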
In Alabama, Marshall County Schools received grant funding for safety upgrades such as security cameras and door locks. But “all of that is for situations that either are happening or have happened,” says Jordan, the technology director.
What was missing, he says, was a way to better prevent incidents or potential tragedies.
When concerning questions, actions or communications surface, school and district leaders, including principals, assistant principals and counselors, are notified and then determine how best to respond, Jordan says. In serious situations, depending on the circumstances, they take immediate action such as contacting a student’s parents or police, he says.
The monitoring is not a secret. In Marshall County, for instance, students and their parents are asked to sign a document acknowledging that any data they share through district email can be monitored, Jordan says.
Striking a Balance Between Security and Privacy
Features of content filtering and monitoring software are expanding amid ongoing conversations about physical security and cybersecurity for schools. Those trends also include questions about how increased monitoring affects student privacy.
As administrators and school leaders see more of what students are doing and saying, additional questions arise about the line between protecting students’ privacy and what it takes to keep them safe.
“With the safety conversation as big and significant as it is these days, and the states looking for solutions for the safety problem, we have a new use case for what started out as required protection to help kids stay safe online and only see appropriate stuff when they’re in school,” says Linnette Attai, president of the global compliance consulting firm PlayWell.
The Children’s Internet Protection Act is a Federal Communications Commission regulation that ties E-rate discounts for internet access to certain criteria schools and libraries must meet, such as monitoring students’ online safety and security, including email and chats. The law also requires schools to instruct children about safe online practices, or digital citizenship.
Essentially, Attai says, CIPA addresses “excluding inappropriate content.” “In determining how to comply with CIPA, or to address safety concerns including what tools to use, schools and districts should be looking for a balanced approach that addresses three areas: safety, security and privacy,” she says. “When one of them is off, you have a problem. Where’s the balance that you’re striking?”
It’s important for administrators to know whether the filtering and monitoring tool or service they’re considering has been proven to have a positive effect.
“Try to avoid just grasping at something new because it’s new and there’s not another solution,” Attai says. Doing so will help improve a school’s safety profile, she says.
School leaders should also view these tools through the lens of privacy, weighing federal and state requirements as well as community norms. Some tools can be very invasive, Attai says.
District officials should determine whether they see the actual improvement they expected and pay attention to individual privacy concerns, Attai says.
“We need to consider these things very thoughtfully.”