With attackers constantly probing networks, smart IT managers know that performing security audits once a year isn’t enough. Best practices now call for continuous monitoring to obtain an up-to-the-minute view into networks and systems.
Vulnerability analyzers provide independent information about network traffic and link this information to knowledge bases showing real and potential vulnerabilities.
Security pros can choose from two complementary vulnerability analysis techniques: active scanning and passive scanning. Active scanning tries to connect to every IP address on a network to determine which TCP/IP ports are open, what application versions are running and which device vulnerabilities exist. Passive scanning, by contrast, uses one or more network taps to observe which systems are actually communicating and which apps are actually running.
The two techniques often are used together. For example, when a passive scanner detects a new system, it can trigger an active scan of that system to gather more information about network apps that may be installed but not currently communicating.
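To make the active half of the pairing concrete, here is a minimal sketch of a TCP connect scan in Python. Real scanners use raw packets, service fingerprinting and vulnerability signatures, so treat this strictly as an illustration; the throwaway listener at the end is a stand-in target so the example is self-contained.

```python
import socket

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Stand-in target: a listener on an OS-assigned ephemeral port
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
found = scan_host("127.0.0.1", [port])
listener.close()
```

An active scanner is essentially this loop scaled up across every address in a subnet, which is why scan frequency, not scan depth, becomes the limiting factor.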
Many manufacturers, including eEye, McAfee and Tenable, sell active vulnerability scanners or scanner signatures. Passive scanning is a newer technology with fewer options, but choices include Tenable and Sourcefire. Here are some guidelines for choosing when and how to use active and passive scanning.
Start with active scanning, both inside and outside the firewall.
“Credentialed scanning,” which gives the vulnerability analyzer a username or SSH key to log on to each system, is a necessary part of active scanning. Together, these techniques provide a detailed view of each system’s operating system and application versions, highlighting out-of-date applications, missing patches and potential misconfigurations that could lead to security vulnerabilities.
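The SSH plumbing of credentialed scanning is product-specific, but the version-harvesting step it enables can be illustrated. As a hypothetical example, a scanner that logs on to a Linux host might run `cat /etc/os-release` and parse the result; the function and sample text below are illustrative only.

```python
def parse_os_release(text):
    """Parse /etc/os-release-style KEY=value output into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments and malformed lines
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

# Sample output a credentialed login might retrieve from a target host
sample = 'NAME="Ubuntu"\nVERSION_ID="22.04"\nPRETTY_NAME="Ubuntu 22.04.3 LTS"\n'
release = parse_os_release(sample)
```

Version facts gathered this way are what let the analyzer match a host against its knowledge base of missing patches and out-of-date software.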
Managers of IT environments with a rapidly changing application mix or weak configuration control should add in passive scanning.
Vulnerability analysis based on active scanning is only as current as the most recent scan, while passive scanning identifies new systems and newly active apps, along with some version information, the moment they appear on the wire. Discoveries made by passive scanners should feed the same database as active scan results to build the most complete picture.
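The shared-database idea can be sketched as a simple merge of findings keyed by IP address. The field names below (`os`, `app`) are hypothetical stand-ins for whatever attributes a real product records.

```python
def merge_findings(inventory, findings, source):
    """Merge one scanner's findings into a shared host inventory keyed by IP."""
    for ip, attrs in findings.items():
        record = inventory.setdefault(ip, {"sources": set()})
        record["sources"].add(source)   # remember which scanner saw this host
        record.update(attrs)            # newer findings overwrite older attributes
    return inventory

inventory = {}
merge_findings(inventory, {"10.0.0.5": {"os": "Linux"}}, "active")
merge_findings(inventory, {"10.0.0.5": {"app": "nginx/1.24"},
                           "10.0.0.9": {}}, "passive")
```

After both merges, the record for 10.0.0.5 combines the OS details from the active scan with the live-application sighting from the passive tap, which is exactly the complementary picture the article describes.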
Regularly and continually test organizational firewalls with “lightweight” active scans.
Firewalls are easy to misconfigure, and their policies become out of date very quickly. Scans outside the firewall provide “third party” knowledge of what is alive and able to respond across the Internet. Hackers constantly scan organizational networks anyway; self-scanning just helps to level the playing field a little.
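One useful form a lightweight scan can take is a policy-drift check: compare the ports the firewall policy says should answer from the Internet with what an external probe actually observes. This sketch assumes the probe results are already collected; the port numbers are illustrative.

```python
def firewall_drift(expected_open, observed_open):
    """Compare a firewall's intended policy with an external scan result."""
    expected, observed = set(expected_open), set(observed_open)
    return {
        # reachable from outside but not in the policy: a likely misconfiguration
        "unexpected": sorted(observed - expected),
        # allowed by policy but dark: a service down or a rule gone stale
        "missing": sorted(expected - observed),
    }

drift = firewall_drift(expected_open=[80, 443], observed_open=[80, 443, 8080])
```

Run routinely, this catches the quiet rule changes and forgotten services that attackers’ own constant scanning would otherwise find first.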
Passively scan user networks.
With mobile devices popping up constantly and eco-conscious users preferring to turn systems off at night, there is little point in actively scanning user networks on a fixed schedule. Passive scanning can track devices as they move across subnets, and it responds instantly when user-installed (or malware-installed) applications begin chatting across the network.
This is an area where vulnerability analysis and intrusion detection/prevention begin to overlap. Several network access control products already include both passive and active scanners to discover new user network devices.
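At its core, passive discovery is inference over observed traffic. The sketch below assumes flow metadata has already been captured from a tap as (source, destination, destination port) tuples, and reports hosts not yet in the inventory along with the ports each host is actively talking to.

```python
def summarize_flows(flows, known_hosts):
    """From observed (src_ip, dst_ip, dst_port) tuples, report hosts not yet
    in the inventory and the ports each source is actively using."""
    new_hosts, talkers = set(), {}
    for src, dst, port in flows:
        for ip in (src, dst):
            if ip not in known_hosts:
                new_hosts.add(ip)          # a device nobody has scanned yet
        talkers.setdefault(src, set()).add(port)
    return new_hosts, talkers

# Illustrative capture: an unknown host talking HTTPS and BitTorrent
flows = [("10.0.0.7", "10.0.0.2", 443), ("10.0.0.7", "10.0.0.3", 6881)]
new_hosts, talkers = summarize_flows(flows, known_hosts={"10.0.0.2", "10.0.0.3"})
```

A new-host result like this is the natural trigger for the follow-up active scan described earlier, and flagging unexpected ports is where the overlap with intrusion detection begins.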
Understand that IPv6 and “black box” networks create complications.
Network managers planning to move to IPv6 should be aware that brute-force active scanning of IPv6 subnets is infeasible. A standard /64 IPv6 subnet contains 2^64 addresses, dwarfing the 256 hosts of a typical /24 IPv4 subnet, so any plans for IPv6 support may require a rethinking of vulnerability scanning strategy. “Black box” networks, which are invisible to vulnerability scanners, also are becoming big issues.
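The arithmetic behind the IPv6 problem is worth seeing once. Even at an assumed (and generous) rate of one million probes per second, sweeping a single /64 subnet takes hundreds of thousands of years:

```python
# Back-of-the-envelope: why sweeping a /64 IPv6 subnet is infeasible.
ipv4_subnet = 2 ** 8            # hosts in a typical /24 IPv4 subnet
ipv6_subnet = 2 ** 64           # addresses in a standard /64 IPv6 subnet
probes_per_second = 1_000_000   # assumed scan rate, generous for a single scanner

ratio = ipv6_subnet // ipv4_subnet          # how many times larger the IPv6 subnet is
years = ipv6_subnet / probes_per_second / (365 * 24 * 3600)
```

The result is roughly 585,000 years per subnet, which is why IPv6 discovery has to lean on passive observation, DNS and address-management data rather than exhaustive sweeps.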
As more and more individual hosts and servers have their own firewall capability enabled, vulnerability scanners without credentials quickly lose effectiveness. Additional techniques, such as passive scanning, careful attention to application firewall logs and security information/event monitoring are needed to maintain security visibility.