Think Visibility, Then Control
September 26, 2011
By Mike Rothman
Having just returned from the annual Black Hat conference in Vegas, I can say the only thing that's clear these days is that things are not getting better. Attackers are persistent and they have significant resources. Our users haven't improved their security savvy much (if at all), and they keep falling for the same attacks that prey on their gullibility. All of this makes our defenses and controls ineffective. We've been talking on this site for close to a year now about how application whitelisting can address some of these issues.
Yet there is still resistance. In some cases, that resistance is justified, because whitelisting (by definition) impacts the user experience. Practically speaking, the only way to stop end users from hurting themselves is to restrict what they can do. As a security professional, you know whitelisting can help in some specific use cases, but you need to make the case internally, amid the resistance to doing anything different.
I'm not going to say I'm old; rather, I'm experienced. I've been in this business 20+ years at this point, and I've seen a lot of technologies come and most of them go. I've paid attention, so I've learned a few lessons about how to "sell" new technologies to resistant user communities. Some of those lessons can help us make a case for whitelisting. Let's revisit the history books and see how web activity monitoring first made its way into quite a few organizations.
Way back when, lots of employees wasted a lot of time on the Internet, yet organizations didn't know how much time or what their users were actually doing. So companies like WebSense and Vericept produced devices that monitored Internet usage and provided reports showing what users were up to. It was "enlightening," to say the least. What was unseen was now visible, and organizations finally had the visibility to know who was being naughty and who was nice.
To be clear, the intent of using these devices was to learn what was really happening. But that knowledge usually forces the organization's hand, since most cannot sit idly by while malfeasance is happening. Thus, web monitoring quickly gave way to enforcing web acceptable use policies by controlling which sites could be viewed and what data could be sent outside the organization.
In this case, visibility led to control. Another, more recent example is application-aware firewalls, dubbed "next-generation firewalls" by the industry, where we are seeing the same type of adoption curve. The innovator in the space, Palo Alto Networks, took a familiar approach: organizations start by installing a box and having it watch outbound traffic. The resulting report provides granular information on the applications in use, the traffic flows, and what kinds of data escape the boundaries of the organization.
Suffice it to say, the report is similarly enlightening to organizations that had no idea what was really happening on their networks. Before long, those organizations take action and start enforcing outbound application policies on their firewalls. Right, history repeats itself.
Can we see a similar adoption path for whitelisting? It's not out of the realm of possibility. Organizations can deploy whitelisting without implementing enforcement policies. Rather, they can simply watch what's really happening on those endpoints, gather the data, and generate reports that show which applications run on the endpoints and which of them may not align with corporate policy.
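To make the monitor-only idea concrete, here is a rough sketch of what a visibility-first check might look like on a single endpoint. This is purely illustrative, not any vendor's product; the approved list, report file name, and use of the psutil library are my own assumptions. The point is simply to enumerate what is running, compare it against an allowlist, and write a report without blocking anything.

```python
# Hypothetical monitor-only sketch: inventory running executables and report
# anything not on an approved list, without blocking a thing.
# Assumes the third-party psutil library; the allowlist below is illustrative.
import csv
import datetime

import psutil

# Illustrative allowlist of approved executable names (not a real policy).
APPROVED = {"explorer.exe", "outlook.exe", "winword.exe", "excel.exe", "chrome.exe"}


def audit_endpoint(report_path="whitelist_audit.csv"):
    """Write a report of running processes whose executable is not approved."""
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(report_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["timestamp", "pid", "name", "path"])
        for proc in psutil.process_iter(attrs=["pid", "name", "exe"]):
            info = proc.info
            name = (info.get("name") or "").lower()
            # Report, don't enforce: unapproved processes are only logged.
            if name and name not in APPROVED:
                writer.writerow([timestamp, info["pid"], name, info.get("exe") or ""])


if __name__ == "__main__":
    audit_endpoint()
```

Run periodically across a pilot group, a report like this gives you the "who's naughty and who's nice" data without ever touching the user experience, which is exactly the point of starting with visibility.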
If you are committed to whitelisting, it's all about figuring out the path of least resistance to start the project. Given that most organizations cannot turn a blind eye once they know bad things are happening, visibility can be a good opening tactic to ease whitelisting into the organization.