Do we really need a firewall on our desktops?
Posted on 22 Sep 2010 at 15:15
Jon Honeyball asks whether a client firewall is necessary when you're sat behind a server
Here’s a contentious topic to chew on, but before I go any further let me make something crystal clear – I’m not advocating that you try this, I’m not saying it’s a good idea, and I’m not saying I would do it on my own networks.
However, risk assessment is all about identifying and managing risks, while system administration is about choosing trade-offs (and I don’t mean “dodgy shortcuts”). System administrators have to balance the cost and time to make some changes against the expected rewards in money, time or reliability. So here’s my contentious question: should you be running firewalls on your desktop and server machines?
I can sense you backing away already, stuffing your hands in your pockets and mumbling “uh oh, this is too radical”. Yes, it may be radical, but it’s worth doing as a thought experiment: should you be performing IP filtering (for that’s what a firewall does) on machines inside your network?
One group will undoubtedly be saying “there’s no harm in running both client- and server-side firewalls, so why even contemplate the heresy of turning off the built-in Windows firewall?” You would of course be right, except for one thing – it’s actually quite hard to turn off the built-in firewall, and that set me thinking about the way we do layered security inside an organisation. The biggest screw-ups tend to happen when someone makes assumptions that turn out to be false, or to be a really bad idea. Their logic might have been sound and the intention honourable, but not every angle was covered.
I’m reminded of that Windows SQL worm that hit many networks a few years ago under the name of SQL Slammer. What was interesting about it was the havoc it wreaked on certain major banks’ security – you may recall that some Windows-based ATMs crashed at one bank, while at another they discovered the infection had entered via a VPN tunnel from a trusted third-party company. It turned out this VPN tunnel was wide open and allowed access straight into their server room.
I’d rather have security baked right into my network design than scattered willy-nilly around my desktops and servers. For example, there’s much to be said for separating your machine room from your desktop computers with a robust firewall, through which all client/server traffic is routed. You’d then have a boundary router (more likely several) in place to protect the desktops from the outside world, and perhaps even firewalls between various sections of your business’ desktop population. Think of these as layers of an onion, where you can also divide each layer to separate out similar sections into their own managed spaces.
Small number of gatekeepers
If you’re still wondering why I’d suggest this, well, it seems to me that there’s much sense in concentrating your security into a small number of trusty gatekeepers rather than relying on a fog of barely managed faux security devices. Of course, it puts your eggs into fewer baskets, but it does mean these gatekeepers are easier to control and manage: monitoring them in real-time becomes routine.
And before anyone cries, “what about our laptops?”, let me point out that any portable device taken outside the company needs special consideration anyway. What data is held on it? Does that really need to be on it? Is it encrypted? What happens if it’s left on the back seat of the proverbial cab? What happens if you create business-critical data on this laptop; how do you ensure it gets properly replicated back to the office network? And how do you ensure such a device is protected while connected to public IP, perhaps in a coffee shop with dozens of other users on the same subnet?
Of course, such a device needs proper firewall protection, which will be different from the default settings you might have been running in the office. Your laptop needs to be aware that it’s no longer in touch with the mothership, so close the shutters. Windows’ built-in distinction between Home, Work and Public is a start, but you might want to look at third-party software firewalls that are more savvy about the location status of the laptop. And while doing that you might want to disable the Windows Firewall that’s built into Windows 7, Vista and Server 2008.
Disabling this firewall isn’t easy, and if you’re going to do it you need to ensure that a grown-up is holding the scissors. This TechNet article walks you through the steps, starting with this advice: “Because Windows Firewall with Advanced Security plays an important part in helping to protect your computer from security threats, we recommend that you don’t disable it unless you install another firewall from a reputable vendor that provides an equivalent level of protection.” I agree, but I’ll add that you don’t have to install another firewall on this computer – it might be out on the network.
There are basically three ways to disable the firewall:

- issue the NetSH command “netsh advfirewall set profiles state off”, where profiles is one of AllProfiles, CurrentProfile, DomainProfile, PrivateProfile or PublicProfile;
- use the Firewall Control Panel program and hit the “Off (not recommended)” button;
- use the Advanced Security MMC snap-in: right-click on “Windows Firewall with Advanced Security on Local Computer”, go to Properties, and on each of the Domain Profile, Private Profile and Public Profile tabs switch off the firewall (or do the same via Group Policy).
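For reference, the NetSH route can be run from an elevated command prompt. This is only a sketch of the standard advfirewall verbs, and the same caveat applies: don’t try it unless there’s network-level protection sitting in front of the machine.

```shell
rem Check the current state of every profile before touching anything
netsh advfirewall show allprofiles state

rem Switch off all three profiles at once (needs an elevated prompt)
netsh advfirewall set allprofiles state off

rem ...or target just one profile, for example the domain profile
netsh advfirewall set domainprofile state off

rem Put everything back on when you're done experimenting
netsh advfirewall set allprofiles state on
```

Note that a domain Group Policy that enforces the firewall state will silently override whatever you set here at the next policy refresh.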
I don’t recommend you do this, but it’s useful to know that you can should you decide to install some third-party protection scheme. I won’t be doing this on my network, because I prefer to keep the default security in place. Even so, and this is the big issue, I’m a total advocate of the layered-onion approach to security within a company, as far too many businesses operate a wide-open policy between the various parts of their internal network infrastructure, and it’s a real wake-up call when this thinking is shown to be incomplete.
Off = nightmare...
I took over a site where the machines had 256MB RAM (in 2009!), the AV software was set to minimum mode and never scanned the machines, and the firewall was disabled – turning either back on got moans about poor performance.
The sites were separated by VPN routers with local firewalls installed. One site had somehow got the Conficker worm inside it, and the whole network kept getting re-infected – it was a complete nightmare.
Luckily the perimeter firewalls set up between the VPNs and the internet kept the other sites, and the server room, free. But once a worm got inside a segment it had a field day (the fact that most of the machines still hadn't received SP2, let alone the fix for Conficker, didn't help either!).
By big_D on 22 Sep 2010
Do you manage your desktops?
Or do you let them operate as-is, at the discretion of end users? I'm guessing that if you're responsible, you likely have an antivirus application installed, with some central way to deploy it, periodically check that it is still in place and running, and keep it properly updated. Same thing for vendor-supplied patches, right?
With this in mind, your claims about "a fog of barely managed faux security devices... scattered willy-nilly around my desktops and servers" break down rapidly. I'm assuming this conversation is only about Windows clients, since those are the only ones you mention. If you're honest, you'll acknowledge that there's nothing willy-nilly about every Windows 7 (and Vista, if there are any) client enabling the Windows Firewall with Advanced Security by default. The "with Advanced Security" part is important, because it refers to the integration of IPSEC in the IP stack.
Like those other infrastructure functions I mentioned, a responsible administrator will manage these functions from a central location. The difference between those applications and the firewall and IPSEC configuration is the ease with which they are managed, because both the firewall and IPSEC features include native, no-cost central management by way of Group Policy. Over the past 10 years, Group Policy has become a well documented and understood toolset widely deployed by administrators of just about every Windows network in existence. So why in 2010 you posit that those same admins will barely manage the biggest improvement in Windows security since file ACLs is beyond me. Keep in mind, the firewall does more than prevent unsolicited inbound traffic. It offers stateful inspection of both inbound and outbound data and is an integral part of Windows service hardening, which prevents Windows services from listening or establishing connections on ports outside of their standard configuration. Disabling the firewall opens up the potential for both remote and locally installed malware to propagate.
If there is any thinking on this issue that is incomplete, I would suggest that it is yours.
By Ottoh on 22 Sep 2010
Mon Cher Ottoh...
Can you explain what IPSEC does, specifically, to counter the scenario contributed by big_D? Jon's experiences, and mine, include a group you cheerfully ignore – those whom you would identify as "irresponsible" administrators. Personally, I do not see any kind of alignment between the provisions of Group Policy and the threats which have arisen in the timeframe you cite. You are presenting a complete non-sequitur: there is no GP setting that says "inhibit Conficker" – so it wouldn't have done anything for the network big_D was lumbered with. Putting time into managing users' ability to change screen backdrops isn't the foundation of a usable security policy in networks where budgets are unlimited and user priorities have commercial justifications. Is mine a complete comment? Of course not: but then, completeness is a very tall order, and not the intention here...
By Steve_Cassidy on 23 Sep 2010
Budgets are *limited*...!
By Steve_Cassidy on 23 Sep 2010
Firewall = OFF
As one of a team supporting a network of a few hundred computers, we had an always up-to-date desktop AV policy, fully scanned and monitored servers, and different AV solution providers on our boundary firewalls. We had a firewall = off desktop policy, and it never gave us any problems and made remote management tasks simpler.
The only "virus" we had problems with was the energy efficiency manager, who insisted that users turn off their PCs every night – so the overnight AV scans never took place!
By MIssingLink on 23 Sep 2010
I thought this piece would stir up the juices!
To be 100% clear, I would never, ever recommend running with no firewall or AV etc etc (let's call it "security software") in an organisation. That is clearly a recipe for trouble.
But *where* and *how* you run it should be an issue of considerable thinking and risk assessment.
Many organisations, especially in the SME space, use the perimeter defence model of having a firewall appliance (SonicWALL etc) which does anti-virus, firewalling and so on. And this is fine, for the perimeter. Then they put stuff on each desktop to prevent machine A causing problems (of whatever type) to machines B, C, D, E and F.
This, in theory, is fine. But my question on this piece is simply "is this the best way to do it?" *if* there is a risk that the desktop software side ends up being unmanaged and, worse still, allowed to get out of date/wither on the vine.
How many times have we seen AV installations (for example) where the definitions are not up to date on some machines, and no-one has noticed? Or holes punched in the firewall that no-one has noticed? Or a new threat comes along which just waltzes into the organisation? Or machines which are way behind on software updates and patches?
Hence my provocative piece to ask the question -- is it better to have a smaller number of properly managed, updated, considered devices/solutions separating off the various business function areas rather than one large security model where nothing is really quite what it seems?
Personally, I think both routes are right -- perimeter security, inter-department security (where the granularity can be small!) and stuff on machines too. But it is the active management and updating which is the critical part here.
By JonH_ on 24 Sep 2010