
Don’t believe everything you hear when it comes to security

Posted March 1, 2012    Sarah Lieber

Our good friend Ellen Messmer recently published the Network World article “13 security myths you’ll hear — but should you believe?”, which listed common security myths shared and commented on by some of security’s leading experts and practitioners. Working at a security company, I work (and also sit) closely with a stellar team of researchers, so when this article surfaced I was curious whether they would come up with the same list as the author. So I asked them. Ask, and you shall receive: below, I’ve listed each security myth from Ellen’s post along with an eEye researcher’s commentary.

Here’s what they had to say:

Security Myth 1: “More security is always better.”

JD: I agree that this is not always the case.  Security is a delicate ratio; what’s important is that you get the best bang for your buck without going overboard.  A good example is in our own market, vulnerability management.  It is necessary that you run some form of vulnerability management system, but running three different scanners for internal scans won’t net you much more for the extra cost.  You might detect one or two new things, but at a price of several thousand dollars (depending on the scale of the company), it isn’t worth it.  Also, the more locked down a system is, the harder it is to use.  It’s important that users’ needs be evaluated and met first, with the maximum amount of security applied after that goal is reached.  If you really want to secure a computer perfectly, you need to unplug it and shoot it off to the moon so that nobody can get to it.  There is no such thing as perfect security; the goal, ultimately, is to be reasonably secure.

Security Myth 2: “The DDoS problem is bandwidth-oriented.”

JD: I agree with the article here; a distributed denial of service (DDoS) attack isn’t always about bandwidth.  In these attacks, attackers will often exploit the way the server handles requests.  The goal is to send a small request that takes longer to process on the server side than it does to generate and send on the client side.  By flooding a server with these requests, you can force it to work much harder than the client.  With enough of them, the server becomes so overburdened that it stops responding altogether.  The article’s numbers are likely accurate: very few of the actual DDoS attacks conducted could have been prevented with more bandwidth.  In some situations, more bandwidth would simply allow the attacker to send more of these small, expensive-to-process requests, making the overall attack easier to conduct.
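The cost asymmetry JD describes can be illustrated with a minimal, hedged sketch (the handler names and the stand-in workload below are hypothetical, not taken from the article): the client pays almost nothing to build each request, while a naive server-side handler does disproportionate work to answer it.

```python
# Minimal sketch of the cost asymmetry behind application-layer DDoS:
# the client spends almost nothing to build a request, while a naive
# server-side handler does expensive work for every request it receives.
# All names and the per-request workload here are illustrative only.
import hashlib
import time

def client_build_request(n: int) -> bytes:
    """Cheap for the attacker: a few dozen bytes per request."""
    return f"GET /search?q=item{n} HTTP/1.1\r\n\r\n".encode()

def server_handle_request(request: bytes) -> str:
    """Expensive for the server: stand-in for an unindexed search or other
    heavy per-request processing triggered by a tiny query string."""
    digest = request
    for _ in range(50_000):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

if __name__ == "__main__":
    start = time.perf_counter()
    requests = [client_build_request(i) for i in range(50)]
    client_cost = time.perf_counter() - start

    start = time.perf_counter()
    for req in requests:
        server_handle_request(req)
    server_cost = time.perf_counter() - start

    # The work ratio, not raw bandwidth, is what lets a modest number of
    # clients exhaust a server.
    print(f"client: {client_cost:.6f}s  server: {server_cost:.3f}s  "
          f"ratio: {server_cost / client_cost:.0f}x")
```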

Security Myth 3: “Regular expiration (typically every 90 days) strengthens password systems.”

JD: I agree completely with the article’s commentary; I hate that annoying popup that forces me to change my password every 90 days.  What this functionality does is limit the useful life of passwords that are compromised by attackers, ensuring they can only be used within a 90-day window.  However, the disadvantage is that users are constantly required to come up with new passwords.  This means the new passwords are either extremely simple derivatives of the original, or overly simplistic in the first place (or both).  When the password changes, most attackers will attempt a few derivatives of the compromised password and will likely guess the new one, which largely negates whatever protection this feature brings to the table.  Besides, from a user-experience perspective, it is really annoying trying to come up with new passwords every few months; by the time I have one memorized, I have to change it again.  Forced rotation keeps an attacker from reusing a stolen password forever, but in practice it isn’t very effective.  What would work better is a more complex, harder-to-compromise password that is used once and not repeated anywhere else in the system or network infrastructure: make the password harder to compromise in the first place, and make sure it doesn’t yield access to anything else on the network.
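To make the “simple derivative” point concrete, here is a small, hedged sketch of the kind of guess list an attacker might build from one compromised password. The transformations are common illustrations of user behavior, not the rule set of any real cracking tool.

```python
# Sketch of why 90-day rotation often fails: given one compromised password,
# an attacker can cheaply enumerate the derivatives users typically pick next.
# The transformation list below is illustrative, not a real tool's rules.
import re

def likely_derivatives(old_password: str) -> list[str]:
    guesses = set()

    # Bump a trailing number: "Winter09" -> "Winter10"
    match = re.search(r"^(.*?)(\d+)$", old_password)
    if match:
        head, number = match.groups()
        guesses.add(head + str(int(number) + 1).zfill(len(number)))
    else:
        guesses.add(old_password + "1")

    # Append the usual suffixes users reach for under a complexity policy
    for suffix in ("!", "1", "123", "2012"):
        guesses.add(old_password + suffix)

    # Swap common leetspeak characters
    guesses.add(old_password.replace("a", "@").replace("o", "0"))

    guesses.discard(old_password)
    return sorted(guesses)

if __name__ == "__main__":
    # A handful of guesses like these often covers the "new" password.
    print(likely_derivatives("Winter09"))
```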

Security Myth 4: “You can rely on the wisdom of the crowds.”

JD: This is an interesting myth because it highlights the different tiers of security information and how that information propagates to the masses.  It’s fairly common (in any industry, really) that by the time something hits mainstream media, it has been hashed out pretty well by the specialists.  This means that anything computer-security related popping up on CNN has likely already been discussed for quite some time on the techie mailing lists and blogs.  The exception would be something like the Anonymous DDoS attacks, where coverage hit the media fairly quickly due to the simplicity of the attacks and their impact on the common user.  Something like Stuxnet, however, wouldn’t really hit mainstream news until it was known what it was doing and why.  As an example, CNN reported on Stuxnet (to my knowledge) for the first time here, even though it was being talked about as early as July (timeline: http://www.infracritical.com/papers/stuxnet-timeline.txt).  I wouldn’t say this is a myth specific to computer security so much as a fact of life in every industry.

CJ: With respect to hearing about the latest security threats, “the crowds” are not a reliable source of wisdom. More often than not, the real threats are drowned out by all the hype that tends to sound more appealing to fear-mongering folks. The most serious threats go undetected, so the only way to keep on top of them is to be actively searching them out, rather than passively listening for news of their existence.

Security Myth 5: “Client-side virtualization will solve the security problems of ‘bring your own device.'”

JD: Here, we are talking about separating tasks into different virtualized environments in order to promote a more secure infrastructure.  The problem this tries to solve is that a lot of compromises start with personal data and activity that then get transferred, unknowingly, into an area with sensitive data.  This is where that lovely human element comes into play: no matter how much you separate things, users will always break the rules to create a bridge between the two playgrounds.  The approach can help security when users are well educated; after all, it’s basically what I do with my two machines at work.  There are some things I do on my laptop that I need on my desktop, so I transfer them over.  The trick is to be smart about what gets transferred, but most common users are easily fooled and any attempts to be careful are easily defeated.  I agree that setting up such an enormous and complex environment could cost more than the security it introduces, but now you are getting into the realm of what you can afford compared to what you need.  There are other security measures you can take that are more effective for the money spent.  The best solution is to keep personal devices completely off corporate computers and networks, but that is a game IT folks have been playing for a very long time.

CJ: As far as security is concerned, every connection between two computers sits on a spectrum with “ease of use” at one end and “tightened security” at the other. If two or more computers are connected (virtually or otherwise), they necessarily share components that enable the connection to exist. This is a double-edged sword: the bigger this connection gets, the easier it is to use both computers at once, but the harder it is to keep restricted components in a protected state. To keep work data separate from personal data, keep both types of systems (and all data within them) completely separate.

Security Myth 6: “IT should encourage users to use completely random passwords to increase password strength and they should also require passwords to be changed at least every 30 days.”

JD: I agree with the expert quoted in this myth.  Completely random passwords are hard to remember.  You can get just as much complexity (if not more) out of using a sentence as a password, one that contains both upper and lower case letters, numbers, and symbols.  Any completely random password a user could actually remember would have to be incredibly short, which lowers the bit entropy of the password itself.  Then you add changing it every 30 days into the mix, and nobody is going to want to use a password that is complex or long.  Every new password would be a predictable derivative of the original and fairly useless for keeping people out once the original password was discovered.
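A rough, hedged way to see JD’s point about length versus randomness is to compare search-space sizes. The character-set size and the assumption of uniform, independent characters below are simplifications, so treat the passphrase figure as an upper bound.

```python
# Back-of-the-envelope entropy comparison (simplifying assumption: every
# character is chosen independently and uniformly from the stated alphabet;
# real sentences have lower per-character entropy, so the passphrase number
# is an upper bound, but length still dominates).
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy in bits of a string of `length` symbols drawn uniformly
    from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# A short "completely random" password a user might actually memorize:
# 8 characters over roughly 94 printable ASCII symbols.
short_random = entropy_bits(8, 94)

# A sentence-style password: 28 characters mixing case, digits, and symbols.
passphrase = entropy_bits(28, 94)

print(f"8-char random password : ~{short_random:.0f} bits")
print(f"28-char passphrase     : ~{passphrase:.0f} bits (uniform upper bound)")
```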

CJ: During a recent pentest engagement, we cracked a “completely random” password in 10 days using a non-optimized, freely available password-cracking program; it contained only lowercase characters and symbols. A strong password will include lower- and uppercase characters, numbers, and symbols, and it will not include words that can be found in a dictionary. The length of time between forced password resets should be determined at the discretion of the IT department, which knows how sensitive its users’ data is.
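The checks CJ lists (mixed case, digits, symbols, no dictionary words) can be sketched as a simple policy function. The tiny word list and the 12-character minimum are hypothetical placeholders, not a recommendation of any specific standard; a real check would use a full dictionary and a breach corpus.

```python
# Minimal sketch of the strength checks described above: character-class
# coverage plus a naive dictionary-word check. Word list and minimum length
# are placeholders for illustration only.
import re

COMMON_WORDS = {"password", "welcome", "dragon", "monkey", "letmein"}

def password_issues(candidate: str) -> list[str]:
    issues = []
    if len(candidate) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[a-z]", candidate):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", candidate):
        issues.append("no uppercase letter")
    if not re.search(r"\d", candidate):
        issues.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", candidate):
        issues.append("no symbol")
    if any(word in candidate.lower() for word in COMMON_WORDS):
        issues.append("contains a common dictionary word")
    return issues

if __name__ == "__main__":
    for pw in ("dragon!!", "Correct-Horse-Battery-9"):
        problems = password_issues(pw)
        print(pw, "->", "ok" if not problems else ", ".join(problems))
```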

Security Myth 7: “Any computer virus will produce a visible symptom on the screen.”

JD: This one is, for the most part, a genuine myth.  Many people are under the impression that if nothing seems wrong, the machine is virus free… and if something bad is happening, it must be a computer virus.  Both assumptions are false, of course, with the most sophisticated malware doing everything it can to *not* make its presence known.  The article’s reference to malware “showing the files melting away or making the computer itself catch on fire” assumes people think that what happens in Hollywood is exactly what happens in real life, and I like to think most people don’t think that way.  Files can be deleted right in front of you, but they won’t be shown visually melting away; they will just disappear.  Most of my friends are fairly tech savvy, so I honestly don’t know how widespread this belief is, but it is true that just because something is wrong doesn’t mean it’s a virus, and just because nothing seems wrong doesn’t mean a virus is not there.

Security Myth 8: “We are not a target.”

JD: I agree, this is absolutely a myth: everyone is a target.  The big notion in today’s world is that all attacks occur because a hacker picks out a single target and pounds away at it for months until they break in.  That does happen, but it is not always the case.  It is far too common for attackers to simply run sweeps against any server or IP they can find that answers back.  From there, they check for very common, easy-to-exploit issues; if they find one, they attack, and if not, they move on to the next target.  Any target can yield valuable information, and in the long run attackers gain more from the low-hanging fruit than from spending all their time on one big target.  The sites that take the stance of “we don’t collect valuable information, therefore nobody will attack us” don’t really understand everything that is potentially valuable.  A compromised web server, no matter how small, can be used to deliver attacks against its users and recruit more bots for a botnet.  Compromised login credentials can be used in other locations where the same user logs in, potentially yielding access to banking information on another site.  There are benefits to any compromise; it’s just important to consider the different angles.  It doesn’t matter whether you are small or big, or whether you hold private information or not: what matters is that your systems need to be protected, and if other users trust your services, they need to be protected as well.

Security Myth 9: “Software today isn’t any better than it used to be in terms of security holes.”

JD: This one is a myth as well.  You could simply refer to the configurations white paper the eEye Research Team wrote last year, where we specifically address newer software requiring fewer patches.  Software is getting better, though it will never be perfect.  Compared to the old days, there are significantly more protection mechanisms in place and better code review happening, and upgrading to the latest product is almost always the more secure route to take.  The thing to keep in mind is that with more protection mechanisms and more attention on security, there are also more advanced research tools being published and more researchers in general sorting through all of the code.  There will likely always be software vulnerabilities, but the trend has shown a decline in volume and an increase in the difficulty of exploitation.

Security Myth 10: “Sensitive information transfer via SSL session is secure.”

JD: There are really two points addressed here; I will address each individually.

Myth 10a:  “Sensitive information transfer via SSL session is secure.”

The issue here is not with the encrypted data itself, but with the way it is transferred.  The article seems to focus on vulnerabilities in the encryption algorithms themselves.  Though it is true that there are several known weaknesses in various encryption algorithms, with new ones being discovered regularly, attackers will almost always choose to bypass encryption rather than attack it head on.  In other words, there is no point in intercepting encrypted communications just to spend the next 30 years brute-forcing the encryption keys when you can compromise a machine on one end of the communication channel.  This is seen time and time again, and the article’s own example actually works against its argument.  The Citi breach mentioned in this section was not a flaw in the encrypted communications, but rather a flaw in the website that allowed attackers to bypass authentication on other users’ accounts.  The attackers leveraged a vulnerability in the web application (parameter tampering) to cycle through accounts belonging to other users, and from there harvested any information the web application exposed about the individual accounts.  The communication between Citi and its customers was completely secure, but the Citi website itself was compromised, giving attackers access to information that is normally transmitted over an encrypted channel. Read the article here.
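Below is a hedged sketch of the class of flaw JD describes (parameter tampering leading to an insecure direct object reference) and the authorization check that closes it. The route, handler names, and data model are hypothetical illustrations, not details of the Citi application.

```python
# Sketch of parameter tampering / insecure direct object reference (IDOR).
# The vulnerable handler trusts an account ID supplied by the client; the
# fixed handler verifies the account belongs to the logged-in user.
# All names and the data model are hypothetical.
ACCOUNTS = {
    "1001": {"owner": "alice", "balance": 2500},
    "1002": {"owner": "bob", "balance": 400},
}

def view_account_vulnerable(logged_in_user: str, account_id: str) -> dict:
    # BUG: any authenticated user can cycle account_id and read other accounts.
    return ACCOUNTS[account_id]

def view_account_fixed(logged_in_user: str, account_id: str) -> dict:
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != logged_in_user:
        raise PermissionError("not authorized for this account")
    return account

if __name__ == "__main__":
    # Attacker "bob" tampers with the account parameter in the request.
    print(view_account_vulnerable("bob", "1001"))   # leaks alice's data
    try:
        view_account_fixed("bob", "1001")
    except PermissionError as err:
        print("blocked:", err)
```

Note that TLS never enters the picture: the traffic can be perfectly encrypted while the application happily hands over someone else’s data.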

Myth 10b: “any notion that using trusted certificates from a certificate authority is airtight.”

Here, they are saying that a common mistake is to assume the trusted certificate authorities (responsible for issuing the certificates used for encrypted communications) are 100% reliable and trustworthy, and they are completely right that this is not true.  The best example is the DigiNotar attacks (read about those here).  Again, as with part A, one side of the equation is compromised in order to facilitate the compromise of the communication.  The communication itself, though often the final or intermediate goal, is rarely the actual target.  With poor patch management in place and uneducated users, it’s often significantly easier to compromise one end of the tunnel and listen in on that side for the full communication.
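One partial mitigation when a certificate authority can’t be fully trusted is certificate pinning: compare the server certificate’s fingerprint against a value obtained out of band, on top of normal CA validation. Here is a hedged sketch using Python’s standard ssl module; the pinned fingerprint is a placeholder, so as written the check will (correctly) fail until it is replaced with the real value.

```python
# Sketch of certificate pinning: in addition to normal CA validation, compare
# the server certificate's SHA-256 fingerprint against a value we obtained
# out of band. PINNED_FINGERPRINT is a placeholder, not a real fingerprint.
import hashlib
import socket
import ssl

PINNED_FINGERPRINT = "0" * 64  # placeholder; replace with the known-good value

def connect_with_pinning(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()          # still performs CA validation
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_FINGERPRINT:
                # A CA-signed certificate is not enough if the CA was compromised.
                raise ssl.SSLError(f"fingerprint mismatch: {fingerprint}")
            print("pinned certificate verified")

if __name__ == "__main__":
    connect_with_pinning("example.com")
```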

Security Myth 11: “Endpoint security software is a commodity product.”

JD: I agree with their assessment of this myth.  Not all security products are created equal.  I also agree that some organizations are not aware of all of the features and capabilities of their security products and don’t use them to their full potential.  All too often, security products are seen as a cure-all, and that is just plain wrong.  Most attacks can actually be stopped by configuring systems correctly, which is the entire argument of our configuration paper.  You can’t just deploy a brand-new firewall with default settings at your perimeter and expect to be safe from everything.  There are client-side attacks to worry about, and the firewall may not come perfectly set up for the individual environment it is deployed in.  It’s important that the system be evaluated for everything it can do and configured specifically for its environment’s needs.  Additionally, it will need to be watched and monitored over time, with tweaks and changes applied where needed, to ensure it is being used to its full potential.

Security Myth 12: “Sure, we have a firewall on our network; of course we’re protected!”

JD: I read this myth as a scattered collection of several related myths; really, it seems like an extension of the previous one.  Correct: a firewall alone cannot hope to protect against everything, and it needs to be custom configured for its environment.  I don’t do specific research on real-world compromises, so I don’t have any numbers for you, but in today’s world it’s the client-side attacks that seem to be doing most of the damage.  Attackers exploit third-party software installed locally on each machine in order to launch an attack against a corporation.  They deliver the attack through email or websites and gain access to a machine already sitting behind that precious firewall.  From there, they can pivot and move anywhere within the network they want, and never have to worry about compromising the hardened machines protecting the perimeter of the network.  When attacking a city, there is no point in banging your head against the outer brick wall all day when you can drop a single plague-infested rat in the middle of the city and watch the chaos ensue.

Security Myth 13: “You should not upload malware samples found as part of a targeted attack to reputable malware vendors or services.”

JD: They pretty much sum up the point quite well in this entry.  I agree, it’s not a great idea to try to keep quiet about malware used in a targeted attack.  These things happen, and the sooner you can get that malware out in the open for analysis, the sooner you can fully understand just how badly you were compromised and help prevent the same threat from hitting other people.  There is something to be said for keeping things quiet during an internal investigation, but getting samples to your anti-virus vendor will help them build signatures that help you eliminate the threat from the rest of the compromised machines.  All in all, the person they are quoting sums things up nicely.

Thank you so much to the eEye Research Team for sharing their thoughts on each of the security myths from the Network World article. So what do you all think? Was there a myth that was left out? We’d love for you to post the myths you hear most often in the comments below, and heck, we’ll even get one of our researchers to comment on them.

