FTP Attack Case Study Part II: the Lessons
Source: Linuxsecurity.com - Posted by Anton Chuvakin, Ph.D.   
This article presents Part II of a case study of a company network server compromise. Lessons on designing and implementing security are drawn from the case.

A computer forensics investigation was undertaken and its results are presented here. The article offers an opportunity to follow the trail of incident response for a real case. We organize the case study around the prevention-detection-response model: how can future incidents of this kind be prevented? What technological means are needed to detect them? How can we respond to them effectively?

I. Prevention

What could the company have done to prevent the incident? Its DMZ network architecture was robust (see the description in the previous paper), and that robustness actually contained the damage. As the netForensics network traces showed, the attacker attempted to connect to several internal machines, all without success.

In addition, an effective DMZ setup blocked the attacker's attempts to connect back to his own site and to other sites. Overall, this shows how important it is to disallow DMZ machines all access to the outside. It also helps prevent potential liability claims against your company if its machines are used to launch denial-of-service attacks or to commit other network abuse against third parties; many security experts predict a rise in lawsuits against companies whose networks were used for attacks. A DMZ machine is far more likely to be compromised by outside attackers than a machine on the internal network, so access rules should allow DMZ machines only to "do their job" (the principle of least privilege): the web server should only serve pages, the mail server should only accept and forward mail, and so on, but no more.
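As a rough illustration of that principle, the sketch below (Python, with a purely hypothetical DMZ subnet and host addresses) generates iptables rules that admit only each DMZ host's published service and drop everything the DMZ tries to initiate toward the outside. The actual rule set on the company's firewall is not known from the case; this only shows the idea.

#!/usr/bin/env python
# Sketch: generate iptables rules enforcing "do their job only" for DMZ hosts.
# Subnet, addresses and services below are hypothetical examples.

DMZ_NET = "192.168.100.0/24"
DMZ_SERVICES = {
    "192.168.100.10": [("tcp", 80)],   # web server: serve pages only
    "192.168.100.20": [("tcp", 25)],   # mail server: accept and forward mail only
    "192.168.100.30": [("tcp", 21)],   # FTP server: control channel only
}

rules = []

# Allow inbound connections to each DMZ host's published service, nothing else.
for host, services in DMZ_SERVICES.items():
    for proto, port in services:
        rules.append("iptables -A FORWARD -d %s -p %s --dport %d -j ACCEPT"
                     % (host, proto, port))

# Allow replies to those connections back out, but nothing initiated by the DMZ.
rules.append("iptables -A FORWARD -s %s -m state "
             "--state ESTABLISHED,RELATED -j ACCEPT" % DMZ_NET)

# Default deny: a compromised DMZ box cannot reach the internal LAN or the Internet.
rules.append("iptables -A FORWARD -s %s -j DROP" % DMZ_NET)

for rule in rules:
    print(rule)

Printing the rules rather than applying them keeps the sketch safe to run anywhere; applying them would simply mean feeding each line to the shell on the firewall.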

As for preventing the actual incident, it is entirely clear that steps should have been taken. The system administrator should have patched WU-FTPD using the Red Hat errata page (http://www.redhat.com/apps/support/errata/). The updated FTP daemon is reportedly safe from this particular attack (see the previous part for details).
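A minimal sketch of how that check could be automated on an RPM-based system follows; the "fixed" version string is a placeholder, not the actual Red Hat errata release.

#!/usr/bin/env python
# Sketch: compare the installed wu-ftpd package against the release named in a
# vendor advisory. FIXED below is a placeholder, not a real errata version.

import subprocess

FIXED = "2.6.1-20"   # hypothetical version-release from the vendor advisory

def installed_version(package):
    """Return 'version-release' for an installed RPM, or None if absent."""
    try:
        out = subprocess.check_output(
            ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", package])
    except subprocess.CalledProcessError:
        return None
    return out.decode().strip()

current = installed_version("wu-ftpd")
if current is None:
    print("wu-ftpd is not installed")
elif current < FIXED:   # naive string compare; real RPM version comparison is fancier
    print("wu-ftpd %s is older than the patched %s -- update it" % (current, FIXED))
else:
    print("wu-ftpd %s looks current" % current)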

However, the issue is more complicated than just timely patching. Many people say that companies should patch immediately upon seeing the patch on the vendor site, or right after testing it, and so on. But talk is cheap, and security talk is no exception. It has been calculated and reported that large companies (especially Microsoft-only shops), using their existing system and network staff, cannot complete one round of patching before the next patch is released. Prioritizing patches is another hard challenge (http://www.securitywatch.com/RES/September3.html). The situation is dramatically better in the UNIX/Linux world, but one still has to check vendor announcements often enough to stay secure. And judging by the number of FTP scans on our honeynet, a vulnerable FTP server might be exploited within days (we see hundreds of attempted FTP accesses per day).

Still, patching is good security practice, but it is also a losing battle. It might not seem so if you have just a few UNIX servers and an automated update (or notification) system, but for a large environment it likely is. As Bruce Schneier writes in his "Secrets and Lies", information systems are becoming more complex and thus more vulnerable. His concept of a "window of exposure" shows that there will always be an interval of time during which attackers have the advantage.

What prevention method will always work? Good network design only goes so far. Hardened hosts and firewalls will not stop application-level attacks, since something will always have to be allowed through. Patching works if you have the time to do it and to watch for new patches for all mission-critical software every day. Avoiding network daemons with a bad security history might help, but programs change and new bugs are introduced every day. For example, old SunOS security guides actually recommended replacing the stock Sun FTP daemon with the "secure" WU-FTPD; now WU-FTPD is the guilty party. One proposed solution is the emerging category of "intrusion prevention", loosely defined as operating systems or special software that puts bounds on an application's behavior. While a precise definition is still missing (see some discussion at http://www.infosecuritymag.com/2002/apr/note.shtml), these products can sometimes prevent incidents like this one, without any need to patch or update. In the Linux world, for example, EnGarde Linux (http://www.engardelinux.com/) and some other vendors provide solutions that mitigate application attacks by limiting application behavior, stopping attacks even on unpatched systems. If the FTP server had been unable to spawn a shell, this attack would not have succeeded.
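The vendor products mentioned above implement this idea in their own ways; purely as a generic illustration of bounding a daemon's behavior, the sketch below launches a hypothetical FTP daemon chrooted and stripped of root privileges, so that a "spawn a shell" payload finds neither a shell binary nor a useful filesystem. This is not how EnGarde or any particular product works; it only shows the principle.

#!/usr/bin/env python
# Rough illustration of bounding a daemon's behavior: run it chrooted and
# unprivileged. Paths, UID/GID and the daemon binary are hypothetical.

import os

JAIL = "/srv/ftp-jail"        # contains only the files the daemon truly needs
DAEMON = "/usr/sbin/in.ftpd"  # hypothetical daemon path, relative to the jail root
UID, GID = 5001, 5001         # dedicated, unprivileged account

def run_jailed():
    os.chroot(JAIL)           # requires root; everything outside JAIL disappears
    os.chdir("/")
    os.setgid(GID)            # drop group privileges first...
    os.setuid(UID)            # ...then user privileges; no way back to root
    # If no shell binary exists inside the jail, "spawn a shell" payloads like
    # the one used in this incident have nothing to execute.
    os.execv(DAEMON, [DAEMON])

if __name__ == "__main__":
    run_jailed()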

II. Detection

In this case study detection was not an issue. Empty server disks served as a reliable indicator that something was amiss. However, our hypothesis is that the attacker decided to delete the disk contents only after he or she understood that the environment was not conducive to his or her purposes (no outgoing connections, nothing to hack inside). The rootkit installation presupposed an intention to keep the box for future use. Now imagine that the DMZ had not been so robust and the attacker had gotten away with deploying the rootkit and preserving the ability to come back. In that case, only a good network IDS (such as Snort, http://www.snort.org) with up-to-date attack signatures would have helped.

Not surprisingly, intrusion detection attack signatures proved to be an effective detection mechanism. But an IDS is only as effective as the person watching the screen and analyzing and correlating the log files. Products such as netForensics are also effective in detecting violations by providing a full picture of the incident and notifying the security professional. However, it remains important that somebody actually looks at the data and acts on it.
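As a small example of narrowing that gap, the sketch below (assuming Snort's plain-text "fast" alert log at a typical, but not guaranteed, path) follows the file and flags FTP-related alerts the moment they appear; the notification could just as easily be mailed or paged to the on-call administrator.

#!/usr/bin/env python
# Sketch: watch a Snort "fast" alert file and flag FTP-related alerts as they
# appear. The log path is a common default and may differ per installation.

import time

ALERT_FILE = "/var/log/snort/alert"   # assumed location of Snort fast alerts

def follow(path):
    """Yield lines appended to a file, tail -f style."""
    with open(path) as f:
        f.seek(0, 2)                   # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

for line in follow(ALERT_FILE):
    # Fast alerts carry the signature message between [**] markers and end with
    # the connection as 'src:port -> dst:port'.
    if "FTP" in line or ":21 " in line or ":21\n" in line:
        print("ALERT needs a human: %s" % line.strip())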

III. Response

Part I of the paper outlined an effective investigation involving computer and network forensics. Several lessons can be drawn from it. First, having network security devices helped a lot, but only after the intrusion had occurred. With nobody monitoring the network in real time, the deployed IDS amounted to an effective forensics tool, and no more. In addition, network forensics using data aggregation and correlation software helped restore the picture of the intruder's actions within the DMZ. In fact, had the goal been only to investigate the cause of the incident and not to recover the attacker's tools, network forensics alone would have been sufficient.
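A toy version of that aggregation step is sketched below. It assumes the firewall and IDS logs have already been exported to plain text with a sortable timestamp in the first field and a source address in the third (both the filenames and the field layout are hypothetical), and merges them into a per-attacker timeline.

#!/usr/bin/env python
# Toy aggregation: merge firewall and IDS log lines into one timeline per
# source address. Filenames and field layout are assumed for illustration.

from collections import defaultdict

LOGS = {
    "firewall": "fw-export.log",   # hypothetical export files
    "ids":      "ids-export.log",
}

def load(label, path):
    events = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 3:
                continue
            timestamp, src = fields[0], fields[2]
            events.append((timestamp, label, src, line.strip()))
    return events

timeline = defaultdict(list)
for label, path in LOGS.items():
    for ts, source, src_ip, raw in load(label, path):
        timeline[src_ip].append((ts, source, raw))

# Print each suspected attacker's actions in time order across both devices.
for src_ip, events in timeline.items():
    print("=== activity from %s ===" % src_ip)
    for ts, source, raw in sorted(events):
        print("%s [%s] %s" % (ts, source, raw))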

Second, disk forensics is not a hard science; it is a game of chance. Guaranteed recovery of deleted files on UNIX file systems is simply not possible, especially in cases where a long time has passed since the incident. Recovered forensic evidence helped restore the picture of the attack and gave us some hacker tools to study. However, disk forensics procedures can be very time-consuming.
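To give a flavor of why recovery is hit-or-miss, the sketch below carves a raw image of the wiped partition (the filename is hypothetical, e.g. a dd copy taken during the investigation) for gzip signatures, which is roughly how fragments of an attacker's tarballs can be located; anything already overwritten is simply gone.

#!/usr/bin/env python
# Sketch: naive file carving. Scan a raw disk image for gzip magic bytes and
# dump a fixed-size chunk at each hit for manual inspection. Real forensic
# tools walk the filesystem metadata; this is the brute-force fallback.

IMAGE = "hda8.dd"            # hypothetical dd image of the wiped partition
MAGIC = b"\x1f\x8b\x08"      # gzip header: magic bytes plus deflate method
CHUNK = 1 << 20              # carve 1 MB per hit

with open(IMAGE, "rb") as img:
    data = img.read()        # fine for a sketch; large images need streaming

offset, hits = 0, 0
while True:
    offset = data.find(MAGIC, offset)
    if offset < 0:
        break
    hits += 1
    out = "carved_%08x.gz" % offset
    with open(out, "wb") as f:
        f.write(data[offset:offset + CHUNK])
    print("possible gzip data at offset 0x%x -> %s" % (offset, out))
    offset += 1

print("%d candidate fragments carved; many will be partial or overwritten." % hits)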

Third, detailed analysis of hacker attacks and tools is better left for the research honeypot environment. Production systems and the people running them (that is, system and network administrators) are not well suited for this battle, since many business requirements run counter to the needs of security researchers. For example, letting a hacker keep access to the box for some period of time is unthinkable for a production system. In a honeynet, however, it should be allowed, since it may provide valuable feedback on hacker operations.

As an overall conclusion, the case study highlights the risks of running exploitable servers, the benefits of a well-designed DMZ, and some useful investigative techniques.


Anton Chuvakin, Ph.D. is a Senior Security Analyst with netForensics (http://www.netforensics.com), a security information management company that provides real-time network security monitoring solutions.
