This article presents Part II of a case study of a company network server compromise. Lessons on designing and implementing security are drawn from the case.

A computer forensics investigation was undertaken and its results are presented here. The article offers an opportunity to follow the trail of incident response for a real case. We organize the case study around the prevention-detection-response framework: how could future incidents of this kind be prevented? What technological means are needed to detect them? How can we respond to them effectively?

I. Prevention

What could the company have done to prevent the incident? Its network DMZ architecture was robust (see the description in the previous paper), and that actually prevented the damage from spreading. As the netForensics network traces showed, the attacker had attempted to connect to several internal machines, all without success.

In addition, an effective DMZ setup blocked the attacker's attempts to connect back to his own site and to other sites. Overall, this shows how important it is to deny DMZ machines all access to the outside. It also helps prevent potential liability claims against your company if one of its machines is used to launch denial-of-service attacks or to commit other network abuse against third parties; many security experts predict a rise in lawsuits against companies whose networks were used to stage attacks. A DMZ machine is far more likely to be compromised by outside attackers than a machine on the internal network. For this reason, access rules should allow DMZ machines to do only their job (the principle of least privilege): the web server should only be allowed to serve pages, the mail server to accept and forward mail, and nothing more.
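
To make the least-privilege rule concrete, here is a minimal Python sketch of a default-deny policy table and a check that a given flow is explicitly permitted. The host names and ports are hypothetical, not the company's actual rule set:

    # Minimal default-deny DMZ policy sketch: every flow is denied unless it is
    # explicitly listed as part of a machine's job. Hosts and ports are hypothetical.

    # (source_host, destination_host, destination_port) tuples that are allowed.
    ALLOWED_FLOWS = {
        ("any",      "dmz-web",       80),   # web server serves pages
        ("any",      "dmz-mail",      25),   # mail server accepts mail
        ("dmz-mail", "internal-mail", 25),   # mail relay forwards inbound mail
    }

    def flow_allowed(src, dst, dport):
        """Return True only if the flow matches an explicit allow rule."""
        return (src, dst, dport) in ALLOWED_FLOWS or ("any", dst, dport) in ALLOWED_FLOWS

    # A compromised DMZ web server trying to reach the outside or the internal LAN
    # should fail the check -- the default is to deny.
    print(flow_allowed("internet", "dmz-web", 80))       # True: its job
    print(flow_allowed("dmz-web", "internet", 6667))     # False: outbound connection blocked
    print(flow_allowed("dmz-web", "internal-db", 3306))  # False: no path inside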

As for preventing the actual incident, it is clear that steps should have been taken. The system administrator should have patched WU-FTPD using the Red Hat update page (https://www.redhat.com/en/services/support); the updated FTP daemon is reportedly safe from this particular attack (see the previous part for details).

However, the issue is more complicated than just timely patching. Many people say that companies should patch immediately upon seeing the patch on the vendor site, or right after testing it, and so on. But talk is cheap and security talk is no exception. It has been calculated and reported that large companies (especially Microsoft-only shops) do not have enough system and network staff time to complete one round of patching before the next patch is released. Prioritizing patches is another hard challenge. The situation is dramatically better in the UNIX/Linux world, but one still has to check vendor announcements often enough to stay secure. And judging by the number of FTP scans on our honeynet (we see hundreds of attempted FTP accesses per day), a vulnerable FTP server might be exploited within days.
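
As a rough illustration of the "check vendor announcements often enough" advice, the following Python sketch polls a vendor security-advisory RSS feed and flags entries that mention packages you run. The feed URL and package list are placeholders, not a real Red Hat feed:

    # Sketch of a daily advisory check: pull a vendor's security-announcement RSS feed
    # and flag entries that mention packages we run. The feed URL and package list are
    # placeholders -- substitute your vendor's real advisory feed.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/security-advisories.rss"   # placeholder URL
    WATCHED_PACKAGES = ["wu-ftpd", "openssh", "apache"]

    def check_advisories(feed_url, packages):
        with urllib.request.urlopen(feed_url) as response:
            tree = ET.parse(response)
        hits = []
        # RSS 2.0 items live under channel/item; titles usually name the package.
        for item in tree.iter("item"):
            title = (item.findtext("title") or "").lower()
            if any(pkg in title for pkg in packages):
                hits.append(title)
        return hits

    if __name__ == "__main__":
        for title in check_advisories(FEED_URL, WATCHED_PACKAGES):
            print("ADVISORY:", title)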

Still, patching is good security, but it is also a losing battle. It might not seem so if you have just a few UNIX servers and use an automated update (or notification) system, but for a large environment it likely is. As Bruce Schneier writes in "Secrets and Lies", information systems are becoming more complex and thus more vulnerable. His concept of the "window of exposure" shows that there will always be an interval of time during which attackers have the advantage.

What is the prevention method that will always work? Good network design only goes so far. Hardened hosts and firewalls will not stop application-level attacks, since something will always have to be allowed through. Patching works if you have the time to do it and to watch for new patches for all mission-critical software every day. Avoiding network daemons with a bad security history might help, but programs change and new bugs are introduced every day. For example, old SunOS security guides actually recommended replacing the stock Sun FTP daemon with the "secure" WU-FTPD - and now WU-FTPD is the guilty party. One proposed solution is the emerging category of "intrusion prevention", loosely defined as operating systems or special software that puts bounds on an application's behavior. While a precise definition is still missing (see the discussion at TechTarget), these products can sometimes prevent incidents like this one without any need to patch or update. In the Linux world, for example, EnGarde Linux and some other vendors provide solutions that mitigate application attacks by limiting an application's behavior, stopping attacks even on unpatched systems. If the FTP server had been unable to spawn a shell, the attack would not have succeeded.
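
The general idea of bounding an application's behavior can be illustrated without reference to any particular product. The Python sketch below (paths, IDs and the daemon name are placeholders) starts a daemon inside a chroot jail as an unprivileged user, so that an exploited process sees only a minimal filesystem and cannot easily spawn a useful shell:

    # Illustrative sketch (not any specific intrusion prevention product): start a
    # daemon inside a chroot jail as an unprivileged user, so an exploited process
    # sees only the jail and cannot easily escalate. Paths and IDs are placeholders.
    import os

    JAIL_DIR = "/srv/ftp-jail"    # placeholder: minimal filesystem tree with no /bin/sh
    DAEMON = "/usr/sbin/in.ftpd"  # placeholder path, resolved inside the jail after chroot
    RUN_UID = 1001                # placeholder unprivileged uid/gid
    RUN_GID = 1001

    def run_jailed():
        pid = os.fork()
        if pid == 0:                        # child
            os.chroot(JAIL_DIR)             # confine the filesystem view (needs root)
            os.chdir("/")
            os.setgid(RUN_GID)              # drop group, then user privileges
            os.setuid(RUN_UID)
            os.execv(DAEMON, [DAEMON])      # replace the child with the daemon
            os._exit(1)                     # only reached if execv fails
        return pid                          # parent returns the child's pid

    if __name__ == "__main__":
        run_jailed()

Commercial intrusion prevention products go much further than this, of course; the sketch only conveys the principle of confining what a daemon is permitted to do.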

II. Detection

In this case study detection was not an issue. Empty server disks served as a reliable indicator that something was amiss. However, our hypothesis is that the attacker decided to delete the disk contents only after he or she understood that the environment was not conducive to his or her purposes (no outgoing connections, nothing to hack inside). The rootkit installation presupposed an intention to keep the box for future use. Now imagine that the DMZ had not been so robust and the attacker had gotten away with deploying the rootkit and preserving the ability to come back. In that case, only a good network IDS (such as Snort, https://www.snort.org/) with up-to-date attack signatures would have helped.
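
To show what "attack signatures" amount to in practice, here is a toy Python matcher that flags payloads containing known attack byte patterns. The patterns are simplified placeholders, not actual Snort rules:

    # Toy illustration of signature-based detection: match known attack byte patterns
    # against captured payloads. The patterns below are simplified placeholders, not
    # real Snort rules.
    SIGNATURES = {
        "WU-FTPD SITE EXEC format string": b"SITE EXEC %",
        "FTP long-path overflow attempt":  b"CWD " + b"A" * 200,
    }

    def match_signatures(payload):
        """Return the names of all signatures found in a single payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    # Example payloads, as they might be pulled from a packet capture.
    sample_payloads = [
        b"USER anonymous\r\n",
        b"SITE EXEC %020d|%.f%.f|\r\n",
    ]
    for payload in sample_payloads:
        for name in match_signatures(payload):
            print("ALERT:", name)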

Not surprisingly, intrusion detection attack signatures proved to be an effective detection mechanism. But an IDS is only as effective as the person watching the screen and analyzing and correlating the log files. Products such as netForensics are also effective in detecting violations by providing a full picture of the incident and notifying the security professional. However, it remains important that somebody actually looks at the data and acts on it.
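
The aggregation-and-notification idea can be sketched in a few lines (this is not how netForensics works internally; the addresses, alert format and mail relay are placeholders): group alerts by source address and mail a summary so that somebody actually sees it:

    # Minimal sketch of alert aggregation and notification (placeholders throughout):
    # group IDS alerts by source address and mail a summary to the analyst.
    import smtplib
    from collections import defaultdict
    from email.message import EmailMessage

    # Hypothetical alert tuples: (source_ip, signature_name)
    alerts = [
        ("203.0.113.7", "FTP SITE EXEC format string"),
        ("203.0.113.7", "Outbound IRC connection attempt"),
        ("198.51.100.3", "Portscan"),
    ]

    by_source = defaultdict(list)
    for src, sig in alerts:
        by_source[src].append(sig)

    body = "\n".join(f"{src}: {', '.join(sigs)}" for src, sigs in by_source.items())

    msg = EmailMessage()
    msg["Subject"] = "IDS alert summary"
    msg["From"] = "ids@example.com"          # placeholder addresses
    msg["To"] = "analyst@example.com"
    msg.set_content(body)

    with smtplib.SMTP("mail.example.com") as smtp:   # placeholder mail relay
        smtp.send_message(msg)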

III. Response

Part I of the paper outlined an effective investigation involving computer and network forensics. Several lessons can be drawn from it. First, having network security devices helped a lot - but only after the intrusion had occurred. The absence of a person monitoring the network in real time made the deployed IDS an effective forensics tool - and no more. In addition, network forensics using data aggregation and correlation software helped to reconstruct the picture of the intruder's actions within the DMZ. In fact, had it been decided only to investigate the cause of the incident and not to recover the hacker's tools, network forensics alone would have been sufficient.
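
A minimal sketch of that aggregation-and-correlation step: merge events from several log sources into a single time-ordered view of the intruder's actions. The sources, timestamps and messages below are hypothetical, not taken from the actual investigation:

    # Minimal sketch of network-forensic correlation: merge events from several
    # sources into a single time-ordered timeline. All entries are hypothetical.
    from datetime import datetime

    firewall_log = [
        ("2002-02-01 14:02:11", "firewall", "DENY dmz-web -> internal-db:3306"),
    ]
    ids_log = [
        ("2002-02-01 14:01:55", "ids", "FTP SITE EXEC format string from 203.0.113.7"),
    ]
    ftp_log = [
        ("2002-02-01 14:01:40", "ftpd", "anonymous login from 203.0.113.7"),
    ]

    def build_timeline(*sources):
        events = [event for source in sources for event in source]
        return sorted(events, key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M:%S"))

    for stamp, source, message in build_timeline(firewall_log, ids_log, ftp_log):
        print(stamp, f"[{source}]", message)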

Second, disk forensics is not a hard science - it's a game of chance. Guaranteed recovery of deleted files on UNIX file systems is simply not possible, especially in cases where a long time has passed since the incident. The recovered forensic evidence helped restore the picture of the attack and gave us some hacker tools to study. However, disk forensics procedures can be very time consuming.
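
One basic disk forensics technique, signature-based carving, can be sketched as follows: scan a raw image of the partition for known file headers (here, the gzip magic bytes) and save whatever follows for later inspection. The image path and carve size are placeholders, and a real investigation would rely on dedicated forensic toolkits rather than this simplification:

    # Minimal sketch of signature-based carving on a raw disk image: scan for known
    # file headers to locate remnants of deleted files. Placeholders throughout.
    GZIP_MAGIC = b"\x1f\x8b\x08"
    IMAGE = "disk.img"        # placeholder: dd image of the compromised partition
    CARVE_SIZE = 1024 * 1024  # crude fixed-size carve; real tools follow file structure

    def carve(image_path):
        with open(image_path, "rb") as f:
            data = f.read()   # simplification: reads the whole image into memory
        offset = data.find(GZIP_MAGIC)
        count = 0
        while offset != -1:
            with open(f"carved_{count}.gz", "wb") as out:
                out.write(data[offset:offset + CARVE_SIZE])
            count += 1
            offset = data.find(GZIP_MAGIC, offset + 1)
        return count

    if __name__ == "__main__":
        print(carve(IMAGE), "candidate gzip fragments carved")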

Third, detailed analysis of hacker attacks and tools is better left to a research honeypot environment. Production systems and the people running them (i.e., system and network administrators) are not well suited for this battle, since many business requirements run counter to the needs of security researchers. For example, letting a hacker keep access to a box for some period of time is unthinkable for a production system, but it should be allowed in a honeynet, since it may provide valuable feedback on the hacker's operations.

As an overall conclusion, the case study highlights the risks of running exploitable servers, the benefits of a well-designed DMZ, and some useful investigative techniques.


The author, a Ph.D., is a Senior Security Analyst with netForensics, a security information management company that provides real-time network security monitoring solutions.