Gary McGraw is perhaps best known for his groundbreaking work on securing software, having co-authored the classic Building Secure Software (Addison-Wesley, 2002). More recently, he has co-written with Greg Hoglund a companion volume, Exploiting Software, which details software security from the vantage point of the other side, the attacker. He has graciously agreed to share some of his insights with all of us at LinuxSecurity.com.

LinuxSecurity: Gary, please give our readers a brief introduction to your new book, Exploiting Software: How to Break Code (ISBN 0-201-78695-8). Who does the book appeal to? Where can it be purchased?

Gary McGraw: Traditionally, the field of computer security has belonged to network and operations people -- think network administrators and IT staff. The basic idea was to build a wall around your networks, protecting your vulnerable stuff (inside) from bad people (outside). The problem with this approach is twofold. First, "perimeter security" is, at its essence, reactive. Second, there is no such thing as a well-defined "perimeter" anymore now that distributed systems are so widespread.

The advent of Web services, mobile code, and Internet-based computing makes the notion of perimeter security seem quaint. Furthermore, the attackers have a distinct advantage, since they only need to find one flaw to exploit, while the defenders have to find and remove them all. In the end, the vulnerable stuff is software, and a lot of software is so buggy that it's almost impossible to protect.

Because of all this, there has been an increasing interest in software security. That means one of the key questions to consider is, how do you harden software against attack? The only reasonable way to do this is to deeply understand just what attacks are. After all, how can you know that your company's critical software is secure, and that it has been built with security in mind, if you really know nothing about what the attacks on it are like? In this sense, Exploiting Software is useful to operations people, and to their management.

By knowing how the software living on and making up their network is likely to be attacked, a good network admin is better equipped to manage risks. Only by understanding how easily buggy software can be compromised can CIOs begin to apply the kind of risk management that business demands to the software they rely on.

Our book is also useful to software developers. It is vital that the people who build our systems (including Linux) learn to look at their software creations not just as a collection of features, but as potential targets for hackers and their nefarious exploits. It only makes sense for builders to get into the mind of their attackers so they know what they are really up against.

For more information about buying a copy of Exploiting Software for yourself (and your entire development staff), check out www.exploitingsoftware.com. The publisher, Addison-Wesley Professional, also has a page for the book at www.awprofessional.com/title/0201786958.

LS: Can you offer a brief description of your background and that of your co-author, Greg Hoglund? How did you two meet or decide to co-write a book? How long have you been in the software security field? How would you say this collaboration has enriched the book?

GM: First, I'll answer for myself. I am a scientist at heart, and I've been playing with computers ever since I got an Apple II+ in 1981. I studied the lucrative field of Philosophy at the University of Virginia and, inspired by Douglas Hofstadter's Pulitzer Prize-winning book Gödel, Escher, Bach, I ended up earning a dual PhD in Computer Science and Cognitive Science from Indiana University (where Doug was my thesis advisor). Around 1985, I got interested in the 'Net. In 1993 we put up one of the first 400 nodes on the Web (Yahoo was a complete list of all websites back then). In 1995, Java came out, and being a programming languages and Web junkie, I downloaded it. Investigating its interesting claims of being a "secure" computing platform led to my first book, Java Security, written with Ed Felten of Princeton. After that, the need for some good work in software security became obvious. In 2001, John Viega and I wrote Building Secure Software, the first book in the world on software security.

Greg Hoglund started out as a black hat. In contrast to my academic background, he is completely self-taught, and has the innate ability to think like a hacker. In fact, he wrote the first rootkit for Windows NT, and started www.rootkit.com in the process. We became friends some years ago, and he agreed to co-write this book just after the publication of Building Secure Software. The idea was to look at a large corpus of attacks and exploits and see if we could discern any patterns in them. Greg's expertise was absolutely critical to this undertaking, and his thinking pervades the technical aspects of the book.

LS: You have previously written a book called Building Secure Software, which covered the design and implementation of secure code for the software developer. Can you explain why we might also need a primer for black hats, such as Exploiting Software?

GM: Actually, Greg and I call Exploiting Software the black hat book (in contrast to BSS, which is the white hat book)! Together, the two books make up the "black and white series."

The reason we saw the need for the black hat book is that we strongly believe that you simply cannot build a defensible, secure system without knowing how people will attack it. The bad guys already understand software. In fact, they are software people. They know how to write code, they know what bugs to look for, and they know how to exploit them. They can hold a security patch up against a broken piece of software and use it as a roadmap for writing a new exploit. Unfortunately, on the other side of the divide, most security operations people do not understand software. They are excellent firewall and router admins, but they don't code. That's a problem.

LS: What are some of the biggest pitfalls that software designers fall into? Can your book help to avoid these problems?

GM: In the book, we provide a set of 49 attack patterns. Attack patterns are directly related to pitfalls and problems that we see in real production software today.

One of the points I like to emphasize is the difference between bugs and flaws in software. Bugs are simple mistakes in code leading to problems like buffer overflows; flaws are mistakes in design. It turns out that a lot of software is flawed. In fact, if you step back and look at a multitude of security problems over time, you'll find that about 50% of them are due to bugs and 50% due to flaws.

There are too many common design failures to list exhaustively here. But here are a couple of examples...

Object Oriented programming is the latest, greatest widespread coding paradigm, and it can indeed be very useful. But the distributed code model has a cost: each class (and possibly every method) is expected to do its own error handling, meaning there is not necessarily any centralized error-handling mechanism. As a result, it becomes difficult to determine exactly what will happen at various points of failure. Keeping track of precisely what is going on in the software as it throws exceptions willy-nilly is difficult at best. Error handling is complex, and complexity is a great friend to attackers. Without some ability to manage errors in a consistent, system-wide fashion, it becomes nearly impossible to be sure that nothing intentionally bad could be made to happen.
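
To make that concrete, here is a minimal, hypothetical C sketch of the same disease -- error handling scattered across call sites with no central policy (plain return codes stand in for exceptions here, and all names are invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The same failure is handled differently at two call sites,
       so there is no single place to audit error behavior. */
    char *copy_name(const char *src) {
        char *buf = malloc(strlen(src) + 1);
        if (buf == NULL)
            return NULL;              /* every caller must check this */
        strcpy(buf, src);
        return buf;
    }

    void handler_a(const char *src) {
        char *name = copy_name(src);
        if (name == NULL) {           /* this caller remembers... */
            fprintf(stderr, "out of memory\n");
            return;
        }
        printf("hello, %s\n", name);
        free(name);
    }

    void handler_b(const char *src) {
        char *name = copy_name(src);  /* ...this one forgets: on */
        printf("hello, %s\n", name);  /* failure, printf gets NULL
                                         (undefined behavior) */
        free(name);
    }

    int main(void) {
        handler_a("alice");
        handler_b("bob");
        return 0;
    }

An attacker who can drive the program down the forgotten failure path gets behavior that the designers never reasoned about.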

Then there is the more general concern that software designers tend to be a very feature-oriented crowd. They build things, and things have features. So they naturally default to thinking about security as a set of features (think SSL or access control). The problem is that security is an emergent property of a complete system.

Related to this "featurism" is a problem involving trust. Developers tend to place too much trust in their users, and do not treat user input with due suspicion. They think, "users want to access these features," instead of "attackers might abuse my system in surprising ways."

These are just a few high level issues that can render software insecure. There are many, many more where these came from.

LS: From a security analyst's or administrator's point of view, what practical steps can be taken from the book to further secure their systems, given that their software may be riddled with flaws? Anything beyond keeping up with vendor patches?

GM: I hope to convince operations people to intuitively distrust software. They should be equally skeptical of the claims of security software vendors to solve all of their problems with a magic whatzit (like magic crypto fairy dust).

Any security person can learn to become aware of the programming languages their software is written in, and to think through the security implications of those languages.

If you want secure software, you cannot just rely on (spurious) claims about security. For example, in order to 'prove' that the software is secure, a security vendor might tout its EAL level. But the Common Criteria are not a foolproof approach. In fact, they are really a "least common denominator" approach to security, because ultimately all they require is that vendors demonstrate specific claims that they themselves make about security, such as how well the code was protected while it was being written. But this may have no relationship to real security! That is, some of the vendor claims may not be meaningful to the consumer, and there is really no way to show that any set of claims put together somehow collectively equals "secure software" for all possible situations. This is particularly confusing for the market and its constituent consumers, who may not be versed in the subtleties of why various claims do not add up to something secure enough.

It is at this point that we can circle back to the point of the book: flawed software, and how it is actually exploited. Consider that it can be quite difficult to make code secure when you insist on writing it in C. Only those who need very low-level functionality should use a low-level language like C, which provides too much rope for the majority of programmers. Perhaps there should be some sort of license for writing C! In the end, an administrator or security person should understand that programs written in C are much more likely to be insecure than those written in a modern type-safe language like Java or C#. The CIO and/or top operations people can keep this in mind when they decide what software to deploy to solve a particular problem.

One way for administrators to deal with software they cannot completely trust is to be very careful not to give that software excessive privileges. The manner in which software is installed can make all the difference in the world from a security perspective. In fact, a bad deployment can ruin otherwise pretty darn good software!
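
One concrete (if simplified) illustration on a Unix system: a service that must start as root to perform a single privileged operation can permanently drop to an unprivileged account before it ever reads untrusted input. A minimal sketch, assuming a conventional "nobody" account exists:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <pwd.h>

    int main(void) {
        /* ... do the one privileged operation here (e.g., bind a
           port below 1024) while the process is still root ... */

        struct passwd *pw = getpwnam("nobody");  /* unprivileged user */
        if (pw == NULL) {
            perror("getpwnam");
            return EXIT_FAILURE;
        }
        /* order matters: drop the group first, then the user */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            return EXIT_FAILURE;
        }
        if (setuid(0) == 0) {      /* paranoia: verify the drop stuck */
            fprintf(stderr, "still running as root!\n");
            return EXIT_FAILURE;
        }
        /* ... only now start handling untrusted input ... */
        return EXIT_SUCCESS;
    }

If an exploitable bug in the input-handling code is triggered after this point, the attacker's payload runs as "nobody" rather than as root.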

LS: How difficult is it to get started? How long would it take for today's regular non-security-focused developer or administrator to use these techniques to begin to test and improve the security of the software for which they are responsible?

GM: Getting started on software security can be as quick and painless as reading some of the now available books and articles. But getting things going on a really secure footing can be a large undertaking if your organization is big. Consultants like Cigital can sometimes help with this.

Of course, even without achieving excellence in software security, there is plenty of value to be found in taking the first steps, such as simply regarding software with the proper dose of suspicion and keeping privileges to the minimum necessary.

LS: Your book contains an entire chapter on disassembling and reverse engineering. What is your opinion of security by obscurity? Are open systems less secure because the code is freely available, without having to be disassembled?

GM: First, I should say that security by obscurity simply doesn't work, with one exception: if you carefully design a new piece of software, have it tested very carefully for security, have it extensively vetted by software security experts, and only then decline to publish the design, obscurity does make the software harder to exploit. So security by obscurity only helps if the software is already rock solid.

On the other hand, the "open source is more secure" debate is a red herring. All the big software houses -- think Microsoft -- pay entire armies of people to look at their code. In fact, the economics of the situation are on the side of the closed source guys. Analyzing code for exploitable bugs is a hard job best left to professionals who like to be paid for their work. Because it has deep pockets and can pay people to work hard on software security, Microsoft has greatly improved the security of its code in recent years.

Crispin Cowan tried to set up a community-based economy for security analysis of open source, called Sardonix. Unfortunately, it didn't work, mostly because there are not enough really qualified people who are interested in doing security analysis for free or for brownie points. In the end, it is ridiculous to say that "all bugs are shallow." That's only true for the easy bugs. Some bugs are subtle and simply not easy to detect, even by hordes of people.

A related problem: at its core, open source tends to be grown organically. Open source people are feature-oriented, and as we have already discussed, security is not a feature.

However, I would like to add that open source has improved from a security perspective, probably just as much as closed source, partially because of guys like Crispin Cowan and partially because of the increased scrutiny that followed recent attacks. I believe this improvement was driven mostly by the efforts of companies like IBM and, to a lesser extent, SuSE and Red Hat, rather than by some non-economic, non-subsidized means.

LS: Your book mentions that black-box testing software touted as the final solution for software security is really no panacea. Can you expand upon this point?

GM: Black-box application testing solutions are not worth much money! Rudimentary dynamic testing that runs 100 canned tests and declares victory is really rather silly. How can you know whether the inputs you ended up testing (especially with canned tests) are the ones that would uncover defects? It's true that this sort of testing can help determine "badness." That is, if a black-box test suite finds problems, you are in very big trouble indeed. But if all tests pass, that says little about your real security posture.

Don't forget that the basic problem of software security is usually at least as much one of design as of implementation. Black box tests cannot find design flaws (unless they get lucky).

A much more useful idea is code scanning: looking at source code for potential vulnerabilities. A number of open source scanners can be found (ITS4 and flawfinder, to name two), but you have to make sure you understand what the scanner is scanning for in the code. Ask yourself: are you sure the scanning rules are right? What will they catch, and what might they miss? If you understand what they are looking for and why, code scanning can be very useful.
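
To get a feel for what such a scanner does and does not catch: tools like flawfinder and ITS4 work largely by lexically matching calls to a database of historically risky C library functions. A hypothetical fragment like this would light up immediately:

    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        char buf[64];
        char msg[128];
        strcpy(buf, name);               /* flagged: unbounded copy,
                                            genuinely exploitable if
                                            'name' is attacker-controlled */
        sprintf(msg, "hello, %s", buf);  /* flagged too, though here the
                                            buffer sizes happen to make
                                            it safe */
        printf("%s\n", msg);
    }

Note both sides of the coin: the strcpy hit is a real problem, the sprintf hit happens to be a false positive, and a genuine design flaw (say, trusting a client-supplied length field) would produce no hit at all. That is exactly why you need to know what the rules look for.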

LS: One of the chapters of your book is about the infamous buffer overflow problem. Can you briefly explain to our readers what exactly a buffer overflow is, and why it has become such an issue?

GM: Buffer overflows occur because of bad language design (think C) and sloppy code. The best solution is for more software to be written in better languages, like Java and C#, instead of C or C++. Essentially, a buffer overflow happens when information is written over parts of memory that were not intended to hold that information -- think about pouring too much beer into a glass. By overwriting the parts of memory that hold critical information, such as return addresses, an attacker can get an attack payload to execute, with whatever privileges the executing program may have. We explain this in great detail in the book.
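
In C terms, the canonical case looks something like this deliberately broken, minimal sketch (leaving aside modern mitigations such as stack canaries):

    #include <string.h>

    /* 'buf' holds 16 bytes, but nothing limits the length of 'name'.
       A longer argument writes past the end of the buffer into
       adjacent stack memory, which on common platforms includes the
       function's saved return address. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);    /* no bounds check: the overflow */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);   /* the attacker controls this input */
        return 0;
    }

An attacker who controls the argument can craft input that overwrites the saved return address with a pointer into an attack payload, so that when greet() returns, execution jumps wherever the attacker chose.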

Direct memory manipulation comes down to us from the days when memory was exceptionally expensive and precious, and had to be used with extreme frugality. Nowadays, there is memory to waste, and programs grab huge swaths of it and rarely release it, treating it as a raw sea of bits. Handling memory properly is beyond the current understanding of many programmers, and the issue is made worse by common, powerful system calls whose security implications may not be well understood by the people using them.

LS: In your book, you emphasize the importance of risk analysis to overall software security. Can you please explain this concept to our readers? Why should we not aim for perfectly secure software?

GM: We can't just stop using software because it's not absolutely, provably secure. The only way to make your computer really secure is simply to turn it off. Whenever we add functionality (or turn the machine on), we open the door to security issues. You just have to ask yourself whether the functionality you want to utilize is worth the risk you take on. In many cases, the answer will be that it is, but it is critical to recognize the fundamental nature of this tradeoff.

Software security is about minimizing the risk inherent in the additional functionality that software supplies. That's why it is a risk management game.

LS: What concrete improvements would you like to see in the field of software design in the future?

GM: Less is more in security. Turn off unnecessary functionality and services. The feature-oriented set tends to think more is better, but more is less secure. Bloatware is bad.

I always ask developers: who plans on having less code at the end of the year than at the beginning? Very few of them say they do! Less stuff would be better.

Complexity, network-based design, and extensibility (involving mobile code that does stuff on the fly and arrives arbitrarily) are the friends of the attacker. I call these the Trinity of Trouble (also covered in Exploiting Software).

When the Internet toaster is finally available, it will give an attacker the chance to burn your breakfast!

LS: What are your future plans? Do you plan on writing any other books/articles?

GM: When I write a book, I get to a point where I am absolutely sick of writing, and I say to myself, "I'll never write another book again!" And then later, I forget the pain and do it again. I'm still somewhat in the "sick of writing" phase, but I'm sure I'll soon forget.