David A. Wheeler is the author of the "Secure Programming for Linux HOWTO", the Linux Documentation Project's document describing how to write secure programs for Linux. We thought it would be interesting to interview David to learn more about his Secure Programming for Linux HOWTO and his views on security in general. He has written many other books and papers; he co-authored the GNOME User's Guide and was lead editor of the IEEE's book Software Inspection: An Industry Best Practice. He has also contributed various small programs and documents on the web. He lives in Northern Virginia.


LinuxSecurity: When did you get involved in computing and security? What intrigued you about them?

Wheeler: I first got involved with computing in the 1970s; like many, I was intrigued by being able to command a machine that could solve problems, create virtual worlds, and support new means of communication. In the 1980s I got involved with Unix systems and was fascinated by Unix's elegance; it made it easy to hook together small tools to solve complex problems. Around the same time I became interested in security; friends showed me subtle security holes that made me realize how difficult it can be to make a system secure. Yet, to realize the potential of computer technology, we've got to make computers much more secure.

LinuxSecurity: What have you done since?

Wheeler: Not everything I've done has been directly related to security. A lot of my work has involved how to build large, complex systems that must be highly reliable and interoperate. I've assessed a number of large software development programs (examining their management approach, development process, and/or resulting code). I also work hard at keeping up with the technology, so I've learned a large number of languages, interesting libraries, standards, and so on.

More specific to security, I've worked on developing processes to improve software security, and I've contributed to the development and implementation of the Common Criteria for IT Security Evaluation (an ISO security standard). I've examined various security standards, Java, a commercial firewall, and Linux.

LinuxSecurity: Why did you develop the "Secure Programming for Linux HOWTO"?

Wheeler: Over the last several years I noticed that many Linux developers seemed to keep falling into the same security pitfalls. Auditors were slowly catching problems, but it would have been better if the problems hadn't been put into the code in the first place. I believe part of the problem was that there wasn't a single, obvious place where developers could go to get information on how to avoid known pitfalls. The information was publicly available, but it was often hard to find, out-of-date, incomplete, or had other problems. I guess all of that leads up to the answer: I developed this document in the hope that future software developers for Linux won't repeat past mistakes, resulting in an even more secure form of Linux.

LinuxSecurity: For those who haven't read ``Secure Programming for Linux HOWTO,'' can you briefly describe what's in it?

Wheeler: I guess I'll start by quoting the introduction: ``This paper describes a set of design and implementation guidelines for writing secure programs on Linux systems. For purposes of this paper, a secure program is a program that sits on a security boundary, taking input from a source that does not have the same access rights as the program. Such programs include application programs used as viewers of remote data, CGI scripts, network servers, and setuid/setgid programs.''

After the background section, the paper summarizes Linux security features. It then lists hundreds of specific guidelines, grouped into the following categories: validate all input, avoid buffer overflow, structure program internals and approach, carefully call out to other resources, send information back judiciously, and special topics.

I didn't invent these guidelines ``whole cloth;'' much of this information has been available in various locations, but it wasn't unified into a single location and almost none of it specifically discussed Linux. My document has lots of credits and references to those other sources, and in fact I'd recommend reading all of those references if you can. What I did was find these various sources (not an easy task!), add some guidelines of my own, develop a (hopefully) rational organization, and bring these guidelines together so there'd be a single, simple place to look.

LinuxSecurity: Can you give any rationale for why you included what you did?

Wheeler: Sure. I added the section on Linux security features because I've found that many programming books don't provide enough information about them, or they spread the information across so many parts of the book that you can't see the ``big picture.'' I put buffer overflows in a separate chapter because they're so important.

I try to do more than just warn that there's a problem; I discuss what you should do instead, including the trade-offs of the alternatives. For example, for buffer overflows in C/C++, I cover the ``standard'' solution (using strncpy and so on) and alternative approaches (such as strlcpy and libmib), along with their trade-offs. My emphasis is on what a programmer can do, so, for example, I don't dwell on all the horrible things that can happen to setuid/setgid programs that don't control the environment variables sent to them; instead, I recommend the only approach that works in most cases (pull out what you need, and throw away the rest).
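To make the string-handling trade-off concrete, here's a minimal sketch in C (my own illustration, not an excerpt from the HOWTO; the buffer size is arbitrary). Keep in mind that strncpy doesn't null-terminate when the source fills the buffer, and that strlcpy originates on the BSDs and may not be in a stock Linux libc, so I show snprintf as the always-terminating alternative:

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 64

    int main(void)
    {
        const char *input = "some possibly attacker-controlled string";
        char buf[BUF_SIZE];

        /* The "standard" solution: strncpy bounds the copy, but it
           does NOT null-terminate if the source fills the buffer,
           so the terminator must be added by hand. */
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        /* An alternative that always null-terminates within the
           bound; snprintf also reports the length the output would
           have needed, which makes truncation easy to detect.
           strlcpy(buf, input, sizeof(buf)) behaves much the same,
           where it's available. */
        if (snprintf(buf, sizeof(buf), "%s", input) >= (int)sizeof(buf))
            fprintf(stderr, "warning: input truncated\n");

        return 0;
    }

And here's a sketch of ``pull out what you need, and throw away the rest'' for a setuid/setgid program's environment (the choice of TZ is purely illustrative, and clearenv() is a glibc extension rather than strict POSIX):

    /* Defining _GNU_SOURCE exposes clearenv(), a glibc extension. */
    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <string.h>

    void sanitize_environment(void)
    {
        /* Pull out what you need (TZ is just an example)... */
        const char *tz = getenv("TZ");
        char *saved_tz = tz ? strdup(tz) : NULL;

        /* ...and throw away the rest. */
        clearenv();

        /* Rebuild a minimal, known-safe environment. */
        setenv("PATH", "/bin:/usr/bin", 1);
        setenv("IFS", " \t\n", 1);

        if (saved_tz) {
            /* A real program would validate saved_tz before use. */
            setenv("TZ", saved_tz, 1);
            free(saved_tz);
        }
    }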

LinuxSecurity: Why doesn't your HOWTO cover administration, configuration, development processes, or other Unix-like systems such as the *BSDs?

Wheeler: I don't cover administration or configuration because there are already a lot of useful documents on those subjects. There's the Linux Security HOWTO, the Linux Security Administrator's Guide, the Linux Administrator's Security Guide, and SANS' Securing Linux Step-by-Step, among many others. Clearly, you need to know how to configure a secure system, but configuration won't help if the underlying software is fundamentally insecure.

As far as not covering development processes or other Unix-like systems, I did that primarily so I could create a useful version of the document more rapidly than if I'd tried to do it all at once. I omitted some guidelines I considered less critical, but now I think that was a mistake and I intend to add them back in to make this a ``complete'' document. I'm thinking of expanding the document to cover development processes, other Unix and Unix-like systems (like the *BSDs), more about CGI and Java, and adding code examples. Recommendations and pointers to references are always appreciated.

LinuxSecurity: Do you suggest any particular tools for auditing or development of secure software?

Wheeler: I don't have any particular recommendations for program auditing tools. You might look at, for example, RST's ITS4. Be careful: RST says ITS4 is an "open source tool," but it's only free for non-commercial use, violating requirement 6 of the Open Source Definition. I'm afraid this will confuse a lot of people. However, while I don't like RST's confusing use of the term "open source," I'm glad to see tools like this more widely available. Remember that any such tool can be very useful, but it'll still miss a lot; a clean run does not mean the software is secure.

As far as development goes, currently one of the biggest security problems is the buffer overflow. Thus, from a security point of view I'd suggest using a programming language that prevents buffer overflows; Python, Perl, Java, Ada, Eiffel, LISP, and lots of others fit that bill. If you really want to use a language that doesn't include buffer overflow protection (say C or C++), be sure to use libraries that defend you against buffer overflows and be especially careful with every line of code.
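For instance, here's the classic pitfall and one common defensive pattern in C (again a sketch of my own, with an arbitrary buffer size):

    #include <stdio.h>

    int main(void)
    {
        char line[128];

        /* gets(line) would write past the end of 'line' on long
           input, since it has no way to know the buffer's size;
           that's the classic buffer overflow. */

        /* fgets() bounds the read to the buffer and null-terminates,
           so an overlong input line is truncated instead of
           overflowing. */
        if (fgets(line, sizeof(line), stdin) != NULL)
            printf("read: %s", line);

        return 0;
    }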

In the end, it's a truism that the most important tool for auditing or developing secure software is the one between your ears. Everything else pales in comparison.

LinuxSecurity: Do you think open source software has the potential for being more or less secure than proprietary software?

Wheeler: In general, I see open source software as having the potential to be much more secure. I'm not alone in this; other security specialists such as Bruce Schneier have come to the same conclusion (see his comments about this in Crypto-Gram or Linux Magazine). The argument is fairly simple: when you close off your software, only a small number of people can review it to find (and fix) errors; when your software is open source, you can have many more reviewers.

I don't buy the argument that opening source aids attackers more than defenders; attackers have had disassemblers and other tools for years, so they don't need source code to attack. There is the danger that people can insert malicious code into open source code, but malicious code can also be inserted into proprietary code. Both have happened, and so far malicious modifications to open programs have been detected and fixed much more rapidly (because it's easier to detect such things).

I will say that this greater security is a potential, not a foregone conclusion. Very early versions of open source software haven't gone through a peer review process, so they're probably no better until such a review has occurred. And of course, peer review only works when there's real peer review by experts. In particular, if the software is of interest to only a few, then there are fewer people to do that review, and probably less trust should be extended to the component. Finally, there has to be a maintainer willing to fix any problems found.

LinuxSecurity: There's a lot of detail that has to be ``gotten right;'' is making secure software hopeless?

Wheeler: It's not hopeless; it's just not easy. Capturing lessons learned from other programs (as I've done) hopefully makes it a lot easier. Peer review helps too, especially if the peer reviewers know of specific problems to look for.

I should note that things aren't really any easier with other computer technologies. It's no easier to make secure programs for other Unix-like systems; indeed, many of the same guidelines apply. It's no easier for Windows NT/2000-based systems; indeed, our fewer years of experience with them suggest that we've only begun to learn how to avoid their pitfalls, and the constant flow of security reports shows that it's not fundamentally easier to develop secure programs on them. It's not even necessarily easier with Java, depending on the kind of application you're building. Java prevents buffer overflows, but there are new and interesting attacks in Java because its security is enforced by software instead of more traditional hardware protection mechanisms. In Java, even a minor JVM break or weakness (in such details as type enforcement or data visibility) can cause Java's entire security edifice to crumble. For applications where the Java program mediates security, you still have the difficulty of ensuring that exactly the right service (no more and no less) is provided.

LinuxSecurity: It's been said that the bazaar method of open source development is the same as the failed ``penetrate and patch'' process. Do you agree?

Wheeler: No, I don't agree, although I understand why they'd be confused. In the ``penetrate and patch'' process, the software is completely developed without really thinking about security issues; the primary method of securing the component is having ``experts'' penetrate it after development and then patching the holes found. Now, even some well-designed programs have had to be patched; the difference is that penetrate and patch depends on expert penetration as the primary method for gaining security. It's been repeatedly shown that this doesn't work very well; in nearly all cases it's very difficult to retrofit security into a component not designed to be secure.

Eric S. Raymond coined the term ``bazaar;'' readers who aren't familiar with the term should see his definition. In the bazaar method, software updates are released early and often (e.g., daily), with the intent of gaining rapid feedback from co-developers. A project using the bazaar method may or may not be using the ``penetrate and patch'' process, depending upon whether or not component developers and reviewers consider the security ramifications of the initial version and proposed changes. If security issues are considered by contributors and reviewers of contributions, then the project isn't using the ``penetrate and patch'' process, even if it's using the bazaar method. I think larger bazaars are more likely to consider security, since they're more likely to have members with security expertise, so I'd expect larger bazaars to be less likely to follow a ``penetrate and patch'' process.

I'm not surprised these approaches would be confused because, in the bazaar method, the development process is public in a way that a proprietary product's process would not be. If the bazaar reviewers use the information in secure programming guidelines (like mine), then hopefully it will be even easier for bazaar developers to avoid a ``penetrate and patch'' process.

LinuxSecurity: Where do you see IT security going in the future?

Wheeler: People and organizations are increasingly depending on the Internet, and they can't depend on it if their machines and networks can be easily damaged. So I see organizations and governments spending much more money to protect what's becoming an increasingly vital service. In particular, I hope that the U.S. government will spend some money to speed the securing of open source components, so that the U.S. infrastructure will become secure more rapidly.

Computers will become more plentiful and available to more people, and those people will want to use them as tools without worrying about security problems. Thus, I expect that many machines' basic components will be automatically updated as necessary (just as Windows and Mac users' virus checkers are updated now). You'll also see more emphasis on the ease of use of security capabilities. These machines will have to support much finer protection between applications, so that running one application is much less likely to corrupt or expose the data of another (Java already does this). There will need to be more flexible security policies (since people's needs vary so much); there are already several efforts to implement more flexible security policies in Linux.

LinuxSecurity: What are your future plans?

Wheeler: I hope to be doing more things to improve Linux security. Linux's combination of power, flexibility, and extremely low cost is a great advantage. If its security were even better and its policies more flexible, it could be used in even more situations.

David A. Wheeler's website is at David A. Wheeler's Personal Home Page.