
Software security. building security in


Chapter 1 Defining a Discipline

There are two main flavors of buffer overflows: those associated with stack-allocated buffers and those associated with heap-allocated buffers. Overflowing a stack-allocated buffer is the most common attack. This is known as "smashing the stack." The C Programming Language (the C "bible") shows C programmers how they should never get input (without saying "never") [Kernighan and Ritchie 1988, p. 164]. Since we teach people to program in C as an introduction to programming, we should not be surprised at how common buffer overflow vulnerabilities are. Many, many C library functions and arithmetic issues can lead to buffer overflows. Consider the snippet below. This is a dangerous piece of vulnerable code. Not only are we using gets() to get (unbounded) input, but we're using it to load a local variable on the stack. By providing just the right kind of input to this program, an attacker can obtain complete control over program control flow.

    void main() {
        char buf[1024];
        gets(buf);
    }

For more on buffer overflows, see Building Secure Software (where you are taught in excruciating detail how buffer overflows work) and Exploiting Software (which describes trampolining and other more advanced buffer overflow attacks, as well as plenty of real-world examples) [Viega and McGraw 2001; Hoglund and McGraw 2004]. If you are concerned about buffer overflow problems and other basic software security bugs, don't use C. If you must use C, use a source code security scanner as described in Chapter 4. By the way, C++ is even worse than C from a security perspective. C++ is C with an object model crammed halfway down its throat.

Flaw: A flaw is a problem at a deeper level. Flaws are often much more subtle than simply an off-by-one error in an array reference or use of an incorrect system call. A flaw is certainly instantiated in software code, but it is also present (or absent!) at the design level. For example, a number of classic flaws exist in error-handling and recovery systems that fail in an insecure or inefficient fashion. Another example can be found in the box, Microsoft Bob: A Design Flaw, that follows. Automated technologies to detect design-level flaws do not yet exist, though manual risk-analysis processes can identify flaws (see Chapter 5). Table 1-2 provides some simple examples of bugs and flaws. In practice, we find that software security problems are divided 50/50 between bugs
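The gets() snippet earlier in this section can be made safe simply by bounding the read. A minimal sketch of the standard fgets-based idiom follows; the read_line wrapper is my own illustration, not code from the book:

```c
#include <stdio.h>
#include <string.h>

/* Read at most size-1 bytes into buf. Unlike gets(), fgets()
 * never writes past the end of the buffer it is handed, so the
 * attacker-controlled input cannot smash the stack. */
static int read_line(char *buf, size_t size, FILE *in) {
    if (fgets(buf, (int)size, in) == NULL)
        return -1;                      /* EOF or read error */
    buf[strcspn(buf, "\n")] = '\0';     /* strip trailing newline */
    return 0;
}
```

Overlong input is silently truncated at size-1 bytes rather than overflowing, which is exactly the trade the vulnerable snippet fails to make.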



332 Appendix A Fortify Source Code Analysis Suite Tutorial

2. How do you know whether or not the SCA Engine was able to find and read all of the required files?

5. Exploring the Basic SCA Engine Command Line Arguments

This exercise continues the introduction of the Source Code Analysis Engine. In this exercise, you will experiment with the basic command line arguments accepted by the SCA Engine.

1. Consider the command line syntax:
   • For C and C++ source code, the syntax is: sourceanalyzer [options] compiler [compiler-flags] files
   • For Java source code, the syntax is: sourceanalyzer -cp classpath [options] files
   • For a .NET executable, the syntax is: sourceanalyzer [options] -libdirs dirs executable
2. Experiment with the following basic command line arguments using the sample programs from the previous exercise.
   • Compiler: For C and C++ code, the sourceanalyzer command is included in the compile line as a prefix to the actual build command, such as gcc or cl. For complex builds, the sourceanalyzer command is also used to intercept archiving commands, such as ar, and linking commands, such as link and ld. The SCA Engine interprets the flags passed in to the build command and adjusts its own operation accordingly, without affecting the actual build. For Java code, the compiler is implicitly javac.
   • Output Format: -format format. This option specifies the output format. The default format is text. To select the Fortify Vulnerability Description Language (FVDL) format, which is the Fortify Software XML-based vulnerability description language, specify -format fvdl. You can also specify fvdl-zip, which produces a zipped FVDL file. FVDL is more verbose than text and is used by the Fortify Audit Workbench and other tools.
   • Output Location: -f filename. This option specifies a file location to which the output will be



Modern Risk Analysis 157

[Figure 5-3 (diagram): A forest-level view of a standard-issue four-tier Web application. Client Tier (client computers with Web and rich interfaces), Web Tier (Web server with Web interface), Application Tier (application servers, remoting service, directory, order processing application), Data Tier (database server with the order database).]

specified mathematical model) makes risk analysis at the architectural level possible. Although one could contemplate using modeling languages, such as UMLsec, to attempt to model risks, even the most rudimentary analysis approaches can yield meaningful results. Consider Figure 5-3, which shows a simple four-tier deployment design pattern for a standard-issue Web-based application. If we apply risk analysis principles to this level of design, we can immediately draw some useful conclusions about the security design of the application.

During the risk analysis process we should consider the following:
• The threats who are likely to want to attack our system
• The risks present in each tier's environment
• The kinds of vulnerabilities that might exist in each component, as well as the data flow
• The business impact of such technical risks, were they to be realized
• The probability of such a risk being realized



Appendix A Fortify Source Code Analysis Suite Tutorial1 323

1This appendix was created and is maintained by Fortify Software and is reprinted here with permission.

A special demonstration version of the Fortify Source Code Analysis product is included with this book. Please note that the demonstration software includes only a subset of the functionality offered by the Source Code Analysis Suite. For example, this demonstration version scans for buffer overflow and SQL injection vulnerabilities but does not scan for cross-site scripting or access control vulnerabilities.

The key you will need to unlock the demo on the CD is FSDMOBEBESHIPFSDMO. To prevent any confusion, this key is composed of letters exclusively. There are no numbers.

This tutorial presents a set of lessons that cover a number of different source code analysis topics. Each lesson builds on the knowledge gained in the previous lessons, so the lessons should be taken on in the order they are presented. The final lesson allows you to practice what you have learned using a set of open source code bases.

The tutorial provides an introduction to the Fortify Source Code Analysis Suite for Java, C/C++ (using gcc), and .NET projects (using Visual Studio). Specifically, we include information about how to use the Fortify Source Code Analysis Engine and the Fortify Audit Workbench (see Chapter 4).

There are nine lessons in this tutorial:
1. Introducing the Audit Workbench
2. Auditing Source Code Manually



126 Chapter 4 Code Review with a Tool

2. Support multiple tiers. Modern software applications are rarely written in a single programming language or targeted to a single platform. Most business-critical applications are highly distributed, with multiple tiers each written in a different programming language and executed on a different platform. Automated security analysis software must support each of these languages and platforms as well as properly negotiate between and among tiers. A tool that can analyze only one or two languages cannot meet the needs of modern software.

3. Be extensible. Security problems evolve, grow, and mutate, just like species on a continent. No technique or set of rules will ever perfectly detect all security vulnerabilities. Good tools need a modular architecture that supports multiple kinds of analysis techniques. That way, as new attack and defense techniques are developed, the tool can be expanded to encompass them. Likewise, users must be able to add their own security rules. Every organization has its own set of corporate security policies, meaning that a fixed approach to security is doomed to fail.

4. Be useful for security analysts and developers alike. Security analysis is complicated and hard. Even the best analysis tools cannot automatically fix security problems, just as debuggers can't magically debug your code. The best automated tools make it possible for analysts to focus their attention directly on the most important issues. Good tools support not only analysts but also the poor developers who need to fix the problems uncovered by a tool. Good tools allow users to find and fix security problems as efficiently as possible. Used properly, source code analysis tools are excellent teaching tools. Simply by using them, developers can learn about software security (almost by osmosis).

5. Support existing development processes. Seamless integration with build processes and IDEs is an essential characteristic of any software tool. For a source code analysis tool to become accepted as part of an application development team's toolset, the tool must properly interoperate with existing compilers used on the various platforms and support popular build tools like make and ant. Good tools both integrate into existing build processes and also coexist with and support analysis in familiar development tools.

6. Make sense to multiple stakeholders. Software is built for a reason—usually a business reason. Security tools need to support the business. A security-oriented development focus is new to a vast majority of



106 Chapter 4 Code Review with a Tool

Using a tool makes sense because code review is boring and tedious. Analysts who practice code review often are very familiar with the "get done, go home" phenomenon described in Building Secure Software [Viega and McGraw 2001]. It is all too easy to start a review full of diligence and care, cross-referencing definitions and variable declarations, and end it by giving function definitions (and sometimes even entire pages of code) only a cursory glance. Instead of focusing on descriptions of generic code review or code inspection in this chapter, I refer the reader to the classic texts on the subject [Fagan 1976; Gilb and Graham 1993]. This chapter assumes that you know something about manual code review. If you don't, take a quick look at Tom Gilb's web site <http://www.gilb.com> before you continue.

Catching Implementation Bugs Early (with a Tool)

Programmers make little mistakes all the time—a missing semicolon here, an extra parenthesis there. Most of the time, such gaffes are inconsequential; the compiler notes the error, the programmer fixes the code, and the development process continues. This quick cycle of feedback and response stands in sharp contrast to what happens with most security vulnerabilities, which can lie dormant (sometimes for years) before discovery. The longer a vulnerability lies dormant, the more expensive it can be to fix. Adding insult to injury, the programming community has a long history of repeating the same security-related mistakes.

One of the big problems is that security is not yet a standard part of the programming curriculum. You can't really blame programmers who introduce security problems into their software if nobody ever told them what to avoid or how to build secure software. Another big problem is that most programming languages were not designed with security in mind. Unintentional (mis)use of various functions built into these languages leads to very common and often exploited vulnerabilities.

Creating simple tools to help look for these problems is an obvious way forward. The promise of static analysis is to identify many common coding problems automatically, before a program is released. Static analysis tools (also called source code analyzers) examine the text of a program statically, without attempting to execute it. Theoretically, they can examine either a program's source code or a compiled form of the program to equal benefit, although the problem of decoding the latter can be
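As a concrete illustration of the kind of "little mistake" the compiler accepts silently but a static analyzer can flag before the program ever runs, consider an off-by-one loop bound (a hypothetical example of mine, not one drawn from the book):

```c
#include <stddef.h>

#define NAME_LEN 8

/* BUG: i <= NAME_LEN walks one element past the end of dst.
 * This compiles cleanly; a static analyzer that matches loop
 * limits against declared array bounds can report it. */
static void copy_name_buggy(char dst[NAME_LEN], const char *src) {
    for (size_t i = 0; i <= NAME_LEN; i++)   /* off by one */
        dst[i] = src[i];
}

/* Fixed: stay strictly below the bound, stop at the source's
 * terminator, and always null-terminate the destination. */
static void copy_name(char dst[NAME_LEN], const char *src) {
    size_t i;
    for (i = 0; i < NAME_LEN - 1 && src[i] != '\0'; i++)
        dst[i] = src[i];
    dst[i] = '\0';
}
```

The buggy variant is exactly the sort of dormant defect the text describes: it may run correctly for years until an input (or stack layout) makes the extra byte matter.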



202 Chapter 7 Risk-Based Security Testing

Extreme Programming and Security Testing

XP takes an interesting approach to testing, often referred to as "test first" or "test-driven design." Ironically, this approach encourages coding to the tests—an activity that was explicitly discouraged by testing gurus before XP came along. Test-driven design is not a disaster. In fact, coding to the tests may work for standard software "features." I bet you can guess the problem though—security is not a feature. Tests based too closely on features can fail to probe deeply into more subtle user needs that are nonfunctional in nature. Probing security features only gets us so far. Once again, this is a problem of testing for a negative.

Though unit tests and user stories in XP are supposed to specify the design, they simply don't do this well enough to get to design flaw issues. The code is the design in XP, but finding design flaws by staring at large piles of code is not possible. In fact, refactoring aside, top-down design does not really happen explicitly in some XP shops. That means there is no good time to consider security flaws explicitly.

By using acceptance tests (devised in advance of coding) as release criteria, XP practitioners keep their eyes on the functional ball. However, this myopic focus on functionality causes a propensity to overlook nonfunctional requirements and emergent situations. Security fits there. One solution to this problem might be to focus more attention on abuse cases early in the lifecycle. This would cohere nicely with XP's user stories. Perhaps some "attacker stories" should be devised as well and used to create security tests.

For more on my opinions about XP and software security, see my talk, "XP and Software Security?! You Gotta Be Kidding," delivered at XP Universe in 2003 <http://www.cigital.com/presentations/xpuniverse/>.
Using a black-list approach (which tries to enumerate all possible bad input) is silly and will not work. Instead, software needs to defend its input space with a white-list approach (and a Draconian white-list approach, for that matter). If your program enforces statements like "Accept only input of 32 bits as an integer" (something that is easy to do in a modern type-safe language), you're better off right off the bat than with a system that accepts anything but tries to filter out return characters. Make sure that your testing approach delves directly into the black-list/white-list input-filtering issue. Microsoft pays plenty of attention to malicious input in its approach to software security. You should too. (See Writing Secure Code [Howard and LeBlanc 2003].)
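The "accept only a 32-bit integer" rule above can be enforced even in C with a small white-list validator. This is my own sketch built on the standard strtoll function; the book does not prescribe this particular code:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* White-list check: accept a string only if it is, in its
 * entirety, a decimal integer that fits in 32 bits. Everything
 * else is rejected; no attempt is made to enumerate and strip
 * "bad" characters (the doomed black-list approach). */
static int parse_int32(const char *s, int32_t *out) {
    char *end;
    if (s == NULL || *s == '\0')
        return 0;                       /* empty input: reject */
    errno = 0;
    long long v = strtoll(s, &end, 10);
    if (errno == ERANGE || *end != '\0')
        return 0;                       /* overflow or trailing junk */
    if (v < INT32_MIN || v > INT32_MAX)
        return 0;                       /* does not fit in 32 bits */
    *out = (int32_t)v;
    return 1;
}
```

For instance, parse_int32("42", &n) returns 1 with n == 42, while "2147483648", "12; rm -rf /", and "" are all rejected: the validator never has to know what an attack looks like, only what legal input looks like.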



Who Should Do Software Security 99

almost never know anything about compilers, language frameworks, software architecture, testing, and the myriad other things necessary to be a solid software person. Arming a normal infosec guy with a silly first-generation code scanner like ITS4 or a black box testing tool like Sanctum's Appscan rarely helps. Tools do not have enough smarts to turn network professionals into software people overnight. Beware of security consultants who claim to be application security specialists when all they really know how to do is run ITS4 or Appscan and print out an incomprehensible report.

Start with software people. Security is much easier to learn about and grok than software development is. Good software people are very valuable, but software security is so important that these highly valuable people need to be repositioned. Also note that software people pay attention only to other software people, especially those with impressive scars. Don't make the mistake of putting lamers or newbies in front of a group of seasoned developers. The ensuing feeding frenzy is downright scary (if not hugely entertaining).

Identifying a responsible person or two is critical to a successful software security program (see Chapter 10). Not only is this important from an accountability perspective, but the sheer momentum that comes from a dedicated person can't be matched. If you want to adopt a new way to do code review (using a tool like Fortify), identify a champion and empower that person to get things done.

Often the most useful first person in a software security group is a risk management specialist charged with addressing software security risks that have been uncovered by outside consultants. Appointing a risk management person makes it much less likely that important results will be swept under the rug or otherwise forgotten by very busy organizations (and who is not busy these days?).
The risk management specialist can be put in charge of the RMF. Mentoring or otherwise training a new software security person may be impossible if there are no existing software security types in the organization. If that's the case, hire outside consultants to come and help you boot up a group. The extensive experience and knowledge that software security consultants have today are as valuable as they are rare, but it is well worth investing in mentoring your people in order to build that capability. Ultimately, you want two types of people to populate your software security group: black hat thinkers and white hat thinkers. If you're lucky,






194 Chapter 7 Risk-Based Security Testing

Security tests (especially those resulting in complete exploit) are difficult to craft because the designer must think like an attacker. Second, security tests don't often cause direct security exploit and thus present an observability problem. Unlike in the movies, a security compromise does not usually result in a red blinking screen flashing the words "Full Access Granted." A security test could result in an unanticipated outcome that requires the tester to perform further sophisticated analysis. Bottom line: Risk-based security testing relies more on expertise and experience than we would like—and not testing experience, security experience.

The software security field is maturing rapidly. I hope we can solve the experience problem by identifying best practices, gathering and categorizing knowledge, and embracing risk management as a critical software philosophy.5 At the same time, academics are beginning to teach the next generation of builders a bit more about security so that we no longer build broken stuff that surprises us when it is spectacularly exploited.

How

Books, such as How to Break Software Security and Exploiting Software, help educate testing professionals on how to think like an attacker during testing [Whittaker and Thompson 2003; Hoglund and McGraw 2004]. Nevertheless, software exploits are surprisingly sophisticated these days, and the level of discourse found in books and articles is only now coming into alignment.

White and black box testing and analysis methods both attempt to understand software, but they use different approaches depending on whether the analyst or tester has access to source code. White box analysis involves analyzing and understanding both source code and the design. This kind of testing is typically very effective in finding programming errors (bugs when automatically scanning code and flaws when doing risk analysis); in some cases, this approach amounts to pattern matching and can even be automated with a static analyzer (the subject of Chapter 4). One drawback to this kind of testing is that tools might report a potential vulnerability where none actually exists (a false positive). Nevertheless, using static analysis methods on source code is a good technique for analyzing certain kinds of software. Similarly, risk analysis is a white box approach based on a thorough understanding of software architecture.

5The three pillars of software security.
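A hypothetical illustration of the false-positive problem just described: a pattern-matching scanner that flags every call to strcpy will report both functions below, yet only the second is actually dangerous (the function names are mine, not from any particular tool):

```c
#include <string.h>

/* A naive scanner flags any strcpy. Here the source is a string
 * literal shorter than the destination, so the report is a false
 * positive: no overflow is possible. */
static void greet(char out[16]) {
    strcpy(out, "hello");           /* flagged, but safe */
}

/* The same syntactic pattern with caller-controlled input is a
 * true positive: nothing bounds src to the 16 bytes of out. */
static void echo(char out[16], const char *src) {
    strcpy(out, src);               /* flagged, and genuinely unsafe */
}
```

Distinguishing the two requires tracking where the data comes from, which is why more precise analyzers do dataflow analysis rather than bare pattern matching, and why reports from simple tools still need a human auditor.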