Thursday, December 10, 2009

Microsoft SDL-Agile Presentation Slides

I wanted to thank everyone who came to the OWASP Kansas City Chapter meeting tonight. I had fun presenting.

A copy of the slides is available here: OWASP Kansas City, Microsoft SDL-Agile Presentation

Unfortunately the animations don't work in the PDF version, but I would be happy to present at other meetings, user groups, or for a group of developers/managers within a company. If you are interested, please feel free to email me. My contact information is listed in the sidebar of this blog.

Wednesday, November 18, 2009

OWASP Presentation on Dec. 10: Microsoft SDL-Agile

I will be giving an OWASP presentation on December 10th on the Microsoft Security Development Lifecycle for Agile Development. The presentation will be about 45 minutes and is scheduled to begin at 6PM in Regnier Center Room 270 at JCCC.

Here is the original announcement from the OWASP Kansas City List: https://lists.owasp.org/pipermail/owasp-kansascity/2009-November/000085.html

Tuesday, November 10, 2009

Microsoft SDL for Agile Development

Microsoft recently released a document describing how to apply the SDL process to Agile development. Take a look at their blog post or download the document here.

Monday, October 26, 2009

Observed Secure Software Development Stages

A secure software development process cannot be built overnight. Organizations gradually adopt security activities based on factors like culture, customer demand, regulations, budget, and security incidents. Each organization adds security practices at different rates; however, most organizations do so in a predictable order. This common order is a reflection of how businesses today use trial and error to find an appropriate set of processes and practices to grow a secure development process.

This order can be broken down into six stages. While few organizations fit exactly within one stage or another, this model can be used to facilitate discussions about an organization's current progress. The model does not seek to validate whether the six stages constitute an appropriate secure software development roadmap; instead, it simply describes a common progression observed in organizations today. Models like the Software Assurance Maturity Model (SAMM) and Building Security In Maturity Model (BSIMM) are more appropriate for determining the proper direction of an organization's secure development process.

Stage 1: Focus on Functionality

Initially, organizations are fairly ignorant of secure development practices. Computer science curricula often do not include a class on security best practices or ways to prevent cross-site scripting vulnerabilities. Developers are taught how to write code to satisfy business requirements.

Secure software development also isn’t high on executives’ list of priorities. Their focus is on producing innovative products or services, being first to market, and making net income goals.

Security usually does not become a priority until an incident occurs, whether a competitor has a data breach or the organization itself is hacked. Once this tipping point occurs, security dollars quickly become available. Organizations spend their new security budget on third-party application assessments, which provide insight into the security posture of information technology assets.

Stage 2: Assessments Alone

Once an organization starts performing security assessments in response to a breach, it typically extends this activity for use as an approval mechanism. The organization requires sensitive or business critical applications to be assessed prior to new releases being deployed to production. This approach greatly reduces the number and severity of vulnerabilities in external facing applications; however, it doesn’t identify security weaknesses until after the application is fully developed.

Vulnerabilities that highlight a systemic weakness or architectural flaw will often result in project delays and unanticipated costs. Additionally, this approach does not train developers to implement code securely during the initial development stage.

After performing assessments as the only software security activity, the organization eventually realizes that a proactive approach is needed. It determines that issues should be identified earlier in the development process and opts to purchase automated code review or penetration testing tools.

Stage 3: Ad-hoc Use of Security Tools and Activities

After providing automated code review or penetration testing tools to developers, organizations expect all their application security challenges to be solved. They tell developers that they need to run the tool on their code and fix all the issues. The organization's goal is to have production-ready software at the conclusion of the development process. The actual results of this approach vary.

Development groups composed of security savvy members usually see an overall reduction in vulnerabilities. The other development groups may only see a moderate impact. There are a variety of reasons this happens. The primary reason is that the tools can identify plenty of problems, but the developers don’t have the knowledge necessary to understand all the risks or to apply security best practice recommendations. Other challenges include the inability of automated tools to find business logic, authorization, and authentication flaws; inconsistent company procedures and checkpoints associated with running the tools; and no minimum standard set for acceptable risk levels.

Organizations also may adopt security activities such as threat modeling, secure requirements specification, and design reviews. These activities produce greater awareness of security issues facing applications, but the developers still lack the knowledge and experience necessary to really take advantage of these proactive security activities.

Stage 4: Application Security Training

The next logical step for organizations is to provide application security training to development groups. This comes in the form of in-person classes, on-boarding training, and annual refreshers. The class content often includes a general background in application security, an introduction to common vulnerabilities and attacks, and best practice approaches for preventing and remediating issues.

Application security training greatly improves developers' ability to succeed at the organization's continued use of automated tools and third-party assessments. Developers gain a common language to discuss application security concerns, can understand and address vulnerabilities in a timely manner, and are often inspired to pursue additional research.

One aspect most organizations leave out is reinforcing and supplementing training with internal resources. Many developers receive training once a year in application security. After six months, most of the knowledge gained during the class is forgotten.

Stage 5: Creation of Resources, Formal Policies, Procedures and Standards

In order to ensure consistent use of security tools and activities, organizations choose to formalize the policies, procedures, and standards developed over the previous four stages. Criteria are created for evaluating the sensitivity or importance of applications, security activities are formally required for each of these categories, and security gates are put in place to ensure a minimum standard of security is met before software advances in the development process.

An internal application security portal is also created to make these policies and additional resources available to developers. These resources communicate information about standardized methods for addressing vulnerabilities in code, approved development languages and frameworks, and internally developed secure libraries and architectures.

Ultimately, this results in the elimination of ad-hoc security activities and promotes consistent development of applications with fewer security vulnerabilities.

Stage 6: Secure Software Assurance

In the last stage, organizations tailor security activities and requirements to satisfy business goals and leverage efforts as a competitive advantage. Before an application is developed, a set of security requirements is established. For each security activity, the organization defines a test procedure and criteria for determining whether the application passes or fails the security requirement. Test results are recorded and reported across the application’s lifetime to form an overall picture of the application’s security posture.
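For illustration, a recorded entry in such a system might look like the following (the requirement, procedure, and criteria here are hypothetical):

Requirement: Untrusted data must never be concatenated into SQL statements.
Test procedure: Review data access code for dynamic SQL during code review and attempt injection against all input fields during the release penetration test.
Pass criteria: No injectable parameters are found and no dynamic SQL is flagged by review; otherwise the release fails the security gate.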

Thursday, October 1, 2009

Turn Application Assessment Reports into Training Classes

So you had a third party application assessment and you have a report 10 miles long. There are cross-site scripting, SQL injection, authentication, authorization, and every other kind of vulnerability under the sun listed. Your development team gears up and remediates issues, often using trial and error (patch, retest, pray, and repeat) to fix issues over several iterations. Eventually, all the vulnerabilities have been addressed successfully and you file the report away forever...

Stop Right There! There's an opportunity to use a real application within your organization to train developers to write secure code THE FIRST TIME! Here's how:


Taking the Time to Analyze Root Causes and Develop Standards

Now that the fire is out (the issues are fixed), let's take some time to understand how the vulnerabilities were created in the first place. Was it a result of missing output encoding practices, inconsistent page-level access controls, or some other issue? Gather a list of root causes that resulted in the identified weakness.

Next, use security experts or online resources, like OWASP, to find security best practice solutions for eliminating these vulnerabilities. Some great examples are the OWASP XSS Prevention Cheat Sheet or the OWASP SQL Injection Prevention Cheat Sheet. Finally, create a centralized application security portal or wiki that developers can access and add these root causes and best practice solutions as official company standards.

Bullet Points:
  • Create a centralized application security portal or wiki
  • As you analyze root causes and find security best practice approaches to fix them, add them as standards to the portal

Archive the Vulnerable Application Code for Later Use

After completing the third party assessment, you now possess real world vulnerability examples and a report that lists each issue, including the vulnerable pages and parameters and a guide for exploiting them. This report and the vulnerable application will be a great learning tool to be leveraged later. Archive the vulnerable application code and any other related components. Make sure it is possible to restore this application to a working state within a test environment at a later date.

Bullet Points:
  • Archive the application and related components to be deployed within a test environment at a later date

Conduct Developer Training

In the weeks before hosting a training course, generate developer interest by deploying the vulnerable application within a well controlled, internal, isolated, secure... you get the idea... test environment. Send application URLs and credentials to developers and tell them what classes of vulnerabilities can be found (refer to your assessment report). Encourage developers to test and discover security issues individually until the training class.

In the training class, go through each vulnerability class or root cause with developers. Demonstrate application security attacks against the weaknesses using the vulnerable application deployed to the test environment as a real world example. Once you have gone through each vulnerability type, ask developers to discuss other areas of the application they identified as vulnerable during the preceding weeks. After the discussion, have developers break up into groups to find any remaining issues. Give hints as the number of remaining vulnerabilities dwindles.

Once all the issues have been found by developers or demonstrated by the instructor, ask developers for methods of addressing each vulnerability class. Intentionally choose suggestions that are missing key security best practice concepts. Have developers come up to the presentation computer and code solutions on the spot; then, discuss reasons why the solution is flawed, and prove it with an example attack.
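For example (the code and attack strings below are illustrative, not taken from a real assessment), a developer might propose stopping cross-site scripting by stripping script tags from the input:

// Proposed fix (flawed): remove script tags from the untrusted value
string cleaned = Request.QueryString[ "comment" ].Replace( "<script>", "" );

This filter is easily bypassed with payloads that never use a script tag, such as <img src=x onerror=alert(1)>, or by nesting the tag (<scr<script>ipt>) so the replacement reconstructs it. Demonstrating the bypass live makes the case for context-sensitive output encoding far more convincingly than a slide.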

After going through a few proposed solutions, discuss the root cause that led to the security weakness. Provide the best practice solution for eliminating the issue and preventing it in future code. Finally, show developers where they can access this company standard on the internal portal or wiki and have a developer implement the solution to fix the vulnerability on the spot.

Bullet Points:
  • Generate developer interest in the training course by allowing them to hack the vulnerable application
  • During the training course, discuss vulnerability classes, root causes, incorrect remediation solutions, security best practice based recommendations, and where to find company standards

Conclusion

Turning application security reports into company security standards and training courses is a great way to increase the return on investment for third party assessments. The suggestions discussed in the article above will greatly help developers succeed at writing secure code in future web applications. The process also uses meaningful real world applications to demonstrate the concepts and promote interest.

Some of these steps may require security savvy developers or security experts. If you would like assistance developing training courses, identifying root causes, or documenting security standards, please feel free to send me an email. I can be contacted at <My First Name>.<My Last Name>@gmail.com.

AT&T Acquires VeriSign's Global Security Consulting Business

We've been acquired! I am now an AT&T employee. Check out the press release here: http://www.att.com/gen/press-room?pid=4800&cdvn=news&newsarticleid=27183.

For a list of professional services offered related to security, see this page: http://www.corp.att.com/consulting/security/ (especially Application Security Services).

Saturday, September 19, 2009

Using Microsoft's AntiXSS Library 3.1

Microsoft recently released the AntiXSS Library Version 3.1. This library provides methods to output encode or escape untrusted user input within ASP.NET pages. The OWASP XSS (Cross Site Scripting) Prevention Cheat Sheet provides a significant amount of detail regarding theory and proper use of output encoding methods. The examples provided in this OWASP resource relate to the ESAPI library for Java and do not provide equivalent method calls for Microsoft's AntiXSS Library.

The sections below are an attempt to provide one-to-one mappings of the ESAPI Encoder calls and the AntiXSS calls needed to satisfy each section of the OWASP XSS Prevention Cheat Sheet.


Setup
Version 3.1 of the AntiXSS library can be obtained at the following URL:
http://www.microsoft.com/downloads/details.aspx?familyid=051EE83C-5CCF-48ED-8463-02F56A6BFC09&displaylang=en

By default, the installer places files in the "C:\Program Files\Microsoft Information Security\Microsoft Anti-Cross Site Scripting Library v3.1\" directory.

In Visual Studio, developers can add a reference to the AntiXSS Library by selecting the DLL located at "C:\<AntiXSS Library Base Directory>\Library\AntiXSSLibrary.dll".

Help files, complete with examples and theory, are located at "C:\<AntiXSS Library Base Directory>\Help\Anti-XSS_Library_Help.chm".

Usage
The following sections map the rules and OWASP ESAPI Encoder calls listed in the XSS Prevention Cheat Sheet to Microsoft AntiXSS Library calls.

Rule #0: Never Insert Untrusted Data Except in Allowed Locations
This rule holds true as described by the Cheat Sheet. No mapping is required for the AntiXSS Library.


Rule #1: HTML Escape Before Inserting Untrusted Data into HTML Element Content
ESAPI Encoder Example:
String safe = ESAPI.encoder().encodeForHTML( request.getParameter( "input" ) );

AntiXSS Equivalent:
string safe = Microsoft.Security.Application.AntiXss.HtmlEncode( Request.QueryString[ "input" ] );


Rule #2: Attribute Escape Before Inserting Untrusted Data into HTML Common Attributes
ESAPI Encoder Example:
String safe = ESAPI.encoder().encodeForHTMLAttribute( request.getParameter( "input" ) );

AntiXSS Equivalent:
string safe = Microsoft.Security.Application.AntiXss.HtmlAttributeEncode( Request.QueryString[ "input" ] );


Rule #3: JavaScript Escape Before Inserting Untrusted Data into HTML JavaScript Data Values
ESAPI Encoder Example:
String safe = ESAPI.encoder().encodeForJavaScript( request.getParameter( "input" ) );


AntiXSS Equivalent:
string safe = Microsoft.Security.Application.AntiXss.JavaScriptEncode( Request.QueryString[ "input" ] );


Rule #4: CSS Escape Before Inserting Untrusted Data into HTML Style Property Values
ESAPI Encoder Example:
String safe = ESAPI.encoder().encodeForCSS( request.getParameter( "input" ) );

AntiXSS Equivalent:
No direct equivalent
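To put the mappings above in context, here is a minimal ASP.NET code-behind sketch combining Rules #1 and #2; the control and parameter names are hypothetical:

using Microsoft.Security.Application;
...
// Rule #1: HTML encode untrusted data placed into element content
litGreeting.Text = AntiXss.HtmlEncode( Request.QueryString[ "name" ] );
// Rule #2: attribute encode untrusted data placed into an HTML attribute value
divResults.Attributes[ "title" ] = AntiXss.HtmlAttributeEncode( Request.QueryString[ "filter" ] );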

Friday, August 28, 2009

Flash Remoting Support in Burp Suite Pro

Assessing applications that utilize Flash remoting calls often requires tools to analyze, manipulate, and replay requests. These tools are needed because Flash remoting request and response payloads are encoded using the Action Message Format (AMF).

Previously, I have used Deblaze and Charles Proxy to support these needs. On August 12, a new version of Burp Suite Pro was released. This version allows AMF messages to be encoded and decoded in the proxy, repeater, and other tabs (except Burp Intruder). Burp Scanner also supports placing attack payloads in flash remoting calls.

Wednesday, August 12, 2009

Amazon EC2 and PCI Compliance

I saw a very informative forum post regarding Amazon's position on EC2 and S3 PCI compliance via a Twitter update from @Beaker (http://twitter.com/Beaker/statuses/3277444460). The post states merchants cannot achieve Level 1 PCI compliance within Amazon's cloud infrastructure because Amazon will not allow customers to perform on-site assessments. Amazon recommends using their Flexible Payments Service to handle credit card data within their cloud. Mosso, now "Rackspace Cloud", took a similar approach, as discussed in my March 2009 blog post.

Wednesday, July 22, 2009

Vulnerability Tracking, Workflow, and Metrics With Redmine

This article was inspired by real processes and software implemented in a client's environment. This client has a very proactive approach to application security. I would love to give specific attribution to some of these ideas, but I am not permitted in this case.

A functional defect is typically a set of undesirable behavior associated with an application feature. A security vulnerability (security bug) consists of undesirable behavior that weakens the application's ability to resist attacks or protect data. In terms of issue tracking and remediation, a security bug is really just a specific type of functional bug. This is apparent when you consider the basic workflow for a functional defect:
  1. A developer or user reports a defect.
  2. The project manager assigns the defect to a developer.
  3. The developer implements code to resolve the issue.
  4. The quality assurance team verifies that the implemented code successfully resolved the issue.
  5. The project manager or team provides communication to executives, clients, or other entities regarding the successful resolution of the issue.
  6. The issue is archived for use in metrics or other statistical analysis.
The workflow for a security bug contains the same steps but differs in the roles associated with each step. A security bug may require interaction or approval from security managers or security assessors in addition to developers and project managers.

Development teams already use bug tracking software during development, so why not utilize the same systems for tracking security vulnerabilities? Project teams' familiarity with the software and process will make it considerably easier to collaborate on remediation efforts. Additionally, most organizations already have methods of collecting metrics about software defects. These metrics can be extended to include vulnerabilities.

In order to effectively track security vulnerabilities, a centralized, web-based bug tracking system needs to support the following features:
  • Custom workflows per issue type
  • Custom fields within bug items
  • Roles and privileges controlling users' ability to change the status of security bugs
After a little research, I identified a bug tracking system called Redmine that satisfies all these requirements and more. In Redmine, I was able to create an issue type called "Vulnerability" and associate it with a specific workflow.


The diagram below illustrates the custom workflow, roles, and purpose of each step. This workflow can be created in Redmine and each transition can be associated with specific roles.


Since the software supports custom fields within issue items, a security assessor can enter additional vulnerability information such as:
  • The vulnerability category
  • Whether the issue has a security impact
  • Whether the issue has a privacy impact
  • Whether the issue has a compliance impact
  • Which group identified the issue
  • Whether the item was identified by an automated or manual process
  • Which activity was used to identify issues
Once many of these issues have been reported across an organization, this information can be used to evaluate the effectiveness of tools, processes, or security activities used throughout the development process. An example of a Vulnerability item being created in Redmine is shown in the screenshot below.

In addition to tracking vulnerabilities, this system could also be used to manage requests and the workflow associated with security services performed by an internal security team. Organizations often may utilize security teams to assist in specifying security, privacy and compliance requirements or to perform activities like penetration testing and code review. A custom workflow can be created in Redmine to handle this issue type as well.


Here is an example of a security service request in Redmine:


Appendix

Custom Fields:


Security Activities Custom Field:


Vulnerability Identification Method Source Custom Field:


Vulnerability Identification Method Custom Field:


Vulnerability Identified By Custom Field:


Vulnerability Category Custom Field:

Wednesday, July 1, 2009

Internal AppSec Portals: Resources

Attribution:
Many of these ideas build on Pravir Chandra's Software Assurance Maturity Model (Version 1.0) and the Building Security In Maturity Model by Gary McGraw, Brian Chess, and Sammy Migues. Both works are licensed under the Creative Commons Attribution-Share Alike 3.0 License.

This article was also heavily influenced by Microsoft's SDL process.

The next several blog entries will cover my current project: providing a template or starting point for an organization's internal application security portal. This post is the second of many to come.

Previous Internal AppSec Portals Posts:
Introduction

This post will cover providing application security resources for developers, including
  • Policies
  • Guidance
  • Requirements
  • Vulnerabilities
  • and External Resources
The following image is a screenshot of the table of contents for my TikiWiki Secure Software Assurance Resources structure. As discussed in the previous post, a Wiki is a great way to document application security resources because it allows for constant, collaborative updates and can link and organize information in a user-friendly way. I recommend providing the resources discussed in this post in a similar format for project teams.

Goals
The purpose behind this set of resources is to provide all the information a developer needs to write secure code. Developers cannot be expected to pull secure code out of the air. Guidelines, coding standards, and security requirements must be spelled out to ensure everyone understands their responsibilities and the organization's expectations.

Additionally, developers MUST be provided with security awareness training AND training on this material.

Policies
Security Policies
Most organizations define a set of security policies that govern acceptable use of information systems, methods for labeling and handling confidential data, and procedures for addressing policy violations. These same concepts should be extended to cover application security, compliance, and privacy policies.

Security policies should express the organization's dedication to the topics below. These topics do not necessarily have to define the process or implementation of each policy area, only statements mandating their use.

  • Mandatory, periodic application security training
  • Adherence to application security guidance and coding standards
  • Use of a formal risk management process
  • Risk categorization of data and applications
  • Creation and maintenance of application security portfolios
  • Use of approved secure development processes
  • Dedication to meeting regulatory and compliance standards in each application project
  • Inclusion and validation of security, privacy, and compliance requirements throughout the development process
  • Establishment of a minimum level of assurance for application security, privacy, and compliance
Privacy Policies
In addition to security policies, organizations should maintain policies governing how personally identifiable information such as social security numbers, account numbers, or other data is handled. These policies should send a clear message to project teams that protecting users' private data is important. These policies should cover topics such as:
  • Identification and categorization of private data
  • Collection, storage, and transmission of private data
  • Inclusion and validation of privacy requirements during the development process
  • Establishment of a minimum level of assurance for privacy data
Microsoft has released a number of resources on privacy-related policies, requirements, and processes. Those resources can be found below.

Microsoft's Privacy Guidelines for Developing Software Products and Services
Microsoft SDL Privacy Questionnaire
Microsoft SDL Privacy Requirements
Microsoft SDL Privacy At A Glance

Compliance Policies
There are a wide variety of compliance and regulatory standards that apply to organizations, data, and functionality. Project teams cannot spend all of their time researching these standards. At the organization level, compliance standards should be identified and a process should be created to assist developers in determining which regulations apply to their project. Compliance policies should include the following topics:
  • Identification of compliance and regulatory standards
  • Process for determining standards that apply to each software project
  • Inclusion and validation of compliance requirements during the development process
  • Establishment of a minimum level of assurance for software compliance
Guidance
Organizations should collect and publish internal guidance to be consumed by project teams. Guidance should not only include secure coding standards, but also approved frameworks, security services, architectures, and environments. These items should be provided in a way that clearly communicates approaches or code that is approved, an organization standard, or unapproved.

Approved Libraries and Frameworks
Software can be developed in a variety of languages and often includes external third party libraries. ASP.NET applications often include libraries such as ASP.NET MVC and Microsoft's AntiXSS library. Java applications may include Struts, Spring, Hibernate, Velocity, and many others. Additionally, developers may want to develop software in PHP, Python, Ruby, Perl, and other languages.

Organizations must communicate which of these languages and frameworks are approved for use in software projects. Guidance should start with a simple list of languages and frameworks the organization has approved or disapproved. As development groups request approval for additional 3rd party libraries and develop successful applications, a list of standards should be created for specific architecture or project types.

For example, an organization may list the following standards for MVC applications in Java and ASP.NET (they typically would expand upon the descriptions as well):

Database Driven Java MVC Application

The organization has standardized on using the following frameworks for Database Driven J2EE MVC applications:
  • Language: Java 1.6
  • MVC Framework: Struts 2.x
  • Dependency Injection Framework: Spring 2.x
  • ORM Layer: Hibernate 3.x

Database Driven ASP.NET MVC Application

The organization has standardized on using the following frameworks for Database Driven ASP.NET MVC applications:
  • Language: ASP.NET 3.5
  • MVC Framework: ASP.NET MVC 1.x
  • Other: Microsoft AntiXSS Library 2.x
Finally, as the organization matures, a set of secure, shared libraries or frameworks should be created and utilized within software projects. These shared libraries should be scrutinized for security defects and updated on a regular basis. Since assessments and verifications occur on these libraries, teams may not need to spend time and money re-verifying them in their own projects. Instead, only the appropriate usage or coupling of these libraries with custom code must be examined.

Examples of frameworks an organization may produce are:
  • Secure methods for accessing security services (discussed in the next section)
  • Secure methods for calling security resources (discussed in the next section)
  • Input validation frameworks
  • Unified authentication flows
  • Authorization or entitlement frameworks
  • and many more...

Security Services and Resources
A collection of applications often utilize common web services, authentication servers, LDAP servers, or other entities. Organizations should maintain a list of approved security services and resources, guidance informing project teams when it is appropriate to include the services or resources in a project, and the proper method for accessing or calling the service or resource.

Standardization on central services or resources can greatly reduce efforts required to validate applications' security. It also may eliminate the need to create an authentication or authorization strategy for each new project.

Examples of security services organizations may standardize on are:
  • Authentication/single sign-on servers
  • Web services providing entitlement or authorization details
  • Web services that serve as a central point for accessing encrypted credit card data
  • Web services that provide centralized auditing and logging capabilities
  • Web services that provide centralized key management and cryptography
Security resources are often centralized data stores that applications can connect to and query. A few examples are:
  • LDAP servers containing authentication and authorization information
  • Centralized, redundant file storage and backup
As the organization matures, custom frameworks should be created for accessing or calling functionality in these security services and resources (See the Approved Libraries and Frameworks section above).
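As a purely hypothetical sketch (the service proxy and method names are invented for illustration), a centralized audit logging service might be exposed to project teams through a single approved wrapper class rather than having every team call the endpoint directly:

// Thin, approved wrapper around a central audit logging web service
public static class AuditLogger
{
    public static void LogSecurityEvent( string application, string userName, string action )
    {
        // CentralAuditServiceClient is a placeholder for the generated service proxy
        using ( var client = new CentralAuditServiceClient() )
        {
            client.WriteEvent( application, userName, action, DateTime.UtcNow );
        }
    }
}

Because the wrapper is the only sanctioned entry point, the security team can verify it once and update it centrally when the service changes.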

Secure Coding Standards
Developers are very good at developing software and implementing business requirements quickly and effectively; however, college, their programming textbook, or their expert programmer friend probably never taught them how to write secure code. In order to ensure developers write secure and consistent code, organizations need to provide secure coding standards to teach and support secure coding practices.

Secure coding standards should be presented in a manner that can both teach developers and be used as a quick reference during the development process. The standard should contain code examples in all approved languages and frameworks. It should also provide examples of what NOT to do. Here is a list of items to consider including within a secure coding standard:
  • Description of the standard
  • Statement of why it's important
  • Explanation of when to use the approach or standard
  • Vulnerabilities that may result if the standard is not observed
  • Code examples in each language and framework
  • Code examples of what NOT to do
  • Links to external resources that provide additional information
A brief example is provided in my previous post "Secure Development Jump Start."

Once these standards are written, they can be matched up with security, compliance, and privacy requirements, which are discussed in the next section. These coding standards allow organizations to hold project teams accountable for writing code that satisfies requirements.

Requirements
A set of common security requirements should be created and shared throughout the organization. These requirements should provide discrete, testable assertions which can be verified throughout the application development process (More on this idea in a later post). An abbreviated example of a security requirement is:

"Applications must use parameterized queries or prepared statements when querying relational databases. Untrusted data must not be concatenated within dynamic SQL query strings."

Another example related to integrating security services guidance is:

"All external facing applications must utilize the organization's standard, centralized authentication server."

The focus of these requirements is to provide a set of rules that developers can be and are held accountable for. Developers often cannot be security experts, but they can be trained to follow and execute on software project requirements. Assuming the organization documents the appropriate guidance and links this guidance to security requirements, development teams can be held accountable for security requirements in the same way they are held to business requirements.

As business requirements are typically implemented based on a prioritized list, it will also be important to have a member of the organization's security department help prioritize security requirements with project managers.

In addition to security related requirements, privacy and compliance requirements must also be identified. These requirements should be written to satisfy the policies discussed in the "Compliance Policies" and "Privacy Policies" sections above.

Once a reasonable set of security, privacy, and compliance requirements has been established, a set of requirements profiles should be created for various project types. For example, applications that must be PCI compliant will have many compliance requirements that overlap with security and privacy requirements. The requirements profile "High Risk PCI Application" should contain a pre-prioritized list of requirements that combines and simplifies items from each category.

Vulnerabilities
During the development process, application vulnerabilities are often identified and reported to project teams. Typically, these reports provide a set of recommendations that will eliminate the vulnerability. Depending on the source of these recommendations (a penetration testing tool, code review tool, internal security team, or third party consulting company), the prescriptive advice may or may not coincide with the organization's approved method for eliminating a vulnerability. While general technical flaws like cross-site scripting are fairly straightforward, business logic, authentication, and authorization related issues may require organization-specific approaches.

Organizations should maintain a list of vulnerabilities and should link each vulnerability to security, compliance, and privacy requirements that address the issues. This list of vulnerabilities should provide a short explanation of each issue and should label requirements as "Required", "Recommended", or "Optional." The explanation for each issue does not need to be long; many application security sites, like OWASP, already provide detailed descriptions of common vulnerabilities. Below is an example of how an organization can document vulnerabilities within an internal AppSec Portal:

SQL Injection

SQL injection occurs when untrusted data is interpreted by the database as SQL commands. This issue may allow users to read, modify, or destroy data without authorization.

The following security, privacy, and compliance requirements should be used to address this vulnerability:

Required:
  • Security: <link to parameterized queries and prepared statements requirement>
  • Compliance: <link to compliance requirement A>
Recommended:
  • Security: <link to input validation framework requirement>
Optional:
  • Security & Compliance: <link to auditing and logging requirement>
Resources:
External Resources
Finally, the organization should provide a set of external resources that project and security teams can use to research application security topics and news.

Friday, June 26, 2009

Internal AppSec Portals: Introduction

When creating an application security program, it can be difficult to make all the resources, policies, procedures, and expectations available to employees. There should be a centralized location for developers, project managers, and auditors to look up application security best practices, the organization's secure development processes, and timelines for remediating vulnerabilities.

The Software Assurance Maturity Model (SAMM) and Building Security In Maturity Model (BSIMM) recommend addressing these needs with an application security portal (see Software Assurance Maturity Model 1.0, EG3 "Create formal application security support portal" and Building Security In Maturity Model, SR1.2 "Create security portal"). This centralized internal website or application should be a one-stop shop for all the organization's secure development needs.

So what kind of characteristics should this portal have? Well, employees should be able to easily create and update information on the website. Access controls need to be applied to specific content to ensure only approved guidance, policies, and procedures are included. The portal should also allow collaboration within development groups as well as between development groups. It would also be nice to be able to version documents to see how and when information changes over time.

After reviewing these characteristics, I realized that a Wiki would provide all these features and could easily be placed within an organization's internal network. Specifically, TikiWiki provides collaboration through user pages, forums, blogs, chat, internal messages, and newsletters. It also allows access controls to be applied to individual categories. For example, a "Guidance" category can be created and pages can be grouped within this category. Read only access can be granted to all users, and write access can be granted to specific individuals responsible for updating the organization's guidance documents. A wiki also automatically versions pages so users can see when information is updated and how it changed. Finally, TikiWiki also provides the concept of structures. Structures group pages in a meaningful way allowing easy navigation and well defined organization of information.

The next several blog entries will cover my current project: providing a template or starting point for an organization's internal application security portal. The images below give you a sneak peek at the information that will be discussed in future posts. Click on the images below to see each table of contents.





Monday, June 15, 2009

*Repost* Web Application Security Portfolios

In anticipation of my article being published in the May 2009 ISSA Journal, I removed posts for:
  • Application Security Portfolios: Part 1
  • Application Security Portfolios: Part 2
Now that the journal article has been out for a while, I wanted to repost those two blog entries. The content in the blog entries is somewhat different than the journal article. The blog entries include a few more images, examples, and additional discussion. Here is that content:

Part 1

Managing an application security program can be a complex responsibility. Applications have a large number of moving parts and potential security risks. Security directors and managers must gather and organize a mountain of information in order to make informed decisions regarding allocating budget money for security and compliance efforts.

This two-part blog suggests types of information a security director might collect about an organization's applications and introduces one of many methods to organize that information. The first article focuses on the collection of detailed information for a single application. The second article combines relevant information from each application into a single document in order to aid in making decisions.

The goal is for these documents to be useful in at least the following situations:
  • Maintaining a list of all web applications within the organization.
  • Prioritizing application security assessment needs based on business and data importance, compliance requirements, and risk.
  • Identifying key personnel responsible for the security of systems or code associated with a particular application.
  • Determining network devices, servers, and components to target in an incident response investigation.
  • Identifying low importance applications that should be assessed due to the shared use of a database or other high importance component.
  • Understanding the flow of sensitive data between applications and other components.
Part 1: Loan Application Security Portfolio

First, one should gather a list of web applications within the organization. This can be done in a variety of ways, including interviewing development managers and web server admins, logging into web servers and inventorying web applications, and performing network scans of the internal and external networks.

Once applications have been identified, basic information should be collected, such as the application's name and purpose, who developed the code, where the application is hosted, and its business importance. This information can be organized in a variety of ways; a simple Excel spreadsheet is shown below.

(Click the image to enlarge)

Detailed technical information should also be gathered. This includes items such as the language and framework the application was developed with and the authorization levels that exist. The information shown below is helpful for scoping application assessments with third parties or can be used to estimate time needed for an internal review.

(Click the image to enlarge)

Once the technical information has been documented, security staff can dig into the type of data handled by the application and its data flow. In the example loan application, a table listing the data or event, data type, and relevant compliance requirements was created.


(Click the image to enlarge)

Through interviews with developers and direct observation, a data flow diagram can be created. The method used to collect and present this information was taken directly from Branden R. Williams' article in the ISSA Journal, March 2008 titled "Data Flows Made Easy." In the loans application, individual data flow diagrams were created for key functionality. Once individual diagrams were complete, the diagrams were combined into one compound diagram.

(Click the image to enlarge)

(Click the image to enlarge)

Next, the network devices, servers, and components that the application depends upon should be documented. These assets are also color-coded based on the importance of the application or data residing on each asset (this will be more important in part two of the article). Instructions and an example for the loans application are shown below.

(Click the image to enlarge)

Using the dependency table above, pseudo firewall rules can also be defined.

(Click the image to enlarge)
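To make the idea concrete, pseudo firewall rules derived from the dependency table might read as follows (the host names and ports here are hypothetical):

Allow  WebServer01 -> AppServer01  TCP/8080  (loan application requests)
Allow  AppServer01 -> LoanDB01     TCP/1433  (loan data queries)
Deny   WebServer01 -> LoanDB01     any       (web tier must never reach the database directly)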

A couple other pieces of information that may be helpful to track are past, present, and future code bases, location of log files, and security related history.

(Click the image to enlarge)

(Click the image to enlarge)

(Click the image to enlarge)

Using the information in the following spreadsheet, one should be able to answer the following questions:
  • Do we host that application or does a third party host it for us?
  • Who developed the application?
  • Does this application need to be assessed?
  • What additional network devices, systems, or components need to be assessed to assure the security of this application and its data?
  • Are there compliance requirements associated with this application?
  • What risk does this application present to the organization?
  • We've been hacked! Which development manager do I call? Where are the log files? What other systems might also be affected?
  • Where is the information I can use during scoping and the technical interview process of an assessment from VeriSign Global Security Consulting?
Google Docs Version: http://dl.getdropbox.com/u/1132296/Web%20Application%20Security%20Portfolios/CCANCSA%20-%20Application%20Portfolio.xls

Part 2

Managing an application security program can be a complex responsibility. Applications have a large number of moving parts and potential security risks. Security directors and managers must gather and organize a mountain of information in order to make informed decisions regarding allocating budget money for security and compliance efforts.

This two-part blog suggests types of information a security director might collect about an organization's applications and introduces one of many methods to organize that information. The first article focuses on the collection of detailed information for a single application. The second article combines relevant information from each application into a single document in order to aid in making decisions.

The goal is for these documents to be useful in at least the following situations:
  • Maintaining a list of all web applications within the organization.
  • Prioritizing application security assessment needs based on business and data importance, compliance requirements, and risk.
  • Identifying key personnel responsible for the security of systems or code associated with a particular application.
  • Determining network devices, servers, and components to target in an incident response investigation.
  • Identifying low importance applications that should be assessed due to the shared use of a database or other high importance component.
  • Understanding the flow of sensitive data between applications and other components.
Part 2: Application Security Portfolios Summary
In part 1 of this series, an application security portfolio was created for an example loan application. Detailed information about the application was gathered including the sensitivity of data within the application, the data flow, and the application's dependencies on other network devices, servers, and components.

In part 2, we will try to organize information about all the organization's applications into one high-level document. The aim is for this document to aid us in answering questions like:
  • What applications do I have?
  • What data do I have?
  • How important is the application or its data to my business?
  • What is the risk level of each application and its data?
  • What systems and network paths do these applications depend on?
  • How are these applications and their data interrelated?
  • Which applications, systems, and networks should I spend security budget money on for assessments?
  • If an incident occurs or an issue is identified, who is the contact person and what other related systems need to be analyzed?
  • What compliance regulations apply to my applications?
  • When was the last time these applications were found to be compliant with relevant regulations and standards?
In order to create this document, the effort described in Part 1 of this series needs to be completed for all the organization's applications. Once that data has been gathered, we can combine the high-level portions into a spreadsheet like the one below.

(Click to enlarge the image)

If we are evaluating this information to determine which applications need assessments, we may make the observations listed below.

Loans Application
The loans application and its data are critical to the business. We completed an application assessment recently on version 1.0; however, a whole new version (version 2.0) was pushed to production in the last few days. Since this application is so important and we have recently completed an assessment, it may be a good idea to engage the same third party to perform a follow-up assessment. We will provide that third party with a list of changes or new features and ensure those items are assessed in depth. In addition, that third party will briefly review the rest of the application to ensure no security issues were introduced into existing functionality by the changes or new features.

If we need a higher level of assurance, need to re-certify our PCI compliance, or drastic changes were made to the application in version 2.0, we may even have a whole new assessment completed.

Company Home Page
An assessment was completed approximately three years ago, and no new changes or features have been introduced since then. While it is important that the company's public-facing website remains accessible externally, the data within the application is not terribly valuable.

Depending on the level of assurance needed, we may want to run an automated web application scanner tool just to verify our assumption that the site is relatively secure. If issues are identified, it may be a good idea to perform an assessment internally. Since the company home page does not require users to login and contains only public information, an automated tool is a good choice because the types of vulnerabilities that are challenging to identify using these tools (authentication, authorization, and business logic rules) should not be present.

Online Banking
The online banking application also has not been assessed in a while. This application and its data are critical to the business. The previous assessment occurred on version 3.0. Bug fixes, security updates, and other minor changes were introduced recently in version 3.1. A third party should be engaged to perform follow up testing to verify issues identified in the previous assessment have been addressed. The third party should also assess the minor changes to the application to ensure no additional issues have been introduced.

Internal Wiki
The company's wiki contains items such as HR policies, processes and procedures for completing day-to-day tasks, and protected areas holding private company information or intellectual property. The data associated with this application is critical, yet the application has never been assessed. While it is not a client-facing application, employees, contractors, and other users all access this critical information. This situation may warrant an assessment by a third party.

Employment Application
The employment application is developed and hosted by a third party. Ideally, a third-party assessment should have been performed before this application/service was purchased, and the company should have verified that the third party has a secure development process in place. Additionally, the contract between the third party and the company should include details about how assessments are handled, how the third party will respond to the identification of security issues, and other related topics.

As is often the case, a business unit negotiated a contract and purchased the service from the third party prior to an assessment being performed. While the employment application does not generate revenue for the company and will not hinder day-to-day operations if it goes down, the data within the application includes PII. The compromise of this application and its data would affect the company's reputation and would require the company to spend resources on incident response.

It would be a good idea for this application to undergo a third-party review.

Compound Dependency Table

In addition to gathering the high-level data above, a dependency table can be created to show how all the applications, data, network devices, servers, and components are interrelated. This table follows the same rules as introduced in Part 1 of this series, and can be used to determine how data flows between systems and networks. Additionally, this information may help to identify key systems that need to be assessed.

For example, if a low importance application accesses data within a database that is also accessed by a high importance application, it may be important to assess the low importance application in terms of introducing or manipulating data to the detriment of the high importance application.

(Click to enlarge the image)

This spreadsheet can be accessed via Google docs here:
http://dl.getdropbox.com/u/1132296/Web%20Application%20Security%20Portfolios/CCANCSA%20-%20Portfolios%20Summary.xls

Sunday, June 7, 2009

SAMM Interview Template Version 1.0

Several individuals (including me) plan on proposing an effort to evaluate the OWASP organization using the Software Assurance Maturity Model (SAMM). One of the action items I took on was to create an interview template to help determine the organization's current maturity level.

The first release of the SAMM Interview Template is available below.

View the SAMM Interview Template here: http://spreadsheets.google.com/pub?key=rYpVqQR3026Zu4DNg8LBIwg&output=html

Download the SAMM Interview Template XLS here (Some formatting is lost): http://spreadsheets.google.com/pub?key=rYpVqQR3026Zu4DNg8LBIwg&output=xls

If you have questions or comments about this template or you wish to help assess OWASP using SAMM, please send a message out on the OWASP SAMM Mailing List.

Friday, May 29, 2009

Preparing For a Third Party Application Assessment

Organizations often contract with third party consulting companies to perform application assessments. These companies usually have a predefined window for assessing applications and may charge by the hour. These characteristics make it important for development groups to ensure the application and staff are adequately prepared for the assessment.

For this discussion, we will assume an application assessment has already been scoped and scheduled. Before the consulting company begins any testing, the development group should use a checklist to ensure the following items have been covered:
  • Appoint a technical contact to handle any questions about code, functionality, or security controls.
  • Appoint a contact to handle account lockouts or other technical difficulties with the environment or application.
  • Send contact information to the consulting company or consultants.
  • Identify and configure a test environment that closely mirrors production.
  • Create appropriate credentials for a range of organizations and privilege levels.
  • Populate the environment with adequate data to allow for testing of all functionality and features.
  • Provide a demonstration of the application and answer technical questions.
Identify and Configure a Test Environment

The test environment should mirror production as closely as possible, including the configuration of the operating systems, application servers, back-end components, and the application itself. However, the environment should not persist any transactions or changes in the real world. For example, stock trades, money transfers, and similar operations should appear to complete, but the transactions should not be persisted to any banks.
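One way to achieve this, assuming the application already isolates external payment calls behind an interface, is to deploy a stub implementation in the test environment; the interface and class names below are illustrative:

// Production code talks to the real payment processor through this interface
public interface IPaymentGateway
{
    string SubmitTransfer( string fromAccount, string toAccount, decimal amount );
}

// Test environment stub: reports success but never contacts a bank or persists the transfer
public class StubPaymentGateway : IPaymentGateway
{
    public string SubmitTransfer( string fromAccount, string toAccount, decimal amount )
    {
        return "TEST-" + Guid.NewGuid().ToString( "N" );  // fake confirmation number
    }
}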

Create Appropriate Credentials


Each consultant assigned to assess the application needs a range of accounts that allow for testing of horizontal and vertical access controls. This means if the application separates data by organization, company, institution, or some other group, the consultants will need accounts in two or three of these organizational units.

Additionally, within each of these organizational units, consultants require accounts that span several roles, permissions, or entitlements. If there is a small set of roles within the application, it may be possible to create test accounts for each role. Otherwise, it may be sufficient to create a sample of accounts: one with no entitlements, one with all entitlements, and a handful of other accounts with varying permission levels.
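A minimal account matrix (the organization and account names are hypothetical) might look like:

Organization A: orgA_admin (all entitlements), orgA_teller (subset of entitlements), orgA_readonly (no entitlements)
Organization B: orgB_admin (all entitlements), orgB_readonly (no entitlements)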

Populate the Environment with Adequate Data


In most applications, consultants cannot test functionality without having data associated with their user account. Before consultants begin testing, the application should be populated with test data that allows users to interact with all functionality.
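
As a minimal sketch, a small seeding script run against the test database can give each test account a handful of records to view, edit, and delete. The table, column, account, and connection string names below are hypothetical and for illustration only.

using System.Data.SqlClient;

// Seed the test database so every test account has data to exercise the application's features.
static void SeedTestData(string connectionString)
{
    string[] testUsers = { "org1_admin", "org1_readonly", "org2_admin" };

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        foreach (string user in testUsers)
        {
            for (int i = 1; i <= 5; i++)
            {
                SqlCommand command = new SqlCommand(
                    "INSERT INTO Orders (OwnerUserName, Description) VALUES (@owner, @description)",
                    connection);
                command.Parameters.AddWithValue("@owner", user);
                command.Parameters.AddWithValue("@description", "Sample order " + i);
                command.ExecuteNonQuery();
            }
        }
    }
}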

Tuesday, May 19, 2009

Microsoft SDL Process Template

Microsoft has released a Visual Studio module that helps developers adhere to Microsoft's SDL process. This tool gets a whole lot of things right, such as:
  • Ensuring developers complete security activities before checking in code
  • Providing a workflow for developers to follow
  • Providing SDL process steps, instructions, descriptions, and resources to developers
Tools, such as the SDL Process Template released by Microsoft, can greatly increase the success rate of an organization's migration toward a secure software development process. Once organizations define their own custom secure development processes, a similar approach should be used to help make adherence easier.

Check out the video on the following page for more information:
http://msdn.microsoft.com/en-us/security/dd670265.aspx

Secure Development Jump Start

Creating a secure development process for an organization is a huge undertaking. There is a tremendous array of options for getting started and no certain metric for determining how long it should take to adopt a process.
There are some components that the established processes agree on: executive-level support is a must, and security training is required (each process differs on the amount of training, however).

In companies with a small number of long-tenured developers, it may make sense to dedicate significant time and money to making them both developers and security experts. For organizations with a large number of developers or a high developer turnover rate, it may be more cost-efficient to simply provide security awareness training and a set of policies and coding standards to follow.

In any of these situations, several steps you can take to jump-start a secure development process for your organization are listed below. It is assumed that your organization values secure code and wants to develop it.
  1. Create a policy document addressing application security.
  2. Create a secure coding standard stating the organization's established, secure method for carrying out specific functions.
  3. Provide security awareness training.
  4. Provide training that specifically aims to introduce developers to the application security policies and secure coding standards for the organization.
These steps should fit into any future secure development process and do not require organizations to spend any security budget dollars on tools. These steps are a starting point and should be joined with a larger, strategic process once the appropriate research and planning are performed.

Application Security Policies

An application security policy document should provide statements or policies that are as specific as possible. A statement such as "All applications should use sufficiently strong cryptographic algorithms" does not provide a developer with enough information to select a secure algorithm. Instead, a statement such as "ACME Bank Corp standardizes on the use of SHA-256 as its cryptographic hash algorithm" should be used.
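
A secure coding standard entry supporting a policy like that could include a short example along these lines; this is a minimal sketch, and the helper name ComputeSha256 is illustrative only.

using System.Security.Cryptography;
using System.Text;

// Computes a SHA-256 hash of the supplied text, consistent with the example policy above.
static byte[] ComputeSha256(string input)
{
    using (SHA256 sha256 = SHA256.Create())
    {
        return sha256.ComputeHash(Encoding.UTF8.GetBytes(input));
    }
}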

Other examples include:
"ACME Bank Corp requires all database queries to use parameterized queries or prepared statements. Dynamic or concatenated SQL is prohibited. The ACME Bank Corp secure coding standard provides examples of parameterized queries or prepared statements."

"Untrusted data should be properly output encoded before being included within a web browser page. The appropriate encoding method should be selected based on the context in which the data is being included. The secure coding standard provides example contexts and methods."

The authors of the application security policy document can get policy ideas from resources such as:
OWASP Top 10
2009 CWE/SANS Top 25 Most Dangerous Programming Errors
OWASP Guide Project
ASP.NET 2.0 Check List
ADO.NET 2.0 Check List
.NET 2.0 Check List

Secure Coding Standard

Developers should be able to use the secure coding standard document as a reference guide for writing secure code. The standard should provide the developer with enough information to know when and how to apply a particular code example. An entry such as the following is a good starting point:

Parameterized Queries and Prepared Statements

Addressed Application Security Policy: Parameterized Queries or Stored Procedures, Section 2.1.3
Prevents: SQL Injection
References: OWASP Top 10, CWE/SANS Top 25, Security Guidelines: ADO.NET 2.0, OWASP Guide
When to Apply: Anytime an application queries an SQL database
Code Examples:

.NET Parameterized Query, SELECT Statement (example taken from http://msdn.microsoft.com/en-us/library/ms998264.aspx#pagguidelines0002_sqlinjection)
using System.Data;
using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    DataSet userDataset = new DataSet();

    // The user-supplied value (SSN.Text) is bound as a typed parameter rather than
    // concatenated into the SQL string.
    SqlDataAdapter myDataAdapter = new SqlDataAdapter(
        "SELECT au_lname, au_fname FROM Authors WHERE au_id = @au_id",
        connection);
    myDataAdapter.SelectCommand.Parameters.Add("@au_id", SqlDbType.VarChar, 11);
    myDataAdapter.SelectCommand.Parameters["@au_id"].Value = SSN.Text;
    myDataAdapter.Fill(userDataset);
}


.NET Parameterized Query, UPDATE Statement

...

.NET Parameterized Query, INSERT Statement

...


Java Prepared Statement, SELECT
// Bind the user-supplied value as a parameter instead of concatenating it into the SQL string.
String sql = "SELECT * FROM movies WHERE year_made = ?";
PreparedStatement prest = con.prepareStatement(sql);
prest.setInt(1, 2002);
ResultSet rs1 = prest.executeQuery();

Java Prepared Statement, UPDATE

...

Security Awareness Training

Security awareness classes are typically used to introduce developers and managers to the types of vulnerabilities found in applications as well as the impact of those issues. When a developer sees for the first time that an SQL injection attack on SQL Server can be used to read arbitrary files and execute operating system commands, a light bulb seems to come on, and they realize they really do need to pay attention to preventing these vulnerabilities.

While these classes often do not arm developers with the proper tools and knowledge for preventing vulnerabilities, a well-written application security policy and secure coding standards document should be a great start.

Application Security Policies and Secure Coding Standard Training

Following a security awareness class, it is beneficial to provide a more targeted training opportunity for developers. This course should focus on walking through the organization's application security policies and coding standards to ensure all developers are aware of these resources and understand how to use and apply them. Following the course, developers can be held accountable for applying the examples in the secure coding standards to their projects.

Process Improvement

It is likely that an application security policy and secure coding standard document will not include all the possible vulnerabilities that could be introduced into a web application. As new issues are identified as part of an assessment, peer review process, or threat model (these steps are usually included within a complete secure development process), additions should be made to both documents. These additions should reflect the organization's recommended approach for developing code without introducing the newly identified flaw. The organization should also periodically review application security concepts and new additions to the policies and standards document with its developers.

Friday, May 8, 2009

ISSA Journal: Web Application Security Portfolios

My article "Web Application Security Portfolios" was published in the May ISSA Journal!

Check it out here (Must be an ISSA member): http://www.issa.org/Members/Journals-Archive/2009.html#May

Here is another version of the same information.