Wednesday, July 22, 2009

Vulnerability Tracking, Workflow, and Metrics With Redmine

This article was inspired by real processes and software implemented in a client's environment. This client has a very proactive approach to application security. I would love to give specific attribution for some of these ideas, but I am not permitted to in this case.

A functional defect is typically a set of undesirable behavior associated with an application feature. A security vulnerability (security bug) consists of undesirable behavior that weakens the application's ability to resist attacks or protect data. In terms of issue tracking and remediation, a security bug is really just a specific type of functional bug. This is apparent when you consider the basic workflow for a functional defect:
  1. A developer or user reports a defect.
  2. The project manager assigns the defect to a developer.
  3. The developer implements code to resolve the issue.
  4. The quality assurance team verifies that the implemented code successfully resolved the issue.
  5. The project manager or team provides communication to executives, clients, or other entities regarding the successful resolution of the issue.
  6. The issue is archived for use in metrics or other statistical analysis.
The workflow for a security bug contains the same steps but differs in the roles associated with each step. A security bug may require interaction or approval from security managers or security assessors in addition to developers and project managers.
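To make the role-gated transitions concrete, the workflow above can be modeled as a small state machine. The sketch below is illustrative only; the state and role names are assumptions, not Redmine's built-in values:

```python
# Hypothetical security-bug workflow: each transition lists the roles
# allowed to perform it. State and role names are illustrative.
TRANSITIONS = {
    ("New", "Assigned"): {"Project Manager"},
    ("Assigned", "Resolved"): {"Developer"},
    ("Resolved", "Verified"): {"QA", "Security Assessor"},
    ("Verified", "Closed"): {"Project Manager", "Security Manager"},
}

def can_transition(current, target, role):
    """Return True if `role` may move an issue from `current` to `target`."""
    return role in TRANSITIONS.get((current, target), set())
```

In Redmine itself, this table corresponds to the workflow matrix configured per tracker and role.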

Development teams already use bug tracking software during development, so why not utilize the same systems for tracking security vulnerabilities? Project teams' familiarity with the software and process makes it considerably easier to collaborate on remediation efforts. Additionally, most organizations already have methods of collecting metrics about software defects; these metrics can be extended to include vulnerabilities.

In order to effectively track security vulnerabilities, a centralized, web-based bug tracking system needs to support the following features:
  • Custom workflows per issue type
  • Custom fields within bug items
  • Roles and privileges controlling users' ability to change the status of security bugs
After a little research, I identified a bug tracking system called Redmine that satisfies all of these requirements and more. In Redmine, I was able to create an issue type called "Vulnerability" and associate a specific workflow with it.

The diagram below illustrates the custom workflow, roles, and purpose of each step. This workflow can be created in Redmine and each transition can be associated with specific roles.

Since the software supports custom fields within issue items, a security assessor can enter additional vulnerability information such as:
  • The vulnerability category
  • Whether the issue has a security impact
  • Whether the issue has a privacy impact
  • Whether the issue has a compliance impact
  • Which group identified the issue
  • Whether the item was identified by an automated or manual process
  • Which activity was used to identify issues
Once many of these issues have been reported across an organization, this information can be used to evaluate the effectiveness of tools, processes, or security activities used throughout the development process. An example of a Vulnerability item being created in Redmine is shown in the screenshot below.
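In aggregate, these custom fields make simple metrics queries possible. The sketch below tallies exported issues by a custom field; the field names are hypothetical, not actual Redmine identifiers:

```python
from collections import Counter

# Issues as they might be exported from the tracker; the custom-field
# names here are illustrative, not actual Redmine field identifiers.
issues = [
    {"category": "XSS", "identified_by": "Internal Security Team", "method": "Manual"},
    {"category": "SQL Injection", "identified_by": "Internal Security Team", "method": "Automated"},
    {"category": "XSS", "identified_by": "Third-Party Assessor", "method": "Automated"},
]

def count_by(issues, field):
    """Tally vulnerabilities by any custom field, e.g. to compare
    the yield of automated tools versus manual review."""
    return Counter(issue[field] for issue in issues)

print(count_by(issues, "method"))  # Counter({'Automated': 2, 'Manual': 1})
```

The same tally run over "identified_by" or "category" gives a rough picture of which groups and activities are finding the most issues.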

In addition to tracking vulnerabilities, this system could also be used to manage requests and the workflow associated with security services performed by an internal security team. Organizations often utilize security teams to assist in specifying security, privacy, and compliance requirements, or to perform activities like penetration testing and code review. A custom workflow can be created in Redmine to handle this issue type as well.

Here is an example of a security service request in Redmine:


Custom Fields:

Security Activities Custom Field:

Vulnerability Identification Method Source Custom Field:

Vulnerability Identification Method Custom Field:

Vulnerability Identified By Custom Field:

Vulnerability Category Custom Field:

Wednesday, July 1, 2009

Internal AppSec Portals: Resources

Many of these ideas build on Pravir Chandra's Software Assurance Maturity Model (Version 1.0) and the Building Security In Maturity Model by Gary McGraw, Brian Chess, and Sammy Migues. Both works are licensed under the Creative Commons Attribution-Share Alike 3.0 License.

This article was also heavily influenced by Microsoft's SDL process.

The next several blog entries will cover my current project: providing a template or starting point for an organization's internal application security portal. This post is the second of many to come.

Previous Internal AppSec Portals Posts:

This post will cover providing application security resources for developers, including
  • Policies
  • Guidance
  • Requirements
  • Vulnerabilities
  • and External Resources
The following image is a screenshot of the table of contents for my TikiWiki Secure Software Assurance Resources structure. As discussed in the previous post, a Wiki is a great way to document application security resources, because it allows for constant, collaborative updates and can link and organize information in a user-friendly way. I recommend providing the resources discussed in this post in a similar format for project teams.

The purpose behind this set of resources is to provide all the information a developer needs to write secure code. Developers cannot be expected to pull secure code out of the air. Guidelines, coding standards, and security requirements must be spelled out to ensure everyone understands their responsibilities and the organization's expectations.

Additionally, developers MUST be provided with security awareness training AND training on this material.

Security Policies
Most organizations define a set of security policies that govern acceptable use of information systems, methods for labeling and handling confidential data, and procedures for addressing policy violations. These same concepts should be extended to cover application security, compliance, and privacy policies.

Security policies should express the organization's dedication to the topics below. These topics do not necessarily have to define the process or implementation of each policy area, only statements mandating their use.

  • Mandatory, periodic application security training
  • Adherence to application security guidance and coding standards
  • Use of a formal risk management process
  • Risk categorization of data and applications
  • Creation and maintenance of application security portfolios
  • Use of approved secure development processes
  • Dedication to meeting regulatory and compliance standards in each application project
  • Inclusion and validation of security, privacy, and compliance requirements throughout the development process
  • Establishment of a minimum level of assurance for application security, privacy, and compliance
Privacy Policies
In addition to security policies, organizations should maintain policies governing how personally identifiable information such as social security numbers, account numbers, or other data is handled. These policies should send a clear message to project teams that protecting users' private data is important. These policies should cover topics such as:
  • Identification and categorization of private data
  • Collection, storage, and transmission of private data
  • Inclusion and validation of privacy requirements during the development process
  • Establishment of a minimum level of assurance for private data
Microsoft has released a number of resources on privacy-related policies, requirements, and processes. Those resources are listed below.

Microsoft's Privacy Guidelines for Developing Software Products and Services
Microsoft SDL Privacy Questionnaire
Microsoft SDL Privacy Requirements
Microsoft SDL Privacy At A Glance

Compliance Policies
There are a wide variety of compliance and regulatory standards that apply to organizations, data, and functionality. Project teams cannot spend all of their time researching these standards. At the organization level, compliance standards should be identified and a process should be created to assist developers in determining which regulations apply to their project. Compliance policies should include the following topics:
  • Identification of compliance and regulatory standards
  • Process for determining standards that apply to each software project
  • Inclusion and validation of compliance requirements during the development process
  • Establishment of a minimum level of assurance for software compliance
Security Guidance
Organizations should collect and publish internal guidance for project teams to consume. Guidance should include not only secure coding standards, but also approved frameworks, security services, architectures, and environments. These items should be presented in a way that clearly communicates whether an approach or piece of code is approved, an organization standard, or unapproved.

Approved Libraries and Frameworks
Software can be developed in a variety of languages and often includes external third party libraries. ASP.NET applications often include libraries such as ASP.NET MVC and Microsoft's AntiXSS library. Java applications may include Struts, Spring, Hibernate, Velocity, and many others. Additionally, developers may want to develop software in PHP, Python, Ruby, Perl, and other languages.

Organizations must communicate which of these languages and frameworks are approved for use in software projects. Guidance should start with a simple list of languages and frameworks the organization has approved or disapproved. As development groups request approval for additional third-party libraries and deliver successful applications, a list of standards should be created for specific architectures or project types.

For example, an organization may list the following standards for MVC applications in Java and ASP.NET (they typically would expand upon the descriptions as well):

Database Driven Java MVC Application

The organization has standardized on using the following frameworks for Database Driven J2EE MVC applications:
  • Language: Java 1.6
  • MVC Framework: Struts 2.x
  • Dependency Injection Framework: Spring 2.x
  • ORM Layer: Hibernate 3.x

Database Driven ASP.NET MVC Application

The organization has standardized on using the following frameworks for Database Driven ASP.NET MVC applications:
  • Language: ASP.NET 3.5
  • MVC Framework: ASP.NET MVC 1.x
  • Other: Microsoft AntiXSS Library 2.x
Finally, as the organization matures, a set of secure, shared libraries or frameworks should be created and utilized within software projects. These shared libraries should be scrutinized for security defects and updated on a regular basis. Since assessments and verifications occur on these libraries, teams may not need to spend time and money re-verifying them in their own projects. Instead, only the appropriate usage or coupling of these libraries with custom code must be examined.

Examples of frameworks an organization may produce are:
  • Secure methods for accessing security services (discussed in the next section)
  • Secure methods for calling security resources (discussed in the next section)
  • Input validation frameworks
  • Unified authentication flows
  • Authorization or entitlement frameworks
  • and many more...
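A shared input validation framework like the one listed above can start very small. The sketch below uses whitelist-style validators; the patterns and names are hypothetical, not a recommendation of any specific library:

```python
import re

# Minimal whitelist validators a shared library might expose.
# Patterns are illustrative; a real library would centralize and
# security-review these so each project does not reinvent them.
VALIDATORS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
}

def is_valid(kind, value):
    """Validate `value` against the named whitelist pattern."""
    pattern = VALIDATORS.get(kind)
    if pattern is None:
        raise KeyError(f"No validator registered for {kind!r}")
    return bool(pattern.fullmatch(value))
```

Because every project calls the same validators, an assessment only needs to verify the registry once, plus each project's usage of it.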

Security Services and Resources
A collection of applications often utilizes common web services, authentication servers, LDAP servers, or other entities. Organizations should maintain a list of approved security services and resources, guidance informing project teams when it is appropriate to include them in a project, and the proper method for accessing or calling each service or resource.

Standardizing on central services or resources can greatly reduce the effort required to validate applications' security. It may also eliminate the need to create an authentication or authorization strategy for each new project.

Examples of security services organizations may standardize on are:
  • Authentication/single sign-on servers
  • Web services providing entitlement or authorization details
  • Web services that serve as a central point for accessing encrypted credit card data
  • Web services that provide centralized auditing and logging capabilities
  • Web services that provide centralized key management and cryptography
Security resources are often centralized data stores that applications can connect to and query. A few examples are:
  • LDAP servers containing authentication and authorization information
  • Centralized, redundant file storage and backup
As the organization matures, custom frameworks should be created for accessing or calling functionality in these security services and resources (See the Approved Libraries and Frameworks section above).

Secure Coding Standards
Developers are very good at developing software and implementing business requirements quickly and effectively; however, college, their programming textbooks, and their expert programmer friends probably never taught them how to write secure code. To ensure developers write secure, consistent code, organizations need to provide secure coding standards that teach and support secure coding practices.

Secure coding standards should be presented in a manner that can both teach developers and serve as a quick reference during the development process. They should contain code examples in all approved languages and frameworks, as well as examples of what NOT to do. Here is a list of items to consider including within a secure coding standard:
  • Description of the standard
  • Statement of why it's important
  • Explanation of when to use the approach or standard
  • Vulnerabilities that may result if the standard is not observed
  • Code examples in each language and framework
  • Code examples of what NOT to do
  • Links to external resources that provide additional information
A brief example is provided in my previous post "Secure Development Jump Start."

Once these standards are written, they can be matched up with security, compliance, and privacy requirements, which are discussed in the next section. These coding standards allow organizations to hold project teams accountable for writing code that satisfies requirements.

Security Requirements
A set of common security requirements should be created and shared throughout the organization. These requirements should provide discrete, testable assertions that can be verified throughout the application development process (more on this idea in a later post). An abbreviated example of a security requirement is:

"Applications must use parameterized queries or prepared statements when querying relational databases. Untrusted data must not be concatenated within dynamic SQL query strings."
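The requirement above is easy to demonstrate in code. A minimal sketch using Python's standard-library sqlite3 module (the table, column, and input values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

untrusted = "alice' OR '1'='1"  # attacker-controlled input

# WRONG: concatenating untrusted data into the SQL string
# query = "SELECT role FROM users WHERE name = '" + untrusted + "'"

# RIGHT: a parameterized query treats the input strictly as data
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (untrusted,)
).fetchall()
print(rows)  # [] -- the injection attempt matches no user
```

The same pattern applies in any approved language; only the placeholder syntax changes.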

Another example related to integrating security services guidance is:

"All external facing applications must utilize the organization's standard, centralized authentication server."

The focus of these requirements is to provide a set of rules for which developers can be, and are, held accountable. Developers often cannot be security experts, but they can be trained to follow and execute on software project requirements. Assuming the organization documents the appropriate guidance and links it to security requirements, development teams can be held accountable for security requirements in the same way they are held to business requirements.

Because business requirements are typically implemented based on a prioritized list, it is also important to have a member of the organization's security department help prioritize security requirements with project managers.

In addition to security related requirements, privacy and compliance requirements must also be identified. These requirements should be written to satisfy the policies discussed in the "Compliance Policies" and "Privacy Policies" sections above.

Once a reasonable set of security, privacy, and compliance requirements has been established, a set of requirements profiles should be created for various project types. For example, applications that must be PCI compliant will have many compliance requirements that overlap with security and privacy requirements. The requirements profile "High Risk PCI Application" should contain a pre-prioritized list of requirements that combines and simplifies items from each category.
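Composing a profile from the individual requirement sets can start as a simple de-duplicated union. A sketch with hypothetical requirement IDs:

```python
# Hypothetical requirement sets; real ones would reference the
# organization's documented security/privacy/compliance requirements.
SECURITY = {"SEC-01: parameterized queries", "SEC-02: central authentication"}
PRIVACY = {"PRV-01: encrypt private data at rest"}
PCI = {"SEC-01: parameterized queries", "PCI-03: mask card numbers in logs"}

def build_profile(*requirement_sets):
    """Merge overlapping requirement sets into one de-duplicated,
    sorted profile (e.g. 'High Risk PCI Application')."""
    merged = set().union(*requirement_sets)
    return sorted(merged)

profile = build_profile(SECURITY, PRIVACY, PCI)
```

Note that SEC-01 appears in both the security and PCI sets but only once in the resulting profile; prioritization would then be applied on top of this merged list.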

Vulnerabilities
During the development process, application vulnerabilities are often identified and reported to project teams. Typically, these reports provide a set of recommendations that will eliminate each vulnerability. Depending on the source of these recommendations (a penetration testing tool, code review tool, internal security team, or third-party consulting company), the prescriptive advice may or may not coincide with the organization's approved method for eliminating a vulnerability. While remediating general technical flaws like cross-site scripting is fairly straightforward, business logic, authentication, and authorization issues may require organization-specific approaches.

Organizations should maintain a list of vulnerabilities and should link each vulnerability to the security, compliance, and privacy requirements that address it. This list should provide a short explanation of each issue and should label each linked requirement as "Required," "Recommended," or "Optional." The explanation for each issue does not need to be long; many application security sites like OWASP already provide detailed descriptions of common vulnerabilities. Below is an example of how an organization can document vulnerabilities within an internal AppSec portal:

SQL Injection

SQL injection occurs when untrusted data is interpreted by the database as SQL commands. This issue may allow users to read, modify, or destroy data without authorization.

The following security, privacy, and compliance requirements should be used to address this vulnerability:

  • Security: <link to parameterized queries and prepared statements requirement>
  • Compliance: <link to compliance requirement A>
  • Security: <link to input validation framework requirement>
  • Security & Compliance: <link to auditing and logging requirement>
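A mapping like the one above can also be kept as structured data so the portal renders it consistently and teams can query it. A sketch with hypothetical requirement names:

```python
# Hypothetical mapping of a vulnerability to the requirements that
# address it, with a priority label per link.
VULN_REQUIREMENTS = {
    "SQL Injection": [
        ("Security", "Parameterized queries and prepared statements", "Required"),
        ("Security", "Input validation framework", "Recommended"),
        ("Security & Compliance", "Auditing and logging", "Required"),
    ],
}

def required_for(vuln):
    """Return only the links labeled 'Required' for a vulnerability."""
    return [req for (_, req, label) in VULN_REQUIREMENTS.get(vuln, [])
            if label == "Required"]
```

With this in place, a report of "Required" remediations per vulnerability can be generated automatically rather than re-derived for each project.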
External Resources
Finally, the organization should provide a set of external resources that project and security teams can use to research application security topics and news.