Wednesday, January 16, 2013
Non-Negotiable Elements of a Secure Software Development Process: Part 2 - Secure Architecture, Configuration, and Coding Patterns
This article is part 2 in the series discussing non-negotiable elements of a secure software development process. In part 1 of the series, we discussed how security requirements set clear and reasonable expectations that development teams can plan for and meet to satisfy a specific level of security assurance. This article focuses on secure architecture, configuration, and coding patterns that equip development teams to meet those requirements.
What are Secure Architecture, Configuration, and Coding Patterns?
Secure architecture, configuration, and coding patterns are language-specific implementations of code, frameworks, configuration, and application designs that satisfy a security requirement. They provide development teams with positive examples and instructions for successfully adhering to security practices without requiring them to be security experts... [Article Posted on the Security PS Blog: Non-Negotiable Elements of a Secure Software Development Process: Part 2 - Secure Architecture, Configuration, and Coding Patterns]
Friday, September 10, 2010
OWASP AppSec Ireland and DC
http://www.owasp.org/index.php/OWASP_IRELAND_2010#Agenda_and_Presentations_-_September_17
I was also accepted to speak at OWASP AppSec DC on November 10th or 11th. This will be the first time I will give the SDL-Agile presentation at a conference in the United States! The conference home page is: http://www.owasp.org/index.php/OWASP_AppSec_DC_2010
Wednesday, May 12, 2010
I'm Presenting at OWASP AppSec Research 2010 Conference
http://www.owasp.org/index.php/OWASP_AppSec_Research_2010_-_Stockholm,_Sweden#tab=June_24
I hope to see lots of people there!
Wednesday, March 24, 2010
.NET User Group Presentation - Microsoft SDL-Agile
Many also wanted links to the tools I mentioned during the presentation. Here is a list of those tools:
- SDL Process Guidance 4.1a (includes SDL-Agile towards the bottom of the document)
- CAT.NET v2, Web Protection Library (Includes Anti-XSS Library and Security Runtime Engine), and the Web Application Configuration Analyzer
- web.config security analyzer
- Microsoft FxCop 1.36
- Microsoft Code Analysis Tool .NET (CAT.NET) v1 CTP - 32 bit (Old version of CAT.NET)
- Microsoft Code Analysis Tool .NET (CAT.NET) v1 CTP - 64 bit (Old version of CAT.NET)
- BinScope Binary Analyzer
- MiniFuzz File Fuzzer
- MSF-Agile plus Security Development Lifecycle Process Template for VSTS 2008
- Microsoft SDL Process Template For Visual Studio Team System
- Microsoft SDL Threat Modeling Tool
Please feel free to email me with any questions or comments about the presentation.
Wednesday, January 20, 2010
How Often Should I Reassess My Web Applications?
There are a couple of approaches for determining when an application should undergo a security assessment. First, organizations often require new tests after a fixed period of time. This period may vary based on the risk level the organization has attributed to each application or application type. It's common for organizations to conduct security assessments of high- and medium-risk applications every six months to one year. For low-risk applications, the period is often one to two years. The risk level of an application is often determined by the type of data and functionality within the application. For example, an Internet-facing application that handles credit card transactions would be considered high risk, while an application that simply provides product information and is not subject to regulatory, compliance, or legal requirements may be low risk. Periodic assessments usually supplement the next two approaches.
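The periodic approach above can be sketched as a simple policy function. The intervals below match the ranges discussed (six months for high risk, up to two years for low risk), but the exact values are assumptions an organization would tune to its own risk appetite:

```java
import java.time.LocalDate;
import java.util.Map;

public class ReassessmentPolicy {
    enum Risk { HIGH, MEDIUM, LOW }

    // Illustrative reassessment intervals in months; adjust per organizational policy.
    private static final Map<Risk, Integer> INTERVAL_MONTHS =
        Map.of(Risk.HIGH, 6, Risk.MEDIUM, 12, Risk.LOW, 24);

    // Computes the next scheduled assessment date from the last assessment date.
    static LocalDate nextAssessment(Risk risk, LocalDate lastAssessed) {
        return lastAssessed.plusMonths(INTERVAL_MONTHS.get(risk));
    }
}
```

Encoding the policy this way makes it easy to feed an application portfolio through it and flag overdue assessments automatically.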
The second approach is associated with major changes implemented in the application. There are a variety of changes that should trigger a new application assessment. Any changes to a security mechanism should undergo validation. Security mechanisms usually include things like authentication processes, authorization controls, session management features, and data validation and encoding components (think cross-site scripting, SQL injection, etc.). Changes that add or modify a feature in the application should trigger a retest based on the risk level of the application and the sensitivity of the data or functionality that the change affects. So, a new feature that adds an “about” tab to a website probably doesn’t need to undergo rigorous security testing; however, a new feature that collects users’ social security numbers absolutely should undergo testing (including evaluating whether collection of this PII complies with the organization’s policies, that there is a need to know for this sensitive data, that the sensitive data is adequately protected at rest and in transit, and that only authorized, authenticated parties can reach this data). Organizations typically set up criteria and standards around PII, credit card information, and other sensitive data types that automatically trigger a reassessment when an application change related to those data types occurs.
The third approach is to use application security testing as a security gate within an organization’s development process. In this case, specific application types or risk levels would require an assessment before they can be deployed to production. This is usually one of the final steps in a secure software development process and acts as a sanity check or a process improvement opportunity rather than a catch-all for security issues. These types of tests are often highly targeted and would not encompass an assessment of the whole application; instead, the security assessment would focus on high-risk, new, or updated components introduced in the application.
Previously, I wrote an ISSA article, titled “Web Application Security Portfolios”, which covered some of this detail. An expanded version of this article can be found in my post here. The article discusses ideas around managing portfolios for each application within an organization, identifying data types and compliance requirements, and tracking security activities.
Thursday, December 10, 2009
Microsoft SDL-Agile Presentation Slides
A copy of the slides are available here: OWASP Kansas City, Microsoft SDL-Agile Presentation
Unfortunately the animations don't work in the PDF version, but I would be happy to present at other meetings, user groups, or for a group of developers/managers within a company. If you are interested, please feel free to email me. My contact information is listed in the sidebar of this blog.
Wednesday, November 18, 2009
OWASP Presentation on Dec. 10: Microsoft SDL-Agile
Here is the original announcement from the OWASP Kansas City List: https://lists.owasp.org/pipermail/owasp-kansascity/2009-November/000085.html
Tuesday, November 10, 2009
Microsoft SDL for Agile Development
Monday, October 26, 2009
Observed Secure Software Development Stages
This order can be broken down into six stages. While few organizations fit exactly within one stage or another, this model can be used to facilitate discussions about an organization’s current progress. The model does not seek to validate whether the six stages constitute an appropriate secure software development roadmap; instead, it simply describes a common progression observed in organizations today. Models like the Software Assurance Maturity Model (SAMM) and Building Security In Maturity Model (BSIMM) are more appropriate for determining the proper direction of an organization’s secure development process.
Stage 1: Focus on Functionality
Initially, organizations are fairly ignorant of secure development practices. Computer science curricula often do not include a class on security best practices or ways to prevent cross-site scripting vulnerabilities. Developers are taught how to write code to satisfy business requirements.
Secure software development also isn’t high on executives’ list of priorities. Their focus is on producing innovative products or services, being first to market, and making net income goals.
Security usually does not become a priority until an incident occurs, whether a competitor has a data breach or the organization itself is hacked. Once this tipping point occurs, security dollars quickly become available. Organizations spend their new security budget on third-party application assessments, which provide insight into the security posture of information technology assets.
Stage 2: Assessments Alone
Once an organization starts performing security assessments in response to a breach, it typically extends this activity for use as an approval mechanism. The organization requires sensitive or business critical applications to be assessed prior to new releases being deployed to production. This approach greatly reduces the number and severity of vulnerabilities in external facing applications; however, it doesn’t identify security weaknesses until after the application is fully developed.
Vulnerabilities that highlight a systemic weakness or architectural flaw will often result in project delays and unanticipated costs. Additionally, this approach does not train developers to implement code securely during the initial development stage.
After performing assessments as the only software security activity, the organization eventually realizes that a proactive approach is needed. They determine that issues should be identified early in the development process and opt for purchasing automated code review or penetration testing tools.
Stage 3: Ad-hoc Use of Security Tools and Activities
After providing automated code review or penetration testing tools to developers, organizations expect all their application security challenges to be solved. They tell developers that they need to run the tool on their code and fix all the issues. The organization’s goal is to have production ready software at the conclusion of the development process. The actual results of this approach vary.
Development groups composed of security savvy members usually see an overall reduction in vulnerabilities. The other development groups may only see a moderate impact. There are a variety of reasons this happens. The primary reason is that the tools can identify plenty of problems, but the developers don’t have the knowledge necessary to understand all the risks or to apply security best practice recommendations. Other challenges include the inability of automated tools to find business logic, authorization, and authentication flaws; inconsistent company procedures and checkpoints associated with running the tools; and no minimum standard set for acceptable risk levels.
Organizations also may adopt security activities such as threat modeling, secure requirements specification, and design reviews. These activities produce greater awareness of security issues facing applications, but developers still lack the knowledge and experience necessary to really take advantage of these proactive security activities.
Stage 4: Application Security Training
The next logical step for organizations is to provide application security training to development groups. This comes in the form of in-person classes, on-boarding training, and annual refreshers. The class content often includes a general background in application security, an introduction to common vulnerabilities and attacks, and best-practice approaches for eliminating, preventing, and remediating issues.
Application security training greatly improves developers’ ability to succeed at the organization’s continued use of automated tools and third-party assessments. Developers gain a common language to discuss application security concerns, can understand and address vulnerabilities in a timely manner, and the training can inspire developers to pursue additional research.
One aspect most organizations leave out is reinforcing and supplementing training with internal resources. Many developers receive training once a year in application security. After six months, most of the knowledge gained during the class is forgotten.
Stage 5: Creation of Resources, Formal Policies, Procedures and Standards
In order to ensure consistent use of security tools and activities, organizations choose to formalize the policies, procedures, and standards developed over the previous four stages. Criteria are created for evaluating the sensitivity or importance of applications, security activities are formally required for each of these categories, and security gates are put in place to ensure a minimum standard of security is met before software advances in the development process.
An internal application security portal is also created to make these policies and additional resources available to developers. These resources communicate information about standardized methods for addressing vulnerabilities in code, approved development languages and frameworks, and internally developed secure libraries and architectures.
Ultimately, this results in the elimination of ad-hoc security activities and promotes consistent development of applications with fewer security vulnerabilities.
Stage 6: Secure Software Assurance
In the last stage, organizations tailor security activities and requirements to satisfy business goals and leverage efforts as a competitive advantage. Before an application is developed, a set of security requirements is established. For each security activity, the organization defines a test procedure and criteria for determining whether the application passes or fails the security requirement. Test results are recorded and reported across the application’s lifetime to form an overall picture of the application’s security posture.
Thursday, October 1, 2009
Turn Application Assessment Reports into Training Classes
Stop Right There! There's an opportunity to use a real application within your organization to train developers to write secure code THE FIRST TIME! Here's how:
Taking the Time to Analyze Root Causes and Develop Standards
Now that the fire is out (the issues are fixed), let's take some time to understand how the vulnerabilities were created in the first place. Was it a result of missing output encoding practices, inconsistent page-level access controls, or some other issue? Gather a list of root causes that resulted in the identified weakness.
Next, use security experts or online resources, like OWASP, to find security best practice solutions for eliminating these vulnerabilities. Some great examples are the OWASP XSS Prevention Cheat Sheet or the OWASP SQL Injection Prevention Cheat Sheet. Finally, create a centralized application security portal or wiki that developers can access and add these root causes and best practice solutions as official company standards.
Bullet Points:
- Create a centralized application security portal or wiki
- As you analyze root causes and find security best practice approaches to fix them, add them as standards to the portal
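As a concrete illustration of the output-encoding root cause mentioned above, here is a minimal sketch of HTML entity encoding for element content. This is a teaching example only; production code should use a vetted library such as the OWASP Java Encoder rather than a hand-rolled encoder:

```java
public class HtmlEncode {
    // Minimal HTML entity encoding, safe for HTML element content only.
    // Other contexts (attributes, JavaScript, URLs) require different encoding rules.
    static String forHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Documenting a standard like this on the portal, alongside the approved library call, gives developers both the "why" and the sanctioned "how."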
Archive the Vulnerable Application Code for Later Use
After completing the third-party assessment, you now possess real-world vulnerability examples and a report that lists each issue, including the vulnerable pages and parameters, and a guide for exploiting them. This report and the vulnerable application will be a great learning tool to be leveraged later. Archive the vulnerable application code and any other related components. Make sure it is possible to restore this application to a working state within a test environment at a later date.
Bullet Points:
- Archive the application and related components to be deployed within a test environment at a later date
Conduct Developer Training
In the weeks before hosting a training course, generate developer interest by deploying the vulnerable application within a well controlled, internal, isolated, secure... you get the idea... test environment. Send application URLs and credentials to developers and tell them what classes of vulnerabilities can be found (refer to your assessment report). Encourage developers to test and discover security issues individually until the training class.
In the training class, go through each vulnerability class or root cause with developers. Demonstrate application security attacks against the weaknesses using the vulnerable application deployed to the test environment as a real world example. Once you have gone through each vulnerability type, ask developers to discuss other areas of the application they identified as vulnerable during the preceding weeks. After the discussion, have developers break up into groups to find any remaining issues. Give hints as the number of remaining vulnerabilities dwindles.
Once all the issues have been found by developers or demonstrated by the instructor, ask developers for methods of addressing each vulnerability class. Intentionally choose suggestions that are missing key security best practice concepts. Have developers come up to the presentation computer and code solutions on the spot; then, discuss reasons why the solution is flawed, and prove it with an example attack.
After going through a few proposed solutions, discuss the root cause that led to the security weakness. Provide the best practice solution for eliminating the issue and preventing it in future code. Finally, show developers where they can access this company standard on the internal portal or wiki and have a developer implement the solution to fix the vulnerability on the spot.
Bullet Points:
- Generate developer interest in the training course by allowing them to hack the vulnerable application
- During the training course, discuss vulnerability classes, root causes, incorrect remediation solutions, security best practice based recommendations, and where to find company standards
Conclusion
Turning application security reports into company security standards and training courses is a great way to increase the return on investment for third-party assessments. The suggestions discussed above will greatly help developers succeed at writing secure code in future web applications. The process also uses meaningful real-world applications to demonstrate the concepts and promote interest.
Some of these steps may require security savvy developers or security experts. If you would like assistance developing training courses, identifying root causes, or documenting security standards, please feel free to send me an email. I can be contacted at <My First Name>.<My Last Name>@gmail.com.
Friday, August 28, 2009
Flash Remoting Support in Burp Suite Pro
Previously, I have used Deblaze and Charles Proxy to support these needs. On August 12, a new version of Burp Suite Pro was released. This version allows AMF messages to be encoded and decoded in the proxy, repeater, and other tabs (except Burp Intruder). Burp Scanner also supports placing attack payloads in flash remoting calls.
Wednesday, July 22, 2009
Vulnerability Tracking, Workflow, and Metrics With Redmine
A functional defect is typically a set of undesirable behavior associated with an application feature. A security vulnerability (security bug) consists of undesirable behavior that weakens the application's ability to resist attacks or protect data. In terms of issue tracking and remediation, a security bug is really just a specific type of functional bug. This is apparent when you consider the basic workflow for a functional defect:
- A developer or user reports a defect.
- The project manager assigns the defect to a developer.
- The developer implements code to resolve the issue.
- The quality assurance team verifies that the implemented code successfully resolved the issue.
- The project manager or team provides communication to executives, clients, or other entities regarding the successful resolution of the issue.
- The issue is archived for use in metrics or other statistical analysis.
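The six steps above can be sketched as a small state machine. The statuses and transitions mirror the workflow as described; the reopen path from resolved back to assigned is an assumption added for the common case where QA verification fails:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class DefectWorkflow {
    enum Status { REPORTED, ASSIGNED, RESOLVED, VERIFIED, COMMUNICATED, ARCHIVED }

    // Allowed transitions, one per step of the workflow above.
    private static final Map<Status, Set<Status>> TRANSITIONS = new EnumMap<>(Status.class);
    static {
        TRANSITIONS.put(Status.REPORTED, EnumSet.of(Status.ASSIGNED));
        TRANSITIONS.put(Status.ASSIGNED, EnumSet.of(Status.RESOLVED));
        // Assumed reopen path: QA rejects the fix and reassigns it.
        TRANSITIONS.put(Status.RESOLVED, EnumSet.of(Status.VERIFIED, Status.ASSIGNED));
        TRANSITIONS.put(Status.VERIFIED, EnumSet.of(Status.COMMUNICATED));
        TRANSITIONS.put(Status.COMMUNICATED, EnumSet.of(Status.ARCHIVED));
        TRANSITIONS.put(Status.ARCHIVED, EnumSet.noneOf(Status.class));
    }

    static boolean canTransition(Status from, Status to) {
        return TRANSITIONS.get(from).contains(to);
    }
}
```

A security vulnerability workflow fits the same shape; tools like Redmine let you define these transitions per tracker and restrict who may perform each one.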
Development teams already use bug tracking software during development, so why not utilize the same systems for tracking security vulnerabilities? Project teams' familiarity with the software and process will make it considerably easier to collaborate on remediation efforts. Additionally, most organizations already have methods of collecting metrics about software defects. These metrics can be extended to include vulnerabilities.
In order to effectively track security vulnerabilities, a centralized, web-based bug tracking system needs to support the following features:
- Custom workflows per issue type
- Custom fields within bug items
- Roles and privileges controlling users' ability to change the status of security bugs

The diagram below illustrates the custom workflow, roles, and purpose of each step. This workflow can be created in Redmine and each transition can be associated with specific roles.

Since the software supports custom fields within issue items, a security assessor can enter additional vulnerability information such as:
- The vulnerability category
- Whether the issue has a security impact
- Whether the issue has a privacy impact
- Whether the issue has a compliance impact
- Which group identified the issue
- Whether the item was identified by an automated or manual process
- Which activity was used to identify issues


Here is an example of a security service request in Redmine:

Appendix
Custom Fields:

Security Activities Custom Field:

Vulnerability Identification Method Source Custom Field:

Vulnerability Identification Method Custom Field:

Vulnerability Identified By Custom Field:

Vulnerability Category Custom Field:

Wednesday, July 1, 2009
Internal AppSec Portals: Resources
Many of these ideas build on Pravir Chandra's Software Assurance Maturity Model (Version 1.0) and the Building Security In Maturity Model by Gary McGraw, Brian Chess, and Sammy Migues. Both works are licensed under the Creative Commons Attribution-Share Alike 3.0 License.
This article was also heavily influenced by Microsoft's SDL process.
The next several blog entries will cover my current project: providing a template or starting point for an organization's internal application security portal. This post is the second of many to come.
Previous Internal AppSec Portals Posts:
Introduction
This post will cover providing application security resources for developers, including
- Policies
- Guidance
- Requirements
- Vulnerabilities
- and External Resources
The purpose behind this set of resources is to provide all the information a developer needs to write secure code. Developers cannot be expected to pull secure code out of the air. Guidelines, coding standards, and security requirements must be spelled out to ensure everyone understands their responsibilities and the organization's expectations.
Additionally, developers MUST be provided with security awareness training AND training on this material.
Policies
Security Policies
Most organizations define a set of security policies that govern acceptable use of information systems, methods for labeling and handling confidential data, and procedures for addressing policy violations. These same concepts should be extended to cover application security, compliance, and privacy policies.
Security policies should express the organization's dedication to the topics below. These topics do not necessarily have to define the process or implementation of each policy area, only statements mandating their use.
- Mandatory, periodic application security training
- Adherence to application security guidance and coding standards
- Use of a formal risk management process
- Risk categorization of data and applications
- Creation and maintenance of application security portfolios
- Use of approved secure development processes
- Dedication to meeting regulatory and compliance standards in each application project
- Inclusion and validation of security, privacy, and compliance requirements throughout the development process
- Establishment of a minimum level of assurance for application security, privacy, and compliance
Privacy Policies
In addition to security policies, organizations should maintain policies governing how personally identifiable information, such as social security numbers, account numbers, or other data, is handled. These policies should send a clear message to project teams that protecting users' private data is important. These policies should cover topics such as:
- Identification and categorization of private data
- Collection, storage, and transmission of private data
- Inclusion and validation of privacy requirements during the development process
- Establishment of a minimum level of assurance for privacy data
Microsoft's Privacy Guidelines for Developing Software Products and Services
Microsoft SDL Privacy Questionnaire
Microsoft SDL Privacy Requirements
Microsoft SDL Privacy At A Glance
Compliance Policies
There are a wide variety of compliance and regulatory standards that apply to organizations, data, and functionality. Project teams cannot spend all of their time researching these standards. At the organization level, compliance standards should be identified and a process should be created to assist developers in determining which regulations apply to their project. Compliance policies should include the following topics:
- Identification of compliance and regulatory standards
- Process for determining standards that apply to each software project
- Inclusion and validation of compliance requirements during the development process
- Establishment of a minimum level of assurance for software compliance
Guidance
Organizations should collect and publish internal guidance to be consumed by project teams. Guidance should include not only secure coding standards, but also approved frameworks, security services, architectures, and environments. These items should be provided in a way that clearly communicates whether an approach or piece of code is approved, an organizational standard, or unapproved.
Approved Libraries and Frameworks
Software can be developed in a variety of languages and often includes external third party libraries. ASP.NET applications often include libraries such as ASP.NET MVC and Microsoft's AntiXSS library. Java applications may include Struts, Spring, Hibernate, Velocity, and many others. Additionally, developers may want to develop software in PHP, Python, Ruby, Perl, and other languages.
Organizations must communicate which of these languages and frameworks are approved for use in software projects. Guidance should start with a simple list of languages and frameworks the organization has approved or disapproved. As development groups request approval for additional third-party libraries and develop successful applications, a list of standards should be created for specific architecture or project types.
For example, an organization may list the following standards for MVC applications in Java and ASP.NET (they typically would expand upon the descriptions as well):
Database Driven Java MVC Application
The organization has standardized on using the following frameworks for Database Driven J2EE MVC applications:
- Language: Java 1.6
- MVC Framework: Struts 2.x
- Dependency Injection Framework: Spring 2.x
- ORM Layer: Hibernate 3.x
Database Driven ASP.NET MVC Application
The organization has standardized on using the following frameworks for Database Driven ASP.NET MVC applications:
- Language: ASP.NET 3.5
- MVC Framework: ASP.NET MVC 1.x
- Other: Microsoft AntiXSS Library 2.x
Examples of frameworks an organization may produce are:
- Secure methods for accessing security services (discussed in the next section)
- Secure methods for calling security resources (discussed in the next section)
- Input validation frameworks
- Unified authentication flows
- Authorization or entitlement frameworks
- and many more...
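As a small illustration of the input validation frameworks mentioned above, here is a sketch of allow-list validation in Java. The field name, pattern, and length limits are assumptions an organization would tune per input type in its own standard:

```java
import java.util.regex.Pattern;

public class InputValidator {
    // Allow-list validation: accept only expected formats and reject everything else.
    // This example pattern for usernames is illustrative, not an organizational standard.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,32}$");

    static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}
```

An internal framework would typically centralize many such validators so individual project teams never write ad-hoc validation logic.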
Security Services and Resources
A collection of applications often utilize common web services, authentication servers, LDAP servers, or other entities. Organizations should maintain a list of approved security services and resources, guidance informing project teams when it is appropriate to include the services or resources in a project, and the proper method for accessing or calling the service or resource.
Standardization on central services or resources can greatly reduce efforts required to validate applications' security. It also may eliminate the need to create an authentication or authorization strategy for each new project.
Examples of security services organizations may standardize on are:
- Authentication/single sign-on servers
- Web services providing entitlement or authorization details
- Web services that serve as a central point for accessing encrypted credit card data
- Web services that provide centralized auditing and logging capabilities
- Web services that provide centralized key management and cryptography
- LDAP servers containing authentication and authorization information
- Centralized, redundant file storage and backup
Secure Coding Standards
Developers are very good at developing software and implementing business requirements quickly and effectively; however, college, their programming textbooks, or their expert programmer friends probably never taught them how to write secure code. In order to ensure developers write secure and consistent code, organizations need to provide secure coding standards to teach and support secure coding practices.
Secure coding standards should be presented in a manner that can both teach developers and serve as a quick reference during the development process. They should contain code examples in all approved languages and for each framework. They should also provide examples of what NOT to do. Here is a list of items to consider including within a secure coding standard:
- Description of the standard
- Statement of why it's important
- Explanation of when to use the approach or standard
- Vulnerabilities that may result if the standard is not observed
- Code examples in each language and framework
- Code examples of what NOT to do
- Links to external resources that provide additional information
Once these standards are written, they can be matched up with security, compliance, and privacy requirements, which are discussed in the next section. These coding standards allow organizations to hold project teams accountable for writing code that satisfies requirements.
Requirements
A set of common security requirements should be created and shared throughout the organization. These requirements should provide discrete, testable assertions which can be verified throughout the application development process (More on this idea in a later post). An abbreviated example of a security requirement is:
"Applications must use parameterized queries or prepared statements when querying relational databases. Untrusted data must not be concatenated within dynamic SQL query strings."
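A sketch of this requirement in JDBC follows. The table and column names are hypothetical, and the unsafe variant is included only to illustrate what the requirement forbids (the "what NOT to do" example a coding standard should contain):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {
    // VIOLATION: untrusted data concatenated into the SQL text becomes
    // part of the query grammar, enabling SQL injection.
    static String unsafeQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    // COMPLIANT: the placeholder binds the value as data, never as SQL.
    static ResultSet findUser(Connection conn, String username) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}
```

Note how a classic payload simply becomes part of the query text in the unsafe version, while the prepared statement keeps the SQL template fixed regardless of input.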
Another example related to integrating security services guidance is:
"All external facing applications must utilize the organization's standard, centralized authentication server."
The focus of these requirements is to provide a set of rules that developers can be held accountable for. Developers often cannot be security experts, but they can be trained to follow and execute on software project requirements. Assuming the organization documents the appropriate guidance and links it to security requirements, development teams can be held accountable for security requirements in the same way they are held to business requirements.
As business requirements are typically implemented from a prioritized list, it is also important to have a member of the organization's security department help prioritize security requirements with project managers.
In addition to security related requirements, privacy and compliance requirements must also be identified. These requirements should be written to satisfy the policies discussed in the "Compliance Policies" and "Privacy Policies" sections above.
Once a reasonable set of security, privacy, and compliance requirements has been established, a set of requirements profiles should be created for various project types. For example, applications that must be PCI compliant will have many compliance requirements that overlap with security and privacy requirements. The requirements profile "High Risk PCI Application" should contain a pre-prioritized list of requirements that combines and simplifies items from each category.
Vulnerabilities
During the development process, application vulnerabilities are often identified and reported to project teams. Typically, these reports provide a set of recommendations that will eliminate the vulnerability. Depending on the source of these recommendations (a penetration testing tool, code review tool, internal security team, or third-party consulting company), the prescriptive advice may or may not coincide with the organization's approved method for eliminating a vulnerability. While general technical flaws like cross-site scripting are fairly straightforward, business logic, authentication, and authorization issues may require organization-specific approaches.
Organizations should maintain a list of vulnerabilities and should link each vulnerability to the security, compliance, and privacy requirements that address it. This list should provide a short explanation of each issue and should label requirements as "Required," "Recommended," or "Optional." The explanation for each issue does not need to be long; application security sites like OWASP already provide detailed descriptions of many vulnerabilities. Below is an example of how an organization can document vulnerabilities within an internal AppSec Portal:
SQL Injection
SQL injection occurs when untrusted data is interpreted by the database as SQL commands. This issue may allow users to read, modify, or destroy data without authorization.
The following security, privacy, and compliance requirements should be used to address this vulnerability:
Required:
- Security: <link to parameterized queries and prepared statements requirement>
- Compliance: <link to compliance requirement A>
- Security: <link to input validation framework requirement>
- Security & Compliance: <link to auditing and logging requirement>
External Resources
Finally, the organization should provide a set of external resources that project and security teams can use to research application security topics and news.
Friday, June 26, 2009
Internal AppSec Portals: Introduction
The Software Assurance Maturity Model (SAMM) and Building Security In Maturity Model (BSIMM) recommend addressing these needs using an application security portal (see Software Assurance Maturity Model 1.0, EG3 "Create formal application security support portal," and Building Security In Maturity Model, SR1.2 "Create security portal"). This centralized internal website or application should be a one-stop shop for all the organization's secure development needs.
So what kind of characteristics should this portal have? Well, employees should be able to easily create and update information on the website. Access controls need to be applied to specific content to ensure only approved guidance, policies, and procedures are included. The portal should also allow collaboration within development groups as well as between development groups. It would also be nice to be able to version documents to see how and when information changes over time.
After reviewing these characteristics, I realized that a Wiki would provide all these features and could easily be placed within an organization's internal network. Specifically, TikiWiki provides collaboration through user pages, forums, blogs, chat, internal messages, and newsletters. It also allows access controls to be applied to individual categories. For example, a "Guidance" category can be created and pages can be grouped within this category. Read only access can be granted to all users, and write access can be granted to specific individuals responsible for updating the organization's guidance documents. A wiki also automatically versions pages so users can see when information is updated and how it changed. Finally, TikiWiki also provides the concept of structures. Structures group pages in a meaningful way allowing easy navigation and well defined organization of information.
The next several blog entries will cover my current project: providing a template or starting point for an organization's internal application security portal. The images below give you a sneak peek at the information that will be discussed in future posts. Click on the images below to see each table of contents.
Monday, June 15, 2009
*Repost* Web Application Security Portfolios
- Application Security Portfolios: Part 1
- Application Security Portfolios: Part 2
Part 1
Managing an application security program can be a complex responsibility. Applications have a large number of moving parts and potential security risks. Security directors and managers must gather and organize a mountain of information in order to make informed decisions regarding allocating budget money for security and compliance efforts.
This two-part blog suggests types of information a security director might collect about an organization's applications and introduces one of many methods to organize that information. The first article focuses on collecting detailed information for a single application. The second article combines relevant information from each application into a single document to aid in making decisions.
The goal is for these documents to be useful in at least the following situations:
- Maintaining a list of all web applications within the organization.
- Prioritizing application security assessment needs based on business and data importance, compliance requirements, and risk.
- Identifying key personnel responsible for the security of systems or code associated with a particular application.
- Determining network devices, servers, and components to target in an incident response investigation.
- Identifying low importance applications that should be assessed due to the shared use of a database or other high importance component.
- Understanding the flow of sensitive data between applications and other components.
First, one should gather a list of web applications within the organization. This can be done in a variety of ways, including interviewing development managers and web server admins, logging into web servers and inventorying web applications, and performing network scans over the internal and external networks.
Once applications have been identified, basic information should be collected, such as the application's name and purpose, who developed the code, where the application is hosted, and its business importance. This information can be organized in a variety of ways; a simple Excel spreadsheet is shown below for simplicity.
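As a hedged sketch of the basic inventory described above, the columns might look like the following. The column names and entries are invented for illustration, and the CSV output could be opened directly in a spreadsheet:

```python
import csv
import io

# Hypothetical inventory columns and entries, invented to
# illustrate the kind of basic information described above.
columns = ["Application", "Purpose", "Developed By",
           "Hosted At", "Business Importance"]
rows = [
    ["Loans Application", "Consumer loan origination",
     "Internal development team", "Primary data center", "Critical"],
    ["Company Home Page", "Public marketing content",
     "Third-party firm", "External hosting provider", "Low"],
]

# Write the inventory as CSV text, suitable for a spreadsheet.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)
writer.writerows(rows)
inventory_csv = buf.getvalue()
print(inventory_csv)
```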
Detailed technical information should also be gathered. This includes items such as the language and framework the application was developed with and the authorization levels that exist. The information shown below is helpful for scoping application assessments with third parties or can be used to estimate time needed for an internal review.
Once the technical information has been documented, security staff can dig into the type of data handled by the application and its data flow. In the example loan application, a table listing the data or event, data type, and relevant compliance requirements was created.
Through interviews with developers and direct observation, a data flow diagram can be created. The method used to collect and present this information was taken directly from Branden R. Williams' article in the ISSA Journal, March 2008 titled "Data Flows Made Easy." In the loans application, individual data flow diagrams were created for key functionality. Once individual diagrams were complete, the diagrams were combined into one compound diagram.
Next, the network devices, servers, and components that the application depends upon should be documented. These assets are also color coded based on how important the application or data is on the asset (this will be more important in part two of the article). Instructions and an example for the loans application are shown below.
Using the dependency table above, pseudo firewall rules can also be defined.
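As a rough, hypothetical illustration of such pseudo rules (the zones, host names, and ports are invented), the loans application's dependencies might translate into:

```
ALLOW  internet       -> dmz:loans-web   tcp/443   (customer HTTPS traffic)
ALLOW  dmz:loans-web  -> app:loans-app   tcp/8443  (web tier to app tier)
ALLOW  app:loans-app  -> db:loans-db     tcp/1433  (app tier to database)
DENY   any            -> db:loans-db     any       (no other database access)
```

Rules like these are not meant to be loaded into a firewall verbatim; they document the intended network paths so real rules can be written and audited against them.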
A couple of other pieces of information that may be helpful to track are past, present, and future code bases, the location of log files, and security-related history.
Using the information in the following spreadsheet, one should be able to answer the following questions:
- Do we host that application or does a third party host it for us?
- Who developed the application?
- Does this application need to be assessed?
- What additional network devices, systems, or components need to be assessed to assure the security of this application and its data?
- Are there compliance requirements associated with this application?
- What risk does this application present to the organization?
- We've been hacked! Which development manager do I call? Where are the log files? What other systems might also be affected?
- Where is the information I can use during scoping and the technical interview process of an assessment from VeriSign Global Security Consulting?
Part 2
In part 1 of this series, an application security portfolio was created for an example loan application. Detailed information about the application was gathered including the sensitivity of data within the application, the data flow, and the application's dependencies on other network devices, servers, and components.
In part 2, we will try to organize information about all the organization's applications into one high-level document. The aim is for this document to aid us in answering questions like:
- What applications do I have?
- What data do I have?
- How important is the application or its data to my business?
- What risk level does that application or its data carry?
- What systems and network paths do these applications depend on?
- How are these applications and their data interrelated?
- Which applications, systems, and networks should I spend security budget money on for assessments?
- If an incident occurs or an issue is identified, who is the contact person and what other related systems need to be analyzed?
- What compliance regulations apply to my applications?
- When was the last time these applications were found to be compliant with relevant regulations and standards?
If we are evaluating this information to determine which applications need assessments, we may make the observations listed below.
Loans Application
The loans application and its data are critical to the business. We completed an application assessment recently on version 1.0; however, a whole new version (2.0) was pushed to production in the last few days. Since this application is so important and we recently completed an assessment, it may be a good idea to engage the same third party to perform a follow-up assessment. We will provide that third party with a list of changes and new features and ensure those items are assessed in depth. In addition, the third party will briefly review the rest of the application to ensure the changes and new features did not introduce security issues into existing functionality.
If we need a higher level of assurance, need to re-certify our PCI compliance, or drastic changes were made to the application in version 2.0, we may even have a whole new assessment completed.
Company Home Page
An assessment was completed approximately three years ago, and no new changes or features have been introduced since then. While it is important that the company's public-facing website remain accessible externally, the data within the application is not terribly valuable.
Depending on the level of assurance needed, we may want to run an automated web application scanner tool just to verify our assumption that the site is relatively secure. If issues are identified, it may be a good idea to perform an assessment internally. Since the company home page does not require users to login and contains only public information, an automated tool is a good choice because the types of vulnerabilities that are challenging to identify using these tools (authentication, authorization, and business logic rules) should not be present.
Online Banking
The online banking application also has not been assessed in a while. This application and its data are critical to the business. The previous assessment occurred on version 3.0. Bug fixes, security updates, and other minor changes were introduced recently in version 3.1. A third party should be engaged to perform follow up testing to verify issues identified in the previous assessment have been addressed. The third party should also assess the minor changes to the application to ensure no additional issues have been introduced.
Internal Wiki
The company's wiki contains items such as HR policies and processes and procedures for completing day-to-day tasks, as well as protected areas containing private company information or intellectual property. The data associated with this application is critical. This application has never been assessed. While it is not a client-facing application, employees, contractors, and other users all access this critical information. This situation may warrant an assessment by a third party.
Employment Application
The employment application is developed and hosted by a third party. Ideally, before this application/service was purchased, a third party assessment should have been performed, and the company should verify that the third party has a secure development process in place. Additionally, the contract between the third party and the company should include details about how assessments are handled, how the third party will respond to the identification of security issues, and other related topics.
As is often the case, a business unit negotiated a contract and purchased service from the third party prior to an assessment being performed. While the employment application does not generate revenue for the company and will not hinder day to day operations if the application goes down, the data within the application includes PII. The compromise of this application and its data will affect the company's reputation and will require the company to spend resources on incident response.
It would be a good idea for this application to undergo a third-party review.
Compound Dependency Table
In addition to gathering the high-level data above, a dependency table can be created to show how all the applications, data, network devices, servers, and components are interrelated. This table follows the same rules as introduced in Part 1 of this series, and can be used to determine how data flows between systems and networks. Additionally, this information may help to identify key systems that need to be assessed.
For example, if a low importance application accesses data within a database that is also accessed by a high importance application, it may be important to assess the low importance application in terms of introducing or manipulating data to the detriment of the high importance application.
This spreadsheet can be accessed via Google docs here:
http://dl.getdropbox.com/u/1132296/Web%20Application%20Security%20Portfolios/CCANCSA%20-%20Portfolios%20Summary.xls
Sunday, June 7, 2009
SAMM Interview Template Version 1.0
The first release of the SAMM Interview Template is available below.
View the SAMM Interview Template here: http://spreadsheets.google.com/pub?key=rYpVqQR3026Zu4DNg8LBIwg&output=html
Download the SAMM Interview Template XLS here (Some formatting is lost): http://spreadsheets.google.com/pub?key=rYpVqQR3026Zu4DNg8LBIwg&output=xls
If you have questions or comments about this template or you wish to help assess OWASP using SAMM, please send a message out on the OWASP SAMM Mailing List.
Friday, May 29, 2009
Preparing For a Third Party Application Assessment
For this discussion, we will assume an application assessment has already been scoped and scheduled. Before the consulting company begins any testing, the development group should use a checklist to ensure the following items have been covered:
- Appoint a technical contact to handle any questions about code, functionality, or security controls.
- Appoint a contact to handle account lockouts or other technical difficulties with the environment or application.
- Send contact information to the consulting company or consultants.
- Identify and configure a test environment that closely mirrors production.
- Create appropriate credentials for a range of organizations and privilege levels.
- Populate the environment with adequate data to allow for testing of all functionality and features.
- Provide a demonstration of the application and answer technical questions.
The test environment should mirror production as closely as possible, including the configuration of the operating systems, application servers, back-end components, and the application itself. However, the environment should not persist any transactions or changes in the real world. For example, stock trades, money transfers, etc., should appear to complete, but the transactions should not be persisted to any banks.
Create Appropriate Credentials
Each consultant assigned to assess the application needs a range of accounts that allow for testing of horizontal and vertical access controls. This means if the application separates data by organization, company, institution, or some other group, the consultants will need accounts in two or three of these organizational units.
Additionally, within each of these organizational units, consultants require accounts that span several roles, permissions, or entitlements. If there is a small set of roles within the application, it may be possible to create test accounts for each role. Otherwise, it may be sufficient to create a sample of accounts: one with no entitlements, one with all entitlements, and a handful of others with varying permission levels.
Populate the Environment with Adequate Data
In most applications, consultants cannot test functionality without having data associated with their user account. Before consultants begin testing, the application should be populated with test data that allows users to interact with all functionality.
Tuesday, May 19, 2009
Microsoft SDL Process Template
- Ensuring developers complete security activities before checking in code
- Providing a workflow for developers to follow
- Providing SDL process steps, instructions, descriptions, and resources to developers
Check out the video on the following page for more information:
http://msdn.microsoft.com/en-us/security/dd670265.aspx
Secure Development Jump Start
- Software Assurance Maturity Model
- Comprehensive, Lightweight Application Security Process
- Software Security Touchpoints
- Microsoft Secure Development Lifecycle
- Building Security In Maturity Model
- Create your own custom process
In companies with a small number of long-tenured developers, it may make sense to dedicate significant time and money to making them both developers and security experts. For organizations with a large number of developers or a high developer turnover rate, it may be more cost-efficient to simply provide security awareness training and a set of policies and coding standards to follow.
In any of these situations, several steps you can take to jump-start a secure development process for your organization are listed below. It is assumed that your organization values and desires to develop secure code.
- Create a policy document addressing application security.
- Create a secure coding standard stating the organization's established, secure method for carrying out specific functions.
- Provide security awareness training.
- Provide training that specifically aims to introduce developers to the application security policies and secure coding standards for the organization.
Application Security Policies
An application security policy document should provide statements or policies that are as specific as possible. A statement such as "All applications should use sufficiently strong cryptographic algorithms" does not give a developer enough information to select a secure algorithm. Instead, a statement such as "ACME Bank Corp standardizes on AES-256 as its symmetric encryption algorithm and SHA-256 as its cryptographic hash algorithm" should be used.
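As a hedged sketch of how a specific algorithm policy translates into code, the following uses Python's standard hashlib module for illustration; the helper name and sample input are invented:

```python
import hashlib

# Hypothetical helper enforcing an organization-wide standard of
# SHA-256 for hashing; the name and usage are invented to show how
# a specific policy statement becomes a single, reviewable choke
# point in code rather than an ad hoc per-developer decision.
def standard_digest(data: bytes) -> str:
    """Return the hex digest using the approved SHA-256 algorithm."""
    return hashlib.sha256(data).hexdigest()

digest = standard_digest(b"account record v1")
print(digest)  # 64 hexadecimal characters
```

Centralizing the choice in one helper also makes a future algorithm migration a one-line policy change instead of a codebase-wide hunt.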
Other examples include:
"ACME Bank Corp requires all database queries to use parameterized queries or prepared statements. Dynamic or concatenated SQL is prohibited. The ACME Bank Corp secure coding standard provides examples of parameterized queries or prepared statements."
"Untrusted data should be properly output encoded before being included within a web browser page. The appropriate encoding method should be selected based on the context in which the data is being included. The secure coding standard provides example contexts and methods."
The authors of the application security policy document can get policy ideas from resources such as:
OWASP Top 10
2009 CWE/SANS Top 25 Most Dangerous Programming Errors
OWASP Guide Project
ASP.NET 2.0 Check List
ADO.NET 2.0 Check List
.NET 2.0 Check List
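As a hedged illustration of the output-encoding policy quoted earlier, covering only the HTML element context, the following sketch uses Python's standard library; the sample payload is invented:

```python
import html

# Untrusted data destined for an HTML page body; the payload is a
# contrived example of attacker-shaped input.
untrusted = '<script>alert("xss")</script>'

# Encode for the HTML element context before rendering. Other
# contexts (attributes, JavaScript, URLs) require different
# encoders, which is why the policy ties the method to the context.
encoded = html.escape(untrusted, quote=True)
print(encoded)  # -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```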
Secure Coding Standard
Developers should be able to use the secure coding standard document as a reference guide for writing secure code. The standard should provide the developer with enough information to know when and how to apply a particular code example. An entry such as the following is a good starting point:
Parameterized Queries and Prepared Statements
Addressed Application Security Policy: Parameterized Queries or Stored Procedures, Section 2.1.3
Prevents: SQL Injection
References: OWASP Top 10, CWE/SANS Top 25, Security Guidelines: ADO.NET 2.0, OWASP Guide
When to Apply: Anytime an application queries an SQL database
Code Examples:
.NET Parameterized Query, SELECT Statement (example taken from http://msdn.microsoft.com/en-us/library/ms998264.aspx#pagguidelines0002_sqlinjection)
using System.Data;
using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    DataSet userDataset = new DataSet();
    SqlDataAdapter myDataAdapter = new SqlDataAdapter(
        "SELECT au_lname, au_fname FROM Authors WHERE au_id = @au_id",
        connection);
    myDataAdapter.SelectCommand.Parameters.Add("@au_id", SqlDbType.VarChar, 11);
    myDataAdapter.SelectCommand.Parameters["@au_id"].Value = SSN.Text;
    myDataAdapter.Fill(userDataset);
}
.NET Parameterized Query, UPDATE Statement
...
.NET Parameterized Query, INSERT Statement
...
Java Prepared Statement, SELECT
String sql = "SELECT * FROM movies WHERE year_made = ?";
prest = con.prepareStatement(sql);
prest.setInt(1, 2002);
ResultSet rs1 = prest.executeQuery();

Java Prepared Statement, UPDATE
...
Security Awareness Training
Security awareness classes are typically used to introduce developers and managers to the types of vulnerabilities found in applications as well as the impact of those issues. When a developer sees for the first time that an SQL injection attack on SQL Server can be used to read arbitrary files and execute operating system commands, a light bulb seems to come on, and they realize they really do need to pay attention and prevent these vulnerabilities.
While these classes often do not arm developers with the proper tools and knowledge for preventing vulnerabilities, a well written application security policy and secure coding standards document should be a great start.
Application Security Policies and Secure Coding Standard Training
Following a security awareness class, it is beneficial to provide a more targeted training opportunity for developers. This course should focus on walking through the organization's application security policies and coding standards to ensure all developers are aware of these resources and understand how to use and apply them. Following the course, developers can be held accountable for applying the examples in the secure coding standards to their projects.
Process Improvement
It is likely that an application security policy and secure coding standard document will not include all the possible vulnerabilities that could be introduced into a web application. As new issues are identified as part of an assessment, peer review process, or threat model (these steps are usually included within a complete secure development process), additions should be made to both documents. These additions should reflect the organization's recommended approach for developing code without introducing the newly identified flaw. The organization should also periodically review application security concepts and new additions to the policies and standards document with its developers.