Let's build a better relationship between security assessors and software developers. Instead of having security teams act like an external, neutral audit group that simply finds problems and reports them, let's make security assessors problem solvers, advocates, and advisors!
Typically, assessors identify security defects and then report the issues to the application development team. Defects may be accompanied by a best-practice approach or description for remediating each vulnerability, but that advice often isn't customized for the framework, language, or libraries actually used in the software package. Assessments typically occur after specific milestones, like a release, or after an elapsed time period. I want to shake up these patterns!
First, let's assign each assessor within the security group a development team and a set of applications, and build a relationship between them. The assessor will partner with the software developers and really get to know the applications over time through repeated interaction and review. Next, let's give assessors read-only access to the source code repositories for each application they are assessing. Now, instead of providing security services (assessments, code reviews, architecture reviews, design reviews, etc.) once an application reaches a specific milestone, let's make the assessor responsible for guiding the team on a continuous basis. The assessor attends important meetings, gets to know the project goals, continuously identifies and executes on security needs, provides training and advice, and gets out in front of potential privacy, compliance, and security concerns while the application is still being designed and architected.
The organization as a whole should identify in advance the security tools and activities required for all applications (perhaps tiered by each application's risk profile and valuation), and the security assessor is responsible for setting up, configuring, and running these tools and activities (often with the cooperation of the development team). Let's assume the organization uses a static code analysis tool to identify security defects in a software package. The tool is installed on a continuous integration server (which automatically monitors code repositories, then checks out, builds, and assesses the code for quality and security), and the security assessor is notified as new defects are found. The security assessor is then responsible for reviewing and validating the findings (alternatively, a filter could route issue types the developers have already mastered straight to them, so the assessor receives only new issues). Once a finding is validated, the assessor develops an example code patch that would remediate the vulnerability. He or she then brings that solution to the software team and provides a mini training session with the whole team covering: information about the vulnerability, the specific best practice used to remediate it, and the code proposed to fix the issue. The team (security and development) discusses the cause, effect, and fix, and then agrees as a whole on an appropriate secure coding standard for that vulnerability class (based on the code example above). Finally, the development team applies that standard to all instances of the issue in the application and uses it when developing similar code in the future.
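For example, for a hypothetical reflected cross-site scripting finding in a Ruby application, the assessor's example patch might look like the following sketch (the method names and markup are illustrative, not from a real finding):

```ruby
require 'erb'

# Hypothetical example patch for a reflected cross-site scripting finding.
# Before: user-supplied input is interpolated directly into page markup.
def render_greeting_unsafe(name)
  "<p>Hello, #{name}</p>"
end

# After: untrusted input is HTML-escaped before interpolation. This becomes
# the candidate secure coding standard for this vulnerability class.
def render_greeting(name)
  "<p>Hello, #{ERB::Util.html_escape(name)}</p>"
end

puts render_greeting('<script>alert(1)</script>')
# => <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

A small before/after pair like this gives the mini training session something concrete to discuss and makes the agreed standard easy to apply elsewhere.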
This approach allows teams to identify and fix security defects quickly, lets developers focus on developing code rather than understanding security tools, and creates a relationship in which the security team brings solutions to the table rather than problems.
Taking it further: If the organization has a formalized secure software development process and a central repository for application security requirements, then the knowledge should be captured within this repository in the form of application security requirements and secure coding standards. These added requirements and secure coding standards should be evangelized to other software development teams to help them avoid similar vulnerabilities.
Related:
Turn Application Assessment Reports into Training Classes
Security Testing Roles - Expanding on Integrating Security Testing into the QA Process
Monday, January 16, 2012
Wednesday, November 30, 2011
Web Application Vulnerability Unit Testing Cheat Sheet (Capybara and Watir-WebDriver)
For an example template for Watir-WebDriver/RSpec or Capybara Test Case, start here:
For common questions with Watir-WebDriver or Capybara, look here next:
- http://watirwebdriver.com/web-elements/ (and the "advanced interactions" items)
- https://github.com/jnicklas/capybara
Detect and Close JavaScript Alert Boxes:
Watir-WebDriver
begin
  browser.driver.switch_to.alert.accept # raises an exception if no alert is present
  puts 'alert box found'
rescue
  puts 'alert box not found'
end
Capybara:
page.driver.browser.switch_to.alert.accept
Perform Arbitrary HTTP POST Requests:
Watir-WebDriver
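WebDriver drives the browser rather than the HTTP layer, so a common workaround is to inject and submit a form from JavaScript. A minimal sketch, where `post_form_script` is a hypothetical helper and the action URL and field names come from the caller:

```ruby
# Builds JavaScript that creates and submits a hidden POST form, since
# WebDriver drives the browser rather than the HTTP layer directly.
# The helper name, action URL, and fields below are hypothetical.
def post_form_script(action, fields)
  inputs = fields.map { |name, value|
    "var i = document.createElement('input');" \
    "i.type = 'hidden'; i.name = '#{name}'; i.value = '#{value}';" \
    "f.appendChild(i);"
  }.join
  "var f = document.createElement('form');" \
  "f.method = 'POST'; f.action = '#{action}';" \
  "#{inputs}document.body.appendChild(f); f.submit();"
end

# With an open Watir::Browser session and a hypothetical target:
# browser.execute_script(post_form_script('/transfer', 'amount' => '100'))
```

Submitting the form this way navigates the browser just as a real POST from the page would, so cookies and JavaScript behavior are preserved.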
Send Keyboard Commands:
Watir-WebDriver
browser.text_field(:name,/login/i).send_keys(:arrow_down)
Capybara:
page.find_field('login').native.send_keys(:arrow_down)
Search For Text/HTML Within a Nested Frame:
Watir-WebDriver
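A minimal sketch, assuming an open Watir::Browser session named `browser`; the frame index, frame name, and search strings are hypothetical:

```ruby
# Locate the outer frame by index, then the nested frame by name,
# and search the inner document's visible text or raw HTML.
inner = browser.frame(:index, 0).frame(:name, 'content')
puts 'text found' if inner.text.include?('Welcome')
puts 'html found' if inner.html.include?('<table')
```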
Returning Content From Javascript:
Watir-WebDriver
browser.execute_script('return "asdf"') #=> returns "asdf" to Ruby
Include JQuery: Watir-WebDriver
browser.execute_script(%q|
var el = document.createElement("script");
el.setAttribute("src","http://code.jquery.com/jquery-1.6.4.min.js");
document.body.appendChild(el);|)
Return HTTP Headers:
Watir-WebDriver
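WebDriver does not expose HTTP response headers, so a common workaround is to re-request the current page with a synchronous XMLHttpRequest from inside the browser and return its raw header block as one string. Note this issues a second request, so the headers may differ from the original response. A sketch:

```ruby
# JavaScript that re-fetches the current page synchronously and returns
# all response headers as a single string.
HEADER_SCRIPT = %q|
  var xhr = new XMLHttpRequest();
  xhr.open("GET", document.location.href, false);
  xhr.send(null);
  return xhr.getAllResponseHeaders();|

# With an open Watir::Browser session:
# puts browser.execute_script(HEADER_SCRIPT)
```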
Tuesday, November 22, 2011
Using Watir-WebDriver or Capybara For Web Application Vulnerability Unit Testing
Back in October, I gave a Security B-Sides presentation filled with demos showing how to construct and execute unit tests for web application security vulnerabilities. The goals were to:
- Allow QA teams or developers to execute unit tests demonstrating that a web application vulnerability remains fixed 1 day, 1 week, 1 month, or even 1 year after it was remediated (for example, security unit tests run as part of a continuous integration process).
- Provide a mechanism for security teams to demonstrate a vulnerability instance to web application stakeholders: one the stakeholders can run themselves, as many times as needed, with little or no knowledge of security testing techniques.
- Allow security testers to write testing tools or scripts that interact directly with the browser, eliminating many false positives that occur because a tool cannot execute JavaScript or other browser-dependent components.
To install everything on Windows, here's what I did:
1. Install Ruby (1.9.x) (http://rubyinstaller.org/downloads/)
2. Install Watir (http://watir.com/installation/#win)
Get an admin command prompt
gem update --system
gem install watir
gem install watir-webdriver
3. Install RSpec and escape_utils
gem install rspec
gem install escape_utils
4. Install Capybara
gem install capybara
To run the test cases, use the following commands:
rspec -f d "OWASP Broken WebApps RSpec.rb"
rspec -f d "OWASP Broken WebApps Capybara.rb"
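A minimal sketch of the kind of spec those commands execute; the target URL, parameter, and payload below are hypothetical placeholders for a real finding (escape_utils, installed above, URL-encodes the payload):

```ruby
require 'rspec'
require 'watir-webdriver'
require 'escape_utils'

describe 'search page reflected XSS' do
  before(:all) { @browser = Watir::Browser.new }
  after(:all)  { @browser.close }

  it 'no longer echoes the unescaped script tag' do
    payload = '<script>alert(1)</script>'
    @browser.goto "http://localhost:8080/search?q=#{EscapeUtils.escape_url(payload)}"
    @browser.html.should_not include(payload)
  end
end
```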
Presentation Materials:
- Using Watir & Ruby for Web Application Vulnerability Unit Testing
- OWASP Broken WebApps Capybara.rb
- OWASP Broken WebApps RSpec.rb