Thursday, October 13, 2011

Security Testing Roles - Expanding on Integrating Security Testing into the QA Process

If you haven't read my previous post, Integrating Security into the QA Process, and/or listened to the podcast from Rafal Los, do that first.  The content below breaks down my thoughts on the type of security testing that can be integrated into each role (developer, QA, security). It covers security testing only, and does not include other activities like threat modeling, architecture review, design review, following secure coding standards during development, etc.

Development Team:
  • Use an automated static code analysis tool  (for example: Ounce, Fortify, or Veracode)
  • Perform peer code reviews
  • Construct and run unit/functional/automated tests for previously identified security issues using libraries like Watir/WatiN/Watij/Capybara/etc
QA Team:
  • Test security controls that can be broken down into positive, concrete tests with a clear start and end (may require step-by-step directions and training) -- possibly focusing mainly on controls that ALL applications should have in place
  • Construct (or have developers construct) and run unit/functional/automated tests for previously identified security issues using libraries like Watir/WatiN/Watij/Capybara/etc (listed for both QA and Dev.)
  • Perform testing for business logic or domain specific vulnerabilities (a list of these business logic rules should be specified in the application's requirements)
Security Team (possibly a team within QA!?!):
  • Manually perform negative testing, using penetration testing experience to identify issues
  • Perform automated static code analysis (for example: Ounce, Fortify, or Veracode)
  • Perform manual code review
  • Perform automated security scanning (for example: AppScan, Web Inspect, or Burp Suite)
I also want to emphasize that identifying a vulnerability through testing should ALWAYS result in the organization creating a new application security requirement or correlating the finding to an existing one (like a functional or business requirement), along with an associated secure coding standard (a specific code/configuration example and discussion showing how to satisfy the requirement in a particular development language, framework, and/or library).  The goal is to proactively avoid vulnerabilities in the future, decreasing costs and improving security for all projects within the organization rather than for one specific application.  Feeding solutions back into the requirements specification phase of software development helps accomplish this.
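To make the QA role above concrete, here is a minimal sketch of the kind of positive, concrete security test QA could run against every application. The header names and expected values are illustrative assumptions on my part, not a list taken from any particular organization's requirements:

```ruby
# Illustrative example: a positive, concrete security test that checks
# response headers for baseline controls ALL applications should set.
# The required headers below are assumptions for the sketch, not a
# complete or authoritative baseline.
REQUIRED_HEADERS = {
  "X-Frame-Options"        => /\A(DENY|SAMEORIGIN)\z/i,
  "X-Content-Type-Options" => /\Anosniff\z/i,
}

# Returns the required headers that are missing or malformed in the
# given response headers; an empty result means the test passes.
def missing_security_headers(response_headers)
  REQUIRED_HEADERS.reject do |name, pattern|
    value = response_headers[name]
    value && value =~ pattern
  end.keys
end
```

In practice the response headers would come from the application under test (via Net::HTTP, Watir, or a proxy), and the check would be wrapped in a unit test with step-by-step directions for the QA tester.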

Tuesday, October 11, 2011

Integrating Security into the QA Process

I just listened to the latest podcast from Rafal Los's site "Down the Rabbit Hole":

On the show, they discuss integrating security testing into the QA process, which isn't terribly new, but what really excited me was the discussion about *how* to integrate it.  Security testing is difficult for QA teams because it requires specific skill sets and knowledge.  QA teams typically don't know how to run automated security testing tools, have trouble interpreting the assessment results, and usually focus on testing defined requirements rather than having an open-ended mandate to look for security defects.

On the podcast, they recommend breaking security testing up and distributing it across the organization as much as possible.  Security testing should be a normal part of developing and testing software.  In regards to QA testing, define specific security tests, in a similar manner to the way functional or business requirements are tested, and provide those to the QA team.  Step by step directions, training on a particular tool (like the web developer toolbar plugin for Firefox, not necessarily a static code analysis tool), or custom scripts may be required and included in these test definitions (use templates if possible).  Link all of these security defect QA tests to functional or business requirements (this may require the organization to define new ones).  Then, add these requirements back into the software development process, so they are included during the planning, design, or coding phases.  Finally, provide secure coding standards that show step by step directions or specific code examples for accomplishing the functional or business requirements in the team's language, framework, library, or other software component.
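As an example of the kind of specific, scripted security test that could be handed to a QA team, here is a sketch of a check that password inputs disable browser autocomplete. The HTML scan is a deliberately simple regex so the sketch needs no extra libraries; a real test definition would likely drive the page through Watir or Capybara instead:

```ruby
# Sketch of a scripted QA security test: find password inputs that do
# not set autocomplete="off". The regex-based scan is a simplification
# for illustration; real pages would be fetched and parsed via a
# browser-automation library.
def password_fields_missing_autocomplete_off(html)
  html.scan(/<input\b[^>]*type=["']?password["']?[^>]*>/i).reject do |tag|
    tag =~ /autocomplete=["']?off["']?/i
  end
end
```

A test definition like this has a clear start and end, links back to a functional requirement ("sensitive fields must disable autocomplete"), and produces a concrete pass/fail result a QA tester can record.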

Tuesday, October 4, 2011

Security B-Sides Kansas City Presentation

Security B-Sides Kansas City is happening Wednesday, October 26, 2011.  I will be presenting there at 10AM.  The topic is using Watir-WebDriver (a browser automation framework/driver) and Ruby to perform web application security unit testing.  While doing the research, I also got a chance to use another framework, Capybara, so I included examples from it as well.  The presentation will mostly consist of demos.  I will discuss what Watir-WebDriver is, alternate frameworks and languages to use, and how to apply them to web application security unit testing.  Then, I will walk through specific examples/demos showing how to use the frameworks to exploit/unit test vulnerabilities in the OWASP Broken Web Applications Project/VMware image.  The unit testing framework I chose is RSpec. I have demos ready for the following issues:
  • SQL Injection
    • error message based
    • matching contents of a union select over the application usernames/passwords
  • Cross-site Scripting
    • Reflected - URL Based
    • Reflected - Using a custom POST request
    • Stored
  • Autocomplete
  • Session Fixation
  • Open Redirect
  • Enumerating authorization/access controls
  • Information Disclosure through HTTP headers
After the presentation, I can provide anyone that asks with completed unit tests for all of the vulnerabilities listed above for both Ruby/Watir-WebDriver and Ruby/Capybara, using RSpec as the unit testing framework.  To run these demos at home, simply download and run the OWASP Broken Web Application VMware image using VMware Player.
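To give a flavor of the demos, here is a sketch of the error-message-based SQL injection check with the detection logic factored out of the browser interaction so it can run standalone. The error patterns are common database error strings I chose for illustration, not an exhaustive list, and the Watir-WebDriver lines in the comment are an assumed usage, not the exact demo code:

```ruby
# Sketch of the error-message-based SQL injection check, with the
# detection logic split out so it runs without a browser. The patterns
# are illustrative database error strings, not a complete list.
SQL_ERROR_PATTERNS = [
  /You have an error in your SQL syntax/i,  # MySQL
  /unclosed quotation mark/i,               # SQL Server
  /ORA-\d{5}/,                              # Oracle
]

def sql_error_disclosed?(page_html)
  SQL_ERROR_PATTERNS.any? { |pattern| page_html =~ pattern }
end

# With Watir-WebDriver, the page HTML would come from the browser, e.g.:
#   browser.text_field(:name => "username").set("'")
#   browser.button(:name => "login").click
#   sql_error_disclosed?(browser.html)  # wrapped in an RSpec expectation
```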

Monday, October 3, 2011

Potential BlackBerry Playbook Application Permissions Vulnerability - Need Help Confirming

I'm posting some details about a potential BlackBerry Playbook application permissions vulnerability with the idea that others out there may help confirm and test the issue's existence.  I can only take the exploit so far, and I need help from those with a BlackBerry AppWorld Vendor Account and a Playbook device (I have neither of these at the moment).

A completed exploit/application is available here:

Vulnerability Discussion

What I *think* I have found is a vulnerability that will allow an attacker to develop a BB Playbook application (think malware) with specific device permissions without those permissions being reported by App World.  What I have discovered is that when you compile a WebWorks SDK application, the SDK takes a config file containing permissions as input and compiles those permissions into a flash file. The flash file manages, fulfills, and grants access to device APIs like the camera, accelerometers, or sensitive user/device information.  I suspect that AppWorld determines the permissions for the application via that config file OR possibly via a method call like getPermissions() in the flash file, but I have no way to confirm this.  I can alter the config file so it shows zero permissions, yet still successfully deploy an application with full permissions on the BB Playbook simulator.  Now I need to confirm that AppWorld reports that no permissions are granted to the application despite the fact that these privileged APIs are being called successfully.  If it does, that will tell me AppWorld relies on the config file to notify users of application permissions (which means I can get arbitrary permissions without the user ever knowing).

If that first attempt is unsuccessful, I have already successfully disassembled, altered, and reassembled parts of the flash file and reincluded it within the application, so it's just a matter of discovering how to obfuscate those permissions from AppWorld.  Most, if not all, of the permissions enforcement occurs in the compiled flash file rather than in the underlying operating system itself.

The specific help I need is in publishing the application to AppWorld, observing the permissions reported for the application via AppWorld and via the Permissions module in the Playbook control panel, and then running the application on a Playbook itself (rather than the simulator, which works just fine but doesn't report permissions at all).  I don't own a BB Playbook and don't currently have a Vendor account, so it is difficult to confirm whether this issue really exists.

If you have a BB AppWorld Vendor Account and a BB Playbook that you can confirm my results on, please read the details below and publish your results.


First, install all the prerequisites needed to compile and deploy a BB WebWorks SDK application.  For a walkthrough, take a look at this guidance:

Next, we will create a very simple WebWorks App that uses a few privileged API calls (specifically the API:  We will use the example provided in the API documentation to create an index.html page:

Then, add the JQuery libraries referenced by the index.html page to the /js folder.

Now, we will create two configuration files: one containing the proper permissions and one without any permissions.  config.xml should read:


Compile the application into a .bar file as you would any other WebWorks application. First create a zip archive at the root level of the application, then run the following command:

"c:\Program Files (x86)\Research In Motion\BlackBerry WebWorks SDK for TabletOS\bbwp\bbwp.exe" -d

In the /bin directory, a .bar file will be created. Unzip this file (it's just a zip archive containing the compiled flash file plus HTML, JavaScript, and XML files). Open the /bin/PermissionsTest/META-INF/MANIFEST.MF file. There are two entries we will modify (your SHA-512-Digest values will differ):

Archive-Asset-Name: air/config.xml
Archive-Asset-SHA-512-Digest: 9x6Jp-WRWEV14A0DGdO7rIIXxEB7V-Ya6Ke1pRnM9oeckJB6GzS9EqzoDosXyaUEEaLDebxE6o36UalIqtv2gQ

Archive-Asset-Name: air/config2.xml
Archive-Asset-SHA-512-Digest: Y5hA0NxFXOCHpfy5utM-9oMWG5elciLxKNWl0AcU4azyXWDBOrq6v4tw9cU0coG3jXzWqg4Od3OtZsEcqxNLwA

Delete the two lines for the config.xml entry, then rename the config2.xml entry to config.xml.

Next, go to the /bin/PermissionsTest/air/ directory, delete "config.xml", and rename "config2.xml" to "config.xml".

Go back to the /bin/PermissionsTest directory and make a new zip archive.  Change the file extension from ".zip" to ".bar".
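The MANIFEST.MF edit described above can also be scripted. This is a minimal sketch that only rewrites the manifest text, assuming entries follow the two-line Name/Digest format shown earlier; unzipping, the air/ file rename, and re-zipping are still done per the steps above:

```ruby
# Sketch: rewrite MANIFEST.MF text by dropping the air/config.xml entry
# (its Name line and the Digest line that follows) and renaming the
# air/config2.xml entry to air/config.xml, keeping config2's digest.
# Assumes the two-line Archive-Asset-Name / Archive-Asset-SHA-512-Digest
# entry format shown above.
def swap_config_entries(manifest)
  output = []
  skip_digest = false
  manifest.lines.map(&:chomp).each do |line|
    if line == "Archive-Asset-Name: air/config.xml"
      skip_digest = true            # drop this entry's Name line
    elsif skip_digest && line.start_with?("Archive-Asset-SHA-512-Digest:")
      skip_digest = false           # drop this entry's Digest line
    elsif line == "Archive-Asset-Name: air/config2.xml"
      output << "Archive-Asset-Name: air/config.xml"
    else
      output << line
    end
  end
  output.join("\n") + "\n"
end
```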

Deploy the application to the simulator (or AppWorld):

"c:\Program Files (x86)\Research In Motion\BlackBerry WebWorks SDK for TabletOS\bbwp\blackberry-tablet-sdk\bin\blackberry-deploy.bat" -installApp -device <device ip here> -package bin\ -password <simulator device password>

When you open the application and click "Populate - APP", it should retrieve the author and title of the application (which require privileged API access).

So why does this work?

Permissions seem to be enforced by the Adobe AIR application executable rather than by the protected operating system itself.  If you decompile the .swf file generated by the compilation process, you will see the following interesting code fragments:

public static const values:Object={"configXML":"config.xml", "version":"1.0.0", "content":"index.html", "author":"Nick Coblentz", "description":"This application tests whether privileged APIs can be called without AppWorld reporting those permissions to the user.", "name":"Permissions Test", "foregroundSource":"index.html", "hasMultiAccess":true, "onFirstLaunch":false, "onRemotePageLoad":false, "onLocalPageLoad":false, "debugEnabled":true, "accessList":new Array(new webworks.access.Access(webworks.config.ConfigConstants.WIDGET_LOCAL_DOMAIN, true, new Array(new webworks.access.Feature("", true, "", null)))), "widgetExtensions...lots more...

and the isFeatureAllowed() method in webworks.config.ConfigData, which is used by webworks.FunctionBroker to determine whether to service JavaScript JSON requests for privileged API access.

Request For Help

If anyone chooses to lend a hand, I would welcome help further understanding the inner workings of the generated flash file, validation that the vulnerability exists, and validation of the permissions reported by AppWorld and the Playbook device itself.  Feel free to add comments on the blog or email me directly.