Tuesday, October 2, 2012
In September, I presented "QA Teams Can Improve Software Security!" at the Kansas City Quality Assurance Association chapter meeting. Here is the PowerPoint presentation from that meeting:
Monday, August 6, 2012
QA Teams Can Improve Software Security! - Sept. 13, 2012 Presentation at the Kansas City Quality Assurance Association Meeting
Next month, I will be giving a presentation at the Kansas City Quality Assurance Association Meeting. The talk is titled "QA Teams Can Improve Software Security!" The presentation will be on September 13, 2012 at 11:15am at Manny's. Take a look at the KCQAA Website for more details on the location and time.
During the presentation, I will cover the role QA teams can play in improving software security; how their continuing education, structure, and composition can be improved to make them more effective at finding vulnerabilities; and example test case techniques for finding security weaknesses.
In my opinion, testing for security weaknesses (whether it's positive or negative testing) is very similar to what QA already does. Security vulnerabilities are just another type of quality defect, and QA teams are well suited to this role. Come check out my presentation and join in the conversation!
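To make the overlap concrete, here is a minimal sketch of a security-flavored negative test written the way a QA team might write any other negative test, using C# and NUnit. The URL, record ID, and token are hypothetical placeholders; the idea is simply that "user A must not be able to read user B's record" is an ordinary, repeatable test case.

```csharp
using System.Net;
using NUnit.Framework;

[TestFixture]
public class AuthorizationNegativeTests
{
    // Hypothetical staging URL; substitute your own environment.
    private const string BaseUrl = "https://staging.example.com";

    [Test]
    public void UserCannotReadAnotherUsersRecord()
    {
        // Invoice 12345 belongs to a different user than the token holder.
        var request = (HttpWebRequest)WebRequest.Create(BaseUrl + "/api/invoices/12345");
        request.Headers["Authorization"] = "Bearer USER_A_TOKEN"; // hypothetical token

        // A correct authorization check rejects the request entirely.
        var ex = Assert.Throws<WebException>(() => request.GetResponse());
        var status = ((HttpWebResponse)ex.Response).StatusCode;
        Assert.That(status, Is.EqualTo(HttpStatusCode.Forbidden)
            .Or.EqualTo(HttpStatusCode.NotFound));
    }
}
```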
Note: As always, testing is just a very small part of software security (which I talk about in the presentation). Good software security programs include portfolio management; training; security requirements; secure architecture, configuration, and coding patterns (think design patterns); validation (positive and negative testing); metrics; and continuous improvement.
Wednesday, April 4, 2012
Higher Expectations for Security Assessments
We've all heard the saying that you can't test security into software. If we must do these tests anyway, why not demand more value from them? If someone performed a security assessment on my software, I would expect them to contribute knowledge in a meaningful way that helps me build better software in the future. A report that simply lists best practices doesn't meet that expectation.
Existing Knowledge Base and Patterns
Before we get much further, let's talk about the knowledge and resources I would start with. I know this may not be common in many organizations, but I would expect to have a specific set of secure architecture, configuration, and coding patterns that satisfy the security requirements of my application, organization, and data protection needs, that account for the threat actors targeting my application, and that are tailored to the language and frameworks I'm using. These patterns may not be complete or robust, but they are at least a starting point to build upon.
For example, if I'm using ASP.NET MVC 3, I would have concrete examples of how to swap the default HTML encoder for the Microsoft Web Protection Library (AntiXSS Library) in the Web.config file, plus example code that applies HTML or JavaScript encoding within my views so that cross-site scripting attacks are rendered inert (input validation is also relevant, but I skipped it in this example). I would also have examples for configuring the application to require SSL/TLS, to set the secure and HTTPOnly flags on cookies, and to apply other similar security-related configuration settings; a configuration sketch follows the list below. Many more possibilities should be included within this library of patterns. Each would also include one or more validation patterns that ensure a security control is present and working properly, such as:
- Unit tests
- Automated tests using Capybara, Watir, or other similar frameworks
- QA test cases
- Automated static analysis rules
- Dynamic testing test cases
- Many more...
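To ground the ASP.NET MVC 3 example above, here is a minimal Web.config sketch (for .NET 4.0) showing the encoder swap and the cookie/SSL settings mentioned earlier. It assumes the AntiXSS Library assembly is referenced by the application; verify the exact attribute values against your own environment and framework version.

```xml
<configuration>
  <system.web>
    <!-- Replace the default HTML encoder with the AntiXSS encoder
         from the Microsoft Web Protection Library. -->
    <httpRuntime encoderType="Microsoft.Security.Application.AntiXssEncoder, AntiXssLibrary" />

    <!-- Require SSL/TLS for cookies and keep them out of reach of script. -->
    <httpCookies requireSSL="true" httpOnlyCookies="true" />

    <!-- Require SSL/TLS for the forms authentication cookie as well. -->
    <authentication mode="Forms">
      <forms requireSSL="true" />
    </authentication>
  </system.web>
</configuration>
```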
Security Assessment Expectations
After I receive an assessment report, what do I expect the assessor to provide beyond a report discussing each issue, its risk rating, the relevant best practice, and so on? First, I would expect a completed test case of some kind that reproduces the issue. This may require collaboration between the developers, QA testers, and the security tester, but I should receive automated (preferably) or manual test cases (for example, detailed instructions for QA to follow, probably created by QA while walking through reproduction with the assessor) that I can run at any time to validate whether the vulnerability has been fixed successfully. These test cases should reflect the capabilities of the organization and the validation patterns that already exist.
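As a sketch of that automated flavor, consider an NUnit test (C#) that re-runs the reproduction steps for a reflected cross-site scripting finding. The base URL, endpoint, and parameter are hypothetical; the point is that the test fails while the vulnerability exists and passes once output encoding is in place.

```csharp
using System;
using System.Net;
using NUnit.Framework;

[TestFixture]
public class ReflectedXssRegressionTest
{
    // Hypothetical staging URL and vulnerable endpoint taken from the
    // assessment report; substitute your own.
    private const string BaseUrl = "https://staging.example.com";

    [Test]
    public void SearchPage_ShouldEncodeReflectedInput()
    {
        const string payload = "<script>alert(1)</script>";
        using (var client = new WebClient())
        {
            string body = client.DownloadString(
                BaseUrl + "/search?q=" + Uri.EscapeDataString(payload));

            // A correct fix HTML-encodes the reflected value
            // (e.g., &lt;script&gt;), so the raw payload never
            // appears verbatim in the response.
            StringAssert.DoesNotContain(payload, body);
        }
    }
}
```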
Second, I would expect the assessor to help evaluate and update existing secure coding, configuration, architecture, and validation patterns to ensure a solution exists for future development.
Third, I would expect the assessor to provide a mini training session (perhaps only 10 minutes over a web-based conferencing solution) explaining the vulnerability, and then to support the development lead or architect in training the developers and QA testers to apply the new patterns.
Finally, I would expect the assessor to help link specific weaknesses in my application to techniques commonly used by the threat actors targeting my organization and application.
Friday, March 23, 2012
Rugged Software – Telling Your Security Story
First, read John Wilander’s post about the Rugged Summit.
In my opinion, a Rugged software organization builds applications, services, infrastructure, business processes, and people that expect to be attacked. The organization gathers intelligence, understands the types of threats and adversaries targeting it, and intentionally includes strategic solutions within every layer to resist, detect, respond to, and recover from those attacks.
I also think that a Rugged software organization should share important aspects of this philosophy in the context of its products and services. One possible form is the “Security Story”: a public-facing narrative showcasing how and why the organization chooses to be resilient. This could include statements describing the types of issues the software and infrastructure are designed to protect against, the controls and countermeasures in place, the process by which the organization validates that these controls are effective, and the results.
A great non-software example is the Apple Supplier Responsibility page (originally pointed out by one of the other Rugged Summit attendees). Apple clearly defines its values and expectations, how it ensures its suppliers adhere to those expectations, and the results. Customers can view this information and decide whether they want to do business with Apple based on whether their own values match the company’s expectations and how well Apple fulfills them.
There are many similar examples in the software industry. One is CrashPlan’s “/security” page, Security FAQ, and in-depth description of its file encryption options. CrashPlan shows very plainly the types of concerns it considered in the development of its data backup service, as well as the controls put in place to address them. Additionally, it provides specific details about the implementation. All of this helps a customer decide whether CrashPlan’s data security vision matches or exceeds their own, and whether the vendor can back up its claims with some evidence (in this case, the evidence is detail about the implementation, but independent verification might be even more convincing).
As we become more and more dependent upon “smart meters” to deliver reliable energy to our homes and businesses, USB- or Bluetooth-enabled health care devices within our bodies to sustain life, and mobile devices to organize and run our lives, we must have the ability to seek out and select the product, solution, or company that will protect us best. Whether you’re a business owner buying software, a CTO contracting for outsourced software development, or an individual looking for a safe backup solution, we all deserve Rugged software!
Monday, January 16, 2012
Security and Development: Building A Better Relationship
Let's build a better relationship between security assessors and software developers. Instead of having security teams act like an external, neutral audit group that simply finds problems and reports them, let's make security assessors problem solvers, advocates, and advisors!
Typically, assessors identify security defects and then report the issues to the application development team. Defects may be accompanied by a best practice approach or description for remediating each vulnerability, but that advice often isn't customized for the framework, language, or libraries actually used in the software package. Assessments typically occur after specific milestones, like a release, or after an elapsed time period. I want to shake up these patterns!
First, let's assign each assessor within the security group to a development team and a set of applications. The assessor will partner with the software developers and really get to know the applications over time through repeated interaction and review. Next, let's give assessors read-only access to the source code repositories for each application they are assessing. Now, instead of providing security services (assessments, code reviews, architecture reviews, design reviews, etc.) only once an application reaches a specific milestone, let's make the assessor responsible for guiding the team on a continuous basis. The assessor attends important meetings, gets to know the project goals, identifies and executes on security needs continuously, provides training and advice, and gets out in front of potential privacy, compliance, and security concerns while the application is still being designed and architected.
The organization as a whole should identify in advance the specific security tools and activities required for all applications (perhaps in a tiered approach based on an application's risk profile and valuation), and the security assessor is responsible for setting up, configuring, and running these tools and activities (often with the cooperation of the development team).

Let's assume the organization uses a static code analysis tool to identify security defects in a software package. The tool is installed on a continuous integration server (which automatically monitors code repositories, checks out and builds the code, and then assesses it for quality and security), and as new defects are found, the security assessor is notified. The security assessor is then responsible for reviewing and validating the findings (alternatively, a filter could route issue types the team has already mastered directly to the developers, so the assessor receives only new issues). Once a finding is validated, the assessor develops an example code patch that would remediate the vulnerability (see the sketch below). He or she then brings that solution to the software team and provides a mini training session with the whole team covering information about the vulnerability, the specific best practice used to remediate it, and the code proposed to fix the issue. The team (security and development) discusses the cause, effect, and fix, and then agrees as a whole upon an appropriate secure coding standard for that vulnerability class (based on the code example above). Finally, the development team applies that standard to all instances of the issue in the application and uses it when developing similar code in the future.
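For illustration, here is the kind of example code patch I have in mind, using a hypothetical SQL injection finding in C#. The repository class, table, and column names are invented for the sketch; the substance is replacing string concatenation with a parameterized query.

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderRepository
{
    private readonly string connectionString;

    public OrderRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Before (vulnerable): user input concatenated into the SQL text.
    //   var cmd = new SqlCommand(
    //       "SELECT * FROM Orders WHERE CustomerId = '" + customerId + "'", conn);

    // After (patched): the input is bound as a parameter, so it can never
    // change the structure of the SQL statement.
    public SqlDataReader GetOrders(string customerId)
    {
        var conn = new SqlConnection(connectionString);
        conn.Open();
        var cmd = new SqlCommand(
            "SELECT * FROM Orders WHERE CustomerId = @customerId", conn);
        cmd.Parameters.AddWithValue("@customerId", customerId);

        // CloseConnection ties the connection's lifetime to the reader.
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
}
```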
This approach allows teams to identify and fix security defects quickly, lets developers focus on developing code rather than understanding security tools, and creates a relationship in which the security team brings solutions to the table rather than problems.
Taking it further: If the organization has a formalized secure software development process and a central repository for application security requirements, then the knowledge should be captured within this repository in the form of application security requirements and secure coding standards. These added requirements and secure coding standards should be evangelized to other software development teams to help them avoid similar vulnerabilities.
Related:
Turn Application Assessment Reports into Training Classes
Security Testing Roles - Expanding on Integrating Security Testing into the QA Process