On July 10, 2014, the Office of Inspector General (OIG) released an evaluation of the effectiveness of NASA’s efforts to secure its publicly accessible web applications. While the OIG noted some improvement over the past few years, it found “deficiencies… that leave the Agency’s publicly available web applications at risk of compromise.” Of specific concern was that identified vulnerabilities were not prioritized by severity of impact (i.e., risk rating), nor were the underlying causes of the vulnerabilities determined. The OIG also identified weaknesses in NASA’s IT security practices and found the Agency’s process for ensuring timely mitigation of vulnerabilities lacking.
NASA manages about 1,200 publicly facing websites and applications to share information with the public, collaborate with research partners, and provide remote access for employees and contractors. Because the Agency maintains technical and other sensitive information, it is a target for hackers and cyber-criminals. In FY2013, NASA reported 61 exploits of web applications and an 850% increase in the number of SQL injection attacks. Notable compromises included hackers gaining access, in July 2013, to a NASA website containing personally identifiable information of Agency civil servants and contractors. Less than three months later, several NASA websites hosted by the Ames Research Center were taken offline after an international hacker posted political statements opposing U.S. policy.
Redspin’s application security assessment process follows three guiding principles that can assist any organization in better securing web applications.
First, Redspin’s web application methodology is based on the Open Web Application Security Project (OWASP) Top 10 classes of web application security vulnerabilities, the same methodology on which NASA based its own Web Application Security Program (WASP). The Top 10 includes injection, cross-site scripting, broken authentication and session management, security misconfiguration, and other categories.
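To make the injection category concrete, here is a minimal sketch of the classic SQL injection pattern and its standard fix, parameterized queries. The table, data, and function names are hypothetical and chosen purely for illustration:

```python
import sqlite3

# Hypothetical in-memory database with a single users table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(username):
    # VULNERABLE: attacker-controlled input is spliced into the SQL string,
    # so the input can change the meaning of the query itself.
    query = "SELECT role FROM users WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(username):
    # SAFE: the ? placeholder binds the input as data, never as SQL syntax.
    return conn.execute(
        "SELECT role FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks every row in the table
print(find_user_safe(payload))        # matches nothing: []
```

The same parameter-binding principle applies in any language or database driver, and it is the first line of defense against the injection attacks NASA saw spike in FY2013.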
Second, Redspin utilizes both automated scanning and manual testing techniques, with a heavier weighting on manual testing and analysis. As the OIG states, “Experience has shown that detecting vulnerabilities in web applications requires a combination of automated and manual testing.” Using NASA as an example, its WASP manual testing identified 90% of the vulnerabilities in NASA systems, compared to 10% from automated scans.
Lastly, Redspin delivers risk-rated findings and includes recommendations in its web application assessment reports. This enables our clients to prioritize their remediation efforts where they will have the most impact on lowering risk. Too often we see organizations working from raw scanner output containing dozens, if not hundreds, of findings, including false positives. At NASA, the OIG found that the Agency had not correlated its high- or moderate-impact web applications with its IT security database. Without prioritization by risk level, the OIG found an increased risk that vulnerabilities in publicly accessible web applications could go undetected and unmitigated.
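The triage step above can be sketched in a few lines. The findings, field names, and risk labels here are hypothetical, not Redspin's actual report format; the point is simply that false positives get filtered out and the remainder is worked highest-risk first:

```python
# Hypothetical findings, as might come from combined scan and manual-test output.
findings = [
    {"issue": "Verbose error messages",       "risk": "low",    "false_positive": False},
    {"issue": "SQL injection in login form",  "risk": "high",   "false_positive": False},
    {"issue": "Outdated TLS banner",          "risk": "low",    "false_positive": True},
    {"issue": "Session fixation",             "risk": "medium", "false_positive": False},
]

RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

def prioritize(findings):
    # Drop false positives, then order what remains by severity of impact.
    real = [f for f in findings if not f["false_positive"]]
    return sorted(real, key=lambda f: RISK_ORDER[f["risk"]])

for f in prioritize(findings):
    print(f["risk"], "-", f["issue"])
```

Running this lists the SQL injection first and omits the false positive entirely, which is the ordering a remediation team should work from.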
NASA is not unique in its trials and tribulations in securing web applications. It is a difficult challenge. As Verizon’s 2014 Data Breach Investigations Report concluded, “Web applications remain the proverbial punching bag of the internet.” Redspin has found that the best approach is to follow the industry-standard methodology, use both automated and extensive manual testing, report findings by risk, and then prioritize and implement remediation efforts accordingly.
We may be able to send a man to the moon, but we still have a long way to go before all of our web applications are sufficiently protected from hackers. For more information on Redspin’s web application security assessments, or to send me direct feedback on this post, please use the form below.