Vulnerability Research
April 22, 2020
By Leo Dorrendorf

Head-to-Head: Penetration testing vs. vulnerability scanning

In order to release embedded devices into the market with a reasonable level of security, vendors need a security process integrated into their development lifecycle. Optimally, security considerations would be involved across all stages of the embedded device’s development lifecycle: from initial product architecture and design, to implementation and verification, through deployment and monitoring in the field, and all the way back again to design in order to make adjustments based on the changing threat landscape, market needs, and any device issues encountered while in the field.

In this article, we will focus on the verification phase of the security process. Similar to the verification phase of the development process which checks the functional implementation, the security verification phase ensures that the security features have been implemented correctly. The process includes finding known weaknesses and vulnerabilities in the product along with the relevant exploits, identifying the security gaps, and gaining an overall picture of the product’s security profile. This needs to be done for the finished product as a whole, as well as for its individual components, whether they were developed entirely in-house, built with some open source code, or obtained from third party suppliers.

The same responsibilities apply whether you are the vendor that introduced the product to the market, an integrator of another party’s product, or an OEM/service provider using an off-the-shelf product under your brand. All of the parties above can be liable for security issues that are found in the connected product.

For those that purchase connected products, it can be very difficult to establish each product’s security posture. Vendor statements alone cannot be relied on, since small vendors are prone to overstating their product’s security, as was recently demonstrated in the FTC settlement against Tapplock. Larger vendors are not impervious either, with researchers finding and publishing severe security issues in big-name products on a regular basis (D-Link, Linksys, Android and MikroTik, to name just a few). The security verification process can assist buyers by estimating the product’s security standing and providing tangible proof to substantiate (or refute) the vendor’s security claims.

However, there are different ways to achieve these goals. Traditional approaches involve internal quality assurance during the development and verification stages; penetration testing by independent external organizations; and external certification, while newer approaches focus on automated testing and vulnerability scanning. Each of these methods has its pros and cons, and a combination of some or all of them could be necessary to address all relevant issues.

To understand what is appropriate for your specific needs, we will examine each approach in detail.

Dedicating Quality Assurance for Security Functions

Quality Assurance (QA) is an established stage of the development process which is typically performed by an internal team. Depending on the organizational structure, the QA staff may be part of the development team, or it may be a separate team, possibly even under separate management, which gives it a degree of independence. How the QA team is structured can affect their approach, how much they are influenced by input from the developers, and what tests they run in practice.

A good QA team will take an adversarial approach to testing, trying to come up with ways to break the product code and make it fail (negative tests), which is very similar to the approach taken by potential attackers or pen-testers. More commonly, QA teams tend towards testing whether the product code performs the required functionality as expected (positive tests).

To give an example, when testing a software update mechanism, positive tests check the robustness of the code and its ability to correctly apply valid updates, whereas negative tests cover invalid update contents, incorrect signatures, and invalid certificate chains. These negative cases are the ones that are more likely to turn up in an attack scenario.
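To make the distinction concrete, here is a minimal sketch in Python of what positive and negative tests for an update mechanism might look like. The verifier itself is hypothetical and deliberately simplified: it checks an HMAC tag rather than an asymmetric signature and a full certificate chain.

```python
# A minimal sketch of positive vs. negative tests for a (hypothetical, simplified)
# update verifier. Real devices typically verify asymmetric signatures and
# certificate chains rather than a shared-key HMAC.
import hmac
import hashlib
import unittest

SIGNING_KEY = b"device-update-key"  # placeholder key for the example


def verify_update(payload: bytes, tag: bytes) -> bool:
    """Return True only if the payload carries a valid authentication tag."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


class UpdateVerificationTests(unittest.TestCase):
    def test_valid_update_is_accepted(self):        # positive test
        payload = b"firmware-v2.1"
        tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
        self.assertTrue(verify_update(payload, tag))

    def test_tampered_payload_is_rejected(self):    # negative test
        payload = b"firmware-v2.1"
        tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
        self.assertFalse(verify_update(payload + b"\x00", tag))

    def test_wrong_signature_is_rejected(self):     # negative test
        self.assertFalse(verify_update(b"firmware-v2.1", b"\x00" * 32))


if __name__ == "__main__":
    unittest.main()
```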

It is much easier to exhaustively list the positive tests in this example than to list all the negative edge cases, which could easily take a full page if we went into detail on all the ways certificate verification can fail. For similar reasons, the more QA teams become overloaded with work (a common situation), the more they tend to focus on the positive tests which are necessary to get the product to market. This usually means that they also tend to sacrifice the negative tests which are required to verify that the product is secure.

In order to properly perform the security function, QA teams need to have dedicated resources and to develop sufficient specialization in security. At the very least, other security professionals in the organization would have to get involved on a regular basis in order to instruct the QA team and collaborate on the testing plan. The key problem with this is that it takes away from QA’s usual (and necessary!) focus on functional testing.

For these reasons, few organizations commit the QA resources required to ensure that they are releasing secure connected products. Instead, in order to establish the security standing of a product, most organizations opt for external penetration testing.

Performing Penetration Testing for Deep Analysis

Penetration testing is security verification that is performed by an external team of specialists with an offensive approach. Instead of validating product functionality, penetration testing focuses only on finding security vulnerabilities and weaknesses.

Depending on the agreement with the client, pen-testers use either the white-box or the black-box methodology, which offer different levels of exposure to internal documentation and even to the product source code. In the white-box approach, testers have access to internal information, similar to an internal QA team. In the black-box approach, all they receive is the live product and any publicly available documentation, which means they only have the information that a real-world attacker would have access to.

The pen-testers then set up a testing environment which, at the minimum, includes a subject device, possibly with a network connection. In the most complex scenario, the setup includes an entire system instance, including cloud or server accounts. This enables the testers to put the device through various onboarding and update flows, and test invalid inputs submitted to the cloud without risking the vendor’s production deployment. For many common connected embedded products, this kind of setup also includes the mobile applications that users need in order to manage the device.

Pen-testers are security professionals whose knowledge and skills lean towards the offensive side, which helps to simulate the attitude of a real-world attacker. To produce a full vulnerability assessment, they examine the product’s external properties, such as the network interface and all communications passing through it, as well as its internal properties, such as the firmware image contents. They look for ways to compromise the product, starting from milder attacks such as denial of service and data exposure, through more intrusive ways of gaining unauthorized access and hijacking control, and all the way to permanent modification of the device’s logic, as well as ways to corrupt data and/or logic in the cloud.

Penetration testing by an external team or organization can have major benefits over internal testing, because of the specialized skill set and organizational independence it provides. Although a good test report comprehensively covers the product’s security issues, in practice it can fall short. Because most penetration testing is performed in black-box settings, the testers often focus on the product’s externally exposed components, such as web applications and remote login interfaces, at the expense of vulnerable internal features. In addition, testers are incentivized to find the most impressive vulnerabilities in the limited time allotted to them, so their findings tend towards “low-hanging fruit”: easily achievable attacks. A deeper analysis of the product’s security architecture may not be in their interest unless the findings can be easily exploited.

Another downside is that penetration testing is highly subjective and depends to a large extent on the previous experience of the pen-testers. Two different teams will produce very different reports based on variables such as their respective strengths and the tools they use in their process.

Good penetration testing teams do use automated tools, starting with port scanning tools such as nmap and ending with tool suites such as Metasploit or Detectify. Automated tools make the initial reconnaissance process easier, create an overall picture of a product’s attack surface, find initial points of entry for attackers to use, and so on. Software scanning tools can help the pen-testers find valuable security vulnerabilities they can use in their report, including known vulnerabilities in third-party libraries and open-source code. These can point them to promising areas for more thorough investigations or help them gain a foothold in the product’s code which they can then use for further attacks. More advanced tools will turn up more sophisticated results such as deeper architectural issues. We’ll take a closer look at automated tools below.
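As a rough illustration of the kind of reconnaissance these tools automate, the following Python sketch performs a naive TCP connect scan against a handful of common ports. It is in no way a substitute for a real scanner such as nmap, and should only ever be pointed at devices you are authorized to test.

```python
# A rough sketch of the reconnaissance step that tools such as nmap automate:
# a simple TCP connect scan that maps which services a device exposes.
# Real scanners add service fingerprinting, timing controls, and much more.
import socket

COMMON_PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https", 1883: "mqtt"}


def scan(host: str, timeout: float = 0.5) -> dict:
    """Return a mapping of open ports to their likely services."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports[port] = service
    return open_ports


if __name__ == "__main__":
    # Only scan devices you own or are explicitly authorized to test.
    print(scan("192.168.1.50"))
```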

Although penetration testing reports can be used as a stamp of independent certification, which may help convince customers of the product’s security standing, it is usually better to achieve certification through a dedicated process. We will review that option in the next section.

Receiving Independent Security and Compliance Certification

Some markets require certification by an independent laboratory or compliance with a standard. This is most obvious where safety is a major concern: in the automotive, medical and industrial sectors, for example, compliance with various standards is required by law. Most markets still don’t define clear cybersecurity requirements for connected products, such as embedded or IoT devices, but regulators are increasingly introducing legislation and labeling schemes to that effect. In other verticals, certification may not be mandatory, but it can still confer a distinct competitive advantage, especially as customer demand for secure devices keeps increasing. For these reasons, vendors often submit their products for independent certification.

Cybersecurity certification programs are usually defined around a standard document. Relevant standards run the gamut from closed and proprietary ones, typically those developed by the certifying organization itself (such as the UL 2900 family of standards), to free and open ones (such as the NIST CMVP documents, ENISA or IoTSF standards), and everything in between (for example, the documents for ISA/IEC 62443 which are available following email registration).

For each standard, the vendor typically has to go over its content and implement all procedural and technical requirements. The vendor is then expected to submit detailed evidence documenting how the product complies with the standard, or where it deviates from it. The certification process itself varies widely. On one end of the spectrum there is self-certification, which only requires the vendor to fill out a questionnaire and publish the answers so that prospective customers have access to them. On the other end there is independent compliance testing, with a laboratory performing its own exhaustive product tests and reviewing the documentation submitted by the vendor.

Either way, much of the burden of proof remains with the vendors, since they must prepare the product and the accompanying documentation. This typically requires considerable development effort, as well as payments to the certifying body and to any laboratories or consultants involved. Even after certification has been completed, additional costs may be incurred, as the certification may need to be maintained or renewed when the product is updated or when the product line is extended.

The subject matter of cybersecurity standards varies widely. Some standards are dedicated to the vendor’s documentation and the security process itself (some even mandate using penetration testing or automated tools to find security vulnerabilities and known exploits), some focus on secure coding techniques, while others are even more technical in that they address device architecture and configuration, or at least include some technical chapters.

Most standards keep their technical requirements at a relatively high level, with only a few providing the actual technical instructions necessary to meet them (for some positive examples, see CIS Benchmarks, DoD STIGs or the AGL Security Blueprint). This makes it far more difficult for vendors to even estimate their initial level of compliance before they begin the certification process, and significantly increases the costs required to complete it. This is where automated tools can help reduce the efforts and costs involved in the process.

Automating Vulnerability Scanning for Objective Results

There are many types of automated tools, each covering different aspects of product security verification. A multitude of tools is available for the complete IoT ecosystem, which includes embedded, cloud, web and mobile components. Some use the dynamic approach, where a live device is scanned over the network in order to diagnose its web server and communication security. Others use the static approach, where the device’s source code or binary image is scanned.

Dynamic tools require a working device, whereas static tools are more flexible since all they require is a file uploaded through a web interface. Another difference is that dynamic tools are limited to the device’s external behavior, whereas static tools also examine the device’s internals. Static tools can cover secure coding practices, find known security vulnerabilities and exploits, identify potential zero-day vulnerabilities, and even highlight various configuration and architectural issues, from the lowest layers of the software stack (the bootloader and operating system internals) to the upper application layers.
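As an illustration of one such static check, the following Python sketch walks an extracted firmware filesystem and flags components whose versions appear in a known-vulnerable list. The firmware path, version-detection logic and vulnerability table are all made up for the example; real scanners rely on full binary analysis and up-to-date vulnerability feeds.

```python
# A minimal sketch of one static check: matching component versions found in an
# extracted firmware filesystem against a table of known-vulnerable versions.
# The path, regex and table below are illustrative only, not real CVE data.
import re
from pathlib import Path

KNOWN_VULNERABLE = {
    ("busybox", "1.20.2"): ["EXAMPLE-CVE-A"],
    ("openssl", "1.0.1f"): ["EXAMPLE-CVE-B"],
}

# Naive version-string matcher; production tools identify components far more reliably.
VERSION_RE = re.compile(rb"(busybox|openssl)[ -]v?(\d+\.\d+\.\d+[a-z]?)")


def scan_firmware(root: str) -> list:
    """Walk an extracted firmware root and report known-vulnerable components."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        for name, version in VERSION_RE.findall(path.read_bytes()):
            key = (name.decode(), version.decode())
            for issue in KNOWN_VULNERABLE.get(key, []):
                findings.append({"file": str(path), "component": key, "issue": issue})
    return findings


if __name__ == "__main__":
    for finding in scan_firmware("./extracted_rootfs"):
        print(finding)
```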

A well-developed tool can run hundreds or even thousands of individual scans or tests, and static testing has the advantage of producing in minutes the kind of results that would take manual penetration testers days. The entire testing process can be run automatically and seamlessly integrated into a continuous integration/continuous delivery flow, potentially scanning every product and every version, which is in itself an impossible feat without automation.
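A minimal sketch of what that CI integration could look like: a build step invokes a scanner (the "firmware-scan" command and its JSON report format here are hypothetical) and fails the pipeline if any high-severity finding is reported.

```python
# Sketch of a CI gate around an automated scan. The "firmware-scan" CLI and its
# JSON output format are hypothetical placeholders for whatever scanner is used.
import json
import subprocess
import sys


def main() -> int:
    result = subprocess.run(
        ["firmware-scan", "--format", "json", "build/firmware.img"],
        capture_output=True,
        text=True,
    )
    findings = json.loads(result.stdout)
    blockers = [f for f in findings if f.get("severity") == "high"]
    for finding in blockers:
        print(f"BLOCKER: {finding['id']} in {finding['component']}")
    return 1 if blockers else 0  # a non-zero exit code fails the CI job


if __name__ == "__main__":
    sys.exit(main())
```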

The development of such tools comes at a cost since every individual scanning feature needs to be ported to each operating system, and coverage needs to be added for each filesystem type or software component. However, once the effort is made, it pays off as the scanners are applied again and again, providing rapid on-demand analysis.

A good automated vulnerability scanning tool must be based on years of research and development, including significant contributions by penetration testers. For example, VDOO’s world-class research team employs its offensive skills and extensive experience to analyze numerous new products, platforms and software components. These findings feed into the entire VDOO platform, and are translated into vulnerability scanners which then run automatically.

Another benefit of automating security testing is that results are no longer subjective as they are with a manual penetration testing report. The same tool always returns objective results regarding a device or software component, making automated tools very useful for external certification.

And automated tools can indeed help in certification if they include support for external standards. They can analyze a device’s firmware and output a gap report with respect to a given standard; this provides an understanding of the time and effort required to achieve compliance. This can be done in minutes, while the same process, when done manually, can take weeks of documentation and analysis.
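Conceptually, the gap report is a mapping from the standard’s requirements to the automated checks that cover them. The sketch below illustrates the idea with made-up requirement IDs and check names; it is not drawn from any real standard.

```python
# A minimal sketch of a gap report: map automated check results onto the
# requirements of a standard and list which ones are not yet met.
# Requirement IDs and check names are illustrative only.
REQUIREMENT_CHECKS = {
    "REQ-1 Secure boot enabled": ["bootloader_signature_check"],
    "REQ-2 No default credentials": ["default_password_check", "hardcoded_credential_check"],
    "REQ-3 Encrypted communications": ["tls_configuration_check"],
}

# Results as an automated scan might report them: check name -> passed?
scan_results = {
    "bootloader_signature_check": True,
    "default_password_check": False,
    "hardcoded_credential_check": True,
    "tls_configuration_check": True,
}


def gap_report(requirements: dict, results: dict) -> dict:
    """Return each requirement with the list of checks that failed or were not run."""
    return {
        req: [check for check in checks if not results.get(check, False)]
        for req, checks in requirements.items()
    }


for requirement, gaps in gap_report(REQUIREMENT_CHECKS, scan_results).items():
    status = "GAP: " + ", ".join(gaps) if gaps else "compliant"
    print(f"{requirement}: {status}")
```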

Security Verification is a Must-Have

To summarize, security verification is a necessity, but there are different ways to achieve it. Manual methods can achieve excellent results but require a dedicated team, as well as time and effort, and can produce skewed results in some cases, especially if incentives are misaligned. In the connected (IoT) device market, where extensive product lines and quick time to market are the rule, vendors should consider complementing their manual efforts with automated testing.

Independent certification has its own downsides, primarily high costs and lengthy work processes, which are made even worse by the need to do it all over again for each version. On the other hand, independent certification is sometimes necessary for entry to market and is probably the most convincing mark of security quality that vendors can use for competitive advantage. Automated tools can aid with certification as well.

As the sorry state of security in IoT products continues to make headlines, and regulators get more and more involved in mandating security norms, security verification is increasingly becoming a pressing need. Because automated tools can provide on-demand, detailed security verification for a wide range of connected products, their share of security verification tasks will continue to rise.

