SDLC is an acronym with two common expansions: Systems Development Lifecycle and Software Development Lifecycle. Today, the third turning point in software experience is upon us: Security Experience (SX), as the risks in the software powering our software-defined world have become unacceptably high. The emphasis on SX has led to a growing recognition of another definition for SDLC: Secure Development Lifecycle, with an emphasis on supply chain risk management (SCRM).
A supply chain is simply the collection of ingredients and activities required to produce something. Consider plywood, a key ingredient in building homes. Plywood manufacturing requires adhesives, and adhesive manufacturing requires chemicals. A storm that disrupts Gulf Coast refinery operations producing those chemicals cascades through the system and impacts the ability to build homes in New England. Supply chains are messy and complicated, and when there is only one manufacturer operating one plant that produces the specialty adhesive required for plywood, the supply chain breaks.
In the 1990s, software development required nothing more than your tools of choice, say PowerBuilder and a Microsoft SQL Server database. Tools like PowerBuilder were complete development monoliths: PowerBuilder was your code editor, your only code editor, and if there was a problem with the code, you reached for its integrated debugger. There was only one type of developer, plus a database administrator.
Modern software development is profoundly different. Popular integrated development environments (IDEs) are language agnostic. While git is the dominant software configuration management tool, the git experience varies across implementations. Languages like Go rely on a suite of command-line tools with their own unique flags, and there are multiple debuggers to choose from. Containerization and orchestration with Kubernetes introduce another discrete set of tools. Modern web applications sit behind web application firewalls (WAFs) and load balancers, and application performance monitoring (APM) tooling is a prerequisite for successful debugging in a complex microservices architecture. The Cloud provider selected to host the application introduces yet another layer of complexity.
The modern software supply chain is big and messy. It has gone from requiring only two pieces of software and two roles to requiring dozens of highly specialized tools split across multiple disciplines.
Herein lies the problem in a world where cyber threat actors are looking for ways to compromise software. Today, software defaults to a permissive state, and hardening guides are available for download for anyone looking to deploy it more securely. The majority of these developer tools are open source, and vetting the authenticity of an open-source contributor is particularly challenging. Each tool needs to be uniquely tuned and vetted to become production-ready. As DevOps teams seek to automate tasks and activities that are done repeatedly, they often glue tools together in ways that the individual tool creators never considered. All of this creates an attack surface so expansive that no one person can track the complete software supply chain of a modern software application. And that is a huge problem.
If you do not know what you have, how can you ensure it is secured?
Can you point to a single location that captures, with 100% accuracy, every tool used by every member of the software team to design, create, test, deploy, and operate the application? If a development team cannot provide this on demand with full accuracy, and cannot attest to the veracity and authenticity of each of those tools, then it cannot be sure that the software touching the source code, intermediate artifacts, or final executable is not creating an attack surface that could be, or is being, exploited by a cyber attacker.
The journey to a secure development lifecycle capable of creating resilient software cannot be depicted as a marathon. Why? Because a marathon has a clear start and an end, and we all know that software is never done. Software teams must begin the arduous process of assembling a complete inventory of the tools used across all roles and all environments (dev/test/prod), and they must establish a governance process within their SDLC to ensure that when a backend, frontend, architect, DevOps, or Cloud engineer introduces a new tool into the process, that tool is vetted and captured in the tooling inventory. A minimal sketch of what such an inventory entry might look like follows.
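To make the idea of a tooling inventory concrete, here is a minimal sketch in Go (chosen only because Go appears earlier in this post). The record fields, the sample entries, and the vetting check are illustrative assumptions rather than a prescribed schema; in practice an inventory like this would more likely live in a managed system of record than in code.

```go
package main

import "fmt"

// ToolRecord is one entry in a hypothetical tooling inventory.
// The fields are illustrative, not a prescribed schema.
type ToolRecord struct {
	Name         string   // e.g. "git", "kubectl"
	Version      string   // pinned version in use
	Source       string   // where the binary or package was obtained
	Owner        string   // responsible role: backend, frontend, DevOps, Cloud, etc.
	Environments []string // dev, test, prod
	Vetted       bool     // has the tool passed the governance review?
}

// unvetted returns every tool that has not passed governance review,
// which is the list a team would need to work through first.
func unvetted(inventory []ToolRecord) []ToolRecord {
	var out []ToolRecord
	for _, t := range inventory {
		if !t.Vetted {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	// Two example entries; a real inventory would hold dozens of tools.
	inventory := []ToolRecord{
		{Name: "git", Version: "2.44.0", Source: "distro package", Owner: "all", Environments: []string{"dev"}, Vetted: true},
		{Name: "kubectl", Version: "1.29.2", Source: "vendor download", Owner: "DevOps", Environments: []string{"dev", "prod"}, Vetted: false},
	}
	for _, t := range unvetted(inventory) {
		fmt.Printf("needs review: %s %s (owner: %s)\n", t.Name, t.Version, t.Owner)
	}
}
```

The point is not the format but the discipline: every tool, pinned to a version, with an owner, the environments it touches, and its vetting status recorded in one place that the whole team can query on demand.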
In the next blog in this series, we’ll take a deeper dive into the specifics of software supply chain risk management and we’ll also explore the cardinality of the software bill of materials (SBOM).