Teri Radichel
Deployment systems — danger or defense?

How the systems that deploy software can dramatically increase or decrease cybersecurity risk

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

🔒 Related Stories: Cybersecurity for Executives | DevOps.

💻 Free Content on Jobs in Cybersecurity | ✉️ Sign up for the Email List

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A security scanning analogy, explained below, from my presentation on Top Priorities for Cloud Application Security, originally given at Countermeasure IT in Ottawa, Canada, in November 2018

If you’ve been following along in this series of blog posts, soon to become a book on Cybersecurity for Executives, you have reached one of my favorite topics, and something I feel has been overlooked far too long in the realm of cybersecurity — deployment systems. As of late, this topic has been getting more attention, and I’m happy about that. However, I don’t think we’ve done enough yet, as I explain below.

Deployment systems — not a definition of DevOps and DevSecOps

This topic is related to DevOps and DevSecOps. Unfortunately, those terms have been misinterpreted, reinterpreted, abused, overused, debated incessantly, misunderstood, despised, and cartoon-ized in less than ideal ways (i.e., unicorns excreting rainbows). So instead of using trendy words with varying interpretations, I’m going to write about something tangible — how your deployment system will either be a powerful gateway for an attacker or one of your best defenses. For this post, I define deployment systems as the applications and processes you use to make changes to software in your organization.

Get the full book by Teri Radichel in paperback or ebook format on Amazon: Cybersecurity for Executives in the Age of Cloud

My recent posts covered security policies and exceptions, and how tracking them helps organizations measure and evaluate risk. Before that, I talked about penetration testing and security assessments. I explained how CVEs could be a source of attacker infiltration, proper use of encryption, and problems with secrets and passwords. Your deployment system can help you with all those security concerns. It is one of the best tools you have to prevent security misconfigurations and non-compliant software deployments.

Unfortunately, the same tools you use to deploy applications and update software throughout your company may also be leveraged by attackers if they can gain access. Read on to learn how attackers leveraged deployment systems in infamous data breaches and massive ransomware attacks. These systems may have security flaws like anything else. All the security controls I have explained to you need to be applied to them as well. A weak security architecture allows attackers to obtain access via pivoting from other vulnerable systems on the network. Hopefully, your network does not expose your deployment systems to the entire Internet!

The topic of this post is related to all the modern development buzzwords in the following box because they involve deploying software. Please refer to the links for definitions and take them with a grain of salt. Depending on whom you talk to, you may get a different opinion, but I’ve never been a fan of arguing about the definition of words. I much prefer to fix problems and get things done. This post is about securing your deployment system, and fixing common software security misconfigurations, while still letting people get their work done efficiently.

Modern software and IT operations terminology:
DevOps 
DevSecOps 
GitOps 
Rugged DevOps
All the other -Ops
Site Reliability Engineering
Continuous Integration (CI)
Continuous Delivery (CD)
CI/CD
Infrastructure as Code

If you want to know my impression of DevSecOps, and how I deployed a secure DevSecOps pipeline at a company as Cloud Architect and later Director of SaaS Engineering for a security vendor, read My History of DevSecOps.

What does your deployment system have to do with security?

Do you remember the movie Office Space? If not, I’ll tell you the critical part of the plot that relates to what I am about to tell you without giving away the whole story. Two guys come up with a plan to steal money from a company in small increments. They figure that siphoning a small amount of money from the company will go unnoticed, just like the weird guy in the basement talking about his stapler, whom no one realizes still sits down there. I’ll let you watch the movie for the rest, but that idea of stealing small, unnoticed bits of money at a time has a real-world story behind it.

The system in which they attempt this theft is called the TPS system in the movie. Ironically, I worked on something called a TPS system for one company: the transfer processing system. As I addressed the integrity of the system and fixed numerous reconciliation issues, I often wondered if the name was indeed a coincidence. If anyone is concerned, I am pretty sure that the system no longer exists. Even if it did, it is significantly different by this time, as another company bought that line of business and the related systems and operations.

In a large financial company, somewhere along the process of building and deploying software, a developer can leverage rounding or a reconciliation error to move a tiny amount of money to an alternate bank account. Each insignificant individual amount goes unnoticed. However, over time, with millions of transactions, the thief makes off with a large sum. This type of attack is known as a Salami Attack. For an example of this attack, here is an excerpt from http://self.gutenberg.org/articles/salami_technique (I refer you to this website because I haven’t read the book referenced here myself, but I’ve seen it referenced in multiple places.)

Thomas Whiteside’s 1978 book, Computer Capers, documents how a programmer at a mail-order company diverted money from rounded-down sales commissions into a phony account for three years before he was caught.
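To make the rounding trick concrete, here is a small Python sketch of how sub-cent remainders from rounded-down commissions can add up. The sale amounts and the 3% commission rate are invented for illustration; this is not from any real system.

```python
from decimal import Decimal, ROUND_DOWN

def pay_commission(sale: Decimal, rate: Decimal) -> tuple[Decimal, Decimal]:
    """Compute a commission rounded down to the cent, and the
    sub-cent remainder that rounding discards."""
    exact = sale * rate
    paid = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    return paid, exact - paid

# A salami attack diverts every discarded remainder into the thief's account.
sales = [Decimal("1234.56"), Decimal("987.65"), Decimal("450.10")] * 100_000
stolen = Decimal("0")
for sale in sales:
    _, crumb = pay_commission(sale, Decimal("0.03"))
    stolen += crumb  # fractions of a cent per sale, thousands over time
```

With 300,000 transactions in this made-up example, the "crumbs" total well over a thousand dollars, even though no single transaction loses a full cent.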

Whenever I worked on financial systems, which I did for the majority of my career in one way or another, I was always concerned about the deployment systems in large companies. What if someone altered my code and inserted an attack like this, or something else, and made it look like I did it? Then when the crime came to light, I might be blamed! Don’t ask me how my mind thinks of these nefarious scenarios. I don’t know.

At one company, I watched how changes took place on deployment nights, when a bunch of developers, operations, and QA people would have to stay in the office on a Friday night until about 2 a.m. Although the person who typically ran the deployments was one of the most organized people I know and thoroughly impressed me, she only maintained the schedule of items to be completed. She was also exceedingly calm under pressure when something went wrong, and she knew a lot about the systems, so she could fix those problems.

However, I thought about how easy it would be for someone to alter the code during the whole convoluted process, some of which was manual. Updating configurations often resulted in fat-fingering (typing the wrong thing), which, in turn, resulted in failed batch processing jobs the next day. One time a database administrator (DBA) who was supposed to make a backup of the database before deploying the new code backed up the databases in reverse by accident. Other times, files were copied to the wrong location. One time in a pre-deployment meeting, the ops person exposed a production password to everyone in the room. Anyone can make these mistakes. They are all preventable with secure deployment systems, automated deployments, and proper testing in advance of the code release.

Even worse, one time, code deployed as part of one of my projects when I was a development team lead was altered during an evening deployment to remove a database integrity check because something broke. I wasn’t present at that particular deployment. Luckily, the person in charge of QA contacted me the next time we were in the office to ask if it was OK, and I immediately resolved the problem and restored the proper control to make sure invalid data could not enter the system.

The removal of this data integrity check would allow the insertion of an invalid beneficiary for an account in the database. The beneficiary is someone who gets money from an account after someone dies. A fake call could be placed to the support team, providing instructions to release funds to the beneficiary on that account, and the wrong person would get the money. I am not saying this was the intent of the change, or that the person was not just trying to fix the problem, but that was a possible outcome.

It always amazed me that in many companies I worked for (not all), they were so concerned about the security of the code they were deploying but paid little attention to the security systems that deployed them. So many manual steps occur where someone can insert rogue code or subvert intended controls. If someone can alter the logs, they can be changed to point the blame to the wrong person. If too many people can change the code, someone could insert a last-minute unnoticed change. A person could alter previously tested and approved code on its way to the live systems if integrity checking is not in place to ensure that it can’t happen.
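One way to make that last-minute tampering detectable is to record a cryptographic hash of the artifact when QA approves it and refuse to deploy anything whose hash differs. This is a minimal sketch of the idea, with a made-up artifact standing in for a real build:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hash the deployable artifact so any byte-level change is detectable."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(artifact: bytes, approved_digest: str) -> bool:
    """Refuse to deploy anything whose hash differs from what was approved."""
    return sha256_of(artifact) == approved_digest

# Digest recorded at approval time, stored where deployers cannot alter it.
approved_build = b"tested-and-approved build contents"
approved_digest = sha256_of(approved_build)

# An untouched artifact passes; an altered one halts the deployment.
tampered_build = approved_build + b" with a last-minute edit"
```

A real pipeline would also sign the digest so that whoever runs the deployment cannot simply recompute and overwrite it.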

It could be that changes are so minute that no one ever notices. The person who intentionally did something could claim it was an accident and that they didn’t know better, which may or may not be accurate. Of course, someone can also just make a mistake, or not know what they are doing, and add your organization to the list of those who have exposed billions of records via misconfigured database security controls. Regardless of the reason, you need to architect your deployment system in such a way that these types of things cannot happen.

Massive cyberattacks involving compromised deployment systems

In addition to all the potential internal security problems related to deployment systems, you also have to worry about external threats. If you have automated your deployments, that automated system is a prime target for attackers. If attackers can obtain access, they can use the same automated system that helps you move code into live systems to deploy malware instead.

After thinking this was a possibility for years, trying to tell people about it, and having them ignore my concerns, I finally got a real-world example. Right about the time I started the SANS master’s degree program in 2013, the Target breach occurred. The attackers stole 40 million credit cards and accessed the personal data of 70 million people. The CIO resigned, and the CEO lost his job. The company did not have a CISO, but some people told me they had many security controls in place. The problem with those controls seems to be that they were security products and an outsourced security operations center (SOC). Though I don’t know what the Target internal network looked like, from my research, it appeared to be lacking network segmentation and well-architected internal security infrastructure. Their staff could have potentially used additional training, based on their response to the breach.

It’s easy to criticize after the fact, and from the outside. I’m sure the staff did the best they could in the given scenario. Security is challenging, so I am not going to judge anyone who went through a breach, because I’ve been there myself. I didn’t know anything about security when I faced my first security incident. I never figured everything out because I had no security training, and I knew no security people. I just knew I had eradicated the attackers and prevented them from coming back. I learned how to defend my website, but I wanted to know more about how breaches were happening. That’s why I was taking the master’s program. I used this breach as a case study and wrote a white paper about it called Critical Controls that Could Have Prevented the Target Breach.

When I started researching the paper, I didn’t know it involved a deployment system, because many news articles had published misinformation and blamed the HVAC system. In reality, it was a sophisticated breach with many factors, but the point related to this post has to do with the fact that the attackers got access to the system that pushed software updates to the point-of-sale (POS) systems. If you’ve never seen a POS system, it is often a Windows machine with some extra software and facilitates credit card processing at retail stores and other places that accept credit cards.

Because the attackers got access to the automated deployment system, they could quickly deploy malware all at once to many systems at stores, just as the company could do itself. They did so right in the middle of the holiday season rush. This breach led to one of the most infamous cases of stolen credit cards, which news organizations widely reported. I was not happy about the Target breach, but I did feel vindicated that my suspicions about the essential nature of deployment system security were accurate.

The Target breach cost close to $300 million according to their financial statements and a report by The SSL Store. Also, the breach occurred in 2013, and that article from May 26, 2017, cites a settlement that same week. That means Target was still spending time and money resolving issues related to that breach four years later.

Another attack involving deployment systems was a ransomware attack that took down companies around the world called NotPetya. This breach occurred when attackers reportedly associated with the Russian government infiltrated the software update systems of a company called MeDoc in Ukraine. MeDoc produces tax and accounting software used by just about anyone who pays taxes or does business in the country. All those customers of MeDoc need to get software updates. The attackers leveraged the fact that all these systems would allow the MeDoc systems to transfer files to them but instead pushed out malware to all those systems. The malware then automatically tried to pivot on the internal networks of those companies and install the malware on other systems.

WIRED calls NotPetya “the Most Devastating Cyberattack in History.” The malware was ransomware that spread from system to system and took entire large companies offline. The rapid spread of this malware was facilitated by a deployment system. If you think about how malware works, as I wrote about previously, it typically needs to deploy files. If you have a system that is allowed to deploy files to all the other systems in your environment, or that sends updates to many systems from a software vendor, that’s a prime target for attackers. You need to ensure you have a robust security architecture for your deployment system and proper security controls. I’m not going to go into all of that here, because that’s a more in-depth technical topic. I cover that in my cloud security class. Following all the recommendations in this book gets you most of the way there.

How your deployment system can help your cybersecurity efforts

Now that you know how important it is to secure your deployment system, I want to explain how your deployment system can help you with security. I recognized this possibility from the moment I started getting serious about cloud security and looked at how you could deploy software and applications on cloud platforms. These concepts apply anywhere, not just on platforms that provide secure automation (when used correctly) like AWS. You may not have much control over the systems that push code to you from your software vendors. For example, you can’t change the way Microsoft and security vendors send software updates. You can implement the controls I’ve already mentioned for network security. Additionally, you should vet your vendors, which I address in my next post.

However, the systems that deploy the custom code your developers write can automatically improve compliance with your security policies. I just explained several things that can go wrong with software deployments and the importance of securing the systems that deploy code. At the same time, if you correctly architect and automate your software deployment processes, you can mitigate many of the security issues I have written about in this book. Automation cannot fix the architecture and design flaws that lead to security vulnerabilities, but it can prevent basic misconfigurations and blatantly insecure code from entering your environment.

Think of your deployment system as the luggage scanner at the airport. No one gets into the airport without the security staff at the airport scanning their luggage. It creates a delay, but it is necessary to ensure people do not bring illegal substances onto the plane, in the worst-case bombs or weapons, to hurt people. Although people may grumble when there is a long line for the luggage scanning system at the airport, they understand the risks associated with letting everyone bypass the scanner, so they put up with it. The scanner is automated, though humans are involved. When people are following the process correctly, things go smoothly, and the scanning doesn’t take too long.

What slows down the scanning process? For one thing, the requirement to take off shoes and belts and take laptops out of bags takes time. Some people can bypass this by getting TSA PreCheck in the United States. They still must have their bags scanned, but they skip some of the extra steps required of those who have not gone through the same background checks and identity verification. You can do the same thing in your deployment system. Figure out which deployments are low-risk, and potentially let them go through faster checks. Higher-risk deployments may need more scrutiny. You can even take the analogy one step further and select random deployments for additional scrutiny.
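The risk-tiering idea above can be sketched in a few lines of Python. The risk signals and check names here are made up for illustration; your organization would define its own based on what it considers high-risk:

```python
def required_checks(change: dict) -> list[str]:
    """Route a deployment to a check tier based on simple risk signals.
    The signals and tier names are illustrative, not a standard."""
    checks = ["artifact-integrity", "dependency-scan"]  # everyone gets scanned
    if change.get("touches_auth") or change.get("touches_payment"):
        # High-risk changes get the full treatment, like non-PreCheck travelers.
        checks += ["static-analysis", "manual-security-review"]
    elif change.get("config_only"):
        pass  # low-risk fast path: baseline checks only
    else:
        checks.append("static-analysis")
    return checks
```

A payment-related change would be routed through static analysis plus a manual review, while a documentation-style config change takes only the baseline path.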

The first step in creating a deployment system that can check security is to have a way to deploy software in an automated fashion. It should be easy for people to use and get their jobs done. You should test your deployments the same way you test your system functionality, so they do not fail when they run in a production environment (your live systems). The deployment system needs to support building from your development environment to your QA (test) environment without changing the code the developers have written. Then when it goes from QA to production, it should never be altered. If your teams cannot do that, the system is not designed correctly.
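The build-once, never-alter principle is often implemented by promoting the same artifact between environments by its digest rather than rebuilding it. This is a hypothetical sketch (the registry structure and digest values are invented):

```python
def promote(registry: dict, digest: str, src: str, dst: str) -> None:
    """Move an artifact reference between environments without rebuilding,
    so what QA tested is byte-for-byte what production runs."""
    if digest not in registry[src]:
        raise ValueError(f"{digest} was never deployed to {src}")
    registry[dst].add(digest)

# Each environment holds only the digests of artifacts deployed to it.
registry = {"dev": {"sha256:abc"}, "qa": set(), "prod": set()}
promote(registry, "sha256:abc", "dev", "qa")    # dev -> QA, same bytes
promote(registry, "sha256:abc", "qa", "prod")   # QA -> prod, same bytes
```

Because an artifact can only reach production through QA, a change that skips testing simply has no path forward.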

Once you have an automated system, you need to ensure that no one can bypass it. If people can bypass it, all the issues I mentioned above could still happen. When people get frustrated with the automated system, they take shortcuts and bypass it. Then you may end up with security errors, which defeats the whole purpose of the system. Of course, you may have an issue that must be fixed at all costs, and you make an exception, but this should require the appropriate approvals. Refer to all the information I provided on handling security exceptions.

After you have everyone using the secure, automated system, the security team can inspect the deployments and the logs without interrupting developers to find any security issues. They could do this in the development, QA, and production environments. The security team should be monitoring the deployment system logs along with all the other logs they monitor. Look for anomalies and suspicious access patterns. Ensure that people are not misusing or bypassing the deployment system. If violations occur, assume good intent. The developer or QA person may have made a simple mistake or may not understand why what they are doing causes a security risk. If you see many violations, there could be a problem with the whole process, as I explained in my blog post on exceptions. It could be that the development and QA teams need additional security training.

If (when) the security team finds things that are out of compliance, undesirable configurations, architectures, or insecure code, they can work with the team responsible for maintaining the deployment system to build in security checks to prevent those issues. Implement security checks carefully, so they do not end up blocking everyone who is trying to deploy code and impact productivity. Implement them iteratively and ensure developers and the security team are working together to make sure they work correctly and in a developer-friendly manner. Test with a small group of developers well-versed in security before rolling out blocking security checks to an entire organization.

The security checks in your deployment pipeline can include things like scanning software for security flaws, blocking known CVEs from entering production, automated basic penetration testing, and disallowing risky configurations. Adding security checks to deployment pipelines can be very tricky because if implemented poorly or in a draconian manner, this becomes a bottleneck on your productivity. If implemented too loosely and people are free to deploy whatever they want, you have an increased chance for security problems. Someone who can balance these objectives and has proper training should be involved in prioritizing and making security decisions.
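A minimal example of one such check is a gate that blocks deployments containing dependencies with known vulnerabilities. The advisory data below is entirely hypothetical; a real pipeline would pull it from a vulnerability feed rather than a hard-coded table:

```python
# Hypothetical advisory table: (package, version) -> advisory ID.
KNOWN_BAD = {("examplelib", "1.0.3"): "CVE-2019-0000"}

def gate_dependencies(deps: dict[str, str]) -> list[str]:
    """Return the advisories that should block this deployment.
    An empty list means the dependency set is clear to proceed."""
    return [cve for (name, ver), cve in KNOWN_BAD.items()
            if deps.get(name) == ver]

blocked = gate_dependencies({"examplelib": "1.0.3", "otherlib": "2.2.0"})
```

The pipeline would fail the build when `blocked` is non-empty, and the error message should name the advisory so the developer knows exactly what to upgrade.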

Avoid rules that are too strict and end up completely blocking productivity. Blocking work from proceeding often generates animosity in your environment and causes people to try to bypass security controls. You need controls that provide feedback to train people as they make mistakes, prevent egregious errors, and track compliance and risk. At the same time, you need to allow people to make progress and build things. You need to know when and where to block configuration errors throughout the system, and when to create an alert to let someone know that they made a mistake. Additionally, security scanning needs to be inserted at the correct point in the process because some of the scanning tools can take a long time to complete. Don’t execute them every single time a developer checks in a small piece of code in the development environment.

Communication is another crucial factor. If you apply the rules and don’t clearly explain what people need to do to fix the problem, the result may be a great deal of conflict. I used to say to my DevOps team, “If people are complaining, we aren’t building it right, or communicating properly.” The communication could involve training when people don’t understand why the controls exist. Going back to the airport analogy, once you understand the importance of scanning your bags, you are more likely to put up with it. I often tell security people, the reason you are having problems with developers is that you’re telling them what to do, but you’re not explaining why.

Finally, the thing most companies aren’t doing is tracking and measuring risk associated with applications via their deployment and change control systems, if they have one. Your deployment systems produce a great deal of information that can help you monitor and reduce risk in your organization. Many organizations make assumptions about the risk of software deployed within their organizations, rather than measuring it so they can do something about it. Use your deployment system to help track changes that are compliant with security policies and security exceptions.

Change control systems can track changes and corresponding details. You might require people to enter data into a system specifically for tracking changes. Alternatively, you may allow changes that pass your security checks to proceed without extra data entry, because the details about who wrote the code and why already exist in your ticketing and source control systems. Make sure that wherever you track this change information, the people making the changes cannot alter it at any point after deployment. If exceptions exist, document those. Use all of the data you gather during code deployments to help generate risk reports.
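One way to make change records tamper-evident after deployment is to chain them together by hash, so that silently editing an old entry invalidates everything after it. This is a rough sketch of the idea, not a production audit log:

```python
import hashlib
import json

def append_change(log: list[dict], change: dict) -> None:
    """Append a change record whose digest covers the previous record's
    digest, forming a chain that breaks if any entry is edited."""
    prev = log[-1]["digest"] if log else "genesis"
    record = {"change": change, "prev": prev}
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify(log: list[dict]) -> bool:
    """Recompute every digest; any edited or reordered entry fails."""
    prev = "genesis"
    for rec in log:
        body = {"change": rec["change"], "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["digest"] != expected:
            return False
        prev = rec["digest"]
    return True
```

Real systems usually go further, storing the chain in an append-only service the deployers cannot write to directly, but the principle is the same.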

Use your risk reports to determine which high-risk items need to be fixed right away. I’ll have another post on this subject and dive a bit deeper into that topic, but make sure you are feeding deployment information, exceptions, violations, and other compliance problems into your overall risk reporting. Then work to reduce those risks.

Scanning is not perfect!

Be aware that these tactics are not perfect. Someone with purely malicious intentions can obfuscate the code, which is a fancy way of saying they change the code, so it does the same thing when it executes, but the scanners cannot tell what it is doing.

For example, you may want to block any JavaScript code that uses the keyword eval:

eval(something)

You configure your scanner to flag any code that has the word eval in it. The attacker can change the code to write out and execute the eval command instead, which has the same result:

document.write("e" + "val")

This pseudo-code gives you the idea of how code can trick your scanners. If people are doing this intentionally, the policy should enforce the appropriate repercussions.
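To show how shallow such a check is, here is a toy Python "scanner" that looks for the literal token eval, along the lines a simple pattern-based tool might use. The obfuscated sample below is a variation of the trick above; both behave the same in a browser:

```python
import re

def naive_scanner(source: str) -> bool:
    """Flag code containing a literal eval( call. This is intentionally
    simplistic, to show how easily a pattern check is bypassed."""
    return re.search(r"\beval\s*\(", source) is not None

direct = 'eval(payload)'
obfuscated = 'window["e" + "val"](payload)'  # resolves to eval at runtime

caught = naive_scanner(direct)        # the obvious form is flagged
missed = naive_scanner(obfuscated)    # the obfuscated form slips through
```

Because the scanner matches text rather than behavior, any encoding or concatenation that hides the literal token defeats it.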

There are many variations of code that can bypass that same check. Another example would be to turn malicious code into bytecode. If your scanner spots an ampersand (&) in code, and a web browser processes the code, the attacker might use a character code instead. SQL injection or XSS scanners may look for invalid characters. In that case, the person attempting to bypass the scanner escapes the character so the scanner can’t find it. If the scanner adds a check for an escape character, the attacker double escapes the check for SQL injection.

The other thing you’ll need to worry about is malware embedded, unbeknownst to your developers, in third-party code they downloaded off the Internet. In some cases, fake code libraries with names and links very similar to valid software trick developers into including malicious code in their deployments. Additionally, sometimes developers use software because it helps them get things done faster, but that software embeds things like key-loggers, cryptominers, and credit-card-stealing malware. Your scanners may or may not catch these issues, depending on how crafty your attacker is.

As you can see, it’s going to be challenging to count on scanners when someone is genuinely out to insert something malicious into your code. This is why all the other controls in this book are essential, along with the security checks in your deployment systems and security training for developers, DevOps, QA, and other types of cloud and software engineers.

Another issue with scanners is that some of them produce a lot of false positives. You need someone on staff who understands what is and is not a security problem. Sometimes people brush off issues they don’t understand. In other cases, the scanner produces too many false positives, and you spend an inordinate amount of time looking through them. At this point, someone may disregard the scanning and proclaim it is useless. It’s not, but it does need tuning to eliminate false reports without eliminating actual problems. I’ll be talking about false positives and false negatives in an upcoming post on the efficacy of your security products — in other words, how well they perform their intended function.

Application security resources

For those who want to know more about application security so that you can include it in your security checks, there are many great books, resources, and security classes that can help. Here are a few resources you can use to learn about application security problems. Then you can work on training developers about these flaws and building security checks into your deployment pipeline to prevent them.

One of the most well-known is The Web Application Hacker’s Handbook, which helped me when I was creating my homegrown WAF (Web Application Firewall) when no such term existed. The OWASP Top Ten is a great resource, along with many other projects and tools provided by the Open Web Application Security Project. PortSwigger, the company that makes the Burp Suite software, also has many resources for pentesters and developers interested in application security.

My friend Tanya Janca, aka SheHacksPurple, is also writing a book called Alice and Bob Learn Application Security, which is sure to be a great read. She is an excellent presenter and storyteller with a background in incident response and application security. She explains many application security problems and security checks you can add to your DevOps pipeline. I can’t wait to read it! She also has free information on her application security blog.

Getting started with your secure deployment pipeline

The information in this post is a high-level executive overview of things you need to think about when creating a secure DevOps pipeline. In my class, I go into a lot more detail on how to architect the deployment system in a cloud environment and provide some sample code and labs. Your team needs to carefully construct and monitor the deployment system network architecture, as it may have access to dev, QA, and production environments. It will typically involve a number of different systems with specific purposes. In class, we also look at ways cloud systems can be misconfigured and attacked, and how deployment strategies such as immutable infrastructure (things that cannot change after deployment but must be re-deployed) can prevent those configuration and security issues.

If you want to know more about DevOps and deployment systems from a developer perspective, I highly recommend Gene Kim’s book, aptly named The Unicorn Project. This book explains how and why developers and QA teams are bypassing security and IT teams in many cases to get to the cloud. It demonstrates how a top executive might support an initiative to get things done. I am friends with the person who got Nordstrom, one of the biggest high-end retailers in the United States, to start using AWS. He convinced some top executives to let him try out the cloud on his corporate credit card. The software rebellion in this book resonates with my experiences and stories from students in my classes!

This exact scenario is happening in many companies today. I highly recommend that security teams read it. If you don’t want this to happen in your company in this manner, get on board! One of my favorite slides that I show in presentations and training on cloud and DevSecOps is the following cat and dog photo. I’ll let you decide who is the dog and who is the cat, but security people do like cats! That’s why some popular pentesting tools end in -cat. Instead of being the security team that always says “No,” try to get involved with the DevOps team and help design a secure deployment pipeline. Get those developers some security training. If possible, send security teams and developers to training together, which is my favorite kind of security class to teach. These classes get two teams with different perspectives, objectives, and priorities working together. If you train developers about security, they will become some of the best red team and blue team members in your organization.

This image is from my talk on Top Priorities in Cloud Application Security originally presented at Countermeasure IT in Canada in 2018

Follow for updates.

Teri Radichel | © 2nd Sight Lab 2019

About Teri Radichel:
~~~~~~~~~~~~~~~~~~~~
⭐️ Author: Cybersecurity Books
⭐️ Presentations: Presentations by Teri Radichel
⭐️ Recognition: SANS Award, AWS Security Hero, IANS Faculty
⭐️ Certifications: SANS ~ GSE 240
⭐️ Education: BA Business, Master of Software Engineering, Master of Infosec
⭐️ Company: Penetration Tests, Assessments, Phone Consulting ~ 2nd Sight Lab
Need Help With Cybersecurity, Cloud, or Application Security?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
🔒 Request a penetration test or security assessment
🔒 Schedule a consulting call
🔒 Cybersecurity Speaker for Presentation
Follow for more stories like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
❤️ Sign up for my Medium Email List
❤️ Twitter: @teriradichel
❤️ LinkedIn: https://www.linkedin.com/in/teriradichel
❤️ Mastodon: @teriradichel@infosec.exchange
❤️ Facebook: 2nd Sight Lab
❤️ YouTube: @2ndsightlab