In the initial phases of cybersecurity, organizations and their security teams focused on monitoring and preventing attacks; the approach was mostly defensive. As attackers' techniques advanced, security personnel took a new path: defending through offensive security. Organizations thus formed two security teams, namely Blue and Red.
The Blue Team acts as the “Defenders,” whose aim is to protect the organization from real attackers as well as from the Red Team. There is only a thin line between the security operations team and the Blue Team: security teams vigilantly monitor the network for suspicious activity, while a Blue Team can more or less be considered a combination of teams, such as security operations, incident management, and risk management, involved in the analysis, detection, prevention, and handling of incidents.
The Red Team is the “Offenders” or “Attackers,” who use TTPs (Tactics, Techniques, and Procedures) to mimic real threat actors. This emulation tests the organization's existing policies and defenses, along with the proactiveness of the Blue Team. Red Teams are sometimes confused with penetration testers. While penetration testers perform a time-boxed technical assessment of a defined scope to validate security configurations, red team campaigns assess the entire security posture of the organization and can run continuously.
Both teams worked well in their own realms, but over time a gap emerged between them. Even while working for the same organization, flaws or discrepancies observed by one team were often treated as a personal victory over the other rather than as lessons learned. An integration was thus required, which led to the origin of the “Purple Team.”
The “Purple Team” is not a dedicated team; it is an engagement between the Red and Blue Teams in which both can share and discuss their findings, suggest areas for improvement, and work together to strengthen the organization's security posture. The idea is to help the Red and Blue Teams bond toward a common goal, not to increase overhead expenses by creating a new team.
Another gap, identified only recently, is the one between the defensive/offensive security teams and the development teams. The latter, including developers, programmers, software engineers, and architects, were grouped together and introduced as the “Yellow Team” or “the Builders.” A lack of security knowledge can make the suggestions and questions posed by the Blue and Red Teams sound like gibberish to the Yellow Team. Understandably, one cannot expect the builders to learn every security-related aspect, or the security teams to have expertise across the whole SDLC. Still, the fact that considering security in the initial phase of development can strengthen the entire application cannot be overlooked.
All three teams have their own areas of technology. The general flow is: the Yellow Team builds something, the Blue Team creates policies to defend it, the Red Team attacks it to test the defenses, issues are reported, and the Yellow Team fixes them. But a problem arises here as well: after a discrepancy is identified, the teams may play blame games rather than follow a problem-solving approach. As with the Blue and Red Teams, such a situation makes it essential for the groups to interact and share knowledge. This led to the introduction of the “Green” and “Orange” Teams, completing the color wheel of security.
Again, Orange and Green are not separate groups but engagements between the Yellow and Red Teams, and the Yellow and Blue Teams, respectively. Organizations are shifting from the Software Development Life Cycle (SDLC) to the Secure Software Development Life Cycle (SSDLC). If the Yellow Team is educated about the various attack techniques the Red Team uses, it can incorporate remediation at the initial phase itself. We have all heard of “use cases” being developed, but “misuse cases” can also help in designing features and avoiding vulnerabilities. As a simple example, if the importance of proper input sanitization is explained to the developers and taken care of, vulnerabilities like Cross-Site Scripting (XSS) can be avoided in the final software.
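The XSS example above can be sketched in a few lines. This is a minimal illustration, not a complete defense: the `render_comment` function is a hypothetical example of a developer escaping untrusted input before embedding it in HTML, using Python's standard-library `html.escape`.

```python
import html

def render_comment(user_input: str) -> str:
    # Escape HTML metacharacters (&, <, >, quotes) so that
    # attacker-supplied markup is displayed as plain text instead of
    # being interpreted by the browser.
    return "<p>" + html.escape(user_input) + "</p>"

# A script tag submitted by an attacker is neutralized:
print(render_comment("<script>alert(1)</script>"))
# → <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

If developers apply such output encoding consistently (ideally via a templating engine that escapes by default), the stored and reflected XSS issues the Red Team would otherwise find simply never reach production.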
The Blue Team often finds it difficult to defend against or investigate issues due to insufficient data. If the Yellow and Blue Teams work together, they can ensure that proper logging and monitoring are configured in the application, which helps avoid incidents such as undetected breaches and aids the forensics teams in their investigations. The Yellow Team can also share the limitations of its application or software with the Blue Team, so that policies and defenses can be put in place to minimize the associated risk.
At the center of the wheel is the “White Team,” which plans and organizes the other teams, decides the scope and rules of engagement, and acts as a supervisor by monitoring their progress. This team can include members from higher management, compliance, logistics, analysis, and so on.
Initially this may seem, and indeed would be, a time-consuming process, but sticking with it proves fruitful in the longer run. As the knowledge and experience gap between the teams narrows, organizations will arrive at better solutions, and the software and applications they develop will be more secure, with fewer bugs.