Episode 43 — Assess current security capabilities against mission and risk realities

In this episode, we step back from individual tools and isolated control checks and look at the bigger picture of whether your current security capabilities truly match the mission you support and the risks you actually face. Most organizations have a mixture of strong practices, weak practices, and accidental practices that grew over time, and it is easy to confuse activity with capability. A capability is not a product you bought or a policy you published. It is the reliable ability to achieve a security outcome under real conditions, including staffing constraints, operational pressure, and the kinds of failure you only discover on a bad day. The goal of this review is to create a grounded baseline, so you can invest and improve with intent rather than reacting to the loudest incident or the most persuasive vendor pitch.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book focuses on the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Begin by defining the scope of what you mean by security capability, because if you only assess technology you will get a distorted result. Scope should cover people, process, technology, and partners, because those elements are inseparable when you look at real outcomes. People includes not only security staff, but the operational teams who own systems and the business leaders who decide tradeoffs. Process includes the way work is requested, approved, deployed, monitored, and corrected, including escalation paths and exception handling. Technology includes the tools, configurations, telemetry, and automation that enable repeatable behavior rather than heroic effort. Partners includes vendors, managed services, cloud providers, and any third party that shares responsibility for a control or an outcome. When you scope in this way, you avoid the trap of concluding that a capability exists simply because a tool is installed.

Once scope is clear, identify mission-critical outcomes and how you will measure success, because outcomes anchor the assessment in reality. Mission-critical outcomes are the results the organization must achieve reliably, such as protecting sensitive data, keeping key services available, meeting contractual obligations, sustaining safe operations, or maintaining customer trust. Success measures translate those outcomes into something you can observe and track, such as time to detect, time to contain, service uptime, unauthorized access rates, backup restoration success, or the frequency of policy exceptions. Measures do not need to be perfect, but they need to be meaningful enough to guide decisions and to show improvement over time. If you cannot define how you know an outcome is being achieved, you are likely measuring inputs instead, such as how many alerts were generated or how many tickets were closed. Outcomes focus the assessment on whether security supports the mission, not whether security is busy.

With outcomes defined, inventory the controls and practices that contribute to those outcomes, and map them explicitly so you can see coverage and gaps. Controls include technical controls, procedural controls, and governance controls, and you want to capture what is actually deployed and followed, not what is written in a document. Mapping controls to outcomes helps you avoid building a catalog that is interesting but unusable. For example, if an outcome depends on limiting unauthorized access, you should map identity controls, access review processes, privileged access handling, and logging practices that support detection and response. If an outcome depends on availability, map resilience controls such as backups, recovery testing, change control discipline, and monitoring. The point is not to create a masterpiece diagram, but to create a defensible understanding of which controls support which outcomes. When you can point from a mission outcome to specific supporting controls, prioritization becomes clearer and politics becomes less influential.
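If it helps to make the mapping concrete, a lightweight structure is enough; no special tooling is required. The sketch below is a minimal illustration in Python, using invented outcome and control names, of how an explicit outcome-to-control map makes thin or missing coverage visible at a glance.

```python
# Minimal sketch of an outcome-to-control map. All outcome and control names
# are hypothetical examples, not a prescribed catalog.

outcome_to_controls = {
    "limit unauthorized access": [
        "single sign-on with MFA",
        "quarterly access reviews",
        "privileged access management",
        "authentication logging and alerting",
    ],
    "keep key services available": [
        "tested backups",
        "documented recovery runbooks",
        "change control with rollback",
        "service health monitoring",
    ],
    "protect sensitive data": [],  # an empty list is itself a finding
}

# Surface outcomes with thin or missing control coverage.
for outcome, controls in outcome_to_controls.items():
    if len(controls) < 2:
        print(f"Coverage gap: '{outcome}' is supported by {len(controls)} control(s)")
```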

Next, evaluate maturity using simple, observable criteria, because maturity models only help when they can be applied consistently without endless debate. You are not trying to produce a glossy score; you are trying to determine whether a capability is dependable. Observable criteria can include whether a process is documented, whether it is consistently followed, whether it is measured, whether it is reviewed, and whether it is improved based on evidence. For technology capabilities, criteria can include whether the tool is deployed broadly enough to matter, whether it is configured correctly, whether it is maintained, and whether it produces usable signals for responders. For people capabilities, criteria can include whether roles are clear, whether training is current, whether on-call coverage exists, and whether handoffs are reliable. Simplicity is a feature here, because overly complex scoring encourages performative compliance rather than honest assessment. If criteria can be observed in normal operations, your assessment remains grounded.
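One way to keep scoring simple and consistent is to treat each criterion as a yes-or-no observation per capability. The sketch below assumes hypothetical capabilities and reuses the criteria named above purely as placeholders; it illustrates the idea, not a standard maturity model.

```python
# Minimal maturity check: each criterion is a yes/no observation made during
# normal operations. Capability names and observations are illustrative only.

CRITERIA = ["documented", "consistently followed", "measured", "reviewed", "improved"]

observations = {
    "incident escalation": {"documented": True, "consistently followed": True,
                            "measured": False, "reviewed": False, "improved": False},
    "backup restoration": {"documented": True, "consistently followed": False,
                           "measured": False, "reviewed": False, "improved": False},
}

for capability, results in observations.items():
    met = [c for c in CRITERIA if results.get(c)]
    missing = [c for c in CRITERIA if not results.get(c)]
    print(f"{capability}: {len(met)}/{len(CRITERIA)} criteria observed; missing: {missing}")
```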

As you assess maturity, check coverage, effectiveness, and operational sustainability, because a control can exist and still fail in practice. Coverage asks whether the control applies to the systems and workflows that matter, or only to a subset that makes the metrics look good. Effectiveness asks whether the control actually reduces risk in a measurable way, rather than just creating artifacts and alerts. Operational sustainability asks whether the control can be maintained over time without constant heroics, such as one person holding all knowledge or a fragile integration that breaks on every system update. Sustainability is where many programs quietly fail, because the control works in a demo or during an audit push but degrades during normal business pressure. You also want to consider how controls interact, because a strong control upstream can reduce demand on weaker controls downstream. When you evaluate these three angles together, you can distinguish between true capabilities and brittle appearances.
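As a rough illustration of reviewing a control from all three angles at once, the sketch below uses hypothetical controls and invented ratings; the 80 percent coverage cutoff is an arbitrary assumption for the example, not a recommended standard.

```python
from dataclasses import dataclass

# Illustrative record for reviewing a control from three angles.
# Control names and ratings are hypothetical.

@dataclass
class ControlReview:
    name: str
    coverage_pct: int   # share of in-scope systems the control actually reaches
    effective: bool     # evidence that it measurably reduces risk
    sustainable: bool   # maintainable without heroics or single points of knowledge

reviews = [
    ControlReview("endpoint detection agent", coverage_pct=60, effective=True, sustainable=True),
    ControlReview("manual firewall rule review", coverage_pct=95, effective=True, sustainable=False),
]

for r in reviews:
    if r.coverage_pct < 80 or not r.effective or not r.sustainable:
        print(f"Brittle capability: {r.name} "
              f"(coverage {r.coverage_pct}%, effective={r.effective}, sustainable={r.sustainable})")
```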

A useful assessment does not stop at whether something works, but also identifies bottlenecks, failure modes, and dependencies that determine how it fails. Bottlenecks are constraints that slow response, create backlog, or force risky shortcuts, such as an approval process that cannot keep up with business needs. Failure modes are the predictable ways a control breaks, such as alert fatigue causing missed detections or backup jobs succeeding without restore validation. Dependencies are the upstream services, teams, and vendors that must function for your capability to hold, such as identity providers, network segmentation, or a managed detection provider. Dependencies are not inherently bad, but hidden dependencies create surprises, and surprises in security often become incidents. This part of the assessment should be approached like engineering, where you assume components will fail and you design around that reality. If you can describe how a capability fails, you can design improvements that reduce the likelihood and impact of that failure.

After you understand your current capabilities, compare them against risk appetite and thresholds so you can judge whether the current state is acceptable or needs change. Risk appetite is the level of risk the organization is willing to accept in pursuit of mission goals, and thresholds are the points where risk becomes intolerable, such as unacceptable downtime, unacceptable data exposure, or unacceptable regulatory penalties. This comparison should be specific, not philosophical, and it should be anchored in outcomes and measures you identified earlier. If the organization says it cannot tolerate extended outages, but recovery testing is infrequent and restoration success is uncertain, the mismatch is clear. If the organization says it cannot tolerate unauthorized access to high-value systems, but privileged access review is inconsistent and detection is delayed, the mismatch is also clear. This step is where you avoid the trap of security perfectionism, because not every weakness requires immediate investment, but the weaknesses that cross thresholds do. The goal is alignment, where controls and capabilities match the risk realities of the mission.
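To keep the comparison specific rather than philosophical, it can be reduced to measured values checked against stated limits. The sketch below invents metric names, units, and threshold values purely for illustration of how a mismatch surfaces.

```python
# Sketch of comparing measured outcomes to stated risk thresholds. The metric
# names, units, and values are invented for illustration.

thresholds = {
    "recovery time (hours)": 4,          # outages beyond this are intolerable
    "time to detect (hours)": 24,
    "privileged accounts without review (%)": 5,
}

measured = {
    "recovery time (hours)": 12,
    "time to detect (hours)": 18,
    "privileged accounts without review (%)": 22,
}

for metric, limit in thresholds.items():
    value = measured.get(metric)
    if value is not None and value > limit:
        print(f"Mismatch: {metric} is {value}, threshold is {limit}")
```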

With mismatches identified, look for quick wins that deliver disproportionate improvement, because early momentum matters and not every improvement requires a large program. Quick wins are changes that reduce risk meaningfully with modest effort, such as tightening a misconfiguration that exposes systems, improving alert routing so high-severity events reach the right responder, or standardizing a recurring workflow that currently relies on informal knowledge. Quick wins often come from removing friction rather than adding new steps, such as clarifying ownership, simplifying escalation, or eliminating duplicate tooling that creates confusion. They also come from making a control more sustainable, such as automating routine checks or improving documentation so knowledge is not trapped in one person’s head. A quick win should be chosen because it improves a mission outcome, not because it is easy to show off. When quick wins are tied to outcomes, they build credibility for deeper investments that take longer to deliver.

At the same time, be honest about structural gaps that require strategic investment, because capability gaps often reflect missing foundations rather than missing effort. Structural gaps might include insufficient telemetry to support detection, lack of identity governance to support access assurance, incomplete asset inventory that undermines control coverage, or understaffed response capacity that cannot meet risk thresholds. These gaps cannot be solved by writing a new policy or by asking teams to work harder. They require resourcing, architectural changes, and leadership commitment, and that is why it is important to flag them clearly rather than hiding them inside a long list of minor issues. Strategic investment should be framed in terms of mission outcomes, such as reducing time to contain incidents, increasing recovery reliability, or reducing high-impact exposure pathways. When leaders can see the connection between investment and mission risk reduction, decisions become more straightforward. Structural gaps are uncomfortable, but naming them is part of being a professional.

Document evidence as you go, but keep findings concise and actionable, because long reports often become shelfware. Evidence should show how you reached your conclusions, such as observed behaviors, sample data, process artifacts, incident patterns, and control test results. You are not trying to build a legal case, but you are trying to make the assessment defensible and repeatable. Findings should be written so a reader can understand the issue, why it matters, what outcome it affects, and what improvement would change the risk posture. Concision matters because leaders and operational teams need to act, and action requires clarity. If you bury the critical points under excessive narrative, you force readers to interpret rather than decide. Actionable documentation also enables progress tracking, because you can revisit the same findings later and measure improvement against the baseline.
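A fixed set of fields per finding is one way to keep documentation concise, comparable, and trackable against the baseline. The sketch below is an assumed structure with placeholder content, not a required template.

```python
from dataclasses import dataclass

# One possible shape for a concise, actionable finding. Field contents are
# placeholders, not real assessment results.

@dataclass
class Finding:
    issue: str                # what was observed
    evidence: str             # how the conclusion was reached
    outcome_affected: str     # which mission outcome it touches
    why_it_matters: str
    recommended_change: str

finding = Finding(
    issue="Restore tests are not performed after backup jobs",
    evidence="No restore-test records for the last two quarters",
    outcome_affected="keep key services available",
    why_it_matters="Backups may not be usable when an outage occurs",
    recommended_change="Schedule and record quarterly restore tests for critical systems",
)

print(finding)
```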

Once the assessment is documented, socialize results in a way that invites targeted challenge sessions rather than passive review. Socializing is not about selling the results; it is about validating accuracy and building shared ownership of the next steps. Targeted challenge sessions should include the teams that own the systems and processes being assessed, because they can confirm reality, explain constraints, and identify unintended consequences of proposed changes. These sessions also surface disagreements about risk tradeoffs, and it is better to surface those disagreements early than to discover them during implementation. The tone should be collaborative and evidence-based, because defensiveness usually indicates that the assessment feels like judgment rather than improvement. When you invite challenge, you increase credibility, and you improve the quality of the final baseline. A baseline that is jointly acknowledged is far more useful than a baseline that is technically correct but politically rejected.

With validated findings, prioritize remediation aligned to mission outcomes, because prioritization is the difference between improvement and endless backlog. Remediation should be ordered by risk impact, outcome relevance, and feasibility, with explicit recognition of dependencies and sequencing. Some remediations unlock others, such as building reliable asset inventory before expecting consistent patch management reporting, or improving identity foundations before expecting stronger access reviews. Prioritization should also consider operational sustainability, because a remediation that increases workload without improving workflow will degrade over time. The best prioritization produces a small number of initiatives that move the needle on the most important outcomes, supported by quick wins that reduce immediate exposure. This is also where you clarify who owns each remediation and how progress will be measured, because accountability without measurement becomes vague aspiration. Alignment to mission outcomes keeps the program focused and makes progress visible.
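If a transparent ordering helps, the three factors can be scored and weighted explicitly. The sketch below uses invented remediation items, scores, and weights; dependencies and sequencing would still need to be layered on top of a simple score like this.

```python
# Rough prioritization sketch: score remediations on risk impact, outcome
# relevance, and feasibility (1-5 each). Items, scores, and weights are
# hypothetical; the point is a repeatable, explainable ordering.

remediations = [
    {"name": "build reliable asset inventory", "impact": 5, "relevance": 5, "feasibility": 3},
    {"name": "automate privileged access reviews", "impact": 4, "relevance": 5, "feasibility": 2},
    {"name": "consolidate duplicate scanning tools", "impact": 2, "relevance": 3, "feasibility": 5},
]

def score(item, w_impact=0.5, w_relevance=0.3, w_feasibility=0.2):
    return (w_impact * item["impact"]
            + w_relevance * item["relevance"]
            + w_feasibility * item["feasibility"])

for item in sorted(remediations, key=score, reverse=True):
    print(f"{score(item):.1f}  {item['name']}")
```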

To conclude, lock the assessment baseline so it can serve as a stable reference point, and publish an improvement plan that translates findings into a clear path forward. Locking the baseline means capturing the state of capabilities at a specific point in time, with the evidence and assumptions that make the assessment interpretable later. Publishing the improvement plan means stating what will change, who owns it, what success looks like, and how the organization will track progress toward mission-aligned outcomes. The plan should be realistic about timelines and dependencies, because credibility is lost quickly when plans ignore operational constraints. It should also preserve the reasoning that connects improvements to risk thresholds, because that reasoning is what sustains support when priorities shift. A well-constructed baseline and improvement plan turn assessment into action, and action is how security becomes a dependable mission capability rather than a collection of disconnected efforts.
