TrailBlaze Adventures · CISSP Domain 6 Case

Security Assessment and Testing

A classroom and workshop case about vulnerability assessment, penetration testing, control validation, audit evidence, security metrics, and continuous monitoring.

Scenario — TrailBlaze Adventures Security Testing Before Global Launch

TrailBlaze Adventures is preparing a major platform release before peak travel season. The release includes new mobile features, partner APIs, social-platform improvements, payment changes, and IoT integration for GPS trackers and smart wearables.

The release is business-critical because TrailBlaze wants to expand into new markets quickly. However, several previous incidents have raised concern about whether security testing is deep enough and whether controls are actually working as intended.

Current testing concerns

  • Development teams run automated tests, but security test coverage differs across web, mobile, API, and IoT components.
  • Vulnerability scans are performed monthly, but findings are not consistently prioritized or remediated.
  • Penetration testing has focused mostly on the public website, not mobile offline mode, partner APIs, or cloud misconfigurations.
  • Some security controls exist in documentation, but nobody has recently validated whether they work in production-like conditions.
  • Compliance teams need audit evidence, but test results, exceptions, and remediation decisions are scattered across tools.

Business pressure

  • Marketing wants the new social features launched before the travel season starts.
  • Operations wants GPS and emergency features tested without disrupting active expeditions.
  • Developers are concerned that late security findings will delay the release.
  • Management wants a clear risk-based recommendation: launch, delay, or launch with compensating controls.
  • External partners want API access quickly, but security teams need confidence in integration testing.

Management request: “Create an assessment and testing approach that gives us evidence-based confidence before launch and helps us prioritize what must be fixed first.”

Student assignment

1. Investigate the case

Analyze the TrailBlaze release scenario and identify key challenges related to security assessment and testing.

  • Which systems, applications, APIs, and infrastructure components require security testing?
  • Which testing methods are most appropriate for web, mobile, cloud, API, IoT, and social-platform components?
  • Which controls need validation before the release decision?
  • How should findings be prioritized, reported, and tracked to remediation?
  • What evidence is needed for audit, compliance, and management decision-making?
2. Identify Domain 6 challenges

Group your findings under vulnerability assessment, penetration testing, test coverage, control validation, compliance testing, reporting, and continuous monitoring.

3. Link challenges to Domain 6 concepts

Connect each identified challenge to CISSP Domain 6 concepts and explain why that concept is relevant for assessing TrailBlaze security.

Deliverable: A structured list of at least 10 security assessment and testing challenges, each linked to one or more Domain 6 concepts.

Domain 6 challenges to investigate

Assessment Scope and Coverage

  • Testing has focused on the website but not the full ecosystem.
  • Partner APIs, mobile offline mode, IoT firmware, and cloud configurations may be under-tested.
  • Students must determine what should be included in a complete assessment scope.

Testing Methods

  • Different components require different methods such as SAST, DAST, API testing, mobile testing, and configuration review.
  • Automated tests may miss business logic flaws or authorization weaknesses.
  • Manual testing may be needed for complex workflows and abuse cases.
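As a concrete illustration of an abuse case that automated scanners tend to miss, the sketch below table-tests horizontal access control against a hypothetical in-memory model of a partner booking API. All identifiers, the `can_read` rule, and the test cases are invented for the exercise; a real test would drive the deployed API instead.

```python
# Hypothetical in-memory model of the partner booking API's authorization
# rule, used to table-test horizontal access control (an IDOR-style check
# that generic scanners often miss).
BOOKINGS = {"b1": {"owner": "partner-a"}, "b2": {"owner": "partner-b"}}

def can_read(caller: str, booking_id: str) -> bool:
    """A caller may read a booking only if they own it."""
    booking = BOOKINGS.get(booking_id)
    return booking is not None and booking["owner"] == caller

# Each case: (caller, booking, expected). The abuse case is one partner
# requesting another partner's booking by guessing its identifier.
cases = [
    ("partner-a", "b1", True),
    ("partner-a", "b2", False),  # horizontal privilege escalation attempt
    ("partner-b", "b1", False),
]

for caller, booking_id, expected in cases:
    assert can_read(caller, booking_id) == expected, (caller, booking_id)
print("authorization matrix holds")
```

The point for students: authorization weaknesses are properties of business rules, so they need explicit, case-by-case tests rather than signature-based scanning.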

Vulnerability Management

  • Scan findings are not consistently prioritized or remediated.
  • False positives and false negatives may distort risk decisions.
  • There is no clear deadline model for fixing critical vulnerabilities before launch.
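One way to make prioritization concrete is a simple scoring model. The sketch below assumes an illustrative formula (CVSS-like base severity weighted by internet exposure and asset criticality); the weights and the sample findings are invented for the exercise, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    cvss: float             # base severity, 0.0 to 10.0
    internet_facing: bool
    asset_criticality: int  # 1 (low) .. 3 (business-critical)

def risk_score(f: Finding) -> float:
    """Combine severity, exposure, and asset value into one sortable number."""
    exposure = 1.5 if f.internet_facing else 1.0
    return f.cvss * exposure * f.asset_criticality

findings = [
    Finding("Verbose error on internal admin page", 5.3, False, 1),
    Finding("IDOR in partner booking API", 8.1, True, 3),
    Finding("Outdated TLS on GPS tracker gateway", 6.5, True, 2),
]

# Highest combined risk first: this ordering is the remediation queue.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):5.1f}  {f.title}")
```

Even a toy model like this forces the conversation the case is after: two findings with the same scanner severity can land in very different queue positions once exposure and asset value are factored in.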

Control Validation and Audit Evidence

  • Some documented controls may not be functioning in practice.
  • Audit evidence is scattered across tools and teams.
  • Management needs defensible evidence to support launch decisions.

Continuous Monitoring

  • Security testing should not stop after launch.
  • New risks may appear when customers, partners, and guides use the platform at scale.
  • Continuous monitoring should feed future assessments and testing priorities.
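One building block of continuous monitoring is configuration-drift detection against an approved baseline. The sketch below uses invented setting names and values purely for illustration; in practice the live snapshot would come from a cloud provider API or configuration-management tool.

```python
# Minimal configuration-drift check: compare a live config snapshot
# against the approved security baseline and report deviations.
baseline = {
    "storage_public_access": False,
    "tls_min_version": "1.2",
    "mfa_required": True,
}

live = {
    "storage_public_access": True,   # drifted: bucket opened for a demo
    "tls_min_version": "1.2",
    "mfa_required": True,
}

def detect_drift(baseline: dict, live: dict) -> dict:
    """Return every setting whose live value deviates from the baseline."""
    return {k: (v, live.get(k)) for k, v in baseline.items() if live.get(k) != v}

for setting, (expected, actual) in detect_drift(baseline, live).items():
    print(f"DRIFT {setting}: expected {expected!r}, found {actual!r}")
```

Each drift report is both an alert and an input to the next assessment cycle, which is the feedback loop this bullet list describes.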

Risk-Based Release Decision

  • The organization needs to decide whether to launch, delay, or launch with compensating controls.
  • Security findings must be translated into business impact and risk language.
  • Students must recommend a decision based on evidence, severity, and operational impact.
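The decision logic above can be sketched as a toy scorecard that rolls unresolved finding counts into the three options management asked for. The thresholds are illustrative classroom assumptions, not a standard; real gates would also weigh compensating controls and business context.

```python
# Toy release scorecard: map raw counts of unresolved findings by
# severity to a go/no-go recommendation. Thresholds are illustrative.
def release_recommendation(open_findings: dict) -> str:
    """open_findings maps severity name -> count of unresolved issues."""
    if open_findings.get("critical", 0) > 0:
        return "delay"
    if open_findings.get("high", 0) > 0:
        return "launch with compensating controls"
    return "launch"

print(release_recommendation({"critical": 0, "high": 2, "medium": 7}))
```

Students can debate the thresholds themselves: whether a single high-severity finding should force a delay is exactly the risk-versus-business-pressure trade-off in the scenario.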

Link challenges to Domain 6 concepts

Students must connect each identified challenge to CISSP Domain 6 concepts.

Challenge · Domain 6 concept · Explanation

  • Testing only covers the public website (Security Assessment / Security Test Plan): A complete test plan defines scope across web, mobile, API, cloud, IoT, and social-platform components.
  • Automated scans identify many findings without prioritization (Vulnerability Assessment / Risk-Based Testing): Findings must be prioritized by exploitability, business impact, exposure, and affected assets.
  • Mobile offline workflows are not deeply tested (Dynamic Analysis / Manual Testing): Runtime and manual testing are needed to evaluate offline synchronization, cached data, and workflow abuse.
  • Source code changes are deployed quickly (SAST / Code Review): Static analysis and code review help detect vulnerabilities before code reaches production.
  • Partner APIs may expose excessive data (Penetration Testing / Authorization Testing): API testing should verify authentication, authorization, rate limiting, and data exposure risks.
  • Cloud storage and network settings differ between regions (Configuration Review / Security Baseline): Configuration reviews compare systems against approved baselines and detect insecure deviations.
  • Documented controls have not recently been validated (Security Control Testing / Control Validation): Controls must be tested to verify they actually function as intended in realistic conditions.
  • IoT firmware update mechanisms need assurance (Security Testing / Threat Simulation): Testing should simulate realistic attacks against firmware update, enrollment, and communication processes.
  • Audit evidence is scattered across tools (Security Assessment Report / Compliance Testing): Assessment results must be documented clearly for compliance, governance, and management decisions.
  • The launch decision requires clear security evidence (Security Metrics / Security Scorecard): Metrics translate technical findings into risk indicators that support launch, delay, or mitigation decisions.
  • Security risks continue after launch (Continuous Monitoring / Risk Monitoring): Continuous monitoring detects new vulnerabilities, configuration drift, and emerging threats after deployment.
  • False positives slow the remediation process (Tool Validation / Security Review): Findings need validation so teams focus on real risks instead of inaccurate tool output.

Learning outcomes

Outcome 1: Define scope

Identify which systems, data flows, applications, APIs, and infrastructure components require assessment.

Outcome 2: Select test methods

Choose appropriate assessment methods such as scanning, penetration testing, code review, SAST, DAST, and configuration review.

Outcome 3: Validate controls

Evaluate whether security controls are implemented correctly and provide evidence that they operate effectively.

Outcome 4: Report risk

Translate technical findings into prioritized remediation actions and management-level release recommendations.

Instructor tip

Use this case in three phases:

Phase 1: Scope the assessment

Students map systems and decide which components require testing before release.

Phase 2: Select methods

Students assign testing techniques to each component and justify their choices.

Phase 3: Make a release decision

Students use findings to recommend launch, delay, or launch with compensating controls.