Abstract

This document specifies the conformance assessment methodology for OSSASAI. The workflow is designed to be systematic, evidence-based, and aligned with established security assessment frameworks including ISO/IEC 27001 audit methodology, NIST SP 800-53A assessment procedures, and OWASP ASVS verification approaches. The process supports both self-assessment and third-party verification scenarios.

Note (Assessment Philosophy): OSSASAI conformance assessment emphasizes verifiable evidence over procedural compliance. Each control requires specific, testable evidence artifacts that demonstrate implementation effectiveness.

Assessment Lifecycle

┌─────────────────────────────────────────────────────────────────────────────────┐
│                    OSSASAI Conformance Assessment Lifecycle                      │
├─────────────────────────────────────────────────────────────────────────────────┤
│                                                                                  │
│   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐     │
│   │  SCOPE   │──►│  ASSESS  │──►│ REMEDIATE│──►│  VERIFY  │──►│  ATTEST  │     │
│   │ Define   │   │ Baseline │   │   Gaps   │   │  Fixes   │   │ Publish  │     │
│   └──────────┘   └──────────┘   └──────────┘   └──────────┘   └──────────┘     │
│        │                                                             │          │
│        │                         CONTINUOUS                          │          │
│        │              ┌──────────────────────────────┐              │          │
│        │              │                              │              │          │
│        └──────────────│◄─────────  MONITOR  ────────│◄─────────────┘          │
│                       │   Drift Detection & Audit   │                          │
│                       └──────────────────────────────┘                          │
│                                                                                  │
└─────────────────────────────────────────────────────────────────────────────────┘

Lifecycle Phases

| Phase | Objective | Key Activities | Deliverables |
|-------|-----------|----------------|--------------|
| Scope | Define assessment boundaries | System inventory, level selection, stakeholder identification | Scope Document |
| Assess | Establish conformance baseline | Automated audit, manual verification, evidence collection | Baseline Report |
| Remediate | Address identified gaps | Prioritization, implementation, compensating controls | Remediation Evidence |
| Verify | Confirm gap closure | Re-assessment, evidence validation | Verification Report |
| Attest | Formalize conformance claim | Report generation, attestation signing | Conformance Statement |
| Monitor | Maintain ongoing conformance | Continuous verification, drift detection | Monitoring Reports |

Phase 1: Scope Definition

Objectives

Scope definition establishes the boundaries of the conformance assessment, ensuring all stakeholders share a common understanding of what is being assessed and to what standard.

Scoping Procedures

  1. System Identification

Identify and document the target system:

| Element | Description | Example |
|---------|-------------|---------|
| System Name | Formal product/service identifier | “ACME AI Assistant v2.1” |
| System Type | OSSASAI system category | Agent Framework, Copilot, Assistant |
| Deployment Model | Architecture classification | SaaS, On-premises, Hybrid |
| Version | Specific version under assessment | “2.1.0-release” |
  2. Assurance Level Selection

Determine the appropriate assurance level using the level selection criteria:

   Level Selection Decision:
   ├── Regulatory data (PII/PHI/PCI) OR public exposure? ─────► L3
   ├── Network-accessible OR team usage OR confidential data? ─► L2
   └── Local-only AND single operator AND internal data? ──────► L1
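The decision tree above can be sketched as a small Python helper. This is an illustrative encoding of the selection criteria, not part of the specification; the predicate names are my own shorthand for the questions in the tree.

```python
def select_assurance_level(regulatory_data: bool, public_exposure: bool,
                           network_accessible: bool, team_usage: bool,
                           confidential_data: bool) -> str:
    """Map the level-selection criteria above to a target level.

    Checks run from most to least stringent: any L3 trigger wins,
    then any L2 trigger; otherwise the system qualifies for L1.
    """
    if regulatory_data or public_exposure:
        return "L3"
    if network_accessible or team_usage or confidential_data:
        return "L2"
    return "L1"  # local-only, single operator, internal data
```

For example, a network-accessible system handling no regulated data and with no public exposure resolves to L2.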
  3. Boundary Definition

Document system boundaries using the OSSASAI trust boundary model:

| Boundary | In-Scope Components | Out-of-Scope Components |
|----------|---------------------|-------------------------|
| B1 (Inbound) | Input interfaces, authentication | External identity providers |
| B2 (Control Plane) | Admin UI/API, configuration | Infrastructure management |
| B3 (Tool) | Built-in tools, plugins | External APIs (responsibility shift) |
| B4 (Local State) | Credential stores, logs, memory | Backup infrastructure |
  4. Stakeholder Assignment

Assign assessment roles per RACI matrix:

| Role | Responsibilities |
|------|------------------|
| Assessment Lead | Overall coordination, schedule, report approval |
| Technical Assessor | Execute audits, collect evidence, validate controls |
| Remediation Owner | Implement control fixes, document changes |
| Attestation Authority | Review findings, sign conformance statement |

Scope Document Schema

# OSSASAI Assessment Scope Document
# Schema Version: 1.0

metadata:
  document_id: "OSSASAI-SCOPE-2026-001"
  created: "2026-01-15"
  last_modified: "2026-01-15"
  status: "approved"  # draft, review, approved

assessment:
  organization:
    name: "[Organization Name]"
    contact: "[Security Contact]"
  target_level: "L2"  # L1, L2, L3
  assessment_type: "self"  # self, third-party

target_system:
  name: "[Product Name]"
  version: "[Version]"
  type: "agent_framework"  # copilot, assistant, agent_framework, multi_agent
  deployment: "saas"  # local, saas, on_premises, hybrid

boundaries:
  b1_inbound:
    in_scope:
      - "Web chat interface"
      - "API endpoints"
      - "Slack integration"
    out_of_scope:
      - "Customer authentication systems"
  b2_control_plane:
    in_scope:
      - "Admin dashboard"
      - "Configuration API"
    out_of_scope:
      - "Cloud provider console"
  b3_tool:
    in_scope:
      - "File system tools"
      - "Shell execution"
      - "First-party plugins"
    out_of_scope:
      - "Third-party marketplace plugins"
  b4_local_state:
    in_scope:
      - "Credential storage"
      - "Session logs"
      - "Memory/context"
    out_of_scope:
      - "External backup systems"

data_classification:
  categories:
    - classification: "secret"
      examples: ["API keys", "OAuth tokens"]
    - classification: "confidential"
      examples: ["Source code", "Proprietary algorithms"]
    - classification: "internal"
      examples: ["Usage metrics", "Session logs"]

stakeholders:
  assessment_lead:
    name: "[Name]"
    email: "[Email]"
  technical_assessor:
    name: "[Name]"
    email: "[Email]"
  remediation_owner:
    name: "[Name]"
    email: "[Email]"
  attestation_authority:
    name: "[Name]"
    email: "[Email]"
    title: "[Title]"

schedule:
  scope_approval: "2026-01-15"
  assessment_start: "2026-01-20"
  remediation_deadline: "2026-02-15"
  verification_complete: "2026-02-20"
  attestation_target: "2026-02-28"
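A scope document consumer can sanity-check that the schedule milestones occur in lifecycle order. The sketch below assumes the milestone keys and ISO 8601 dates shown in the schema above; it is a convenience check, not a requirement of the methodology.

```python
from datetime import date

# Milestone keys from the schedule block above, in lifecycle order.
MILESTONES = ["scope_approval", "assessment_start", "remediation_deadline",
              "verification_complete", "attestation_target"]

def check_schedule(schedule: dict) -> list:
    """Return a list of ordering violations (empty means consistent)."""
    parsed = [(m, date.fromisoformat(schedule[m])) for m in MILESTONES]
    return [f"{later} precedes {earlier}"
            for (earlier, d1), (later, d2) in zip(parsed, parsed[1:])
            if d2 < d1]
```

Running this against the example schedule above returns an empty list; swapping any two dates surfaces a named violation.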

Phase 2: Baseline Assessment

Assessment Methodology

OSSASAI employs a hybrid assessment methodology combining automated verification with manual evaluation:

| Method | Applicability | Advantages | Limitations |
|--------|---------------|------------|-------------|
| Automated Audit | Configuration-based controls | Reproducible, fast, comprehensive | Cannot verify design intent |
| Manual Testing | Behavioral controls | Validates real-world behavior | Time-intensive, expertise required |
| Evidence Review | Policy/process controls | Verifies documentation | Requires interpretation |
| Interview | Organizational controls | Captures tacit knowledge | Subjective |

Automated Audit Execution

# Download OSSASAI audit toolkit
curl -sSL https://raw.githubusercontent.com/gensecaihq/ossasai/main/tools/ossasai-audit.sh -o ossasai-audit.sh
chmod +x ossasai-audit.sh

# Execute baseline assessment
./ossasai-audit.sh \
  --level L2 \
  --profile openclaw \
  --output-format json \
  --evidence-dir ./evidence \
  > baseline-assessment.json

# Generate human-readable report
./ossasai-audit.sh \
  --level L2 \
  --output-format markdown \
  > baseline-report.md

Assessment Output Schema

{
  "$schema": "https://github.com/gensecaihq/ossasai/tree/main/appendices/schemas/assessment-v1.json",
  "assessment": {
    "id": "OSSASAI-ASSESS-2026-001",
    "timestamp": "2026-01-20T14:30:00Z",
    "version": "0.1.0",
    "target_level": "L2",
    "profile": "OSSASAI-PROFILE-OPENCLAW-OCSAS-0.1"
  },
  "summary": {
    "total_controls": 19,
    "status_distribution": {
      "pass": 14,
      "fail": 3,
      "partial": 1,
      "not_applicable": 1
    },
    "compliance_score": 73.7
  },
  "controls": [
    {
      "id": "OSSASAI-CP-01",
      "title": "Default-Deny Control Plane Exposure",
      "requirement_level": "MUST",
      "status": "pass",
      "verification_method": "automated",
      "evidence": {
        "artifacts": ["evidence/cp-01-bind-config.json"],
        "observations": "Gateway bound to 127.0.0.1:18789"
      },
      "timestamp": "2026-01-20T14:30:15Z"
    },
    {
      "id": "OSSASAI-TB-04",
      "title": "Outbound Data Exfiltration Controls",
      "requirement_level": "MUST",
      "status": "fail",
      "verification_method": "automated",
      "finding": {
        "description": "Egress allowlist not configured",
        "severity": "high",
        "evidence": ["evidence/tb-04-egress-config.json"]
      },
      "remediation": {
        "guidance": "Configure egress.allowlist in configuration",
        "reference": "/controls/tool-blast-radius#ossasai-tb-04"
      },
      "timestamp": "2026-01-20T14:30:22Z"
    }
  ]
}
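A consumer of this report format can recompute the summary block directly from the controls array. The scoring convention below (passing controls over total controls, rounded to one decimal) is inferred from the example figures (14 of 19 yields 73.7) and is not stated normatively in the schema.

```python
import json

def summarize(report_json: str) -> dict:
    """Recompute the summary block from a report's controls array.

    compliance_score is passing controls over total controls,
    rounded to one decimal place (an inferred convention).
    """
    controls = json.loads(report_json)["controls"]
    dist = {}
    for control in controls:
        dist[control["status"]] = dist.get(control["status"], 0) + 1
    total = len(controls)
    score = round(100 * dist.get("pass", 0) / total, 1) if total else 0.0
    return {"total_controls": total,
            "status_distribution": dist,
            "compliance_score": score}
```

This kind of independent recomputation is also a useful integrity check on tool output before it feeds a dashboard or CI gate.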

Manual Assessment Procedures

For controls requiring manual verification:

Manual Assessment Control Matrix

| Control ID | Manual Check Procedure | Evidence Required |
|------------|------------------------|-------------------|
| OSSASAI-ID-02 | Test session isolation by creating two sessions and verifying no state leakage | Session test logs, screenshots |
| OSSASAI-TB-02 | Attempt high-risk operation and verify approval prompt appears | Approval workflow screenshots |
| OSSASAI-LS-03 | Inject test payload into memory and verify it cannot execute as instruction | Memory injection test results |
| OSSASAI-FV-01 | Review TLA+/TLC model and verify all invariants pass | TLC output logs, model files |

Phase 3: Gap Analysis

Gap Prioritization Framework

OSSASAI uses a risk-based prioritization framework aligned with NIST SP 800-30 (Guide for Conducting Risk Assessments):

┌─────────────────────────────────────────────────────────────────────────────────┐
│                       Gap Prioritization Matrix                                  │
├─────────────────────────────────────────────────────────────────────────────────┤
│                                                                                  │
│                              IMPLEMENTATION EFFORT                               │
│                    Low                    Medium                   High          │
│                 ┌─────────────────────┬─────────────────────┬─────────────────┐ │
│         High    │                     │                     │                 │ │
│                 │     PRIORITY 1      │     PRIORITY 2      │    PRIORITY 3   │ │
│   RISK SEVERITY │    Immediate        │      Urgent         │     Planned     │ │
│                 │                     │                     │                 │ │
│                 ├─────────────────────┼─────────────────────┼─────────────────┤ │
│         Medium  │                     │                     │                 │ │
│                 │     PRIORITY 2      │     PRIORITY 3      │    PRIORITY 4   │ │
│                 │      Urgent         │      Planned        │    Scheduled    │ │
│                 │                     │                     │                 │ │
│                 ├─────────────────────┼─────────────────────┼─────────────────┤ │
│         Low     │                     │                     │                 │ │
│                 │     PRIORITY 3      │     PRIORITY 4      │    PRIORITY 5   │ │
│                 │      Planned        │     Scheduled       │     Backlog     │ │
│                 │                     │                     │                 │ │
│                 └─────────────────────┴─────────────────────┴─────────────────┘ │
│                                                                                  │
└─────────────────────────────────────────────────────────────────────────────────┘
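The matrix above is a straightforward lookup, which can be encoded so that gap records are prioritized consistently by tooling. This is an illustrative transcription of the figure, not a normative artifact.

```python
# Matrix cells from the figure: (risk severity, implementation effort)
# mapped to a priority score, where 1 is most urgent.
PRIORITY_MATRIX = {
    ("high", "low"): 1,   ("high", "medium"): 2,   ("high", "high"): 3,
    ("medium", "low"): 2, ("medium", "medium"): 3, ("medium", "high"): 4,
    ("low", "low"): 3,    ("low", "medium"): 4,    ("low", "high"): 5,
}

PRIORITY_LABELS = {1: "Immediate", 2: "Urgent", 3: "Planned",
                   4: "Scheduled", 5: "Backlog"}

def prioritize(severity: str, effort: str):
    """Look up (score, label) for a gap, e.g. high severity with
    low effort is Priority 1 (Immediate)."""
    score = PRIORITY_MATRIX[(severity, effort)]
    return score, PRIORITY_LABELS[score]
```

The GAP-2026-001 example later in this phase (high severity, moderate effort) lands in Priority 2 under this encoding.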

Gap Documentation Schema

# OSSASAI Gap Analysis Record
gap:
  id: "GAP-2026-001"
  created: "2026-01-20"
  status: "open"  # open, in_progress, resolved, accepted_risk

control:
  id: "OSSASAI-TB-04"
  title: "Outbound Data Exfiltration Controls"
  requirement_level: "MUST"
  assurance_level: "L2"

finding:
  description: "Egress allowlist not configured; tools have unrestricted outbound access"
  severity: "high"  # critical, high, medium, low
  likelihood: "medium"  # high, medium, low
  impact: "Data exfiltration via coerced tool usage"
  evidence:
    - artifact: "evidence/tb-04-egress-config.json"
      observation: "egress.allowlist field is empty"

current_state:
  description: "All outbound network requests permitted without restriction"
  risk_level: "high"

target_state:
  description: "Egress restricted to allowlisted domains only"
  acceptance_criteria:
    - "egress.allowlist configured with approved domains"
    - "Non-allowlisted requests blocked and logged"
    - "Audit shows no unauthorized egress"

remediation:
  owner: "platform-team@example.com"
  approach: "Configure egress allowlist in gateway configuration"
  steps:
    - "Identify required external endpoints"
    - "Configure egress.allowlist with approved domains"
    - "Enable egress logging"
    - "Verify blocking of non-allowlisted destinations"
  compensating_controls: null

priority:
  score: 1  # 1-5, where 1 is highest
  justification: "High severity MUST control with moderate implementation effort"

Phase 4: Remediation

Remediation Approaches

| Approach | Description | When to Use |
|----------|-------------|-------------|
| Direct Implementation | Implement the control as specified | Standard remediation path |
| Compensating Control | Alternative control providing equivalent protection | When direct implementation is infeasible |
| Risk Acceptance | Document and accept residual risk | When control is not applicable or risk is acceptable |
| Scope Exclusion | Remove component from assessment scope | When component is out of organizational control |

Compensating Control Requirements

When direct implementation is not feasible, compensating controls MUST:

  1. Provide equivalent protection against the threat addressed by the original control
  2. Be formally documented with clear justification
  3. Include review timeline for reassessing the need for the compensating control
  4. Maintain audit trail of compensating control effectiveness
# Compensating Control Documentation
compensating_control:
  original_control:
    id: "OSSASAI-SC-02"
    requirement: "Reproducible builds and artifact pinning"

  compensating_measures:
    - measure: "Manual hash verification"
      description: "SHA256 hashes verified manually before each deployment"
      implementation: "Verification script in deployment pipeline"
      coverage_percentage: 85

    - measure: "Vendor attestation"
      description: "Signed attestation from plugin vendor of build integrity"
      implementation: "Vendor provides signed SBOM with each release"
      coverage_percentage: 70

  combined_effectiveness: "moderate"  # high, moderate, low
  residual_risk: "low"

  review:
    next_review: "2026-Q3"
    full_implementation_target: "2026-Q4"
    responsible_party: "security-team@example.com"

  approval:
    approver: "CISO"
    date: "2026-01-25"
    conditions:
      - "Maintain manual verification until automated signing available"
      - "Report any verification failures immediately"

Phase 5: Verification

Verification Procedures

  1. Re-Execute Automated Audit
   # Targeted verification of remediated controls
   ./ossasai-audit.sh \
     --level L2 \
     --controls OSSASAI-TB-04,OSSASAI-SC-02 \
     --output-format json \
     > verification-targeted.json
   
   # Full re-assessment
   ./ossasai-audit.sh \
     --level L2 \
     --full \
     --output-format json \
     > verification-full.json
  2. Validate Evidence Artifacts

Verify that evidence artifacts are:

  • Complete: All required artifacts present
  • Current: Generated within assessment period
  • Authentic: Unmodified from collection
  • Relevant: Directly supports control verification
  3. Conduct Regression Testing

Verify that remediation did not introduce regressions:

  • Re-test all controls in affected boundary
  • Verify no new findings in adjacent controls
  4. Document Verification Results
   verification:
     gap_id: "GAP-2026-001"
     verification_date: "2026-02-18"
   
     tests:
       - test: "Automated audit OSSASAI-TB-04"
         result: "pass"
         evidence: "verification/tb-04-audit.json"
   
       - test: "Manual egress blocking test"
         result: "pass"
         evidence: "verification/tb-04-manual-test.md"
         observations: "Non-allowlisted request to example.com blocked"
   
     conclusion: "Gap resolved - control implemented successfully"
     verified_by: "security-assessor@example.com"
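The Current and Authentic evidence criteria from step 2 lend themselves to automation. The sketch below assumes a manifest recorded each artifact's SHA-256 at collection time; the 30-day threshold is illustrative, not normative.

```python
import hashlib
import os
import time

MAX_AGE_DAYS = 30  # illustrative freshness threshold, not normative

def verify_artifact(path: str, expected_sha256: str) -> list:
    """Check one evidence artifact against the Current and Authentic
    criteria; returns failure descriptions (empty list means pass).
    """
    if not os.path.exists(path):
        return [f"missing: {path}"]  # fails the Complete criterion
    failures = []
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    if age_days > MAX_AGE_DAYS:
        failures.append(f"stale: {path} is {age_days:.0f} days old")
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        failures.append(f"modified: {path} hash mismatch")
    return failures
```

The Relevant criterion still requires human judgment; a script can only confirm that the artifact exists, is recent, and is unmodified.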

Phase 6: Attestation

Conformance Statement

Organizations claiming OSSASAI conformance MUST publish a formal attestation:

# OSSASAI Conformance Statement
# Schema Version: 1.0

statement:
  id: "OSSASAI-ATTEST-2026-001"
  issued: "2026-02-28"

organization:
  name: "[Organization Name]"
  address: "[Address]"
  contact: "[Security Contact Email]"

product:
  name: "[Product Name]"
  version: "[Version]"
  description: "[Brief product description]"

conformance:
  standard: "OSSASAI"
  version: "0.1.0"
  assurance_level: "L2"
  profile: "OSSASAI-PROFILE-OPENCLAW-OCSAS-0.1"
  status: "conformant"  # conformant, non-conformant, partial

assessment:
  type: "self"  # self, third-party
  assessor: "[Assessor Name/Organization]"
  methodology: "OSSASAI Conformance Assessment Methodology v1.0"
  date: "2026-02-20"

controls:
  summary:
    total_applicable: 19
    implemented: 19
    with_compensating: 1
    not_applicable: 0

  exceptions:
    - control_id: "OSSASAI-SC-02"
      status: "compensating_control"
      description: "Manual hash verification in lieu of automated pinning"
      remediation_target: "2026-Q4"

evidence:
  package_location: "[URL or reference]"
  package_hash: "sha256:[hash]"

validity:
  effective: "2026-02-28"
  expiration: "2027-02-28"  # one-year validity, reviewed quarterly
  review_frequency: "quarterly"

attestation:
  authority:
    name: "[Name]"
    title: "[Title]"
    organization: "[Organization]"
  date: "2026-02-28"
  statement: |
    I attest that the above-named product has been assessed against
    OSSASAI version 0.1.0 at Assurance Level 2 and meets all applicable
    conformance requirements as documented in this statement.

Attestation Levels

| Assessment Type | L1 | L2 | L3 |
|-----------------|----|----|----|
| Self-Assessment | ✓ Sufficient | ✓ Sufficient | ✗ Insufficient |
| Internal Audit | ✓ Preferred | ✓ Preferred | ✓ Acceptable |
| Third-Party Assessment | ✓ Optional | ✓ Recommended | ✓ Required |

Phase 7: Continuous Monitoring

Monitoring Framework

Automated Compliance Checks

Continuous Integration:

  • Run OSSASAI audit on every build
  • Fail builds that introduce regressions
  • Alert on compliance drift

Scheduled Assessment:

  • Weekly automated full audit
  • Monthly evidence refresh
  • Quarterly manual review

Drift Detection

Configuration Monitoring:

  • Track security-relevant configuration changes
  • Alert on changes affecting OSSASAI controls
  • Require re-verification after changes

Incident Triggers:

  • Security incident → Full reassessment
  • Major version release → Scope review
  • Regulatory change → Level reassessment
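One lightweight way to implement the configuration-monitoring items above is to fingerprint the security-relevant configuration at attestation time and compare on a schedule. A minimal sketch, assuming the configuration is available as a JSON-serializable structure:

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Digest a security-relevant configuration deterministically.

    Canonical JSON (sorted keys, no whitespace) keeps the hash
    stable across semantically identical documents.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def has_drifted(baseline_fingerprint: str, current_config: dict) -> bool:
    """True when the live configuration no longer matches the
    fingerprint recorded at attestation time."""
    return config_fingerprint(current_config) != baseline_fingerprint
```

A fingerprint mismatch does not distinguish benign from harmful changes; it is a trigger for the re-verification requirement above, not a verdict.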

CI/CD Integration

# .github/workflows/ossasai-compliance.yml
name: OSSASAI Compliance Check

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 0 * * 0'  # Weekly full assessment

env:
  OSSASAI_LEVEL: L2

jobs:
  compliance-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Download OSSASAI Audit Tool
        run: |
          curl -sSL https://raw.githubusercontent.com/gensecaihq/ossasai/main/tools/ossasai-audit.sh -o ossasai-audit.sh
          chmod +x ossasai-audit.sh

      - name: Run Compliance Assessment
        run: |
          ./ossasai-audit.sh \
            --level $OSSASAI_LEVEL \
            --output-format json \
            > ossasai-report.json

      - name: Check Compliance Status
        run: |
          fail_count=$(jq -r '.summary.status_distribution.fail' ossasai-report.json)
          if [ "$fail_count" != "0" ]; then
            echo "::error::OSSASAI compliance check failed: $fail_count control(s) failing"
            jq '.controls[] | select(.status == "fail")' ossasai-report.json
            exit 1
          fi

      - name: Upload Assessment Report
        uses: actions/upload-artifact@v4
        with:
          name: ossasai-assessment
          path: ossasai-report.json
          retention-days: 90

Compliance Dashboard Metrics

| Metric | Description | Target |
|--------|-------------|--------|
| Compliance Score | Percentage of passing controls | 100% for claimed level |
| Control Coverage | Controls with current evidence | 100% |
| Evidence Freshness | Average age of evidence artifacts | < 30 days |
| Drift Events | Configuration changes affecting controls | Tracked, not targeted |
| Remediation Velocity | Average gap closure time | Decreasing trend |

References

Normative References

  • OSSASAI Specification Overview (/spec/overview)
  • OSSASAI Assurance Levels (/spec/assurance-levels)
  • OSSASAI Control Catalog (/controls/overview)

Informative References

Assessment Methodologies:

  • NIST SP 800-53A: Assessing Security and Privacy Controls
  • ISO/IEC 27001:2022: Certification audit requirements
  • OWASP ASVS: Verification methodology
  • SOC 2 Type II: Trust Services Criteria assessment

Risk Management:

  • NIST SP 800-30: Guide for Conducting Risk Assessments
  • ISO 31000: Risk Management — Guidelines


OSSASAI v0.2.0 - Open Security Standard for Agentic Systems. Apache 2.0 License.
