Help & Support

Operational Support for RMF Teams

Use this page for support intake, phase-specific guidance, AI scoring methodology, and enterprise escalation contacts.

Contact Support

Knowledge Base & FAQ

Where is the full CyberTax FAQ library?

Use the dedicated FAQ route for detailed answers on AI validation, overlays, exports, and integration.

Open Guidance

How does CyberTax scoring work?

Scoring combines control narratives, mapped evidence, and explainable AI rationale with deterministic thresholds.

Open Guidance

How do I improve low-scoring controls?

Use the remediation workflow to close missing elements, strengthen narratives, and attach objective evidence.

Open Guidance

Where do I troubleshoot SAML/OIDC/LDAP login issues?

Use the identity troubleshooting matrix and validate provider metadata, redirect/ACS URLs, and certificates.

Open Guidance

How do I use mock/demo environments safely?

Follow the mock/demo guide to isolate data and clearly label non-production evidence.

Open Guidance

Where are security controls, legal disclaimers, and privacy statements documented?

Use the Security & Compliance page and linked legal/privacy statements for trust documentation.

Open Guidance

Where can I find roadmap and API reference documentation?

Use the Strategic Enhancements, Roadmap, and API Reference pages for planning and integration details.

Open Guidance

RMF Phase Help Links

Categorize / Select

Intake, CIA impact selection, overlays, and baseline setup guidance.

Open Phase Guidance

Implement

Control narrative quality, required evidence, and response completion guidance.

Open Phase Guidance

Assess

AI scoring interpretation, assessor findings, and remediation prioritization guidance.

Open Phase Guidance

Authorize / Monitor

Risk summary packaging, report exports, and ongoing monitoring cadence guidance.

Open Phase Guidance

Sample Program Showcase

Open the sample program page to review synthetic RMF data, selectable artifact examples, and phase progress.

Open Sample Program

Mock / Demo Environment Documentation

Use the dedicated guide for safe demo mode operation, sample data handling, and tenant-isolated demonstration workflows.

Open Mock/Demo Documentation

AI Scoring Methodology

  • Control narratives and mapped evidence are evaluated against control intent using deterministic model settings.
  • Outputs are schema-validated and mapped to score and status outcomes (Meets, Partial, Gap).
  • Reasoning and missing elements are stored for explainability and assessor review.
  • Fallback heuristic scoring is applied if AI execution fails (see the sketch after this list).
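A minimal sketch of that mapping, assuming a hypothetical output shape and illustrative thresholds (the names `MEETS_THRESHOLD`, `ControlScore`, and `score_control` are placeholders, not documented CyberTax APIs): schema-checked AI output is mapped deterministically to Meets / Partial / Gap, with a conservative heuristic when AI execution fails.

```python
from dataclasses import dataclass

# Illustrative thresholds; actual CyberTax cutoffs are configured in the product.
MEETS_THRESHOLD = 0.85
PARTIAL_THRESHOLD = 0.50

@dataclass
class ControlScore:
    control_id: str
    score: float          # normalized 0.0 - 1.0
    status: str           # "Meets", "Partial", or "Gap"
    reasoning: str        # retained for explainability and assessor review
    missing_elements: list

def map_status(score: float) -> str:
    """Deterministically map a numeric score to a status outcome."""
    if score >= MEETS_THRESHOLD:
        return "Meets"
    if score >= PARTIAL_THRESHOLD:
        return "Partial"
    return "Gap"

def score_control(control_id: str, ai_output: dict) -> ControlScore:
    """Schema-check the AI output; fall back to a conservative heuristic on failure."""
    required = {"score", "reasoning", "missing_elements"}
    if not ai_output or not required.issubset(ai_output):
        # Fallback heuristic (illustrative): treat a failed AI run as an unresolved gap.
        return ControlScore(control_id, 0.0, "Gap",
                            "AI execution failed; heuristic fallback applied.",
                            ["ai_rationale"])
    score = float(ai_output["score"])
    return ControlScore(control_id, score, map_status(score),
                        ai_output["reasoning"], list(ai_output["missing_elements"]))
```

The reasoning and missing elements stay attached to the result so assessors can review the rationale behind each status.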

Resolving Low-Scoring Controls

  1. Review AI reasoning and missing elements for the control.
  2. Update narrative language with concrete implementation details and boundary scope.
  3. Attach objective evidence aligned to the control statement.
  4. Re-run AI scoring and escalate persistent gaps to assessor review (see the sketch below).
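As a hedged illustration of step 4 only: the loop below assumes a hypothetical client object exposing `rescore_control` and `open_assessor_finding`; these names are placeholders, not documented CyberTax endpoints.

```python
# Hypothetical remediation helper; rescore_control and open_assessor_finding
# are illustrative placeholders, not documented CyberTax APIs.

def rescore_and_escalate(client, control_id: str, max_attempts: int = 2) -> str:
    """Re-run AI scoring after remediation and escalate persistent gaps to assessor review."""
    for attempt in range(1, max_attempts + 1):
        result = client.rescore_control(control_id)
        if result["status"] == "Meets":
            return f"{control_id}: resolved on attempt {attempt}"
        # Surface stored reasoning and missing elements to drive the next narrative/evidence edit.
        print(result["reasoning"])
        print("Missing elements:", result["missing_elements"])
    client.open_assessor_finding(control_id, reason="Persistent gap after remediation")
    return f"{control_id}: escalated to assessor review"
```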

Identity Troubleshooting (SAML / OIDC / LDAP)

Provider | Common Issue              | Operator Check
OIDC     | Redirect mismatch         | Confirm issuer URL, callback URI, and client secret reference.
SAML     | Signature/metadata errors | Verify entity ID, ACS URL, IdP metadata source, and cert references.
LDAP     | Bind/auth failures        | Validate bind DN, base DN, filters, TLS mode, and network reachability.
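For the OIDC row, one concrete operator check is to pull the issuer's standard discovery document (`/.well-known/openid-configuration`) and compare the advertised endpoints and the application's configured redirect URI against what is registered with the identity provider. The sketch below is illustrative; the issuer and callback values are placeholders, not real tenant settings.

```python
import json
from urllib.request import urlopen

# Placeholder values; substitute the tenant's actual issuer and registered callback.
ISSUER_URL = "https://idp.example.com/realms/cybertax"
CONFIGURED_REDIRECT_URI = "https://app.example.com/auth/oidc/callback"

def check_oidc_issuer(issuer: str, redirect_uri: str) -> None:
    """Fetch the OIDC discovery document and print the values an operator should
    compare against what is registered with the identity provider."""
    discovery_url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    with urlopen(discovery_url) as resp:
        metadata = json.load(resp)
    print("issuer advertised by IdP:   ", metadata.get("issuer"))
    print("authorization endpoint:     ", metadata.get("authorization_endpoint"))
    print("token endpoint:             ", metadata.get("token_endpoint"))
    # A redirect mismatch usually means this URI differs from the one the IdP has registered.
    print("redirect URI in app config: ", redirect_uri)

check_oidc_issuer(ISSUER_URL, CONFIGURED_REDIRECT_URI)
```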

Escalation Contact (Enterprise)

For high-severity incidents affecting authorization timelines, use enterprise escalation channels.