FiceCal Docs

User guide and technical reference

Documentation for FiceCal + MCP

This document explains each FiceCal section, how the model derives outputs (including AI token economics and Release 4 SLA/SLO/SLI reliability economics), and how to operationalize the same logic via MCP for assistant-driven workflows and automations. Current UI inventory: 6 capability lanes, 4 scenario demos, and reliability overlays and outputs that support resilience decisions across pricing, investment, and growth.

Who this is for

FinOps teams, SaaS founders, engineering leaders, and anyone validating cloud unit economics before scaling.

What is covered

Calculator sections, formulas, guided workflows, 4 scenario demos, AI token economics, Release 4 reliability economics (10 inputs, 8 output cards, 2 chart overlays), health logic, share-state links, and MCP tools.

How to use this page

Start with Quick Start, then move through calculator usage, then MCP setup and examples.

Quick start

  1. In Quick actions, follow the first-timer strip: Step 1: Pick your role (Finance, Operations, Architecture, Executive).
  2. Use the Start Step 2 button to jump directly to the required core fields: Total Infra Cost and Revenue / Client (ARPU).
  3. After Step 2 is complete, Step 3 unlocks your role-specific top 3 outcomes + first action. Use this as your immediate readout before deeper tuning.
  4. Then use Group B controls for model tuning and Quantify Business Value inputs: CUD, target margin, budget, forecast assumptions, savings realization, and cost avoidance.
  5. Optionally add multi-technology costs (SaaS, licensing, private cloud, data center, labor) and scope domains for normalization.
  6. Review Group C auto-calculated outputs and Group E health/recommendation cards, then use the chart section to inspect cost/revenue/profit behavior across scale.
  7. Use all 4 scenario demos (Healthy, Unhealthy, Reliability Healthy, Reliability Unhealthy) to validate expected KPI behavior before entering live portfolio assumptions.
  8. For resilience planning, enable Reliability Economics and compare reliability-adjusted profit, ARPU uplift needed, and extra clients needed at current ARPU.
Tip: Keep first-pass analysis simple: role -> two core inputs -> top outcomes. Expand "Guided path" and "More actions (advanced)" only after the initial readout is clear. Then use reliability outputs to decide whether to invest in resilience, raise pricing, or increase client volume.

Calculator sections explained

Each section below lists what you provide and what the model computes.

Group A - Core Inputs
  • You provide: current clients (n), dev cost/month, infra cost/month, ARPU, and startup planning alternatives.
  • Model computes: normalized baseline assumptions for all downstream calculations.

Group B - Optional Tuning
  • You provide: CUD discount %, target margin %, max chart clients, monthly budget, forecast growth/efficiency/drift, identified savings, realized savings, and cost avoidance.
  • Model computes: commitment savings, pricing constraints, budgeting variance checks, forecast scenarios, and value-realization ledger signals.

Group B - AI Token Economics
  • You provide: AI mode enablement, pricing mode, token rates/volumes, retry/premium mix, shared overhead, and allocation policy.
  • Model computes: AI token cost, allocated AI monthly spend, and AI cost/client outputs for blended unit-economics decisions.

Group B - Reliability Economics (SLA/SLO/SLI)
  • You provide: reliability mode enablement, target/observed availability, incident profile, penalty assumptions, and reliability investment.
  • Model computes: expected downtime, reliability failure-cost lanes, reliability-adjusted cost, reliability-adjusted profit/loss, ARPU uplift needed, extra clients needed, risk band, and data confidence.

Group B - Multi-technology Overlay
  • You provide: tech domain scope plus optional monthly SaaS/licensing/private cloud/data center/labor costs.
  • Model computes: scoped normalized technology cost, coverage, and confidence to align with multi-domain FinOps analysis.

Group C - Auto Outputs
  • You provide: nothing directly; values are derived from Groups A/B.
  • Model computes: break-even clients, min price, contribution margin, CCER, CUD savings, startup targets, budget variance, forecast margin/confidence bands, value realization ratio/gap, normalization confidence outputs, AI token economics outputs, and 8 reliability economics outputs.

Group D - Provider Curves
  • You provide: cloud provider scenarios and visible lines (including reliability overlays when reliability data is active).
  • Model computes: scaling comparisons across 9 curves: dev, infra raw, infra CUD, total cost, total + reliability, revenue, profit, profit + reliability, and revenue target.

Group E - Health + Recommendations
  • You provide: nothing; derived from current assumptions.
  • Model computes: zone score plus prioritized FinOps actions (with category filters and provider context), including reliability-aware remediation guidance when applicable.

How calculations work

The calculator uses deterministic formulas and scan-based thresholds to turn your inputs into decision-ready outputs for viability, pricing, growth planning, and reliability trade-off analysis.

Core equations

  • Revenue(n) = ARPU * n
  • TotalCost(n) = DevCost(n) + InfraCost(n)
  • BreakEven = first n where Revenue(n) >= TotalCost(n) (deterministic scan over modeled range)
  • ContributionMargin/client = ARPU - VCPU
  • CCER = Revenue / ModeledInfraSpend
  • NTC/client = sum(alpha_d * C_d) / n using selected domain scope
  • BudgetVariance = Budget - ModeledCost (headroom if positive)
  • ExpectedReliabilityFailureCost = SLA Penalty + Incident Labor + Revenue-at-Risk + Churn Risk
  • ReliabilityAdjustedCost = ExistingModeledCost + ReliabilityInvestment + ExpectedReliabilityFailureCost
  • ReliabilityAdjustedProfit = Revenue - ReliabilityAdjustedCost
  • RequiredARPU_with_rel = (ReliabilityAdjustedCost / n) * (1 + marginTarget)
  • ExtraClients_with_rel = first n where CurrentARPU * n >= (BaseCost(n) + ReliabilityLoad) * (1 + marginTarget)
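The equations above can be sketched directly in code. The following is a minimal illustration, not FiceCal's implementation: the linear cost-curve shapes, coefficient names (devCostPerClient, infraFixed), and all numeric values are assumptions for demonstration, since the calculator calibrates its own coefficients from your nRef conditions.

```javascript
// Minimal sketch of the core equations. Cost-curve shapes (linear here),
// coefficient names, and all input values are illustrative assumptions.
const ARPU = 30;
const devCostPerClient = 5;  // assumed variable dev cost per client
const infraFixed = 2400;     // assumed fixed monthly infra cost

const revenue = (n) => ARPU * n;                            // Revenue(n) = ARPU * n
const totalCost = (n) => devCostPerClient * n + infraFixed; // TotalCost(n)

// BreakEven = first n where Revenue(n) >= TotalCost(n), via deterministic scan
function breakEven(nMax) {
  for (let n = 1; n <= nMax; n++) {
    if (revenue(n) >= totalCost(n)) return n;
  }
  return null; // no break-even within the modeled range
}

// ExpectedReliabilityFailureCost = penalties + incident labor + revenue-at-risk + churn risk
const reliabilityFailureCost = ({ slaPenalty, incidentLabor, revenueAtRisk, churnRisk }) =>
  slaPenalty + incidentLabor + revenueAtRisk + churnRisk;

// ReliabilityAdjustedProfit = Revenue - (ExistingCost + Investment + FailureCost)
function reliabilityAdjustedProfit(n, investmentMonthly, failureCost) {
  const adjustedCost = totalCost(n) + investmentMonthly + failureCost;
  return revenue(n) - adjustedCost;
}

console.log(breakEven(2000)); // 96 with these illustrative numbers
```

With these placeholder inputs, a positive baseline profit can still turn negative once reliability investment and expected failure cost are loaded onto the cost base, which is exactly the trade-off the reliability outputs surface.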

Derived output behavior

  • Min price/client responds to total unit cost and target margin.
  • CUD monthly saving shows on-demand infra minus committed infra at current scale.
  • Target fields estimate required clients for a target price and required price for a target client volume.
  • Target monthly revenue is derived from target price and required clients scenario values.
  • Forecast margin band computes baseline/best/worst monthly margin using growth + cost efficiency/drift assumptions.
  • Forecast confidence band is the spread between best and worst scenario margin outcomes.
  • Total realized value combines realized savings and cost avoidance.
  • Realization ratio and residual gap compare delivered value against identified savings target.
  • Reliability outputs classify risk posture (none|low|medium|high) and report data-confidence quality.
  • Reliability chart overlays activate only when reliability data is sufficient, preventing false overlays.
  • Reliability Healthy vs Reliability Unhealthy demos make downside and investment trade-offs comparable in seconds.
  • R4 profitability/pricing/client-impact outputs help teams choose between resilience investment, ARPU uplift, and scale targets with a consistent formula base.

Quantify Business Value extension formulas

  • ForecastClients = n * (1 + growth%)
  • BaselineMargin = ForecastRevenue - ForecastCost
  • BestMargin = ForecastRevenue - (ForecastCost * (1 - efficiency%))
  • WorstMargin = ForecastRevenue - (ForecastCost * (1 + drift%))
  • ForecastSpread = BestMargin - WorstMargin
  • TotalRealizedValue = RealizedSavings + CostAvoidance
  • RealizationRatio = TotalRealizedValue / IdentifiedSavings
  • ResidualValueGap = IdentifiedSavings - TotalRealizedValue
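These formulas transcribe one-to-one into code. The sketch below uses illustrative input values (not product defaults):

```javascript
// Quantify Business Value formulas, transcribed directly from above.
function forecastBand({ forecastRevenue, forecastCost, efficiencyPct, driftPct }) {
  const baseline = forecastRevenue - forecastCost;                         // BaselineMargin
  const best = forecastRevenue - forecastCost * (1 - efficiencyPct / 100); // BestMargin
  const worst = forecastRevenue - forecastCost * (1 + driftPct / 100);     // WorstMargin
  return { baseline, best, worst, spread: best - worst };                  // ForecastSpread
}

function valueRealization({ identifiedSavings, realizedSavings, costAvoidance }) {
  const totalRealized = realizedSavings + costAvoidance;   // TotalRealizedValue
  return {
    totalRealized,
    realizationRatio: totalRealized / identifiedSavings,   // RealizationRatio
    residualGap: identifiedSavings - totalRealized,        // ResidualValueGap
  };
}

const band = forecastBand({ forecastRevenue: 9000, forecastCost: 6000, efficiencyPct: 10, driftPct: 25 });
// baseline 3000, best 3600, worst 1500, spread 2100
```

A realization ratio below 1 with a positive residual gap means identified value is still pending capture, which is the signal the execution KPIs track.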

Glossary and definitions

This glossary is consolidated here to keep public documentation in one place and avoid cross-page fragmentation.

ModeledCost

The projected monthly technology cost generated by the model for a given client volume. It combines the development decay component and infrastructure growth component for scenario analysis.

Forecast / CFO

ARPU (Average Revenue Per User)

Average revenue generated per client per month. In this calculator, ARPU is the core unit revenue assumption used to derive break-even, contribution margin, and CCER.

Unit economics

Break-even

The operating point where total monthly revenue equals total monthly cost. Above this threshold, the business becomes contribution-positive; below it, operations are loss-making.

Viability

Break-Even Clients / Minimum Viable Clients

The first client count in the modeled scan range where monthly revenue meets or exceeds monthly total cost. This is shown as the key threshold in KPI cards and chart insights.

Scale threshold

Min Price / Minimum Price

The minimum viable per-client price required to recover modeled cost and target margin at a given scale. Used for pricing floor and startup planning outputs.

Pricing

VCPU (Variable Cost Per User)

Per-client share of variable infrastructure cost. It indicates how much incremental cloud cost each additional client introduces under current assumptions.

Cost structure

Contribution Margin

ARPU minus VCPU. It measures the amount each client contributes to fixed-cost recovery and profit after covering their own variable infrastructure load.

Unit margin

CCER (Cloud Cost Efficiency Ratio)

Revenue divided by modeled on-demand cloud infrastructure cost in this calculator. A higher ratio means stronger revenue productivity per euro of modeled infra spend; low values indicate weak efficiency posture.

FinOps KPI

CUD / CUDs (Committed Use Discounts)

Commitment-based cloud discounts (e.g., 1-3 year commitments) that reduce unit infrastructure cost versus on-demand rates when workloads are predictable.

Optimization lever

SLA (Service Level Agreement)

Contractual reliability commitment with service-credit or penalty implications when delivered availability falls below agreed thresholds.

Reliability contract

SLO (Service Level Objective)

Internal reliability target used by teams to plan investment and operations before contractual SLA breaches occur.

Reliability target

SLI (Service Level Indicator)

Measured reliability outcome (for example observed availability) used to evaluate objective compliance and expected cost impact.

Reliability signal

Reliability Failure Cost

Expected monthly loss lane that combines SLA penalties, incident labor, direct revenue-at-risk, and churn-risk expected value.

Risk-adjusted cost

Reliability Risk Band

Categorical risk posture (none, low, medium, high) derived from breach gap and failure-cost share relative to adjusted cost.

Risk signal

Savings Plans

A commitment discount mechanism (primarily AWS) that lowers compute pricing for committed usage levels. Equivalent in intent to commitment models on other clouds.

Cloud pricing

Reserved Instances

Pre-purchased cloud capacity commitments that trade flexibility for lower unit cost. Used in optimization strategies alongside Savings Plans/CUDs.

Cloud pricing

Target Margin

The profit margin objective applied above modeled cost in pricing equations. It defines how much profit buffer you require beyond pure cost recovery.

Pricing policy

Target Revenue

Monthly revenue level required for the selected startup planning assumptions (target clients or target price) while meeting cost and margin objectives.

Startup planning

Budget Variance

Difference between technology budget and modeled/scoped monthly cost. Positive values indicate headroom; negative values indicate overrun risk.

CFO control

Forecast Band

Base, best, and worst margin outcomes generated from growth plus efficiency/drift assumptions. This is a deterministic scenario band, not statistical probability.

Scenario planning

Forecast Spread

Distance between best and worst forecast margins, often expressed versus revenue. Wider spread implies greater planning uncertainty and lower confidence.

Uncertainty signal

Savings Identified

Total monthly savings opportunity discovered by analysis (rightsizing, commitments, waste reduction), before realization has occurred.

Value pipeline

Savings Realized

Monthly savings already captured in spend outcomes, not just identified. This is the achieved value component used in realization tracking.

Value delivery

Cost Avoidance

Future spend prevented through proactive design, governance, or procurement decisions. Unlike realized savings, this often represents avoided increases.

FinOps value

Realization Ratio

Percentage of identified value that has been achieved through realized savings plus cost avoidance. Tracks execution quality of FinOps initiatives.

Execution KPI

Realization Gap

Remaining difference between identified value target and achieved value. A positive gap indicates value still pending capture.

Execution KPI

Normalization

Process of converting monthly costs into a comparable per-client basis across selected domains. Enables fair comparison of mixed technology cost structures.

Comparability

NTC/client (Normalized Technology Cost per Client)

Aggregate selected-domain monthly cost divided by client count. This provides a unified per-client technology cost signal across cloud and non-cloud domains.

Portfolio metric

Financial Truth mode

Baseline mode where selected domains are included with neutral weighting for transparent reporting. Used to anchor governance discussions on actual scoped spend.

Governance mode

Priority Index

Policy-weighted prioritization lens used to emphasize selected dimensions when governance requires scenario emphasis over neutral baseline representation.

Governance mode

Tech Coverage

Share of selected technology domains that currently have cost data provided. Higher coverage improves confidence in normalization outputs.

Data quality

Focus Maturity

Confidence level for scoped normalization quality (e.g., Low/Medium/High), based on data completeness and domain coverage.

Data quality

reference n / nRef

The current client baseline used for calibrating model coefficients from your present-day cost and usage conditions.

Model calibration

nMax

Maximum client count shown in chart/scanning ranges. It controls projection horizon for break-even search and curve visualization range.

Projection horizon

CFO Forecast Dashboard

The planning panel that summarizes budget checkpoints, margin scenarios, and realization trajectories for month-by-month finance review.

Finance reporting

How to use the calculator effectively

Recommended workflow

  1. Select the role that matches your decision context. This automatically aligns recommendation emphasis and guided-path framing.
  2. Complete only the two core fields first (Infra Cost + ARPU) to unlock the role preview and avoid early input overload.
  3. Review "You'll get first" outcomes and "Do this next" before touching advanced controls.
  4. Set a conservative CUD percentage first, then refine after observing sensitivity.
  5. Set monthly budget and check Budget Variance to identify immediate overrun/headroom posture.
  6. Add forecast growth, best-case efficiency, and worst-case drift to activate the forecast margin/confidence band outputs.
  7. Turn on Reliability Economics when incident and SLA posture matters; review reliability-adjusted profit, ARPU uplift, and extra-clients implications before approving scale plans.
  8. Add identified savings, realized savings, and cost avoidance to track realization ratio and residual value gap.
  9. Use output tooltips to validate the assumptions behind each metric, then cross-check with the chart curves to see where margins compress at higher client counts.
  10. Use health recommendations with category filters to prioritize short-term vs strategic actions.

Scenario comparison approach

  • Scenario A (Healthy): baseline viability and balanced efficiency posture.
  • Scenario B (Unhealthy): downside stress posture with weak margin/efficiency signals.
  • Scenario C (Reliability Healthy): resilience investment and low breach-risk reliability state.
  • Scenario D (Reliability Unhealthy): elevated outage/breach pressure and downside-heavy reliability state.
  • Compare break-even, CCER, contribution margin, reliability-adjusted profit, ARPU uplift needed, and extra-clients-needed across scenarios to align finance and operations decisions.

Health zone and recommendations

The Health section continuously evaluates your model posture and labels it as a zone with an associated score and guidance.

  • Signals include break-even behavior, efficiency posture (CCER), margin quality, and commitment coverage.
  • Recommendation cards are prioritized to guide near-term and medium-term FinOps actions.
  • Category filters (All, Infrastructure, Pricing, Marketing, CRM, Governance) let users focus on a specific execution lane.
  • No-break-even conditions now surface strategic guidance beyond pure infra optimization (pricing floor adjustment, funnel acceleration, and CRM retention/expansion actions).
  • Provider filters help tailor actions to cloud-specific optimization opportunities.
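The scoring shape can be pictured with a toy classifier. Every weight, threshold, and the score scale below is hypothetical (FiceCal's real scoring is internal); only the signal list mirrors the description above:

```javascript
// Toy zone classifier. Weights and thresholds are hypothetical; only the
// signals (break-even, CCER, margin, commitment coverage) come from the docs.
function healthZone({ hasBreakEven, ccer, marginPct, cudCoveragePct }) {
  let score = 0;
  if (hasBreakEven) score += 40;          // viability signal
  if (ccer >= 3) score += 25;             // efficiency posture (CCER)
  if (marginPct >= 15) score += 20;       // margin quality
  if (cudCoveragePct >= 30) score += 15;  // commitment coverage
  if (score >= 80) return { zoneKey: "green", score };
  if (score >= 50) return { zoneKey: "yellow", score };
  return { zoneKey: "red", score };
}
```

A model that breaks even with decent margin but weak CCER would land in a middle zone under this toy scheme, which matches the "Yellow Zone" plus "CCER below 3x" pairing shown in the MCP example output later in this page.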

Share-state links

The calculator can generate a URL that captures the current model state so colleagues can open the same assumptions without re-entering inputs, preserving analysis context for review meetings and handoffs.

  • Includes all active numeric inputs, selected tech domains, selected providers, and curve visibility toggles.
  • Includes Quantify Business Value controls (budget, forecast assumptions, savings realization, cost avoidance).
  • Also includes AI token economics settings (pricing mode, rates, volumes, retry/premium mix, overhead, allocation policy).
  • Also includes reliability economics settings (SLO/SLI assumptions, incident economics, and reliability investment).
  • Includes all curve toggles, including reliability overlays (total-rel, profit-rel) when enabled.
  • Also includes UI context: selected role (ui) and interface mode (um) so shared links reopen in the same decision lens.
  • The URL contains an encoded ?state= token.
  • On load, the app decodes and validates this state, then recalculates outputs, chart, health, and recommendations.
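The decode-and-validate flow can be pictured as a simple round trip. FiceCal's actual ?state= token format is an implementation detail not specified here; the base64url-encoded JSON below is an assumed stand-in used only to show the pattern, with field names taken from the decoded-state example in the MCP section:

```javascript
// Hypothetical share-state round trip. The real FiceCal token encoding is
// not documented here; base64url JSON is an assumed stand-in (Node >= 15.7).
function encodeState(state) {
  return Buffer.from(JSON.stringify(state)).toString("base64url");
}

function decodeState(token) {
  const state = JSON.parse(Buffer.from(token, "base64url").toString("utf8"));
  if (state.v !== 1) throw new Error("unsupported share-state version");
  return state; // caller then recalculates outputs, chart, health, recommendations
}

const token = encodeState({
  v: 1,
  ui: "operations",                       // selected role
  um: "operator",                         // interface mode
  i: { infraTotal: "2400", ARPU: "30" },  // numeric inputs
  td: ["cloud", "saas"],                  // selected tech domains
  p: ["aws"],                             // selected providers
  h: [],                                  // hidden curves
});
const shareUrl = `https://example.invalid/?state=${token}`;
```

The version check on decode is what lets a loader reject stale or tampered tokens instead of silently restoring a partial state.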

MCP overview

The same FiceCal model is available through a Model Context Protocol server so AI assistants can call the model directly in workflows, audits, and planning automations.

  • Transport: JSON-RPC over stdio in the current implementation.
  • Core logic is shared between browser model behavior and MCP tool handlers for unit economics, reliability economics, health scoring, recommendation prioritization, and state encoding/decoding.
  • finops.calculate supports multi-technology scope through techDomains and optional non-cloud monthly cost inputs, plus reliability inputs/outputs for SLA/SLO/SLI parity.
  • MCP v0.3.0 keeps share-state parity with the calculator by supporting UI role/mode context (uiIntent, uiMode) in generated state tokens.
  • finops.recommend supports category-aware filtering and can include strategic pricing/marketing/CRM recommendations when inputs are supplied.
  • Useful for repeatable analysis, not just manual UI interactions.

MCP tools and contracts

finops.calculate
  • Purpose: full model execution with normalized inputs and optional UI context.
  • Typical output: calculated outputs (including normalization and reliability snapshots), health, recommendations, and an optional state token that can preserve ui/um context.

finops.health
  • Purpose: posture-only analysis for the same assumptions.
  • Typical output: zone, score, and failed checks.

finops.recommend
  • Purpose: action planning output.
  • Typical output: prioritized recommendations filtered by zone/provider/category, with optional strategic business recommendations when inputs are supplied.

finops.state.encode
  • Purpose: create share-state tokens from assumptions plus optional role/mode context.
  • Typical output: encoded token for URL embedding with normalized ui, um, inputs, domains, providers, and hidden curves.

finops.state.decode
  • Purpose: decode and validate share-state tokens.
  • Typical output: restored assumptions payload.

MCP setup and usage

Local setup

  1. Clone finops-calculator or finops-calculator-mcp.
  2. Install dependencies in the MCP workspace.
  3. Run tests to verify contract/protocol behavior.
  4. Run parity tests to verify that share-state constants, reliability fixtures, and UI option values have not drifted from index.html.
  5. Register MCP server command in your client config.
# Example command path for MCP client config
node /absolute/path/to/mcp/server/index.js

Client connection references

Use the MCP connection examples from the finops-calculator-mcp repository (for example, github.com/duksh/finops-calculator-mcp) for Cursor, Windsurf, and Claude Desktop integration.
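For Claude Desktop, registration typically follows the standard mcpServers shape in claude_desktop_config.json. The server name and path below are placeholders, so defer to the repository's own connection examples for exact values:

```json
{
  "mcpServers": {
    "ficecal": {
      "command": "node",
      "args": ["/absolute/path/to/mcp/server/index.js"]
    }
  }
}
```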

Example MCP request/response shape

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "finops.calculate",
    "arguments": {
      "inputs": {
        "nRef": 120,
        "devPerClient": 500,
        "infraTotal": 2400,
        "techDomains": ["cloud", "saas"],
        "costSaaS": 600,
        "reliabilityEnabled": "on",
        "sloTargetAvailabilityPct": 99.9,
        "sliObservedAvailabilityPct": 99.7,
        "incidentCountMonthly": 3,
        "mttrHours": 1.2,
        "incidentBlendedHourlyRate": 100,
        "criticalRevenuePerMinute": 25,
        "arrExposedMonthly": 70000,
        "slaPenaltyRatePerBreachPointMonthly": 3500,
        "reliabilityInvestmentMonthly": 1500,
        "startupTargetPrice": 35,
        "cudPct": 30,
        "margin": 15,
        "nMax": 2000
      },
      "providers": ["aws"],
      "uiIntent": "operations",
      "uiMode": "operator",
      "options": {
        "includeHealth": true,
        "includeRecommendations": true,
        "includeStateToken": true
      }
    }
  }
}

Decode state token and restore UI context

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "finops.state.decode",
    "arguments": {
      "stateToken": "..."
    }
  }
}
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "state": {
      "v": 1,
      "ui": "operations",
      "um": "operator",
      "i": { "infraTotal": "2400", "ARPU": "30" },
      "td": ["cloud", "saas"],
      "p": ["aws"],
      "h": []
    }
  }
}

Response shape for the finops.calculate request above (id 1):
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "outputs": {
      "breakEvenClients": 51,
      "minPricePerClient": 27.79,
      "reliability": {
        "expectedReliabilityFailureCostMonthly": 10887.5,
        "reliabilityAdjustedCostMonthly": 15487.5,
        "reliabilityRiskBand": "medium",
        "reliabilityDataConfidence": "high"
      },
      "normalization": {
        "selectedDomains": ["cloud", "saas"],
        "coveragePct": 100,
        "normalizedTechCostPerClient": 25,
        "confidence": "High"
      }
    },
    "health": {
      "zoneKey": "yellow",
      "zoneTitle": "Yellow Zone - Needs Improvement",
      "score": 72
    },
    "recommendations": [
      {
        "title": "CCER below 3x",
        "category": "governance",
        "priority": "high"
      }
    ],
    "stateToken": "..."
  }
}

Recommendation tool with category filter

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "finops.recommend",
    "arguments": {
      "zoneKey": "red",
      "providers": ["aws"],
      "category": "marketing",
      "inputs": {
        "nRef": 80,
        "infraTotal": 2400,
        "startupTargetPrice": 35
      }
    }
  }
}

Troubleshooting

Calculator page issues

  • Unexpected output values: verify units (monthly costs, not annual).
  • Chart looks flat: increase chart scale or adjust ARPU/cost assumptions.
  • Budget Variance remains blank: provide both modeled cost inputs and a positive monthly budget.
  • Forecast band is missing: add ARPU (or startup-derived ARPU) and at least one forecast assumption.
  • Realization ratio is missing: identified savings must be greater than zero to compute ratio/gap.
  • Shared link does not restore: ensure full URL including ?state= is copied.
  • Role/mode not matching after opening link: regenerate the link after selecting role and mode so ui/um are captured.

MCP issues

  • Tools not visible: confirm server command path and restart the MCP client.
  • Tool call errors: validate argument types against the schema definitions in the finops-calculator-mcp repository.
  • Reliability outputs are blank: set reliabilityEnabled to on and provide at least SLO/SLI baseline assumptions.
  • Unexpected decode errors: ensure the token is complete and unmodified.
  • MCP parity failures: run npm run test:parity in finops-calculator-mcp/server and inspect drift in share-state version or UI option constants.
Last updated: v2026.02.24-01

Reference author: Duksh Koonjoobeeharry (FinOps & AWS/GCP Cloud Solution Developer, DBA Researcher) · LinkedIn