McFadin Advisory

Architecture Review and Audit

A structured diagnosis from people who were in the conversations when your stack’s assumptions got set. Two to four weeks. Fixed fee. A written report, an executive readout, a technical readout for your team, and a prioritized list of what to fix first.

For
Cassandra-at-scale leads, platform engineering leaders, VC and PE partners running technical due diligence, and CFOs who suspect their cloud bill is structurally wrong.

Engagement shape
Fixed-fee diagnostic, two to four weeks. Written report plus executive and technical readouts. No hourly billing, no scope creep.

The problem

You inherited a system. Or you are about to commit to one. Or you are three months into a migration and something feels off, but the people who designed the original plan are also the people you would ask to validate it. That is a closed loop, and closed loops get expensive.

Here is what we keep finding. The shape of your data platform reflects what a vendor sells, not what your workload actually needs. The data model was right at launch and wrong at 10x. The cluster is tuned for a workload that no longer exists. The migration plan understates cutover risk by a factor of three. The cloud bill grows every quarter and nobody on the team can explain which line item is load-bearing and which is paying for somebody else’s roadmap.

We have reviewed more than 100 architectures. The failure modes repeat. A two-to-four-week review will not find everything. It will find the five or six things that matter, in an order you can act on, with the cost picture written down so nobody can say later they did not know.

What we diagnose

We offer four scope options:

  • Cassandra cluster review — topology, data model, JVM and GC, compaction, upgrade path, cost-per-workload
  • Data platform architecture review — end-to-end data platform, batch and streaming, storage layer, vendor lock-in audit, full cost picture
  • Migration readiness review — source system, target system, cutover plan, rollback plan, total migration cost
  • AI data infrastructure review — vector store, retrieval, feature pipelines, training data lineage, inference cost

Every review is led by a senior operator with decades of experience running data systems in production. We do not hand off to juniors.

What you get

Every review delivers the same core set of artifacts:

  • A written report, 20 to 40 pages, structured around findings and prioritized recommendations
  • A cost picture section — current spend mapped against what the workload should cost on a sanely designed stack
  • A 90-minute executive readout for your leadership
  • A 60-minute technical readout for your engineering team
  • A prioritized action list your team can run with the next morning
  • A 30-day follow-up call for questions that surface during implementation

How it works

Week 0 — Scope. A 60-minute call to confirm the right scope, the right systems, and the right people to talk to. We sign an NDA and a statement of work. You share docs.

Week 1 — Discovery. We read everything you give us: architecture docs, recent incidents, on-call runbooks, the current roadmap, the data model, the cloud bill broken down by service. We run targeted diagnostics or watch your team run them. We interview three to six of your engineers, 30 minutes each.

Week 2 — Analysis. We write. We come back with specific questions — not a survey, but the questions we only know to ask after a week inside the system.

Weeks 3 to 4 — Report and readouts. Draft report on Monday, comment-and-revise cycle midweek, final report Thursday, readouts Thursday and Friday.

Day 30 — Follow-up. One hour, anything you want to discuss from the report.

The decisions downstream of a bad architecture call are worth ten to a hundred times the cost of the review. Most reviews pay for themselves in the first quarter through the cost picture alone.

Questions teams ask

Why is this modeled on Jepsen?
Because Kyle Kingsbury figured out that the highest-trust shape for a deep technical review is a published, self-contained report. We borrowed the shape. The difference is that Jepsen reports are public research. Our reports are private to the client unless both sides agree to publish.

Will the report tell us what our stack should cost?
Yes. Every review includes a section comparing your current spend against what your workload should cost on a sanely designed stack. That number is usually the most-quoted line in the report.

Can the report be published as a case study?
Yes, if both sides want that, after a redaction pass. Many clients prefer private reports.

Can you review a system you helped design?
No. If we already designed it, we are not the right people to audit it.

What if you find something so bad we need you to stay and fix it?
Then we scope a Tiger Team engagement separately. The review fee is still the review fee.

Do you do security audits?
No. We will flag obvious issues. Security audits are a separate discipline, and you want a firm that does only that.

Let’s look at it together.

Bring us whatever you’re wrestling with — a new architecture, a migration plan, a bill that keeps growing, a team that needs a sounding board. Thirty minutes, no pitch. We’ll tell you what we see.