§ · journal

Maximizing sales: ecommerce advisory strategies.

What an advisory engagement actually looks like at $1M to $10M ecommerce scale - the 90-day cadence, the working-session format, the deliverables by week, the five metrics that matter, and the renew-graduate-fire decision.

§ 01 · TL;DR

Ninety days. Four artifacts. One decision log.

A real ecommerce advisory engagement at $1M to $10M scale ships four concrete artifacts inside the first 90 days - a written diagnostic by week 2 or 3, a quantified prioritization of the top 8 to 12 initiatives, a metrics dashboard the in-house team can read without the advisor present, and a working-session cadence calendar. The cadence is three rhythms: a 60 to 90 minute weekly working session, a monthly leadership review, and a quarterly business review with a written renew-graduate-fire decision. The five metrics that carry the engagement are blended CAC, contribution-margin LTV (not revenue LTV), contribution margin per order, payback period, and channel ROAS with proper customer-level deduplication. Most $1M to $10M operators already track revenue, gross margin, and ROAS but miss the contribution-margin layer, and the missing data is what causes them to scale unprofitable channels. The renew-graduate-fire decision is harder than it sounds because operators get attached to the rhythm; a written quarterly decision forces the question. Advisory differs from fractional CMO and from agency retainer in scope, output, and accountability - advisory ships strategy and dashboards that the in-house team executes, fractional CMOs ship the marketing work directly, agencies ship specific tactical campaigns. The piece below is the engagement-mechanics playbook for operators already inside an advisory relationship who want to know what good looks like.

Fig. 01 · ninety-day engagement timeline · weekly, monthly, quarterly cadence.
§ 02 · what an advisory engagement actually does

Strategy, dashboards, decisions. Not campaigns.

The first thing an operator inside an active advisory engagement should be able to answer is what the advisor actually owns. The wrong answer is "growth" - growth is a metric, not a deliverable. The right answer is the four artifacts a real engagement ships inside 90 days, plus the working rhythm that produces them.

Advisory differs from fractional CMO work and from agency retainer work in three concrete ways. Scope. Advisory covers the full operating system of the ecommerce business - channel mix, contribution margin, retention mechanics, supply chain, hiring, technology - rather than just the marketing function. Output. Advisory ships strategy documents, decision frameworks, and metric dashboards that the in-house team executes; agencies ship campaigns and creative; fractional CMOs ship the marketing work directly. Accountability. Advisory is accountable for the operator's decision quality and the analytical framework, not for hitting a specific revenue target inside the engagement window. The Harvard Business Review framing of strategy versus tactics maps cleanly here - advisory is strategy plus measurement; the rest is execution.

The companion piece on essential ecommerce advisory for new entrepreneurs covers the buyer's framework - whether and when to hire an advisor in the first place. This piece picks up after the engagement is signed and the operator is asking what good looks like from the inside.

Three things to set up before the first working session. First, the in-house team has to nominate a single point of accountability on their side - usually the founder, sometimes the head of growth or the head of operations at $5M-plus scale. Advisory engagements with three or four operator-side stakeholders usually drift because no one person owns the implementation. Second, the operator has to commit to weekly cadence discipline - same time, same agenda format, mandatory attendance. Third, the operator has to give the advisor real data access - read-only logins to Shopify Admin, the ad accounts, Klaviyo, the analytics stack, and ideally the financial reporting. Advisory without data access produces generic output.

§ 03 · the 90-day diagnostic phase

Diagnostic by week three. Twenty-five to forty pages. Two surprises minimum.

The diagnostic is the load-bearing artifact of the entire first quarter. Get it wrong and the rest of the engagement drifts.

A working ecommerce advisory diagnostic is a 25 to 40 page document that lands in week 2 or week 3 of the engagement. It covers six sections in order: current-state operating metrics, channel-by-channel contribution analysis, retention and cohort behavior, contribution margin by SKU, structural problems blocking growth, and a prioritized initiative pipeline.

Current-state operating metrics. Blended CAC, contribution-margin LTV at 24 months, payback period in months, gross margin, contribution margin per order, returning-customer rate at 90 days, average order value, conversion rate by template (homepage, product, collection, cart, checkout), and Core Web Vitals at the 75th percentile per web.dev. Each metric carries a 30-day, 90-day, and 12-month figure so the advisor can identify trend versus snapshot.

Channel-by-channel contribution analysis. Meta, Google, TikTok, email, organic, affiliate, partnerships - each channel gets a contribution-margin ROAS figure with proper customer-level deduplication. Most operators look at platform-reported ROAS in Shopify's marketing attribution view or in their ad platforms' native dashboards, and platform-reported ROAS is gameable. The diagnostic uses customer-level data joined back to acquisition source, deduplicated against returning customers, and the contribution-margin figure rather than the revenue figure.

Retention and cohort behavior. A 24-month cohort table showing repeat-purchase rate, contribution-margin LTV, and time-to-second-purchase by acquisition month. The cohort table is where the diagnostic identifies whether the brand has a retention problem (most $1M to $10M operators do) or a top-of-funnel problem. A flat cohort curve plus low blended CAC means top-of-funnel is fine and retention is the load-bearing initiative; a steep cohort curve plus high blended CAC means top-of-funnel is broken and the retention work is premature.

Contribution margin by SKU. The top 20 products by gross revenue plus the bottom 20 by gross revenue, each with their contribution margin, returns rate, and inventory velocity. Most $1M to $10M operators discover at least two SKUs in the top 20 by revenue that have negative or near-zero contribution margin once shipping, returns, and payment-processing costs are accounted for - and these are usually the SKUs the brand is paying to acquire customers for. The diagnostic recommends pruning or repricing those SKUs, which usually shows up as the highest-impact single change in the first 90 days.

Structural problems blocking growth. The top three to five structural issues the diagnostic surfaces - typically a mix of platform constraints, retention mechanics, channel concentration, and team capability gaps. A real diagnostic names at least two structural problems the operator did not already know about, with a sized estimate of the revenue lift from solving each.

Prioritized initiative pipeline. Eight to twelve named initiatives ranked by expected revenue lift, effort in person-weeks, and elapsed time to results. Each has a named in-house owner, a target ship date, and the metric it will move on the dashboard. The pipeline is the bridge between the diagnostic and the rest of the quarter.

An advisory engagement that produces a thinner diagnostic - 8 to 15 pages, no SKU-level analysis, no cohort table - is selling generic strategy. The 25 to 40 page bar is not arbitrary; the analytical density is what separates real diagnostic work from a strategy deck.

§ 04 · the working cadence

Weekly working session. Monthly review. Quarterly decision.

Three rhythms. Each one has a fixed format. The format is the deliverable.

rhythm 01 · weekly

The weekly working session

60 to 90 minutes. Same time every week. Mandatory attendance from the founder or operator and the named owner of whatever initiative is in flight that week. Agenda set 24 hours ahead in a shared doc. Written meeting notes circulated within 4 hours of the session. Action items tracked in the decision log with a date, an owner, and a target ship date.

Format: review last week's commitments, work the current initiative, set next week's commitments.

rhythm 02 · monthly

The monthly leadership review

90 minutes. Operator plus any senior team members the operator wants in the room - typically the head of marketing, head of operations, head of finance for $5M-plus brands. Focused on the metric dashboard, the initiative pipeline, and any structural decisions that need a longer conversation than the weekly session allows.

Format: dashboard walkthrough, pipeline status, structural decisions, off-cadence asks.

rhythm 03 · quarterly

The quarterly business review

Half-day session. Covers the prior 90-day diagnostic outcomes, the next 90-day priorities, and a written renew-graduate-fire decision signed by the operator. The QBR is the forcing function for the whole engagement; without it, advisory drifts into continuity rather than progress.

Format: outcomes recap, next-quarter priorities, written decision.

The async layer beneath the three rhythms is a Slack or email channel for between-session decisions and a shared decision log. The decision log captures every meaningful judgment call across the engagement with date, context, decision, and the metric it will move. Advisory engagements without a written decision log usually drift after the first 90 days because nobody can remember why a particular call was made, and the engagement starts re-litigating decisions instead of building on them.

Cadence discipline is the deliverable. A working session that gets rescheduled more than once a quarter for advisor-side reasons is the engagement failing in real time - the meeting cancellations are the work. The same applies on the operator side: founders who consistently miss the weekly working session or cancel for low-priority reasons are signaling that the engagement is not actually a priority, and the advisor should escalate the cadence problem at the next monthly review rather than absorb the missed sessions silently.

§ 05 · the deliverables that matter

Dashboards, runbooks, decision logs. Tools the in-house team uses after the call ends.

The metrics dashboard. Five to seven KPIs, weekly cadence, alert thresholds, owned by the in-house team. Built in Looker Studio or a Shopify-native analytics product if the operator already pays for one. The dashboard pulls from the source-of-truth systems (Shopify, the ad platforms, the email platform, the financial system) rather than from screenshots or manual exports. Most $1M to $10M brands run their reporting from Shopify Admin plus screenshots from each ad platform; the dashboard work consolidates that into a single weekly view.

Runbooks for repeat work. A runbook is a written, reproducible procedure for work that the in-house team will run more than once - a new product launch, a paid-channel test, a flow rebuild in Klaviyo, a subscription-program adjustment in Recharge. Each runbook has a named owner, a sequence of steps, the inputs required, and the metric it should move. A real advisory engagement ships 4 to 8 runbooks across the first quarter; runbook output is what allows the in-house team to graduate from the advisor over time.

The decision log. A shared document with one row per meaningful judgment call - date, context, options considered, decision, named decider, and the metric the decision will move. Decision logs prevent re-litigation and surface decision quality at the quarterly review. McKinsey's research on decision velocity in operating businesses identifies decision logs as one of the highest-leverage interventions a leadership team can install; advisory engagements that do not produce one are leaving the framework half-built.

The initiative pipeline. A live document tracking each named initiative from diagnostic through ship. Each initiative has a status (scoped, in-flight, shipped, measured, closed), a named owner, target ship date, the metric it will move, and the actual lift after measurement. The pipeline document is what the monthly leadership review walks through.

The week-2 commitment letter. A short written agreement between the advisor and the operator at the end of week 2 covering the engagement scope, the rhythm, the named owners on both sides, the dashboard metrics, and the renew-graduate-fire criteria. The commitment letter is not a contract amendment; it is an alignment document that captures what good looks like in writing so that the quarterly review has something to measure against. Advisory engagements without a commitment letter usually have ambiguous expectations and end up in renewal arguments at the quarter mark.

Three things explicitly not on the deliverables list: campaigns, creative, and execution. An advisory engagement that ships campaigns is a fractional-CMO engagement mislabeled. An advisory engagement that ships creative is an agency engagement mislabeled. An advisory engagement that owns execution end-to-end is a head-of-growth engagement mislabeled. The piece on SaaS development covers what to build internally; the piece on web development covers what to build for the storefront; advisory is the layer above both.

§ 06 · the implementation handoff

Strategy lands. Someone else does the work.

The most common point of failure in an ecommerce advisory engagement is the handoff between the strategy work and the implementation work. The diagnostic identifies a $400K revenue-lift initiative. The pipeline document names it. The in-house team commits to it at the weekly working session. Then nothing ships for six weeks because the in-house team did not have the engineering capacity, the design capacity, or the project management capacity to execute. The advisory engagement gets blamed for the lack of progress when the actual cause is an implementation gap.

Three implementation models work at $1M to $10M scale.

In-house implementation. The operator has an in-house team that can execute the initiative pipeline - typically a small marketing team plus a part-time engineering or design freelancer. Advisory ships the strategy and the runbook; the in-house team ships the work. The advantage is cost and speed; the constraint is in-house capacity, which usually maxes out around 4 to 6 simultaneous initiatives.

Agency implementation. The operator pairs the advisory engagement with one or more execution agencies - a paid-acquisition agency for ad-account work, a Shopify-development agency for storefront work, a CRO agency for conversion-rate work, an email agency for retention work. Advisory ships the strategy; the agencies ship the campaigns and the code. The advantage is scaled capacity; the constraint is coordination overhead - the operator becomes the integration point between the advisor and the agencies, and that integration burden is real. Most $5M-plus brands run this model; sub-$3M brands usually cannot afford the coordination overhead and default to the in-house model. The piece on hiring an ecommerce development agency covers when to add the agency layer.

Mixed implementation. The operator has an in-house team for retention and content, plus one or two agencies for paid acquisition and storefront engineering, plus the advisor for strategy and measurement. The most common pattern at $3M to $10M scale. The implementation map is part of the week-2 commitment letter so that nobody is wondering who does the work after the strategy lands.

The advisor's role inside any of the three models is to keep the strategy and the implementation visible to each other. A weekly working session that loses track of what the in-house team or the agencies are actually shipping turns into a strategy theatre meeting; a working session that drowns in tactical execution turns into a project-management meeting. The 60 to 90 minute format keeps both layers in the room.

For Shopify Plus brands running a build alongside the advisory engagement, the work intersects with our Shopify Plus agency service - SEO, theme work, and integration ship inside the build phase rather than as standalone tactics. The piece on business plan for ecommerce founders covers the upstream financial framing the advisory engagement plugs into.

§ 07 · five metrics every engagement should track

CAC. LTV. Contribution margin. Payback. ROAS.

The five metrics that carry the engagement, plus the three that support them. Most $1M to $10M operators track three of the five and miss the contribution-margin layer entirely.

metric 01 · CAC

Blended CAC

Customer acquisition cost blended across all paid channels - Meta, Google, TikTok, affiliates, partnerships, sponsorships - measured monthly with a 30-day rolling average to smooth weekly noise. Blended CAC is the truth metric; channel-level CAC is gameable through attribution overlap. Healthy DTC operators at $1M to $10M scale typically run blended CAC at 25 to 45 percent of contribution-margin LTV; outside that band the unit economics break.
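The rolling-average arithmetic above can be sketched in a few lines - a minimal illustration of the calculation, not production reporting code, and the daily figures are hypothetical:

```python
def blended_cac(daily_spend, daily_new_customers, window=30):
    """30-day rolling blended CAC: all-channel spend / all new customers.

    daily_spend and daily_new_customers are equal-length lists, most
    recent day last. Returns one rolling CAC figure per day once the
    window is full.
    """
    cacs = []
    for end in range(window, len(daily_spend) + 1):
        spend = sum(daily_spend[end - window:end])
        customers = sum(daily_new_customers[end - window:end])
        cacs.append(spend / customers if customers else float("inf"))
    return cacs

# hypothetical figures: $1,000/day blended spend, 25 new customers/day
spend = [1000.0] * 60
new_customers = [25] * 60
print(blended_cac(spend, new_customers)[-1])  # 40.0 = 30,000 / 750
```

Blending across all channels is what makes the figure hard to game; the per-channel view belongs in the ROAS metric, not here.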

metric 02 · LTV

Contribution-margin LTV

Gross profit per customer over a 24-month window, after COGS, payment processing, shipping, and returns - not revenue LTV which is a vanity metric. The 24-month window matters because it captures the second-purchase moment that determines repeat-customer behavior for most ecommerce categories. Operators tracking only revenue LTV usually overestimate the customer's economic value by 30 to 60 percent and scale unprofitable channels as a result.
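A sketch of the revenue-LTV versus contribution-margin-LTV gap, assuming a flat order log with hypothetical field names ('customer_id', 'months_since_acquisition', 'revenue', 'contribution_margin'):

```python
def ltv(orders, months=24, field="contribution_margin"):
    """Mean per-customer LTV inside a fixed window after acquisition.

    Keyed on contribution margin by default; pass field="revenue" to see
    the inflated revenue-LTV figure the text warns about.
    """
    totals = {}
    for o in orders:
        if o["months_since_acquisition"] < months:
            totals[o["customer_id"]] = totals.get(o["customer_id"], 0.0) + o[field]
    return sum(totals.values()) / len(totals) if totals else 0.0

# hypothetical order log for two customers
orders = [
    {"customer_id": 1, "months_since_acquisition": 0,  "revenue": 100.0, "contribution_margin": 55.0},
    {"customer_id": 1, "months_since_acquisition": 5,  "revenue": 120.0, "contribution_margin": 66.0},
    {"customer_id": 2, "months_since_acquisition": 1,  "revenue": 80.0,  "contribution_margin": 40.0},
    {"customer_id": 2, "months_since_acquisition": 30, "revenue": 90.0,  "contribution_margin": 45.0},  # outside the 24-month window
]
print(ltv(orders, field="revenue"))  # 150.0 - revenue LTV
print(ltv(orders))                   # 80.5  - contribution-margin LTV
```

Same customers, same window; the revenue figure is nearly double the contribution-margin figure, which is the overestimation that leads operators to scale unprofitable channels.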

metric 03 · contribution margin

Contribution margin per order

Gross profit minus variable costs (COGS, payment processing, shipping, returns) on a per-order basis. The most-undertracked metric in $1M to $10M ecommerce. Operators typically track gross margin (revenue minus COGS) but miss the variable-cost layer underneath, and the missing data is what causes them to scale orders that lose money on a contribution-margin basis. A clean Shopify reporting setup with shipping cost allocation and returns deduction surfaces this metric weekly.
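The per-order formula itself is simple; the discipline is deducting every variable cost, not just COGS. A minimal sketch with hypothetical order figures:

```python
def contribution_margin_per_order(revenue, cogs, payment_fees, shipping, returns_cost):
    """Contribution margin on one order: revenue minus all variable costs.

    Gross margin stops at revenue - cogs; the last three terms are the
    layer the text says most operators miss.
    """
    return revenue - cogs - payment_fees - shipping - returns_cost

# hypothetical order: healthy on gross margin, thinner on contribution margin
cm = contribution_margin_per_order(
    revenue=60.0, cogs=24.0,          # 60% gross margin
    payment_fees=2.0, shipping=9.0, returns_cost=3.0,
)
print(cm)  # 22.0 - roughly 37% contribution margin against a 60% gross margin
```

An order that looks fine at 60 percent gross margin can still lose money once shipping and returns are allocated - which is exactly the negative-margin-SKU pattern the diagnostic section describes.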

metric 04 · payback period

Payback period

Months from acquisition spend to recovery via contribution margin. Capital-efficient growth runs at 6 months or less; capital-intensive growth (subscription brands, high-AOV brands) can run 9 to 12 months and still pencil. Past 12 months, the operator is trading capital for revenue at a rate that requires either external funding or a step-change in retention to remain solvent. The piece on payback period in ecommerce covers the calculation in detail.
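The payback calculation is cumulative contribution margin against CAC; a sketch, assuming a hypothetical per-customer monthly CM curve:

```python
def payback_months(cac, monthly_cm_per_customer):
    """Months until cumulative contribution margin recovers acquisition spend.

    monthly_cm_per_customer: expected contribution margin per acquired
    customer in month 1, 2, 3, ... (a hypothetical curve). Returns the
    first month where cumulative CM covers CAC, or None if it never does
    inside the curve's horizon.
    """
    cumulative = 0.0
    for month, cm in enumerate(monthly_cm_per_customer, start=1):
        cumulative += cm
        if cumulative >= cac:
            return month
    return None

# hypothetical: $40 blended CAC, front-loaded CM curve from the first order
curve = [22.0, 8.0, 6.0, 5.0, 4.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
print(payback_months(40.0, curve))  # 4 - inside the 6-month capital-efficient band
```

Running the same curve against a higher CAC shows how quickly the figure drifts past the 12-month solvency line.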

metric 05 · channel ROAS

Channel ROAS with customer-level deduplication

Return on ad spend per channel on a 30-day attribution window with proper customer-level deduplication. Most operators look at platform-reported ROAS in their ad accounts, which is gameable - the same customer attributed to Meta, Google, and TikTok shows up three times in the channel-level reporting and inflates ROAS by 30 to 80 percent. The advisory engagement runs ROAS on customer-level data joined back to acquisition source, deduplicated against returning customers, and reported alongside contribution margin rather than revenue. BCG Insights on marketing ROI covers the deduplication problem in detail.
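The counting discipline behind deduplication can be sketched as follows - a simplified illustration, not a full attribution model; real work needs an attribution rule to pick the single channel per customer, and the field names here are hypothetical:

```python
def deduped_channel_roas(orders, attributions, spend):
    """Contribution-margin ROAS per channel with one channel per customer.

    attributions: {customer_id: channel} - each customer credited to
    exactly one acquisition source, so a customer Meta, Google, and
    TikTok all claim counts once instead of three times.
    orders: list of {'customer_id', 'contribution_margin'} dicts.
    spend: {channel: ad_spend}.
    """
    cm_by_channel = {ch: 0.0 for ch in spend}
    for o in orders:
        channel = attributions.get(o["customer_id"])
        if channel in cm_by_channel:
            cm_by_channel[channel] += o["contribution_margin"]
    return {ch: cm_by_channel[ch] / spend[ch] for ch in spend if spend[ch]}

orders = [
    {"customer_id": 1, "contribution_margin": 60.0},
    {"customer_id": 1, "contribution_margin": 40.0},  # repeat order, same customer
    {"customer_id": 2, "contribution_margin": 50.0},
]
attributions = {1: "meta", 2: "google"}  # customer-level, deduplicated
spend = {"meta": 50.0, "google": 25.0}
print(deduped_channel_roas(orders, attributions, spend))  # {'meta': 2.0, 'google': 2.0}
```

Feeding the same orders through per-platform reporting, where each platform claims the customer it touched, is what produces the 30 to 80 percent inflation described above.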

The three secondary metrics that support the five. Returning-customer rate at 90 days - the share of new customers from a given month who place a second order inside 90 days; the leading indicator of cohort behavior and the earliest signal of retention strength. Cart-recovery flow conversion rate - the share of abandoned carts recovered through the email and SMS recovery flow; the highest-leverage Klaviyo metric for most $1M to $10M brands and the one that produced the Big Game Sports +420 percent BFCM result described later. Contribution margin by SKU - the top 20 products and the bottom 20 by gross revenue, each with their contribution margin, returns rate, and inventory velocity. The metric that surfaces the negative-margin SKUs the brand is paying to acquire customers for.

The piece on A/B testing for conversion success covers the testing infrastructure that produces statistical significance on these metrics; the piece on top conversion-optimization strategies covers the tactical work that improves the conversion-rate inputs to the dashboard.

§ 08 · when to renew, graduate, or fire

Three quarterly decisions. Written. Signed by the operator.

The decision is harder than it sounds because operators get attached to the rhythm. Forcing the question in writing is the only way to keep the engagement honest.

decision 01

Renew

When the diagnostic has revealed a multi-quarter initiative pipeline (subscription mechanics, retention overhaul, channel diversification, replatform) that the in-house team needs continued framework support to execute.

Signal: 3+ named initiatives in flight, dashboard moving in the right direction, in-house team still gaining capability rather than plateauing.

decision 02

Graduate

When the in-house team has the capability and the cadence to run their own operating system without weekly external input - typically after two to four quarters for a strong operator. The hardest decision because operators get attached to the rhythm.

Signal: in-house team running the dashboard reviews independently, runbooks getting updated by the team, fewer structural decisions surfacing in the working sessions.

decision 03

Fire

When the engagement has not produced quantified results inside two consecutive quarters, or when the advisory output is generic enough that a smart operator could write it themselves after 6 hours of reading the right material.

Signal: dashboard flat or moving wrong way, initiative pipeline stalling, working sessions feel like generic strategy talk rather than specific decisions.

The graduate decision is harder than the fire decision in practice. Fire is straightforward when the metrics are flat; graduate is hard because the operator is making the call to walk away from a relationship that is working. The forcing function is a written quarterly decision template that names the criteria for each option and asks the operator to circle one. The template lives in the decision log and the QBR walks through it. Harvard Business Review's work on decision traps identifies status-quo bias as one of the highest-frequency reasons leadership teams keep relationships running past their expiration date; the written template is the structural fix.

Advisory engagements that run for 8 or more quarters without a graduate-or-fire conversation are usually selling continuity rather than progress. The advisor benefits from the renewal revenue; the operator benefits from the meeting structure; the unit economics of the engagement quietly degrade because nobody is asking whether the framework is still teaching the in-house team something they did not already know.

§ 09 · three engagement archetypes

What the work has looked like. Three real cases.

Three engagements at three different stages with three different load-bearing initiatives. The metrics, the rhythm, and the implementation handoff in each case.

01

Emani · subscription cadence work, $0 to $2M MRR

The diagnostic. Subscription product brand at pre-revenue with a strong founder thesis, a working prototype, and no validated subscription cadence. Diagnostic in week 2 surfaced two structural questions: what cadence (monthly versus bi-monthly) maximized contribution-margin LTV, and what onboarding sequence converted trial subscribers to retained subscribers.

The cadence. Weekly working session for the first two quarters; monthly leadership review with the founder plus the head of operations; quarterly business review at month 3, 6, 9, 12. Initiative pipeline stayed at 3 to 5 active items because the team was small.

The result. $0 to $2M MRR over the engagement window. The load-bearing initiative was the subscription cadence test and the onboarding-flow rebuild in Klaviyo plus Recharge; the secondary initiative was the paid-acquisition channel mix shift from Meta-only to Meta plus Google plus organic. Contribution-margin LTV at 24 months ended the engagement at 3.2x blended CAC.

Read the Emani case study →

02

Big Game Sports · cart-recovery flow architecture, +420% BFCM

The diagnostic. Sporting goods brand at $4M annual revenue with a heavy seasonality concentration in Q4. Diagnostic in week 3 surfaced that the cart-recovery flow in Klaviyo was running the platform-default 3-step sequence with no SMS, no win-back at day 7, and no segmentation by cart value. The structural issue was a roughly 12 percent recoverable revenue leak across the Q4 window.

The cadence. Weekly working session through Q3 (the build phase) and twice-weekly through Q4 launch (the test phase). Monthly leadership review with the operator plus the head of marketing. Quarterly business review at month 3 and 6.

The result. +420 percent BFCM revenue versus the prior year, driven primarily by the rebuilt cart-recovery flow architecture (8-step sequence with email plus SMS, value-segmented messaging, day-7 win-back, day-14 re-engagement) plus a paid-acquisition shift toward Meta retargeting in the Q4 window. The runbook for the rebuilt flow shipped to the in-house team at end of Q3 and they ran the launch independently.

Read the Big Game Sports case study →

03

Noble Paris · product-page proof-stack rebuild, $420K MRR

The diagnostic. Premium fashion brand at $250K MRR with strong organic acquisition and a conversion-rate problem on product pages. Diagnostic in week 2 surfaced that product pages had thin proof - one or two reviews, no UGC, no editorial citations, no founder-voice content - against a brand position that promised craft and provenance. The structural issue was the gap between the brand-page promise and the product-page execution.

The cadence. Weekly working session through the engagement; monthly leadership review with the founder plus the creative director; quarterly business review at month 3 and 6. The implementation handoff was mixed - in-house team for content and creative, agency partner for storefront engineering, advisor for the framework and the metrics.

The result. $250K MRR to $420K MRR across two quarters. The load-bearing initiative was the product-page proof-stack rebuild - reviews, UGC, founder-voice product copy, materials sourcing detail, editorial citation block. Conversion rate on product pages moved from 1.8 percent to 3.1 percent across the engagement; AOV moved up 12 percent because the proof stack supported the higher-tier SKUs.

Each archetype carries the same five-pillar metric set (CAC, LTV, contribution margin, payback, ROAS) and the same three-rhythm cadence. The variation is in the load-bearing initiative, which is what the diagnostic identifies in the first 90 days.

§ 10 · red flags during the engagement

Five signals. Each one is fixable. Each one has a 30-day window.

  1. The diagnostic ships late or is generic. Past week 4, or thin enough that nothing in it would surprise a competent operator. Course-correct: a written conversation with the advisor naming the gap and a 30-day repair window. If the rewrite does not surface at least two structural problems the operator did not already know about, terminate at the next quarter boundary.
  2. Working sessions get rescheduled. More than once a quarter for advisor-side reasons, or chronically on the operator side. Cadence discipline is the deliverable; the cancellations are the work failing. Course-correct: name the cadence problem at the next monthly review; commit to the same time every week or move the engagement to a different cadence model.
  3. The dashboard never stabilizes. The advisor keeps proposing new metrics rather than tightening the existing five. Course-correct: lock the five primary metrics at the end of week 4; any new metric goes into a secondary tier. Dashboard-thrashing usually signals the advisor does not have a clear theory of the business.
  4. The named owner is the advisor. Initiatives in the pipeline have the advisor listed as the owner instead of an in-house team member. Course-correct: re-assign every initiative to an in-house owner inside the next two weekly sessions. Advisory output that the in-house team cannot execute without the advisor present is a fractional-CMO engagement mislabeled.
  5. The renew-graduate-fire conversation never happens. Or the advisor introduces it only after a contract renewal. Course-correct: the operator schedules the QBR independently and circulates a written renew-graduate-fire template the day before. If the advisor resists the format, the engagement is in continuity mode rather than progress mode.

Most red flags are recoverable inside one quarter if both sides are willing to course-correct in writing. The terminal failure is when the advisor refuses to acknowledge the gap; in that case the engagement has already failed and the QBR is the exit conversation.

§ 11 · questions operators ask

Six honest answers.

What does an ecommerce advisory engagement actually deliver in the first 90 days?

A real ecommerce advisory engagement at $1M to $10M scale delivers four concrete artifacts inside the first 90 days. First, a written diagnostic - typically a 25 to 40 page document covering the current-state CAC, contribution-margin LTV, payback period, channel ROAS, contribution margin by SKU, and the top three structural problems blocking growth. The diagnostic ships in week 2 or week 3, not month 2. Second, a quantified prioritization - the top 8 to 12 initiatives ranked by expected revenue lift, effort, and elapsed time, with each tied to a named owner and a target ship date. Third, a metrics dashboard the operator's team can read on a Monday morning without the advisor present - five to seven KPIs, weekly cadence, alert thresholds. Fourth, a working-session cadence calendar - typically a 90-minute weekly working session plus a 30-minute async check-in, plus a monthly leadership review. An advisory engagement that does not produce these four artifacts inside 90 days is selling fog and should be terminated at the first quarterly review.

How is an ecommerce advisory engagement different from a fractional CMO or a marketing agency?

Three things separate advisory from fractional CMO and from agency retainer work. First, scope. Advisory covers the full operating system of the ecommerce business - channel mix, contribution margin, retention mechanics, supply chain, hiring, technology - not just the marketing function. A fractional CMO owns the marketing function end to end. An agency executes specific marketing tactics. Second, output. Advisory ships strategy documents, decision frameworks, and metric dashboards that the in-house team executes. A fractional CMO ships the marketing work directly. An agency ships campaigns, creative, and reporting. Third, accountability. Advisory is accountable for the operator's decision quality and the framework, not for hitting a specific revenue target inside the engagement window. A fractional CMO is accountable for the marketing P&L. An agency is accountable for the metric in their scope of work. The right structure depends on whether the operator needs better thinking (advisory), a marketing leader (fractional CMO), or specific tactical execution (agency). Most healthy ecommerce operators at $1M to $10M scale use one or two of the three at any given time, not all three.

What does a typical ecommerce advisory weekly cadence look like?

A working ecommerce advisory cadence at $1M to $10M scale typically has three rhythms. The weekly working session - 60 to 90 minutes, same time every week, mandatory attendance from the founder or operator and the named owner of whatever initiative is in flight that week. Agenda set 24 hours ahead, written meeting notes circulated within 4 hours after, action items tracked in a shared decision log. The monthly leadership review - a 90-minute review with the operator and any senior team members, focused on the metric dashboard, the initiative pipeline, and any structural decisions that need a longer conversation than the weekly session allows. The quarterly business review - a half-day session that covers the prior 90-day diagnostic outcomes, the next 90-day priorities, and a written renew-graduate-fire decision. The async layer beneath all three rhythms is a Slack or email channel for between-session decisions and a shared decision log that captures every meaningful judgment call with date, context, decision, and the metric it will move. Advisory engagements without a written decision log usually drift after the first 90 days because nobody can remember why a particular call was made.

What metrics should an ecommerce advisory engagement track?

Five metrics carry an ecommerce advisory engagement at $1M to $10M scale. CAC blended across all paid acquisition channels - measured monthly, with a 30-day rolling average to smooth weekly noise. Contribution-margin LTV - cumulative contribution margin per customer over a 24-month window, not revenue LTV, which is a vanity metric. Contribution margin per order - revenue minus variable costs (COGS, payment processing, shipping, returns) on a per-order basis. Payback period - months from acquisition spend to recovery via contribution margin, ideally under 6 months for capital-efficient growth. Channel ROAS - return on ad spend per channel (Meta, Google, TikTok, email, organic) on a 30-day attribution window with customer-level deduplication, so one buyer who touched two channels is not counted twice. Three secondary metrics support the five: returning-customer rate (a 90-day cohort metric), cart-recovery flow conversion rate, and contribution margin by SKU (the top 20 products and the bottom 20). Most $1M to $10M operators track revenue, gross margin, and ROAS but miss the contribution-margin metrics, and the missing data is what causes them to scale unprofitable channels.
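The arithmetic behind three of the five metrics is simple enough to sketch directly. A worked example with entirely hypothetical figures - contribution margin per order is revenue minus variable costs, blended CAC is total paid spend over new customers, and payback is CAC divided by monthly contribution per customer:

```python
# Hypothetical worked example of the contribution-margin metrics above.
# All figures are illustrative, not from any real store.

def contribution_margin_per_order(revenue, cogs, payment_fees, shipping, returns_cost):
    """Contribution margin = revenue minus variable costs for one order."""
    return revenue - (cogs + payment_fees + shipping + returns_cost)

def blended_cac(total_paid_spend, new_customers):
    """Blended CAC = total paid acquisition spend / new customers acquired."""
    return total_paid_spend / new_customers

def payback_months(cac, monthly_contribution_per_customer):
    """Months until cumulative contribution margin recovers acquisition spend."""
    return cac / monthly_contribution_per_customer

cm = contribution_margin_per_order(
    revenue=80.00, cogs=28.00, payment_fees=2.40, shipping=9.60, returns_cost=4.00
)  # 36.00 of contribution per order

cac = blended_cac(total_paid_spend=45_000, new_customers=1_000)  # 45.00 blended CAC

# Assuming ~0.4 orders per customer per month (hypothetical purchase frequency):
months = payback_months(cac, monthly_contribution_per_customer=0.4 * cm)

print(f"CM/order: {cm:.2f}, blended CAC: {cac:.2f}, payback: {months:.1f} months")
```

In this sketch the store pays back acquisition spend in just over three months, comfortably under the 6-month threshold named above; swap in real per-order costs and the same three functions flag an unprofitable channel immediately.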

How long should an ecommerce advisory engagement run before renewal or termination?

Most healthy ecommerce advisory engagements run 90 days then quarterly thereafter, with a written renew-graduate-fire decision at every quarter mark. Renew when the diagnostic has revealed a multi-quarter initiative pipeline (subscription mechanics, retention overhaul, channel diversification, replatform) that the in-house team needs continued framework support to execute. Graduate when the in-house team has the capability and the cadence to run their own operating system without weekly external input - typically after two to four quarters for a strong operator. Fire when the engagement has not produced quantified results inside two consecutive quarters, or when the advisory output is generic enough that a smart operator could write it themselves after 6 hours of reading. The graduate decision is harder than the fire decision because operators get attached to the rhythm; a written quarterly decision forces the question. Advisory engagements that run for 8 or more quarters without a graduate-or-fire conversation are usually selling continuity rather than progress, and the operator's P&L will eventually reflect the wasted spend.

What red flags should an operator watch for during an active ecommerce advisory engagement?

Five red flags during the first 90 days of an active ecommerce advisory engagement. First, the diagnostic ships late - past week 4 - or is generic enough that nothing in it would surprise a competent operator. A real diagnostic identifies at least two structural problems the operator did not already know about. Second, the working sessions get rescheduled more than once a quarter for advisor-side reasons. Cadence discipline is the deliverable; the meeting cancellations are the work failing. Third, the metrics dashboard never gets built or never stabilizes - the advisor keeps proposing new metrics rather than tightening the existing ones. Fourth, the named owner of each initiative is the advisor rather than the in-house team. Advisory output that the in-house team cannot execute without the advisor present is a fractional-CMO engagement mislabeled as advisory. Fifth, the renew-graduate-fire conversation never happens at the quarter mark, or the advisor introduces it only after a contract renewal. The fix for any one of these is the same - a written course-correction conversation with the advisor naming the specific failure mode and a 30-day window to repair. If the repair does not happen inside 30 days, terminate at the next quarter boundary.

§ 12 · the next step

Bring the dashboard. We'll bring the diagnostic.

A 30-minute ecommerce advisory discovery call. Named principal on the call, not a sales rep. Written 90-day diagnostic outline returned within two business days. 2,000-plus stores shipped since 2017; Trustpilot 4.9 across 70-plus reviews; New York and Delhi HQ; UN Global Marketplace Tier 1.