Category: Insights

  • Holding Company, Operating Firm: Drawing the Line Cleanly

    Every multi-entity structure eventually faces the same question, and the answer determines whether the structure becomes an asset or a liability. The question is where authority lives. Not where it lives on the org chart, which is the easy question, but where it lives in practice — in the small decisions that get made every day without anyone noticing, in the gravitational pull that builds up when a function gets centralized, in the slow rewriting of who decides what, which happens whenever a holding company grows bigger than the firms underneath it. The hardest decision in designing the TX-LW structure was where to draw the line between the holding company and the operating firms. Draw the line in the wrong place and you get the worst of both worlds: a holding company too thin to actually help, and operating firms too constrained to actually operate. Draw the line in the right place and you get the thing that makes the platform work: real autonomy at the firm level, real leverage at the platform level, and no confusion about who decides what.

    This essay is about how we drew that line, why we drew it where we did, and what we have learned about defending it. It is also about the broader principle behind the decision, which is that organizational architecture is a form of pre-commitment. You are not just designing reporting relationships. You are designing a set of constraints that will outlive the founders, that will be tested by every new hire, every new initiative, every new firm acquired into the platform. The architecture is a promise to the future about what kind of company this is. Get the promise wrong and the company drifts into something nobody chose to build.

    The Drift Problem

    There are a thousand wrong ways to draw the line. The most common wrong way is to let the line drift. The holding company starts with a narrow role, then a new initiative requires platform-level coordination, then a particular firm has a problem the platform decides to solve for it, then a standardization push sneaks in because it would be easier for the controller. Five years in, the holding company has accumulated authority it never explicitly took, and the firms have lost authority they never explicitly gave up. Nobody can tell you when the shift happened, but the firms can tell you it happened.

    Drift is not a failure of intention. It is a failure of architecture. Every individual decision that contributes to drift is defensible on its own terms. Of course the marketing team should run the campaign centrally — it is more efficient. Of course the controller should set the chart of accounts — it is cleaner. Of course the platform should approve the new hire above a certain level — it is prudent. Each step is small. Each step is justified. The cumulative effect of the small justified steps is a company nobody chose to build, run by a holding company that has quietly become the headquarters of an operating company while still calling itself a platform.

    The cure for drift is not vigilance. Vigilance fails because the people doing the drifting are not adversaries — they are colleagues solving real problems in good faith. The cure for drift is a written line, defended publicly, with a default that favors the firm whenever the line is ambiguous. The architecture has to do the work, because the people are too busy and too well-intentioned to do it for you.

    What the Holding Company Does

    A short list. The holding company sets the capital allocation policy — how much each firm can spend on what kinds of investment. The holding company underwrites the shared infrastructure — the technology platform, the controllership function, the marketing function, the operations leadership. The holding company is the ultimate employer of the firm leadership, which means it can hire, develop, and replace firm leaders. The holding company runs acquisitions, including the diligence and the integration. The holding company owns the brand at the platform level — the TX-LW name, the shared standards, the institutional voice.

    That is the list. The holding company does not own client relationships. The holding company does not own matter-level decisions. The holding company does not own staffing decisions inside a firm. The holding company does not own practice-area strategy. The holding company does not own pricing. Each of those is the firm’s.

    The list is short on purpose. A long list is a confession that you have not yet figured out what the holding company is for. The holding company is for the things that scale, that benefit from centralization, that one good decision-maker can do better than ten — and nothing else. Everything else is noise that belongs at the firm level, where the people who actually know the work can make the call. If you find yourself adding a sixth or seventh item to the holding company’s responsibilities, the right move is not to add it. The right move is to ask why it is being added, what problem it is solving, and whether the firm could solve that problem itself if it had to. Most of the time, the firm could.

    What the Operating Firm Does

    Everything that the holding company does not do. Practice-area strategy. Client relationships. Matter-level decisions. Staffing decisions. Pricing. Local marketing — supported by the holding company’s marketing function but driven by the firm. Vendor decisions inside the firm. Day-to-day operating choices. Compensation inside the band the holding company sets for the role. The choice of what work to take and what to decline.

    The firm is a real business. It has its own P&L, its own staff, its own leadership, its own brand. It is not a profit center inside a larger company. It is not a subsidiary in the conventional sense. It is a separately operated business under common ownership. The distinction matters because it changes everyone’s incentives. A profit center inside a larger company manages to the larger company’s numbers. A real business under common ownership manages to its own numbers, with the larger company as a capital provider and infrastructure partner, not as a parent dictating quarterly targets.

    This distinction is not a semantic flourish. It changes who the firm leader gets up in the morning thinking about. If the firm is a profit center, the firm leader thinks about pleasing the platform. If the firm is a real business, the firm leader thinks about serving the client. The first orientation produces firms that drift toward whatever the platform measures. The second orientation produces firms that are durably good at the actual work, because the actual work is what the firm leader is being held accountable for.

    Shared Infrastructure, Separate Identities

    The phrase we use most often inside the company is shared infrastructure, separate identities. The shared infrastructure is technology, controllership, marketing operations, administrative leadership, and the institutional knowledge of how to run a small professional services firm. Every firm benefits from these. The separate identities are everything that touches the client — brand, voice, work product, relationship, location, history. Every firm preserves these.

    In practice this means a probate client who goes to Kreig LLC does not see TX-LW anywhere. The lawyer she meets with works for Kreig LLC. The engagement letter says Kreig LLC. The bills come from Kreig LLC. The portal is branded Kreig LLC. The client knows she is working with a small Texas probate firm. She is right. She is also benefiting from a controllership function, a technology platform, and an operations discipline that a standalone firm of that size could never afford. The shared infrastructure is invisible to her, which is exactly the point. Infrastructure that announces itself is no longer infrastructure. It is a brand collision.

    The separation of identities is not a marketing posture. It is an operational commitment. The firm has its own engagement letters because the firm is the legal counterparty. The firm has its own bills because the firm is the economic counterparty. The firm has its own portal because the firm is the relationship counterparty. The holding company is not the counterparty to anything client-facing. The holding company is the counterparty to the firm.

    Why the Architecture Looks This Way

    The architecture is shaped by what scales and what does not. Technology scales. Centralized controllership scales. Marketing operations scale. Real estate, vendor management, insurance, banking — all of these scale. The shared infrastructure is the things that scale.

    Client relationships do not scale. Practice expertise does not scale across practice areas. Trust does not scale. Reputation does not scale. The identity of being a small Texas firm that knows its clients and serves its community does not scale. These are the things we keep at the firm level, because the moment we try to scale them, they break.

    The deeper logic is that there are two fundamentally different kinds of value creation happening inside any professional services platform, and they obey different laws. Infrastructure value compounds with scale — every additional firm makes the controller more leveraged, the technology platform cheaper per seat, the operations playbook more refined. Relationship value compounds with depth — every additional year of working with a client makes the lawyer more useful, every additional case in a practice area makes the firm more expert, every additional referral in a community makes the firm more durable. The first kind of value rewards centralization. The second kind of value punishes it. A well-designed platform separates the two and lets each compound in its own way. A poorly designed platform tries to centralize both and ends up with infrastructure that scales beautifully and relationships that quietly atrophy.

    This is the architectural insight that drove every other decision. If you accept that infrastructure and relationships obey different laws, then the holding company has to be designed to handle one kind of value and the operating firms have to be designed to handle the other. They cannot be the same organization with different labels. They have to be different organizations with different cultures, different incentives, and different ways of measuring success. The line between them is not a convenience. It is a recognition of two different physics.

    What Happens When the Line Is Tested

    Every quarter, something happens that tests the line. A platform-level initiative would be easier if the firms standardized. A firm-level decision would be easier with platform-level approval. We default to the line as written. The platform does not standardize firms; the platform supports firms in being themselves. The firm does not seek platform approval; the firm decides and the platform reads the report.

    The tests rarely look like power grabs. They look like efficiency arguments. Wouldn’t it be easier if all the firms used the same intake form? Wouldn’t it be cleaner if all the firms followed the same engagement letter template? Wouldn’t it be smarter if the platform reviewed pricing above a certain threshold? Each of these questions is reasonable in isolation. Each of these questions, answered yes, moves the line. Five years of reasonable yeses produce an unreasonable structure. The answer to almost all of these questions is no, and the reason is not that the question is bad. The reason is that the architecture is more valuable than the local efficiency the question is asking us to capture.

    Holding the line costs us speed sometimes. We are willing to pay that cost because the alternative is the slow drift toward the kind of consolidated platform we explicitly chose not to be. The architecture is the discipline. The discipline is what makes the firms durable.

    The Counter-Argument, Honestly Stated

    The honest counter-argument to this architecture is that it leaves money on the table. A more consolidated platform would capture more of the operating leverage. A more standardized firm network would be easier to sell, easier to scale, easier to manage. The conventional roll-up playbook produces bigger numbers faster, at least on paper, and the people who run those platforms are not stupid. They have read the same books we have.

    We have decided that the money left on the table is the price of admission to a different game. The consolidated platform optimizes for short-term operating leverage and long-term fragility. The relationships get thinner, the firm-level expertise gets diluted, the people who actually know how to do the work leave because they no longer recognize the firm. The platform looks great in a deck and feels hollow on the inside. We are not building for the deck. We are building for the client who will be a client in ten years, and that client wants a firm that still feels like a firm, not a branch office of a consolidated operator.

    There is also a humility argument. We do not actually know what the optimal level of consolidation is. Nobody does. What we know is that consolidation is easy to do and hard to undo, and that a structure with strong firm-level autonomy preserves the ability to consolidate later if we decide we want to. The reverse is not true. A consolidated platform cannot retroactively manufacture firm-level identity and relationship depth. The architecture we have chosen is the more reversible one. In a world where we cannot be certain we are right, the more reversible architecture is the wiser bet.

    What to Do Monday Morning

    If you are designing a multi-entity structure, write the line down. Not in a slide. In a document that you would be comfortable showing every firm leader and every platform employee, that says here is what the holding company decides and here is what the firm decides and here is what we do when the two disagree. The act of writing it down forces a precision that the slide does not. The act of publishing it commits you to defending it.

    Keep the holding company’s list short. If it is more than five or six things, you are probably building a consolidated operator and calling it a platform. Be honest about which one you are building, because the two require different leadership, different incentives, and different stories told to the people you are trying to hire.

    Default to the firm when the line is ambiguous. Centralization is a one-way ratchet. Decentralization is a posture you have to maintain on purpose, every quarter, against the constant pressure of small efficiency arguments. The default has to lean in the harder direction, because the easier direction will win the rest of the time on its own merits.

    And finally, audit the line annually. Not the org chart. The actual line. Where did decisions get made this year that the architecture says should have been made somewhere else? Where did the platform reach into a firm? Where did a firm punt to the platform something that was its own to decide? The audit is not a punishment exercise. It is a recalibration. The drift is constant. The discipline is the answer.

  • Outcomes Over Activity: What We Actually Measure

    There is a sentence that every operator should write down and tape to the wall above the desk. The sentence is: what gets measured gets managed, and what gets managed becomes the company. The reason to tape it to the wall is that the implication is unforgiving. The metrics you pick are not a window into the business. They are the business, in slow motion. Five years of bonusing the wrong thing produces a firm full of people who are good at the wrong thing. The firm is not broken — it is exactly the firm the measurement system asked for. That is the part operators routinely miss, and it is the part that separates a firm that compounds quality over time from a firm that simply gets bigger.

    Activity is easy to measure. Outcomes are hard. This is why most professional services firms organize their compensation, their reviews, their reporting, and their daily routines around activity. Billable hours. Time entries. Matters opened. Emails sent. Documents drafted. None of these things tell you whether the firm is actually getting better at the work it exists to do. Some of them actively make the work worse. The firms that figure this out and discipline themselves to measure the right things end up, after a decade, looking radically different from the firms that did not — even though, on any given day, the two firms looked roughly the same.

    Why Activity Metrics Win by Default

    Activity metrics win by default because activity is countable and outcomes are slippery. A timesheet tells you to the tenth of an hour what someone did. An outcome tells you, eventually, whether the case turned out the way the client needed it to — but eventually is the problem. By the time the outcome arrives, the performance review is already done, the bonus is already paid, and the person being measured has already moved on to the next matter. The measurement system optimizes for what fits inside the review cycle. Activity fits. Outcomes do not, at least not without effort.

    There is also a deeper reason. Activity metrics protect the manager from having to make a judgment. If the rule is “bill 1,800 hours,” nobody has to decide whether the work was good. The hours are the rule and the rule is the result. Outcome metrics require a manager who is willing to look at the work, form a view about whether it was good, and own that view in writing. That is harder. Most firms quietly choose the easier path and call the choice rigor.

    When we acquire a firm, the move from activity to outcomes is one of the most contested changes. The people who have been getting paid for activity have a real and legitimate concern: they understand the rules of the old game, and they have built careers around playing it well. The move to outcomes feels, to them, like the rules are being rewritten mid-career. We try to be careful about the transition. We are not flexible about the destination. The destination is non-negotiable because the alternative is to accept a firm that is gradually, quietly, becoming worse at the actual work — and a firm that is becoming worse at the actual work cannot be repaired with a pep talk. It can only be repaired by changing what it measures.

    What an Outcome Actually Is

    An outcome is something that, if you point at it, the client can verify. Did the probate close on time? Did the tax controversy resolve favorably? Did the bookkeeping reconcile without exception? Did the corporate filing land before the deadline? These are outcomes. They are observable, they are unambiguous, and they are the things the client actually cared about when she hired the firm.

    An outcome is not hours billed. An outcome is not “responded to email within 24 hours.” Those are activities. They may correlate with outcomes, sometimes, but the correlation is loose and the moment the activity becomes the metric, the correlation breaks. People will respond to the metric, not to the thing the metric was trying to measure. This is Goodhart’s Law, and Goodhart’s Law is a law in the same way gravity is a law. Pretending it does not apply to your firm because your people are sophisticated is the kind of mistake that smart organizations make routinely and never recover from.

    The test for whether something is an outcome or an activity is simple. Could a competitor copy the metric without copying the underlying capability? If yes, it is an activity. Anyone can bill more hours. Anyone can answer email faster. Nobody can casually copy a track record of probates that closed cleanly, controversies that resolved favorably, and clients who came back. Outcomes are the metrics that, if you hit them year after year, mean you are actually good at the work. Activities are the metrics that, if you hit them year after year, mean you were good at hitting the metrics.

    How We Measure Firm Leaders

    Each firm leader has a small number of outcomes she is accountable for. Client retention. Client outcomes against the firm’s own quality standards. Staff retention. Financial performance, measured properly — gross margin, contribution margin, free cash flow — not just revenue. The state of the systems and processes. The depth of the management bench. That is the list. It fits on one page.

    We do not measure firm leaders on the number of cases opened, the number of hours billed, the number of marketing events attended, or any of the other proxies that fill up management dashboards. We trust them to figure out the activities. We hold them accountable for the result.

    The list is short because a long list is the same as no list. A firm leader who is accountable for thirty things is accountable for nothing — she will pick the three or four that are easiest to influence in the current quarter and let the rest drift. The discipline of a short list is that the firm leader cannot hide. She knows what she is being measured on. We know what she is being measured on. There is no plausible deniability when one of those numbers moves the wrong way. The conversation that follows is straightforward, because the architecture of the conversation was set up months earlier when the metrics were chosen.

    The hardest item on that list is “the state of the systems and processes.” It is hard because it is the only item that requires judgment rather than measurement. A firm leader can hit her financial numbers for two years while quietly letting the systems decay, and the cost of that decay will not show up on the dashboard until year three. We pay attention to it anyway, in person, by walking the firm and looking at how the work actually gets done. The dashboard cannot tell us this. Nothing can, except going to look. The willingness to go look is part of the holding company’s job that the dashboard cannot replace.

    How We Measure Practitioners

    Practitioners are measured on a different but related list. Quality of work product as reviewed by peers and the firm leader. Client satisfaction in the matters they handled. Throughput against a realistic target, with the realism set per practice area. Contribution to the firm beyond their individual matters — mentoring, process improvement, internal training. The development of their own skills against a documented plan.

    Compensation is tied to this list. Bonuses are tied to this list. Promotion is tied to this list. The list is shared explicitly with every practitioner so that there is no daylight between what is being measured and what is being rewarded. Nothing erodes trust faster than the gap between the stated metrics and the metrics that actually drive pay. When practitioners discover that the stated metrics are decorative and the real metrics are something else, two things happen simultaneously: they stop trusting the firm, and they start optimizing for the real metrics anyway. The firm gets the worst of both worlds — cynicism plus the wrong behavior.

    Quality of work product is the metric that does the most work and gets the least attention in the industry. Most firms measure quality by absence — no malpractice claims, no client complaints — which is the wrong end of the distribution. We measure quality by presence. A senior practitioner reviews a sample of every practitioner’s work product every quarter, scores it against a documented rubric, and discusses it with the practitioner. The rubric is not perfect. No rubric ever is. The point of the rubric is not to be perfect; the point is to be a structure that forces a conversation that would otherwise not happen, between two people who would otherwise not have it.

    What We Stop Measuring

    We stop measuring billable hours as an individual performance metric. The firm still tracks hours, because the firm still needs to bill, but the individual practitioner is not measured against an hour target. The reason is simple: hour targets distort behavior. They encourage padding. They encourage avoiding efficiency improvements. They encourage taking on busy work instead of high-leverage work. The right amount of distortion is none.

    We stop measuring response time on emails. We stop counting matters opened. We stop tracking attendance at internal meetings. We stop the rituals that most firms do because most firms have always done them. We replace these with the outcome-level reporting and we trust the team to manage their own time.

    The unintuitive part of stopping these measurements is that you cannot just stop measuring them. You have to stop talking about them, stop charting them, stop building dashboards around them, and stop letting the old measurements creep back in under new names. The gravitational pull of activity metrics is constant. There is always a manager who feels more comfortable with a number she can verify than a number she has to judge. There is always a finance team that finds it easier to allocate cost by hour than by outcome. There is always a partner who remembers the old system fondly and proposes bringing back just one or two of the old measurements “for context.” The work of holding the line on what we do not measure is, in our experience, as hard as the work of choosing what we do measure.

    The Reporting Discipline

    Every firm reports the same set of numbers to the holding company every month. The reporting fits on one page. The narrative around the report is short. The exceptions are explained. Trends are noted. That is the entire interaction. We do not require slides. We do not require strategic plans. We do not require quarterly business reviews. The firm leader runs the firm. We read the report. If something looks wrong, we ask a question. If something looks right, we say so and move on.

    This requires a level of trust between the holding company and the firm leader that does not exist in most firms-owned-by-bigger-companies. We are aware of the difference. The trust is the entire premise. If the firm leader needs more oversight than this, the firm has the wrong leader. If we cannot let the firm leader operate at this level of autonomy, we have the wrong holding company. The accountability model is the architecture; everything else is detail.

    The one-page report is also a forcing function for the holding company. Most platforms drift toward more reporting over time, because more reporting feels like more control. It is not. More reporting is more noise. The signal-to-noise ratio of a one-page report with six outcome metrics is dramatically higher than the signal-to-noise ratio of a thirty-page deck with two hundred activity metrics. The one-page report makes the exceptions stand out. The thirty-page deck buries them. We have chosen the format that surfaces what matters and accepted that the format will sometimes feel too lean. It is supposed to feel too lean. Anything that feels comprehensive is, by definition, hiding something.
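
    One way to picture the one-page report is as a small data structure. The sketch below is hypothetical — the field names, the `flags` helper, and the retention threshold are all invented for illustration, not TX-LW's actual format — but it makes the discipline concrete: a handful of outcome fields, a short list of exceptions, and nothing else.

    ```python
    # Hypothetical sketch of the one-page monthly firm report described above.
    # The essay specifies only that the report is one page, outcome-focused,
    # with exceptions explained and trends noted; everything here is assumed.
    from dataclasses import dataclass, field

    @dataclass
    class MonthlyFirmReport:
        firm: str
        month: str                      # e.g. "2024-06"
        client_retention_pct: float     # outcome: did clients stay?
        outcome_quality_score: float    # vs. the firm's own quality standard
        staff_retention_pct: float
        gross_margin_pct: float
        contribution_margin_pct: float
        free_cash_flow: float
        exceptions: list[str] = field(default_factory=list)  # short narrative
        trends: list[str] = field(default_factory=list)

        def flags(self, retention_floor: float = 90.0) -> list[str]:
            """Surface the exceptions a reader should ask a question about."""
            out = list(self.exceptions)
            if self.client_retention_pct < retention_floor:
                out.append(f"client retention below {retention_floor}%")
            return out
    ```

    The design choice the sketch encodes is the one the essay argues for: the report holds six outcome numbers plus a narrative of exceptions, and the reader's only job is to scan the flags.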

    The Hardest Part: Patience

    Outcome metrics are slow. A firm that switches from activity to outcomes will, in the first year, look like a firm with less data. The dashboards will be sparser. The granularity will be lower. The activity-loving partners will feel underinformed. The temptation to add back “just one” activity metric to fill the gap will be constant. Resist it. The point of the switch was that activity data was bad data, and bad data plus good data is just contaminated data. The discipline is to wait, in some discomfort, until the outcome data accumulates enough texture to run the firm with.

    The reward, when the outcome data does accumulate, is a different kind of firm. The firm starts to know things about itself that activity-measured firms cannot know. Which kinds of matters actually run cleanly and which ones predictably blow up. Which practitioners deliver the outcomes the clients hire the firm for and which ones look productive but produce mediocre results. Which initiatives moved the things that matter and which initiatives just produced motion. This knowledge compounds. After a few years, the firm has a self-awareness that the activity-measured competitor does not have and cannot easily build. That self-awareness is the durable competitive advantage. Everything else is the scaffolding that produces it.

    What to Do Monday Morning

    Write down the metrics that currently drive compensation at your firm. Then write down the metrics that you would want to drive compensation if you were starting from scratch. Compare the two lists. The gap between them is the work. Closing the gap is a multi-year project, because compensation systems are deeply embedded in habits and contracts, but the project does not begin until the gap is written down.

    Cut the number of metrics to something a firm leader can hold in her head. If the dashboard has more than ten things on it, the dashboard is hiding rather than revealing. Pick the six or seven that, if all of them are healthy, mean the firm is healthy. Accept that the cut will feel reckless. It is not. It is the only honest version of the dashboard.

    And finally, when an activity metric quietly creeps back in — and it will — name it out loud and remove it. The drift toward activity is constant. The discipline of staying with outcomes is what makes the measurement system worth having. A measurement system that measures the right things, badly, is better than one that measures the wrong things, perfectly. Almost everyone gets this backwards. The firms that get it right end up, after a decade, looking like nothing else in the market.

  • Roll-Up vs. Hold-Separate: The Honest Trade-Offs

    The most dangerous moment in the life of an acquisition strategy is the moment you fall in love with one model. Once you have decided that a roll-up is the right answer, every firm starts to look like a roll-up candidate. Once you have decided that hold-separate is the right answer, every firm starts to look like a hold-separate candidate. The strategy stops being a tool and starts being an ideology, and ideologies are very expensive teachers. They charge tuition in the form of bad acquisitions, decade-long integrations, and partners who walk out the door with the client list.

    There are two dominant ways to assemble a portfolio of small professional services firms. You can merge them into a single brand with shared systems, shared staff, and a single P&L — the roll-up. Or you can hold them separately, each operating under its own name and leadership, sharing only what makes sense to share at the holding-company level. Both models work. Both have produced excellent outcomes and spectacular failures. The question is not which one is right in the abstract. The question is which set of trade-offs you want to live with, in which kind of market, with which kind of clients, given what you can actually execute. The honest comparison is the one that takes the other side seriously, and that is the comparison we have tried to write below.

    This is our attempt at that honest comparison. We have picked one model for TX-LW, and we will tell you why at the end. But we want to lay out the case for the other side first, because the roll-up is a real strategy with real advantages, and pretending otherwise is not useful to anyone — least of all to operators trying to make this decision for themselves.

    The Roll-Up: What It Is

    A roll-up acquires several firms in the same industry and combines them into one. The acquired firms typically lose their individual brands within a few years. Back-office functions consolidate. Pricing standardizes. The combined entity reports as a single business and is usually positioned for a larger exit — to a strategic buyer, a larger private equity fund, or the public markets.

    The roll-up is, at its core, a financial engineering strategy that depends on operational execution to deliver. The financial part is the multiple arbitrage: you buy small firms at small-firm multiples, you combine them, and the combined entity trades at a larger-firm multiple. The operational part is the integration: you have to actually capture the synergies, retain the clients, and run the combined business well enough that the multiple arbitrage is not just an accounting illusion. The financial part is easy to model. The operational part is where roll-ups live or die, and most of the variance in roll-up outcomes is variance in operational execution, not in the original thesis.

    The Case For the Roll-Up

    Cost synergies are real. One billing system instead of five. One HR function. One marketing team. In professional services, where SG&A often runs twenty to thirty percent of revenue, consolidating overhead can add meaningful margin within twelve to twenty-four months. The math here is not the trick. The trick is whether the firm can actually execute the consolidation without breaking the work that pays for it.

    Pricing power. A combined firm has more leverage with vendors, landlords, insurance carriers, and technology providers. It can also raise prices to clients more confidently when it is the only specialist of its kind in a region. The pricing-to-clients story is the more interesting one because it gets at the question of whether the combined entity has real market power or just bigger logos.

    Cross-selling. A unified brand makes it easier to move a client from one service line to another. The estate planning client becomes the tax client becomes the small-business advisory client. One relationship manager, one invoice, one point of contact. Cross-sell is the most over-promised and under-delivered benefit in professional services M&A. It is real, but it requires a level of internal coordination that few merged firms achieve in the first three years.

    Exit multiple arbitrage. Small firms trade at three to five times earnings. A combined entity at fifteen or twenty million in EBITDA can trade at eight to twelve times. Buying small and selling big is a legitimate way to create value, and it has made a lot of investors a lot of money. The arbitrage is real, but it is also crowded — there are a lot of firms chasing the same multiple expansion at the same time, and the price of the small firms has been bid up in many categories to the point where the arbitrage is thin.

    Talent ladder. A larger firm can offer career paths that a small one cannot. Specialization, management tracks, equity programs, formal training. For ambitious associates, the combined firm is a better employer than any of the standalone pieces would have been. The talent argument is the most underrated one in the roll-up case, because the best associates in any small firm are usually the most mobile, and a better career path is sometimes the only thing that keeps them.

    The Case Against the Roll-Up

    Integration is harder than it looks. The synergies on the spreadsheet assume that the billing systems will merge cleanly, that the staff will adopt the new processes, and that clients will not notice. None of that is automatic. Most roll-ups underestimate the cost and duration of integration by a factor of two. The deck assumes eighteen months. The reality is closer to three or four years, and during those years a lot of the original thesis quietly stops being true.

    Client churn during transitions. Small-firm clients hire small firms on purpose. When the firm name changes, when their long-time contact leaves, when the invoice arrives on different letterhead, a portion of the book walks. Industry data suggests ten to twenty percent attrition is common in the first two years of a professional services roll-up. The attrition is rarely uniform — the most valuable clients, the ones with the most options, churn first. The book that remains after integration is, on average, lower-quality than the book that was acquired.

    Cultural collision. Each acquired firm has its own way of working — how it handles difficult clients, how it prices, how it decides what to take on. Merging cultures means picking winners and losers. The people on the losing side leave, and they often take clients with them. The leadership of the acquired firm always says, in the diligence period, that culture will not be a problem. It is always a problem. The diligence is happening before anyone has been asked to change anything; the integration is happening after everyone has been asked to change everything. The difference is not subtle.

    Brand dilution in local markets. A firm that has spent thirty years building a name in a particular Texas county is worth more under that name than under a regional brand nobody recognizes. The roll-up trades local equity for scale equity, and the trade is not always favorable. In categories where local reputation is most of the franchise — small-market law, boutique accounting, specialized advisory — the trade is almost never favorable, and the firms that survive the rebrand do so by being good enough operationally to overcome the loss of brand equity, which is a much higher bar than the diligence model assumed.

    Management complexity scales nonlinearly. Running one fifty-person firm is harder than running five ten-person firms in some ways and easier in others. The combined entity needs a layer of professional management that small firms never required, and that layer is expensive. The professional managers do not generate revenue. They generate the conditions under which revenue can be generated, which is a real contribution, but it is also a contribution that has to be paid for out of the synergies the roll-up was supposed to capture. The synergies, in other words, are partially eaten by the cost of capturing them.

    The Hold-Separate Model: What It Is

    In the hold-separate model, each acquired firm keeps its name, its location, its client list, and its operating identity. The holding company brings in its own operating leadership at each firm and provides shared services at the parent level — finance, technology, marketing infrastructure, recruiting, legal, compliance — without forcing the firms to merge with each other.

    The hold-separate model is, at its core, a portfolio strategy executed at the operational level. The portfolio part is the financial diversification: seven firms with seven different exposures are less risky than one firm with one big exposure. The operational part is the discipline of not consolidating the things that should not be consolidated. The hold-separate model fails when the holding company gets impatient with the lack of integration and starts merging things anyway. It succeeds when the holding company can sit with the apparent inefficiency long enough for the underlying durability to compound.

    The Case For Holding Separate

    Client relationships stay intact. The sign on the door does not change. The phone number does not change. For a client who has worked with a firm for fifteen years, nothing visible has happened. Retention is meaningfully higher than in roll-up transitions, and the difference is large enough that it shows up in the cash flow statement within a year of the acquisition.

    Local brands keep their value. A firm with deep roots in Lubbock or Tyler or Corpus Christi continues to be that firm. Its referral sources, its bar association ties, its local hiring pipeline — all of it stays connected to the brand the community already knows. Local brand value is one of those things that is invisible until you destroy it, at which point you discover it was a meaningful fraction of what you paid for.

    Operational risk is contained. If one firm has a difficult quarter — a partner leaves, a major client churns, a regulatory issue surfaces — the problem is contained to that firm. It does not propagate through a single shared P&L. The hold-separate structure is, in this sense, a form of insurance against the kinds of localized disasters that any portfolio of small businesses will eventually produce.

    Each firm can be optimized for its market. A litigation boutique and a transactional firm should not share a pricing model, a staffing model, or a marketing model. Hold-separate lets each firm be the best version of itself rather than a compromise. The compromise model is what most consolidated platforms end up being, because the cost of running multiple operating models inside one combined firm is too high — so a single model wins, and the firms whose old model was discarded quietly underperform forever after.

    Acquisitions are faster to close. There is no integration plan to negotiate, no rebranding to schedule, no staff to consolidate. The diligence focuses on the firm as it stands, and the operating transition focuses on bringing in the holding-company operators — not on dismantling what already works. Faster acquisitions mean more acquisitions per year, which compounds the portfolio more quickly than a model where each deal absorbs two years of integration capacity.

    The Case Against Holding Separate

    Fewer cost synergies. You do not get to consolidate the back office to the same degree. Each firm still has its own billing, its own physical office, its own local administrative staff. Shared services at the parent level help, but they do not replicate the margin lift of full integration. The hold-separate model leaves meaningful money on the table in the form of duplicated overhead, and any honest hold-separate operator will admit this.

    Lower exit multiple. A holding company of seven separately branded firms generally trades at a discount to a single seven-firm combined entity of the same revenue. The market pays for simplicity, and hold-separate is not simple. The exit-multiple discount is the biggest single argument against hold-separate, and it is the argument that will sound loudest in the boardroom when the strategy is being challenged. Operators who choose hold-separate have to be willing to accept a lower exit multiple in exchange for higher durability, and they have to be willing to defend that trade-off in front of investors who would prefer the higher multiple.

    Cross-sell is harder. When the firms have different names, sending a client from one to another requires a warm hand-off rather than a brand-level transition. Some of it happens. Less of it happens than in a combined entity. The hold-separate operator has to decide that cross-sell is not the primary thesis, because if it is the primary thesis, hold-separate is the wrong structure.

    Management coordination is real work. Seven firms with seven sets of operators means seven sets of relationships, seven sets of priorities, seven sets of cultural quirks to navigate. The parent company has to be disciplined about what it standardizes and what it leaves alone, and that discipline is not free. The coordination cost shows up in the form of senior holding-company executives whose entire job is to be in good relationships with seven different firm leaders, and that headcount has to be paid for somewhere.

    Talent ladder is shorter at each firm. An ambitious associate at a ten-person firm has fewer internal moves available than they would in a fifty-person combined entity. Some of this can be addressed through cross-firm mobility at the holding-company level, but it is not the same as a single firm with a deep bench. Hold-separate operators have to be deliberate about manufacturing career paths that span firms, or they will lose the most ambitious people to combined competitors.

    When Each Model Wins

    The roll-up tends to win when the acquired firms are commoditized, geographically clustered, and serve clients who care more about price and convenience than about a particular relationship. Dental practices, veterinary clinics, urgent care, certain insurance brokerages — these have produced legitimate roll-up success stories. The brand of the individual practice was not generating most of the value. The combined entity, with better systems and lower unit costs, genuinely served clients better.

    The hold-separate model tends to win when the acquired firms have deep local brands, long-tenured client relationships, and services that depend on judgment rather than throughput. Law firms, boutique accounting practices, specialized advisory shops. The thing the clients hired in the first place is the firm — not a scaled platform that the firm happens to belong to. Disrupt that, and you destroy what you paid for.

    The mistake that most operators make is assuming the right model is determined by their preference rather than by the category. A roll-up operator who is good at integration will sometimes try to roll up a category that does not support roll-ups, and the integration discipline will not save the strategy. A hold-separate operator who is good at portfolio management will sometimes try to hold-separate a category that genuinely commoditizes, and the operational hygiene will not save the strategy. The category is the constraint. The operator’s job is to recognize which constraint they are working under and to choose the model that fits, not the model they personally prefer.

    Why We Chose Hold-Separate

    TX-LW operates in the second category. The firms we acquire are small Texas professional services businesses whose value is concentrated in their name, their local relationships, and the judgment of the people who do the work. When we evaluated the trade-offs, the integration risk and client-churn risk of a roll-up looked larger than the synergy opportunity. The exit-multiple discount we accept is real, but we believe it is more than offset by retention, operational resilience, and acquisition velocity.

    We also chose hold-separate because of how we plan to operate. We bring in our own operators — finance, marketing, administration, technology — and run them at the holding-company level so each firm gets professional infrastructure without losing its identity. That model only works if the firms stay distinct enough to keep their local advantages. A roll-up would erase exactly the thing we are trying to preserve.

    There is also a reversibility argument. The hold-separate model preserves the option to consolidate later if the category shifts or if a particular set of firms turns out to be more commoditized than we thought. The roll-up model does not preserve the option to deconsolidate, because once the brands are erased and the relationships are pooled, there is no way to put the toothpaste back in the tube. In a world where we cannot be certain we are right about the category, the model that preserves optionality is the wiser choice.

    The Honest Caveat

    None of this makes the roll-up wrong. It makes it wrong for us. If you are running a roll-up in a category where the strategy fits, you are probably right to do so. If you are running a roll-up in a category where it does not fit — and a lot of recent professional services roll-ups fall into that bucket — the trade-offs will catch up with you. The same is true in the other direction. A hold-separate strategy applied to a genuinely commoditized service is just a more expensive way to run the same business.

    The honest answer to “roll-up or hold-separate?” is that it depends on what you are buying and what your clients are paying for. We have made our choice. We respect operators who have made the other one, in the categories where it fits. The strategy is a tool, not a tribe, and operators who treat it as a tribe end up making the same mistake twice — once when they choose the wrong tool, and once when they refuse to change it.

    What to Do Monday Morning

    Before you choose a model, write down what the client is actually buying when she hires one of these firms. If she is buying convenience, price, and predictability across a commodity service, you are in roll-up territory. If she is buying a particular relationship, a particular reputation, or a particular judgment, you are in hold-separate territory. Write the answer down with enough specificity that a skeptical board member could not push back on it. If you cannot, you have not done the work.

    Stress-test the model against your worst acquisition. Not your best. Your best deals will work under either model — the good firms always survive bad strategies. The marginal acquisition is where the strategy is actually tested. If your model only works on the great firms, it is not a model, it is a hope. The model has to be robust to the firm you reluctantly bought because the multiple was right and the principal was tired.

    And finally, do not fall in love with the model. The model is a tool. The tool that fits today may not fit in ten years. The operators who survive the longest are the ones who can change models when the category changes, not the ones who defend the model the longest. The discipline is to keep asking, every couple of years, whether the strategy still fits the categories you are buying — and to have the intellectual honesty to change the answer when the evidence demands it.

  • Capacity Planning for Firms Under 25 People

    Most small firm leaders believe their problem is demand. It almost never is. The phone rings. The referrals come in. The pipeline is full. The actual problem — the one that keeps the firm from compounding into something durable — is that the firm cannot reliably convert demand into delivered work without breaking the team that does the converting. That is a capacity problem. It looks like a growth problem because the symptoms are the same: missed deadlines, frustrated clients, partners working weekends, associates updating their LinkedIn pages. But the cause is different, and the cure is different, and the leaders who confuse the two end up running marketing campaigns to solve a staffing model.

    The hardest constraint in a small professional services firm is almost never demand. It is capacity. What is missing are the people, the hours, and the bandwidth to do the work well without burning out the team that carries it. Most firm leaders will tell you they need more clients. What they actually need is more capacity to serve the clients they already have — and a system that lets them see the difference six months before the wheels come off, instead of six months after.

    Capacity planning in a small firm is almost always done badly, because nobody learned how to do it. The senior practitioner went to law school or got her CPA. She did not take a class on staffing models. So she runs the firm by feel, hires when she is drowning, lays off when she is slow, and never has the right team for the work in front of her. We can do better than this, but only if we treat it as a discipline — one with its own vocabulary, its own metrics, and its own weekly rhythm. The discipline is not glamorous. It is also the single highest-leverage activity a firm leader can spend her time on, because every other operational improvement runs through the team that does the work, and the team is the capacity.

    Know What an Hour Costs and What It Produces

    The first piece of capacity planning is knowing, for each role in the firm, what an hour of that person’s time costs and what an hour of that person’s time produces. The cost number is straightforward. Salary plus benefits plus a fully-loaded allocation of overhead, divided by available hours. Every firm should know this number for every role, and most firms either do not know it or have not updated it in three years.
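    The cost side of that calculation can be sketched in a few lines. Every figure below is a hypothetical placeholder, not TX-LW data; the structure of the formula is the point.

```python
# A minimal sketch of the fully loaded cost-per-hour calculation described
# above. All figures are hypothetical placeholders, not firm data.

def loaded_hourly_cost(salary, benefits_rate, overhead_allocation, available_hours):
    """Salary plus benefits plus a fully-loaded overhead allocation,
    divided by available (productive) hours."""
    fully_loaded = salary * (1 + benefits_rate) + overhead_allocation
    return fully_loaded / available_hours

# Example: a $90,000 associate with a 25% benefits load, $30,000 of
# allocated overhead, and 1,500 productive hours in the year.
cost = loaded_hourly_cost(
    salary=90_000, benefits_rate=0.25,
    overhead_allocation=30_000, available_hours=1_500,
)
print(f"${cost:.2f} per productive hour")  # 142,500 / 1,500 = $95.00
```

    The useful habit is not the arithmetic itself but re-running it annually, per role, so the number never goes three years stale.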

    The production number is where most firms fail. They know what was billed. They do not know what was actually accomplished — how many matters moved forward, how many client touches happened, how many internal-quality steps were completed. Billed hours and produced work are different things, and the difference is where the firm’s slack lives. A practitioner who bills forty hours in a week and produces work that closed three matters is twice as productive as a practitioner who bills the same forty hours and produces work that closed one and a half matters. The dashboards do not show this difference, because the dashboards measure billing. The firm leader has to learn to see it anyway.

    The reason this matters for capacity planning is that the firm’s actual capacity is the product of headcount and productivity, not headcount alone. A firm that grows headcount without tracking productivity will grow capacity more slowly than it expects, because the new headcount comes in at lower productivity and stays there until the firm invests in moving it up. Most firms hire and then hope. Better firms hire and then teach. The difference between hoping and teaching is roughly thirty percent of the productivity of the new hire over the first two years. That thirty percent is the firm’s most underused source of capacity.

    Plan for the Realistic Year

    A practitioner in a small firm has roughly 1,500 productive hours in a year. Not 2,000, not 1,800 — 1,500, once you subtract vacation, illness, training, administrative overhead, client development, and the rest of the things that have to happen but do not bill. Plan for that number. Firms that plan for 2,000 routinely overcommit the team and routinely miss internal deadlines. The annual plan that assumes 2,000 billable hours is not an ambitious plan; it is an unrealistic one, and unrealistic plans cause the same kind of damage as no plan at all, except slower and more expensively.
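    The gap between the calendar year and the productive year can be made explicit. The deduction figures below are illustrative estimates, not a prescription; the point is that the line items add up fast.

```python
# A hedged sketch of how roughly 2,080 nominal hours shrink toward 1,500
# productive hours. The individual deductions are illustrative estimates.

nominal_hours = 52 * 40  # 2,080 hours on the calendar
deductions = {
    "vacation and holidays": 160,
    "illness": 40,
    "training": 60,
    "administrative overhead": 200,
    "client development": 120,
}
productive_hours = nominal_hours - sum(deductions.values())
print(productive_hours)  # 2,080 - 580 = 1,500
```

    A plan built on the 2,080 line instead of the 1,500 line is overcommitted by roughly a third before the year even starts.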

    Plan for the unevenness of demand. A probate practice has a steady baseline plus periodic surges when something complicated lands. A tax practice has a brutal January through April and a softer rest of the year. A bookkeeping practice has month-end and quarter-end peaks. The annual hour total is meaningless without the seasonal shape underneath it. A firm that is correctly capacity-planned on an annual basis can still be catastrophically over-capacity in a single month, and the catastrophic month is what the clients remember.

    The discipline here is to build the capacity model around the peak, not the average. A firm sized for the average will fail in the peak, lose clients in the peak, and burn out the team in the peak. A firm sized for the peak will have slack in the trough, which feels wasteful but is actually the price of being reliable. The slack in the trough is where the firm can invest in cross-training, process improvement, client development, and the other long-term-but-not-urgent work that never happens otherwise. The firms that run lean enough to have no slack are the firms that never improve. They are too busy executing to ever get better at executing.

    Hire Ahead of the Work, Not Behind It

    The single biggest mistake we see is hiring after the firm is already over capacity. By the time the partner can prove she needs another associate, the team has been working evenings for three months, two people are looking for new jobs, and the new hire takes six months to ramp anyway. The firm spends a year recovering from a hire that should have happened a year earlier.

    We hire when the trailing six-month utilization shows a sustained level that, if it continues, will overstretch the team in the next six months. The trigger is leading, not lagging. The cost of an underutilized associate for three months is far smaller than the cost of an over-utilized team for nine. The math here is almost always misunderstood. The cost of the underutilized associate is the salary plus benefits for the underutilized months — a known, bounded number. The cost of the over-utilized team is the attrition of senior people, the client churn from missed deadlines, the burnout of the team that stays, and the multi-year drag on the firm’s reputation. The bounded loss is always preferable to the unbounded one, and yet most firms make the opposite trade because the bounded loss is visible on a P&L and the unbounded one is not.
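    The leading trigger described above can be written down as a rule, which is exactly what makes it watchable every week. The 85% threshold and the utilization series here are hypothetical assumptions, not our actual numbers.

```python
# A minimal sketch of a leading hiring trigger: fire when trailing
# six-month utilization is sustained above a threshold. The 85% threshold
# and the sample data are hypothetical.

def should_hire(monthly_utilization, threshold=0.85, window=6):
    """True when every month in the trailing window exceeds the threshold."""
    if len(monthly_utilization) < window:
        return False
    return all(u > threshold for u in monthly_utilization[-window:])

# Six months of sustained high utilization trips the trigger.
print(should_hire([0.78, 0.88, 0.90, 0.91, 0.89, 0.92, 0.94]))  # True
# One soft month inside the window holds it back.
print(should_hire([0.78, 0.80, 0.90, 0.91, 0.89, 0.92, 0.94]))  # False
```

    Requiring the whole window to clear the bar is what makes the trigger "sustained" rather than reactive to one busy month; the threshold itself is the judgment call each firm has to own.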

    The cultural change required to hire ahead is harder than the financial change. Most firm leaders have an emotional resistance to hiring someone before there is a desk full of work for that person to do. The resistance is understandable. It is also wrong. The job of the firm leader is to manage the firm’s capacity curve, which means accepting some slack so that the team can absorb the next surge without breaking. A firm leader who refuses to accept any slack is, in effect, betting that the future will look exactly like the past — and the future never looks exactly like the past in professional services, which is why capacity planning exists as a discipline.

    Cross-Training Is Capacity

    In a four-person firm, if one person is unavailable for a week, a quarter of the firm's capacity has just disappeared. The only insurance against this is cross-training. Every important process should have at least two people who can run it. Every important client should have at least two people who know the matter. This is operational hygiene, not a nice-to-have. The firm that does not cross-train is the firm that has a brittle dependency on individuals, and brittle dependencies on individuals always fail eventually — either because someone leaves, or because someone gets sick, or because someone has a personal emergency that the firm cannot work around.

    Cross-training takes time the firm does not feel it has. The senior practitioner has to explain how she does the work, the junior practitioner has to do it under supervision, and both have to absorb the inefficiency of the handoff. We carve out time for this anyway, because the alternative is a firm that grinds to a halt whenever someone takes a vacation or a sick day. The carve-out is non-negotiable, because the moment it becomes negotiable, it becomes the first thing that gets cut when the firm is busy — and the firm is always busy.

    There is a second-order benefit of cross-training that is rarely discussed. The act of explaining how a process works forces the senior practitioner to examine the process, and examination almost always surfaces improvements. The cross-training session is also, every time, a process-improvement session. The firms that take cross-training seriously discover that their senior practitioners have been doing things a particular way for years that, when written down and shown to a junior practitioner, turn out to be unnecessarily complicated. The cross-training is the surfacing mechanism. The improvement is the dividend.

    Track the Right Numbers Weekly

    The weekly operations meeting at every firm in our family looks at the same handful of numbers. Open matters by stage. New matters this week. Closed matters this week. Hours billed by person. Hours produced by person against a target. Client-side waiting items — what is the firm waiting on from clients. Firm-side waiting items — what are clients waiting on from the firm. That is the dashboard. It fits on one page. It tells the firm leader what is going on in fifteen minutes.

    The reason most firms do not have this dashboard is not technical. The data exists, somewhere, in the systems they already pay for. The reason they do not have it is that nobody made it a priority to build. Once it is built, the firm leader cannot imagine running without it. The dashboard is the difference between a firm leader who knows what is happening and a firm leader who finds out what is happening after it has stopped being preventable.

    The two waiting-item numbers are the ones that most firms ignore and that we consider the most important. Firm-side waiting items measure the work that is queued up but not moving — usually because the firm is over capacity in a particular role. Client-side waiting items measure the work that is queued up but not moving because the firm is waiting on something from the client. Both numbers should be small. Both numbers should be aging less than a week. When either number gets large or starts aging, the firm has a problem that the billing dashboard does not show. The waiting-item dashboard shows it three or four weeks earlier, which is the difference between a problem you can fix and a problem you can only apologize for.
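    The one-page dashboard, including the waiting-item aging check, can be sketched as a simple data structure. The seven fields follow the list above; the structure itself and the one-week aging rule as code are our assumptions about how a firm might implement it.

```python
# A sketch of the one-page weekly dashboard as a data structure, with the
# waiting-item aging check the text emphasizes. Field names follow the
# essay; the implementation shape is an assumption.

from dataclasses import dataclass

@dataclass
class WeeklyDashboard:
    open_matters: int
    new_matters: int
    closed_matters: int
    hours_billed: dict              # person -> hours billed this week
    hours_produced: dict            # person -> hours produced against target
    firm_side_waiting_days: list    # age of each item a client is waiting on from the firm
    client_side_waiting_days: list  # age of each item the firm is waiting on from a client

    def aging_alerts(self, max_age_days=7):
        """Waiting items older than a week are the early-warning signal."""
        return {
            "firm_side": [d for d in self.firm_side_waiting_days if d > max_age_days],
            "client_side": [d for d in self.client_side_waiting_days if d > max_age_days],
        }

dash = WeeklyDashboard(
    open_matters=42, new_matters=5, closed_matters=4,
    hours_billed={"A": 38, "B": 41}, hours_produced={"A": 35, "B": 37},
    firm_side_waiting_days=[2, 9, 3], client_side_waiting_days=[1, 4],
)
print(dash.aging_alerts())  # {'firm_side': [9], 'client_side': []}
```

    The nine-day firm-side item in the example is exactly the kind of signal the billing dashboard never surfaces: one matter quietly stuck because a role is over capacity.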

    The Capacity Curve Over Time

    Capacity in a small firm is not a static number. It is a curve that moves with the firm's experience, its processes, and its tools. A firm that does the same work the same way every year gains no capacity from its existing team; it can only add capacity by hiring. A firm that systematically improves its processes will gain capacity from the same headcount, year over year, as the team learns to do the work more efficiently and as the systems learn to absorb more of the rote work.

    The firms that compound capacity from improvement, rather than just from hiring, are the ones that end up with structurally better margins than their peers. They have figured out that capacity is partially a function of how the work is organized, not just how many people are doing it. The work of capacity improvement is unglamorous — process documentation, template refinement, system configuration, automation of the parts that automate well — but the payoff compounds. A firm that gets five percent more efficient every year doubles its effective capacity in fifteen years without doubling its headcount. The competitor that did not invest in efficiency has to actually double its headcount to keep up, which means it has to absorb all the management complexity that comes with twice as many people. The compounding firm wins on margin, on culture, and on resilience, and the win is invisible until it suddenly is not.
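    The compounding claim above checks out arithmetically, and it is worth seeing the number:

```python
# Checking the compounding claim: 5% annual efficiency gains roughly
# double effective capacity in about fifteen years.

rate = 0.05
years = 15
multiplier = (1 + rate) ** years
print(round(multiplier, 2))  # ~2.08: effective capacity roughly doubles
```

    The same headcount, compounded at five percent, does the work of two; the competitor without the efficiency habit has to hire its way to the same place.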

    The Team Is the Asset

    Every firm we own is mostly its team. The clients, the brand, the matter book — all of that compounds on top of the team. A firm with the right team can rebuild every other asset. A firm with the wrong team cannot. We hire carefully, develop deliberately, and invest in the people who are already there. The capacity that compounds is the capacity that stays.

    This is the part of capacity planning that the spreadsheet cannot capture. The spreadsheet treats headcount as fungible — one practitioner is worth roughly one practitioner. The reality is that a senior practitioner with five years of firm tenure is worth two practitioners with six months of tenure, and the gap is not visible in the headcount line. The investment in retention is, mathematically, the highest-leverage capacity investment a firm leader can make. The retained senior practitioner produces more per hour, requires less supervision, mentors the junior practitioners, and carries the institutional knowledge that no documentation system fully captures. Losing her is the single most expensive thing that can happen to a small firm, and yet firms routinely under-invest in retention because the investment is illegible on the P&L.

    What to Do Monday Morning

    Build the one-page dashboard. If it takes you a week, take the week. If it takes you a month, take the month. The dashboard is the foundation of every other capacity decision you will make for the next five years, and running without it is running blind. Open matters, new matters, closed matters, hours billed, hours produced, firm-side waiting, client-side waiting. Seven numbers. That is the foundation.

    Re-plan the year around 1,500 hours per practitioner. If your plan is built on a higher number, the plan is fiction. Re-plan it. Tell the team what changed and why. Accept that the new plan will be harder to hit on revenue, and accept that the team will trust you more because the plan is honest.

    And finally, identify the next hire before you need it. Not the hire after the team breaks. The hire two quarters before the team breaks. Write down the trigger that will tell you it is time to make the offer. Then watch the trigger every week. The discipline of watching the trigger is the discipline of running a firm rather than reacting to one, and it is the discipline that, more than any other, separates the firms that compound from the firms that just survive.

  • From Heroics to Process: Documenting Work Before You Automate It

    There is a moment in the life of every small firm when the founder realizes that the firm cannot grow past her. The signs are subtle for a long time and then suddenly obvious. The work that depends on her judgment keeps showing up faster than she can train others to handle it. The senior people get frustrated because she will not let go. The clients are loyal to her, not to the firm. Whenever she takes a vacation, the work piles up in a way that takes two weeks to dig out of. The firm is, in effect, a personality projected onto a payroll. It is a wonderful personality and a fragile firm, and the founder is too busy being the personality to notice that the firm under her has stopped scaling.

    This is the moment process design becomes the most important investment the firm can make. It is also, in our experience, the moment the firm is least equipped to make it. The same things that made the founder good at running the firm by heroics — her judgment, her speed, her unwillingness to delegate the parts she cares about — make her bad at writing down how the firm works. The founder has to learn a new skill, late in her career, to do the thing that will determine whether the firm outlives her. Most founders never get there. The firms that do are the ones that, in retrospect, were able to compound.

    You cannot automate a process that does not exist. This sounds obvious until you walk into a small professional services firm and try to figure out how it actually does its work. The work gets done — the documents go out, the clients get served, the bills get paid — but if you ask three people in the firm how the work happens, you will get three different answers, each only partly true. The firm does not have a process. The firm has heroics.

    Most small firms are run on heroics. A senior person knows how to do something. She does it. When she is busy, she trains a junior person by doing it together a few times. Eventually the junior person can do it on her own, mostly. When something unusual happens, the senior person handles it. When the senior person leaves, the institutional knowledge leaves with her. This is fine until the firm needs to scale, at which point it becomes a wall.

    From Heroics to Process

    Process design in a professional services firm is not about turning the work into a factory. It is about making the work repeatable enough that it does not depend on any one person knowing the answer. The senior person’s job changes from “do the work” to “design the work so that other people can do it correctly without me.” This is a different job. It uses different muscles. It requires the senior practitioner to translate twenty years of tacit knowledge into explicit instructions, and then to accept that the explicit instructions, executed by a less experienced person, will produce work that is ninety percent as good as her own — which is the right trade, because the firm can run ten parallel instances of ninety percent and only one instance of one hundred percent, and ten times ninety is nine hundred percent.

    The smallest unit of process design is the checklist. A real checklist, not a wishlist — a sequence of steps that, when followed, produces a reliable outcome. For probate intake, the checklist might have forty items. For drafting a particular kind of trust, eighty. For an offer in compromise, two hundred. The checklist is not a substitute for judgment. It is a way to free up the practitioner’s judgment for the parts of the work that actually require it.
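    A checklist is also the simplest possible data structure, which is part of why it works: an ordered list of steps plus a record of what is done. The items below are a hypothetical fragment of an intake list, not a real one, and the helper is a sketch of the only operation a checklist really needs — show me what is still outstanding, in order.

```python
# Hypothetical fragment of an intake checklist. Items are illustrative.
INTAKE_CHECKLIST = [
    "Conflict check run and cleared",
    "Engagement letter signed",
    "Death certificate received",
    "Original will located",
    "Heir contact list compiled",
]

def outstanding(checklist, completed):
    """Return the steps not yet done, preserving the checklist's order."""
    done = set(completed)
    return [step for step in checklist if step not in done]

remaining = outstanding(INTAKE_CHECKLIST, {
    "Conflict check run and cleared",
    "Engagement letter signed",
})
print(remaining)  # the three items still blocking the matter
```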

    The genius of the checklist, which Atul Gawande wrote a whole book about and which professional services firms have mostly failed to learn from, is that it does not lower the ceiling of performance — it raises the floor. The best practitioner with a checklist is at least as good as the best practitioner without one, because she can simply ignore the items she has already internalized. The mediocre practitioner with a checklist is substantially better than the mediocre practitioner without one, because the checklist catches the items she would otherwise have forgotten. The variance of the firm’s output narrows. The narrow variance is what clients are actually buying when they hire a “good firm.” They are not buying the peak performance. They are buying the predictability.

    Document Before You Automate

    The first rule of automation is that you cannot automate what you have not first documented. The temptation to skip the documentation step and go straight to the software is enormous, because documentation is boring and software is exciting. The teams that skip the documentation step end up with software that automates the wrong thing, or that requires more manual work than the process it replaced.

    We document everything before we automate anything. The process documentation lives in plain text, edited by the practitioners who actually do the work, reviewed by the firm leader, and updated whenever the process changes. It is not glamorous. It is the most valuable artifact in the firm. The documentation, taken together, is the firm’s operating system — and a firm with a written operating system is qualitatively different from a firm without one. The first kind of firm can be taught to a new partner. The second kind of firm can only be lived through, which means it cannot be replicated, transferred, or sold.

    The order matters: document first, then automate. Most firms that get this order wrong end up with a tangle of automations that nobody fully understands, each one solving a small problem in a way that creates a larger one. The right sequence is to write down the process as it is currently being executed, in enough detail that a thoughtful new hire could follow it. Then improve the process on paper — most processes have at least one or two obvious improvements that surface only when they are written down. Then, and only then, look at what parts of the improved process are mechanical enough to automate. The mechanical parts get automated. The judgment parts stay with humans. The result is a firm that uses software as a force multiplier, not as a substitute for understanding its own work.

    Process Owners

    A process without an owner decays. Every process in the firm has an owner — a specific person whose job includes keeping that process current, identifying where it is breaking, and fixing it. The owner is not necessarily the senior practitioner. Often the best process owner is the senior paralegal or the office manager, because she is the one who watches the process get executed every day and notices when it is not working.

    Process owners are paid for the role. It is not a set of extra duties stacked on top of a full-time job. We carve out real time and real authority for the people who own processes, because the alternative is processes that look great on paper and that nobody actually follows.

    The pathology of process ownership without authority is one of the most common patterns in professional services firms. Someone is named “process owner” but has no power to change the process, no time to maintain it, and no audience for raising the alarm when it is failing. The process stays on paper. The work happens however the practitioners decide it happens. Six months later, the documented process and the actual process have diverged so far that the documentation is worse than useless — it is misleading. New hires are trained on the documentation and then quietly learn from senior peers that the real way to do things is different. The firm now has two operating systems, one written and one oral, and the oral one wins. The cure is not better documentation. The cure is process owners with authority commensurate with the responsibility.

    When to Break the Process

    Process discipline is a tool, not a religion. Every process should have an explicit exception path. The exception path is what the practitioner does when the fact pattern is novel, the client is unusual, the deadline is unusual, or something else is unusual. The exception path requires more judgment than the standard path. That is the entire point.

    A firm that follows its processes ninety-five percent of the time and uses good judgment for the other five percent is much better off than a firm that has perfect adherence to a process that does not actually fit reality. Process exists to make the routine work routine, so that the unusual work can get the attention it deserves.

    The cultural failure mode in mature-process firms is process worship — the belief that the process is the work, rather than a scaffolding for the work. Process worship produces firms that follow the documented steps even when the steps are clearly wrong for the situation in front of them. The signs of process worship are easy to spot once you know what to look for: practitioners who escalate trivial decisions to managers because “the process does not cover this,” documentation that has not changed in three years even though the firm has, and a steady erosion of judgment among the people who used to have it. The cure is to make the exception path as legitimate, as documented, and as celebrated as the standard path. The exception path is not a failure of the process. It is part of the process, and the practitioners who use it well are doing the highest-value work in the firm.

    The Documentation Discipline

    Documentation rots if it is not maintained. The discipline of keeping it current is harder than the discipline of writing it in the first place, because the urgency that drove the original effort fades, and the maintenance work is invisible until the documentation is wrong. The firms that maintain their documentation well do so because they have built maintenance into the regular operating rhythm — not as a quarterly project but as a continuous practice. Every time a process is executed in a way that differs from the documentation, the documentation gets updated. Every time a new edge case is encountered, the documentation absorbs it. The documentation is a living artifact, not a published one.

    The format matters less than the discipline. We have seen firms run perfectly good documentation systems in plain text files, in Notion, in Google Docs, in dedicated process-management software, and in old-fashioned binders. We have also seen firms with expensive process-management platforms whose documentation is stale and useless. The platform does not create the discipline. The discipline is what makes the platform worth anything. A firm with a discipline and no platform will have better documentation than a firm with a platform and no discipline, every time.

    What Good Process Design Feels Like

    In a firm with mature processes, a new hire can be doing real client work in weeks instead of months. A long-term associate can leave for two weeks of vacation and the firm does not freeze. The firm leader can sleep on Sunday night without lying awake wondering which deadline is going to be missed on Monday. The work product is consistent. The clients notice that the firm feels organized.

    The reason most small firms do not invest in process design is that the payoff is invisible. You cannot point at a particular client win and credit the process for it. You can only point at the absence of failures, which is the hardest thing in the world to point at. We do this work anyway, because the absence of failures is what compounds into a durable firm.

    There is also a more subtle, and ultimately more important, effect of mature processes. The firm starts to be able to think about itself. Not in the vague, retrospective way that all firms can — “I think we are getting better at X” — but in the specific, prospective way that only systematized firms can. The leadership can look at a documented process, see that it produces a particular kind of result, and ask whether a different version of the process would produce a better result. The firm becomes capable of experimentation, because there is a baseline to experiment against. Pre-process firms cannot really experiment, because they cannot measure the difference between the experiment and the control. The maturity of the process is the prerequisite for the maturity of the firm’s self-understanding, and the self-understanding is the prerequisite for everything that compounds beyond it.

    What to Do Monday Morning

    Pick the three processes that the firm runs most often and write them down. Not in a slide. In plain text. Start with the most ordinary, most repeatable processes — intake, billing, file opening — because these are the ones where the gap between “documented” and “actually executed” is widest and where the wins from closing the gap are largest. Resist the urge to start with the glamorous, complicated processes. Start with the ones that should be boring and have somehow become hard.

    Assign an owner to each process. Give the owner real time and real authority. If you cannot give the owner real time, the process is not important enough to have an owner. If you cannot give the owner real authority, the documentation will diverge from the practice within a quarter. The owner-with-power is what makes process work compound rather than decay.

    And finally, write down the exception path for each process. Not in the abstract. With examples. The exception path is the proof that the process respects judgment, and the explicit acknowledgment of the exception path is what keeps the standard path from becoming a religion. A firm that has both — a clean standard path and a legitimate exception path — has built the operating system that will let it scale beyond the senior people who currently hold it together. The rest is execution, and execution is easier than building the operating system in the first place.

  • The 80/20 of Back-Office Infrastructure for a Professional Services Firm

    The first time you walk through a small professional services firm with an eye for the systems underneath, the dominant feeling is archaeology. Layer on layer of software accreted over decades. A practice management tool the partner picked when she went solo in 2009. A billing system the office manager added when the firm hit a million in revenue. A document repository that came in because a paralegal liked it. A bookkeeping platform an accountant recommended once. None of them are wrong, exactly. Each one solved a problem that was real at the moment it was installed. The problem is that no one ever zoomed out and asked whether the collection, taken as a whole, was a coherent operating system or a museum of past emergencies. Almost always, it is the museum.

    Small professional services firms run on systems. The reason most of them do not run well is that the systems are accidental — assembled over fifteen years from whatever the partner happened to be using when she went solo, plus whatever the office manager added when something broke, plus whatever the IT person who came in once recommended before he disappeared. The firm runs on these systems the same way an old house runs on its plumbing: it works until it doesn’t, and when it stops working, nobody knows where the shutoff is.

    When we acquire a firm, the systems audit is one of the first things we do. It is also one of the longest. The point is not to replace everything — replacing everything in a firm is how you destroy a firm. The point is to figure out what is actually load-bearing, what is decorative, and what is actively making things worse. The audit produces a map. The map is the foundation of every operational improvement that follows, because without the map, every improvement is a guess. Most firms operate without a map. We refuse to.

    The 80/20 of Back-Office Infrastructure

    There is a relatively short list of systems that every small professional services firm needs to run well. A practice management system that actually models the work. A document management system that the team trusts. A billing system that ties back to the work product. A client portal that clients will use. A financial system that produces real management reporting. A secure way to handle email, files, and credentials. And the integration layer that holds it all together.

    Most small firms have rough versions of each of these. The problem is rarely that any one of them is missing. The problem is that they do not talk to each other. The billing system has one version of the client list. The practice management system has a different one. The email system has a third. The bookkeeping software has a fourth. Reconciling these takes hours every week and introduces errors that nobody catches until a client calls.
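    The reconciliation work can be made concrete with a sketch. The two record sets below stand in for any pair of systems with their own copy of the client list; the names, IDs, and fields are hypothetical. The helper does the one thing the weekly manual reconciliation does — find every field where the systems disagree — which is exactly the work integration is supposed to eliminate.

```python
# Hypothetical client records from two un-integrated systems.
billing = {"C-104": {"name": "Dana Ortiz", "email": "dortiz@example.com"}}
practice_mgmt = {"C-104": {"name": "Dana Ortiz", "email": "d.ortiz@example.com"}}

def disconnections(a, b):
    """Yield (client_id, field, value_in_a, value_in_b) wherever the systems disagree."""
    for cid in a.keys() & b.keys():
        for field in a[cid].keys() & b[cid].keys():
            if a[cid][field] != b[cid][field]:
                yield cid, field, a[cid][field], b[cid][field]

for mismatch in disconnections(billing, practice_mgmt):
    print(mismatch)  # ('C-104', 'email', 'dortiz@example.com', 'd.ortiz@example.com')
```

    In a real firm the interesting part is not the comparison — it is that nobody runs it, so the email mismatch sits there until a bill goes to the wrong address.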

    The deeper problem with un-integrated systems is not the time they consume. It is the cognitive tax they impose on the people doing the work. Every disconnection requires a human to hold two facts in her head and verify they match. Multiply that by twenty disconnections, across a team of fifteen people, across a workday — and a meaningful fraction of the firm’s collective brain is spent on reconciliation that adds no client value. The disconnections also create the silent failure mode that small firms are most vulnerable to: the small inconsistency that does not get noticed for six weeks and produces a bill that is wrong, a deadline that is missed, or a document that does not match the file. Integration is not a luxury. It is a precondition for the firm being able to think about anything else.

    Integration Is the Product

    What we actually deliver to the firms in our family is integration. We do not pick the fanciest practice management software and roll it out everywhere. We pick the system that fits the firm’s practice area and we make sure it is integrated with the rest of the stack so that data flows automatically, billing reconciles automatically, client communications log automatically, and the firm leader can see what is going on without asking three different people for three different reports.

    The cost savings from this are real but they are not the point. The point is that integration removes the cognitive overhead of the firm. The firm leader stops thinking about which system has the truth. The intake coordinator stops re-typing the client’s information into four different places. The bookkeeper stops chasing down what happened to a payment because the system shows it. Every minute that the firm spends on this kind of internal reconciliation is a minute it is not spending on clients.

    Integration also produces a second-order effect that most firms underestimate: it changes what kinds of questions the firm can ask itself. A firm with integrated systems can ask, “which practice area has the longest cycle time from intake to first deliverable?” and get an answer in five minutes. A firm with disconnected systems cannot ask the question at all, because answering it would require a paralegal to spend two days pulling data from four places, and nobody is going to spend two days answering a question that the firm has never asked before. The integrated firm gets to be curious about itself. The disconnected firm gets to be opinionated about itself. The first kind of firm improves over time. The second kind of firm just gets older.
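    The cycle-time question above is answerable in a few lines once the data lives in one place. The matters below are hypothetical, pulled as if from an integrated stack; the point is that the query is trivial when the intake date and the first-deliverable date are in the same system.

```python
from datetime import date
from collections import defaultdict

# Hypothetical matters from an integrated stack; fields are illustrative.
matters = [
    {"area": "probate", "intake": date(2024, 1, 3),  "first_deliverable": date(2024, 1, 24)},
    {"area": "probate", "intake": date(2024, 2, 1),  "first_deliverable": date(2024, 2, 15)},
    {"area": "tax",     "intake": date(2024, 1, 10), "first_deliverable": date(2024, 3, 1)},
]

cycle_days = defaultdict(list)
for m in matters:
    cycle_days[m["area"]].append((m["first_deliverable"] - m["intake"]).days)

avg = {area: sum(d) / len(d) for area, d in cycle_days.items()}
slowest = max(avg, key=avg.get)
print(slowest, avg[slowest])  # tax 51.0
```

    The disconnected firm would need a paralegal and two days to produce the same number; the integrated firm gets it before the meeting ends.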

    What We Will Not Do

    We will not build custom software when commercial software exists. We have watched too many firms get into the software-building business by accident and ruin themselves. The right answer is to pick the best commercial product, configure it carefully, and pay the subscription. The wrong answer is to hire developers and start writing code.

    We also will not chase trends. The list of systems above is boring. It has been boring for fifteen years and it will be boring for the next fifteen. Boring systems that the team trusts beat exciting systems that the team has to learn every time. Stability has a value that does not show up in software demos.

    The temptation to build custom software is one of the most reliable failure modes in professional services operations. It usually begins innocently. A senior partner identifies a workflow that no commercial tool quite handles. A developer in the firm’s family offers to build a quick solution. Six months later, the firm has a piece of software that mostly works, that depends on the developer to maintain, and that has become a load-bearing part of the firm’s daily operations. Twelve months later, the developer is busy with something else, the software has bugs nobody can fix, and the firm is back to manual workarounds — only now the manual workarounds are stacked on top of a custom system that nobody fully understands. The right discipline is to write the workflow down, look for the commercial tool that comes closest, accept the seventy or eighty percent fit, and configure around the gap. The seventy percent solution that is maintainable is dramatically better than the hundred percent solution that is fragile.

    Where AI Fits

    AI fits inside the existing systems, not on top of them. A bookkeeping system that has AI helping with categorization is more useful than a separate AI tool that you have to copy data into. A document management system that surfaces relevant prior work using embeddings is more useful than a chatbot that you have to prompt manually. The integrations that mattered five years ago still matter; AI just makes some of the steps inside those integrations faster and more accurate.

    We are deploying AI inside the firms we own, but quietly. The firms do not advertise it to clients. The associates do not introduce it as “AI-powered.” It is just that the work happens faster, the responses come quicker, the documents have fewer mistakes, and the firm has more capacity. The technology recedes into the background of the work, which is where it belongs.

    The reason we deploy AI quietly is not modesty. It is durability. Tools that announce themselves to clients become part of the firm’s brand promise, and brand promises tied to specific tools age badly. Five years from now, the specific AI tool that the firm advertised will be obsolete, replaced by something better, and the firm will either have to update its brand promise constantly or have to defend an obsolete tool against newer competitors. Tools that work silently behind the firm’s existing brand do not have this problem. The firm’s promise is to deliver quality work efficiently. The tool helps the firm keep that promise. If the tool changes, the promise does not. This is a small distinction with large consequences, and it is the one that separates firms that use AI well from firms that performatively use AI badly.

    The Hidden Cost of Software Sprawl

    One of the patterns we see in firm after firm is what we call software sprawl. A firm of twelve people will have forty-three software subscriptions, each one renewed automatically, each one used by one or two people, each one creating a small data island that has to be reconciled with everything else. The total annual spend is meaningful — often six figures — and the productivity drag is larger still. The sprawl happens because every individual subscription decision was rational at the time it was made. The aggregate is irrational, but no one is responsible for the aggregate.

    The cure for software sprawl is the systems audit, repeated annually. Walk through every subscription. Ask who uses it. Ask what would happen if it disappeared. Most firms discover, on the first audit, that they can eliminate twenty to thirty percent of their subscriptions without losing any capability the firm actually relies on. The savings are real, and the simplification is even more valuable. A firm that runs on seven well-chosen systems is faster, clearer-thinking, and more resilient than a firm running on forty-three. The discipline of choosing what not to run is, paradoxically, what gives the firm its actual operating capacity. The maximalist firm thinks it has more capability because it has more tools. The minimalist firm thinks it has more capability because it has fewer distractions. The minimalist firm is correct.
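    The first pass of the audit can be mechanized as a simple sort into keep and review piles. The subscription data and the usage threshold below are hypothetical; the human judgment (“what would happen if it disappeared?”) still has to be applied to the review pile, but the pile itself is cheap to produce.

```python
# Hypothetical annual audit: flag thinly-used subscriptions for review.
subscriptions = [
    {"name": "practice-mgmt", "annual_cost": 18_000, "active_users": 12},
    {"name": "e-sign",        "annual_cost": 2_400,  "active_users": 2},
    {"name": "legacy-crm",    "annual_cost": 6_000,  "active_users": 1},
]

def audit(subs, min_users=3):
    """Split the stack into keep / review piles by active usage. Threshold is illustrative."""
    keep   = [s for s in subs if s["active_users"] >= min_users]
    review = [s for s in subs if s["active_users"] < min_users]
    return keep, review

keep, review = audit(subscriptions)
print([s["name"] for s in review])            # ['e-sign', 'legacy-crm']
print(sum(s["annual_cost"] for s in review))  # 8400 — annual savings candidate
```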

    Why This Matters

    A small firm cannot afford the systems work we are describing here. The math does not work for a four-person practice. The math starts to work at a platform of firms, because the same controller, the same systems architect, the same integration work serves multiple firms at once. This is the whole rationale for the holding company structure. The systems are the asset. The firms are the customers. The clients are the beneficiaries.

    This is also the part of the platform thesis that is genuinely defensible against larger competitors. Big firms have enormous systems teams but very little ability to deliver bespoke configurations to small practice areas. Small firms have intimate knowledge of their practice areas but no ability to invest in proper systems. A platform that combines the practice-area intimacy of the small firm with the systems-investment capacity of the larger one occupies a real and durable position in the market. The systems work, in other words, is not back-office plumbing. It is the strategic differentiator dressed up as plumbing — which is the most reliable way to build a durable advantage, because nobody copies the things that look boring.

    What to Do Monday Morning

    List every system the firm uses. Every one. Subscriptions, internal tools, spreadsheets that are load-bearing, the email aliases that route to the wrong place. Get them on one page. The act of listing them is, by itself, a small revelation, because no firm leader has ever counted before, and the count is always larger than expected.

    Identify the disconnections. Where does the same client appear in two different systems with two different attributes? Where does a deadline get tracked in three places? Where does a payment have to be entered twice? The disconnections are the work. Closing them is what produces the integration that compounds.

    And finally, do not buy anything new until you have eliminated something old. The discipline of one-in-one-out is what keeps software sprawl from accreting back. Every new tool has to displace an old one, or the firm has to make an affirmative decision to expand the surface area. Most of the time, the firm decides not to expand. That decision is the one that, year after year, keeps the operating system clean enough to actually be an operating system rather than an archaeology site.

  • Where AI Earns Its Keep in Professional Services, and Where It Quietly Fails

    There are two ways to be wrong about AI in professional services, and almost every firm is wrong in one of them. The first is to treat AI as a discontinuity — to assume it is about to remake the profession, displace the practitioners, and reward the firms that bet aggressively on rebuilding themselves around it. The second is to treat AI as a fad — to assume it is hype, that the existing way of doing things will reassert itself, and that any investment in it is a tax on a profession that has worked fine for a hundred years. Both views are reassuring in their certainty. Both are wrong. The reality is messier and more interesting, and getting it right requires resisting the urge to be certain about something that is still in motion.

    Every conversation about AI in professional services eventually arrives at the same set of questions. Will AI replace attorneys, accountants, bookkeepers? Will small firms lose to large firms with better technology? Will technology-first competitors disrupt incumbents? These are the wrong questions, or at least they are the wrong first questions. The right first question is far more boring: where, specifically, in the work that this firm does every day, can AI make the work better?

    The honest answer for most small firms today is “a few places, narrowly, with careful supervision.” That is less exciting than the broader claims, but it is what we actually see when we deploy these tools inside our firms. The places where AI works are surprising. The places where it does not are also surprising. The difference between the two has almost nothing to do with the underlying model and almost everything to do with the structure of the work — which is the part that gets the least attention in the AI discourse and that, in our experience, matters the most.

    Where AI Earns Its Keep Today

    Document review. Not the final review by an attorney, but the first-pass triage. Finding the relevant clauses in a hundred-page contract, surfacing the unusual provisions, comparing against a known good template. The attorney still does the legal judgment, but she does it on a curated and annotated document instead of a raw one. The time savings are real. The accuracy improvement is also real — the AI does not get tired on page sixty, the way a human reviewer does, which means the unusual clause that hides on page sixty-two no longer gets missed.

    Drafting. Standard letters, standard motions, standard engagement letters, standard responses to common client questions. The output is never publishable as-is, but it is far better than starting from a blank page. The skill is in writing the prompt correctly and in editing the output rigorously. The associate who knows how to do both does the same work in half the time. The skill of editing AI output is, importantly, not the same as the skill of drafting from scratch. It requires a different cognitive posture — a critical, suspicious, line-by-line read rather than a generative one. Firms that train their associates explicitly in this skill get more out of AI drafting than firms that simply hand the tools to the team and hope.

    Research. Tax research, case research, regulatory research. AI search is good at finding the relevant authority. It is not yet good at synthesizing the authority into a defensible answer. So we use it to find what to read, not to decide what to do. The distinction is operational: AI is a research assistant, not a research conclusion. The firm that treats it as a research conclusion will eventually issue an opinion that is wrong, lose a client over it, and discover that AI hallucination is not a theoretical risk — it is a malpractice risk hiding inside a productivity tool.

    Bookkeeping categorization. The marginal AI improvement here is enormous because the work is repetitive, the categories are well-defined, and the corrections are easy to learn from. The bookkeeper goes from coding every transaction to reviewing the AI’s codings. Throughput doubles. Accuracy goes up. This is the canonical example of AI fitting the structure of the work — a high-volume, well-defined, correctable task with clear feedback loops. Where the structure of the work matches the strength of the model, the value is unambiguous. Where the structure does not match, no amount of model improvement helps.

    Where AI Quietly Fails

    Anything that requires the model to understand who the client actually is, what they actually want, and what they have actually agreed to. AI does not know your client. It cannot tell you whether the answer that is technically correct is also the answer your client should hear, in the way your client should hear it, given the relationship you have with them.

    Anything that involves novel judgment. The first time a fact pattern looks like X but is actually Y, AI will get it wrong, because it is averaging across cases it has seen. The exceptions are where the practitioner earns her living. AI cannot replace the practitioner there and probably should not try.

    Anything that introduces material risk. We do not let AI send anything to clients without human review. We do not let AI sign anything. We do not let AI make decisions that we would not let a first-year associate make on her own. The standard is the same one we have always used for first-year work: useful, but always reviewed.

    The “quiet failure” framing in this section’s title is deliberate. AI does not usually fail loudly. It fails by producing output that looks plausible, that is wrong in ways that are subtle, and that requires a knowledgeable reviewer to catch. The firms that get hurt by AI are not the firms whose AI tools crashed. They are the firms whose AI tools worked just well enough to be trusted by people who did not have the skill to verify the output. The reviewer-skill problem is the actual problem. The model-quality problem is a secondary one, and it is one the vendors will solve faster than the reviewer-skill problem will be solved. The firms that invest in their reviewers’ AI literacy are the firms that will use AI well over the next decade. The firms that invest only in tools are buying half the answer.

    The Deterministic Layer Is the Point

    We have written elsewhere about the line between deterministic systems and nondeterministic ones. AI is nondeterministic by nature. The work in our firms is mostly deterministic by nature — the same kinds of matters, the same kinds of documents, the same kinds of decisions, with the same kinds of safeguards. The way we use AI is to put nondeterministic steps inside deterministic workflows, with deterministic checks on the output. This is unglamorous and it is also what works.

    A modern tax controversy practice looks the same as it did five years ago from the client’s perspective. The forms are the same. The deadlines are the same. The IRS is the same. What has changed is that the steps in between — pulling transcripts, summarizing notices, drafting responses, calculating projections — happen faster and with fewer errors. The practitioner spends more time on the substantive judgment and less time on the mechanical work. That is the entire promise of AI in this kind of practice, and it is enough.

    The architectural insight here is worth stating explicitly. The job of the firm is to deliver deterministic outcomes — the right legal advice, the right return, the right book of accounts. The job of the workflow is to deliver those outcomes reliably. AI is a tool that can do some of the intermediate steps faster, but it cannot be allowed to compromise the determinism of the outcome. So we wrap the nondeterministic AI steps in deterministic scaffolding: a structured input, a known-good template, a human reviewer, a checklist verification. The scaffolding is the firm’s promise to the client. The AI is the productivity multiplier inside the scaffolding. Firms that get this layering right move faster without losing reliability. Firms that get it wrong move faster and lose reliability at the same time, and the loss of reliability is not visible until the first time it matters, at which point it is too late.
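    A minimal sketch of that layering, under stated assumptions: `fake_model` stands in for the nondeterministic AI call, and the checks stand in for the checklist verification. The function and matter number are illustrative only:

```python
def run_with_scaffolding(step, payload, checks, reviewer):
    """Run a nondeterministic step inside deterministic guardrails.

    `step` is the AI call (nondeterministic); `checks` are deterministic
    predicates its output must pass; anything that fails the checks goes
    to a human reviewer instead of flowing downstream unexamined.
    """
    draft = step(payload)
    if all(check(draft) for check in checks):
        return draft, "auto"
    return reviewer(draft), "reviewed"

# Hypothetical example: a drafting step whose output must be non-empty
# and must reference the client's matter number.
def fake_model(payload):  # stand-in for the AI call
    return f"Re: matter {payload['matter']} - draft response"

checks = [
    lambda d: len(d) > 0,
    lambda d: "matter" in d,
]
out, path = run_with_scaffolding(
    fake_model, {"matter": "2024-118"}, checks, reviewer=lambda d: d
)
```

    The scaffolding, not the model, is what makes the outcome deterministic enough to promise to a client.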

    The Talent Implication Most Firms Miss

    The popular narrative is that AI will reduce the need for junior associates. The narrative is partially right and mostly misleading. The mechanical work that junior associates used to do — first-pass document review, basic research, template drafting — is exactly the work AI is best at. So firms will indeed need fewer hours of that work from juniors. But the firms that will thrive are the ones that take the time they used to spend supervising juniors on mechanical work and reinvest it in training juniors on the judgment work that AI cannot do. The result is the same headcount but a different developmental curve — juniors who are doing harder, more cognitively demanding work earlier in their careers, and reaching senior judgment maturity faster than their predecessors did.


    The firms that get this wrong will hollow out their talent pipeline. They will keep the same supervision model — juniors doing mechanical work, seniors reviewing it — but with AI in the middle, which means the juniors are not actually doing the mechanical work, which means they are not building the muscle that the mechanical work used to build. Five years later, those firms will have senior associates who have never had to read a hundred-page contract from cover to cover, and who therefore cannot reliably catch the things that the AI missed. The talent risk of AI is not that it will replace the juniors. The talent risk is that it will produce a generation of seniors who never developed the underlying skill that AI is now imperfectly performing. The cure is to be deliberate about what juniors do learn, given what they no longer have to do.

    What We Are Building Toward

    Over the next several years we expect AI to keep moving from optional to assumed inside the firms we own. The associates we hire will use it because it makes their work better. The clients will benefit because the work will be faster, cheaper, and more accurate. The competitive advantage will accrue to firms that integrate AI carefully into their existing workflows, not to firms that try to rebuild themselves around it. Quiet integration beats loud rebranding every time.

    The firms that will struggle are not the ones that are slow to adopt AI. They are the ones whose underlying processes were so undocumented and so ad-hoc that they cannot tell where AI would fit. The pre-condition for using AI well is having a real process to begin with. That has always been the pre-condition for everything else in a professional services firm, too. AI is not a way to skip the work of building a real firm. It is a multiplier that rewards firms that have already done the work. The multiplier on zero is still zero, and a lot of firms are about to discover that their AI investments are multiplying the wrong thing.

    What to Do Monday Morning

    Pick three tasks in your firm that are repetitive, well-defined, and currently consume meaningful associate time. Document those tasks. Then pilot AI on them, with explicit human review at every step. Measure the time savings, the accuracy delta, and the reviewer experience. The pilot is the data. Once you have the data, you can decide whether to expand the use of AI on that task — and whether to expand it to other tasks of similar shape.
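    The measurement step can be as simple as a few lines of arithmetic. The function name and the pilot figures below are illustrative:

```python
def pilot_summary(baseline_minutes, ai_minutes, errors_baseline, errors_ai, n_tasks):
    """Summarize an AI pilot: time saved and the accuracy delta.

    All inputs are totals over the pilot period; the numbers used
    below are hypothetical.
    """
    return {
        "time_saved_pct": round(100 * (baseline_minutes - ai_minutes) / baseline_minutes, 1),
        "error_rate_baseline": errors_baseline / n_tasks,
        "error_rate_ai": errors_ai / n_tasks,
    }

summary = pilot_summary(baseline_minutes=1200, ai_minutes=700,
                        errors_baseline=18, errors_ai=9, n_tasks=300)
```

    The point is not the arithmetic; it is that the expand-or-stop decision is made on measured data rather than enthusiasm.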

    Resist the temptation to deploy AI across the firm at once. The deployment that scales is the one that is preceded by a documented process and followed by measured results. The deployment that fails is the one that is announced before it is tested. There is enormous pressure inside firms right now to be seen to be using AI. The pressure is mostly cultural, not commercial, and it is causing firms to make commitments faster than their actual experience with the tools supports.

    And finally, decide explicitly what you will not let AI do. Write it down. Tell the team. The list is as important as the list of what you will let AI do, because the boundary is what protects the firm from the quiet failure mode. Firms with clear lines about what AI does and does not do produce reliable work with AI in the mix. Firms with fuzzy lines produce variable work and eventually a malpractice claim. The clear-line firm is the durable firm, and the durability is the point.

  • Decentralized by Design: Why We Hire Operators, Not Managers

    There is a recurring fiction in professional services about how firms get run. The fiction is that someone, somewhere, is making the decisions. A managing partner. A founder. A board. Pull on the thread long enough and you find that the decisions are actually being made by whoever happens to be in the room when the question comes up, and whoever can hold the floor longest. This is fine for a four-person firm. It does not scale. It also does not produce the kind of accountability that a serious organization runs on, because nobody can be held accountable for a decision they did not realize they were making.

    The serious version of the question — “who decides what?” — is the most important organizational question any platform of firms has to answer, and it has to be answered explicitly, in writing, with consequences attached. The platforms that answer it well end up running on something that looks like decentralization. The platforms that answer it badly end up running on something that looks like consensus, which is a form of decentralization that has no accountability attached and that produces neither the speed of centralization nor the local intelligence of true decentralization. The right answer is a third thing, and the third thing is what we have been trying to build.

    When we acquire a firm we replace the implicit decision-making with something different. Not centralization — the opposite. We push real authority down to the people closest to the work, and we make that authority specific enough that it is unambiguous who decides what. The result looks decentralized because it is decentralized. But it is decentralized by design, not by accident. Decentralization by accident is just chaos. Decentralization by design is a system, and a system is what allows a platform to scale without losing the local intelligence that makes the firms worth owning in the first place.

    Hire Operators, Not Managers

    The most common mistake we see in small professional services firms is hiring a manager when what the firm needed was an operator. A manager coordinates. An operator owns the result. A manager attends meetings about a problem. An operator solves the problem and then writes a one-paragraph memo about what was done. The skills look similar from the outside; the outcomes are not.

    We hire operators. We pay them like operators. We give them real budgets and real decision rights and we measure them against real outcomes. The trade-off is that operators are harder to find, harder to train, and harder to manage in the conventional sense — because the conventional sense of “manage” mostly means “review and approve,” and operators do not need that.

    The most reliable test for whether someone is an operator or a manager is to give her a problem and watch what happens in the first forty-eight hours. The operator goes and looks at the problem. She talks to the people involved. She forms a working hypothesis. She makes a small decision to test the hypothesis. By the end of the second day, the problem is either smaller or better understood. The manager, given the same problem, schedules a meeting. She circulates an agenda. She compiles a list of stakeholders. She writes a project plan. By the end of the second day, the problem is exactly the same size it was, but is now accompanied by a calendar invite. Both behaviors are defensible in the abstract. Only one of them produces movement, and movement is what we are paying for.

    The second-order consequence of hiring operators is that the holding company needs less of itself. An organization full of managers requires layers of oversight to coordinate the coordination. An organization full of operators requires a lean center whose primary job is to clear obstacles, allocate capital, and stay out of the way. The leaner center is cheaper, faster, and harder for the operators to resent — because the operators do not feel managed, they feel supported. That feeling is the difference between an operator who stays for ten years and an operator who leaves for someone who will give her the room she needed in the first place.

    Clear the Obstacles, Do Not Direct the Work

    Our role as the holding company is to clear obstacles. That sentence is short and easy to say, and most of what we do every day is figure out what it means in practice. It means we buy the practice management software the firm needs and could not afford alone. It means we hire the controller who lets the firm leader stop doing AR by herself. It means we negotiate the lease, the malpractice insurance, the bank line, the vendor contracts — everything that has nothing to do with serving clients.

    What it does not mean is telling the firm what work to take, how to price it, or which clients to fire. Those are the firm’s decisions. We are sometimes asked our opinion. We sometimes give it. But the decision is theirs and the result is theirs.

    The discipline of obstacle-clearing without direction is the hardest part of our job. The temptation to direct is constant, because direction is what holding companies traditionally do, because direction feels like value-add, and because direction lets the holding company executives feel like they are earning their pay. We resist the temptation because direction destroys the very thing we are paying the operators for. An operator who is being directed is not an operator anymore. She is a contractor with extra steps. The operator who runs her firm because we cleared her runway, and not because we gave her the playbook, is the operator who actually produces returns over a decade.

    Accountability Over Activity

    Most professional services firms are organized around activity. Billable hours, time entries, meetings attended, emails sent, documents drafted. None of these things are outcomes. None of them tell you whether the firm is actually getting better at the work. Reorganizing a firm around outcomes is harder than it sounds because almost every existing system — software, compensation, status hierarchy — reinforces activity.

    We measure firm leaders on a small number of things and we measure them honestly. Client outcomes. Client retention. Staff retention. Financial performance, but properly defined, not just revenue. The depth of the bench they are building. The state of the systems they inherited and are improving. That is the report card. Everything else is noise.

    The honest version of accountability is harder than the polite version. The polite version is “we have aligned on the metrics and we are tracking them together.” The honest version is “if these numbers are wrong for two years in a row, the firm leader will be replaced, and she knows it.” The polite version produces firm leaders who manage expectations. The honest version produces firm leaders who manage the firm. The first kind of accountability is theater; the second kind is architecture. Operators want the second kind, because they want to know what game they are playing and whether they are winning it. Managers prefer the first kind, because the theater protects them from being measured. The clarity of the honest version is part of why we are able to recruit the operators we recruit.

    Long Horizons

    Decentralization works only if the people you have decentralized to know they are going to be there long enough to live with the consequences of their decisions. Most professional services firms run on much shorter horizons than the work demands — quarterly hour targets, annual partner draws, three-year strategic plans that change every twelve months. The result is that the operators in those firms make decisions on the timescale of their reviews, not on the timescale of the firm.

    We are a permanent holder. We do not have a fund clock. We do not have a sponsor pressing for an exit. The firm leaders we hire know they are going to be running their firm five years from now, ten years from now, longer if they want it. That changes how they make decisions. They make better ones.

    The economic literature has been pointing at this for a long time, and the professional services industry has been ignoring it for almost as long. Short-horizon principals produce short-horizon agents, who produce short-horizon decisions, which compound into a portfolio of firms whose long-term value is significantly lower than the sum of their parts. Long-horizon principals do not have a magic touch — they just remove the pressure that forces operators to make decisions they know are wrong on a five-year view because they are right on a five-quarter view. The structural choice of being permanent capital is one of the highest-leverage choices a holding company can make. It changes nothing about any individual decision. It changes everything about the distribution of decisions over time. We have made that choice. It is the choice that, more than any other, is responsible for the kinds of firms we can build.

    What This Looks Like on a Random Tuesday

    The probate firm leader needs to decide whether to take on a complex litigated matter that will tie up two associates for six months. She does not need our permission. She does not ask for it. She decides, takes the case, and a week later sends a short note explaining the reasoning. We file it away. If the case goes badly, we will not second-guess the decision; we will look at what the firm learned. If it goes well, we will not take credit for it; we will look at what the firm learned.

    That is what decentralized leadership looks like in practice. It is not a slogan. It is a series of small, specific moments in which the person closest to the work decides, owns, and learns. We are betting that over time, a platform of firms run by operators who own their decisions outperforms a platform of firms run by managers who report on theirs. So far the bet is paying off.

    The cultural insight underneath this Tuesday is that ownership cannot be granted in theory and withheld in practice. Either the firm leader has the authority to take the matter or she does not. If the platform reserves the right to second-guess the decision after the fact, then the authority she has is conditional, and conditional authority is a kind of pseudo-authority that produces all the work of decision-making with none of the benefits. We have decided to give real authority and to accept the occasional bad decision that comes with real authority, because the alternative is to retain the right to micromanage and to receive, in exchange, firm leaders who behave like middle managers. The bad decision is bounded. The pseudo-authority is corrosive. The trade is obvious once you have lived on both sides of it.

    What to Do Monday Morning

    Write down, by role, who decides what. Do it in enough detail that there is no ambiguity. The firm leader decides on staffing, pricing, work selection, and local marketing. The platform decides on systems, capital allocation, and the leadership of the firm. That written inventory is the audit. The document is the architecture. The architecture is what protects the decentralization from drifting back into the default centralization that every organization eventually slides toward.

    Hire operators, not managers, even when the resume of the manager looks more impressive. The manager will be easier to evaluate in the interview and harder to live with after. The operator will be harder to find, harder to read, and a better long-term bet. Train yourself, as a leader, to recognize the operator pattern in interviews — the bias for action, the comfort with incomplete information, the willingness to be wrong in writing.

    And finally, lengthen the horizons of the people you have decentralized to. If you cannot promise them that they will be there in five years, do not pretend you can decentralize to them in the meantime. Short-horizon decentralization is just an excuse to push hard decisions onto people who cannot afford to make them well. Long-horizon decentralization is what produces the firms that compound into something durable, and the durability is the entire point.

  • Deterministic vs. Nondeterministic: Where LLMs Fit in Modern Automation

    Every few years, an entirely new primitive arrives that forces operators to rethink how work gets done. The spreadsheet did it. The relational database did it. The web browser did it. Each one looked, on arrival, like a curiosity for hobbyists and a threat to whatever workflow it eventually displaced. Each one ended up redrawing the org chart of the average company. The pattern is so consistent that, once you have seen a few of these transitions, the temptation is to assume you know what to do with the next one. The temptation is usually wrong. What stays constant across these transitions is not the playbook for adopting the new primitive — it is the fact that the operators who get the next one right are the ones who refuse to assume they already know how to think about it.

    Large language models are the latest primitive, and they have already triggered the usual cycle: breathless optimism, predictable backlash, and a quieter, more interesting middle phase in which serious operators figure out where the new tool actually belongs. We sit on the boards and in the cap tables of small technology companies working through that middle phase right now. What follows is what we are seeing, what we believe, and where we think the puck is going. The thesis, briefly stated: the right mental model for LLMs is not “intelligent software” but “probabilistic infrastructure,” and the companies that internalize that distinction will build dramatically more durable systems than the ones that do not.

    The Old Contract: Determinism as a Feature

    For the last fifty years, enterprise software has been built on a single, almost moral premise: given the same input, produce the same output, every time. This is determinism, and it is not a stylistic choice. It is the load-bearing assumption underneath payroll, accounting, settlement, claims processing, identity, access control, and the long tail of internal tooling that makes a company function. When a deterministic system gives a different answer on Tuesday than it gave on Monday, that is not creativity. That is a bug, and in regulated industries it is often a fineable one.

    Deterministic systems are testable. They are debuggable. They produce audit trails that satisfy auditors, regulators, and the occasional plaintiff’s attorney. They are also, by design, brittle. Tell a rule engine something it has not seen before and it either fails loudly or, worse, fails quietly. For decades, the industry has papered over that brittleness with two expensive ingredients: humans and consultants. Both are now in shorter supply than the work requires.

    The deeper observation is that the deterministic regime made an implicit bet about the world: that the inputs to enterprise systems could be cleaned up before they arrived. Forms would be filled out correctly. PDFs would conform to templates. Customers would describe their problems in approved categories. The bet was never fully true, but it was true enough for a long time that the industry got away with it. The cleanup work was hidden inside the human layer — clerks, support agents, paralegals, billing operators — who normalized messy reality into the structured shapes the deterministic systems could consume. The work was real, but it was outside the software, so it did not show up in the architecture. LLMs are the first technology that can credibly absorb a large fraction of that hidden cleanup work, which is why their impact will be larger than the surface metrics suggest.

    The New Primitive: Probabilistic by Design

    LLMs invert the old contract. They are probabilistic, sampling from distributions over possible outputs. Ask the same question twice and you may get two reasonable, non-identical answers. For an operator coming from the deterministic world, this feels like a regression. It is not. It is the price of a capability that traditional software has never had: the ability to read messy, unstructured, ambiguous input and produce a useful response without anyone having to enumerate the rules in advance.

    The strategic question for any operator is therefore not whether to adopt LLMs. That question is already settled by the economics. The strategic question is where, inside the business, the new primitive belongs, and just as importantly, where it does not.

    The cultural transition is harder than the technical one. Engineers who have spent decades treating nondeterminism as the enemy now have to learn to design with it, around it, and on top of it. The right mental model is not “LLMs make software smarter.” The right mental model is “LLMs are a new kind of dependency, with different failure modes than the dependencies engineers are used to managing.” This sounds modest. It is the entire game. Companies that treat LLMs as smart software try to push them deeper into the core, where the consequences of variance are highest, and discover the hard way that probabilistic systems do not belong there. Companies that treat LLMs as a probabilistic dependency keep them at the edges, wrap them in deterministic scaffolding, and discover that the same model can do an enormous amount of useful work without ever being asked to make decisions it cannot reliably make.

    Where Determinism Still Wins

    Determinism still wins anywhere correctness is binary and the cost of variance is high. Moving money. Writing to a system of record. Granting or revoking access. Calculating tax. Signing a contract. Executing a known sequence of API calls. These are the places where the answer is either right or wrong, where there is no graceful degradation, and where the auditor will eventually come asking. We tell our portfolio companies, bluntly, that an LLM has no business making any of these decisions on its own. The deterministic layer is not legacy. It is the spine.

    The reason the deterministic spine has to remain deterministic is not just regulatory. It is epistemic. A regulator, a customer, or an internal investigator must be able to look at a system and answer the question, “why did this happen?” Deterministic systems answer that question by replaying the inputs and showing the same outputs. Probabilistic systems cannot answer it the same way, because the same inputs do not always produce the same outputs. You can sometimes recover a plausible explanation by examining the prompt, the context, and the seed — but “plausible” is not “deterministic,” and any business that has to defend its decisions in front of an auditor will discover that the difference matters.

    Where LLMs Earn Their Keep

    LLMs earn their keep at the edges of the business, in the places where structured systems have always struggled. Reading a customer email and figuring out what the customer actually wants. Pulling line items out of a PDF invoice that arrived in a format no one has seen before. Triaging a support ticket. Drafting a first pass of a contract, a memo, a follow-up. Classifying a transaction. Summarizing a meeting. Translating a stakeholder’s loose request into a structured query against a database that already exists.

    None of this is glamorous. All of it is expensive when done by humans, and all of it has historically been the work that breaks every rule-based system the moment reality drifts. This is the work LLMs were built for, and it is the work where the ROI in our portfolio has been most consistent and most defensible.

    The pattern of where LLMs work and where they do not is, ultimately, a pattern about the structure of the underlying task. Tasks with high input variance and tolerant-of-variance outputs — interpretation, summarization, classification, first-draft generation — are where LLMs shine. Tasks with low input variance and intolerant-of-variance outputs — money movement, identity, compliance — are where LLMs fail. The mistake operators make is treating the LLM as a general-purpose tool that can be pointed at anything. It is not. It is a specialized tool with a particular shape, and the operators who match the tool to tasks of the right shape are the ones who get returns from it. The operators who match the tool to tasks of the wrong shape are the ones writing the cautionary press releases.

    The Hybrid Pattern

    The systems that are working in production are neither purely deterministic nor purely LLM-driven. They are hybrids, and the pattern is converging across our companies and across the industry.

    An LLM sits at the edge, where the input is messy. It interprets, extracts, classifies, and proposes. Deterministic code sits at the core, where the action is consequential. It validates, executes, and logs. Between the two lives a contract: structured outputs, schema validation, allow-listed tool calls, retries, and human-in-the-loop review for the cases that fall outside the contract. The LLM proposes. The deterministic layer disposes. The audit trail survives.

    This is not a theoretical architecture. It is, increasingly, the default. The companies that have skipped this discipline and pointed an LLM directly at production systems have been the ones generating the cautionary tales that fill the trade press.
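    The proposer/disposer seam can be sketched in a few lines of Python. The tool names and JSON shape here are hypothetical; the point is the deterministic validation that sits between proposal and execution:

```python
import json

ALLOWED_TOOLS = {"refund_lookup", "ticket_summary"}  # allow-list; names are illustrative

def dispose(proposal_json):
    """Deterministic core: validate an LLM's proposal before acting on it.

    The LLM only proposes a tool call as JSON; this layer checks the schema
    and the allow-list, and rejects anything outside the contract.
    """
    try:
        proposal = json.loads(proposal_json)
    except json.JSONDecodeError:
        return {"status": "rejected", "reason": "not valid JSON"}
    if proposal.get("tool") not in ALLOWED_TOOLS:
        return {"status": "rejected", "reason": "tool not on allow-list"}
    if not isinstance(proposal.get("args"), dict):
        return {"status": "rejected", "reason": "bad args schema"}
    # A real system would execute here and write an audit log entry.
    return {"status": "accepted", "tool": proposal["tool"]}

ok = dispose('{"tool": "refund_lookup", "args": {"order_id": "A17"}}')
bad = dispose('{"tool": "delete_database", "args": {}}')
```

    The LLM never touches the consequential system directly; it can only ask, and the deterministic layer can always say no.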

    The hybrid pattern is also where the architectural craft has migrated to. A decade ago, the interesting design decisions in enterprise software were about data models, API contracts, and consistency guarantees. Today, the most interesting decisions are about where the seam between LLM and deterministic code sits, how the contract between them is enforced, and what happens when the LLM produces something the deterministic layer cannot accept. These are real engineering decisions with real consequences, and the engineers who get them right are the ones whose systems will run quietly in production for years. The engineers who treat the seam as an afterthought are the ones whose systems will be torn out within a year of deployment, replaced by something better-designed by a competitor.

    Taming the Nondeterminism

    You can narrow an LLM’s variance, though you cannot eliminate it. Temperature settings, constrained decoding, JSON schemas, function calling, evals, and automated retries all push the distribution of outputs toward something tight enough to ship. The teams that are winning treat their LLM layer the way good engineering teams have always treated flaky dependencies: assume it will misbehave, design the surrounding system to catch it, and measure relentlessly. Evals are the new unit tests, and the companies that take them seriously are the ones whose systems do not embarrass them in front of customers.
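    The flaky-dependency discipline reduces to a small, familiar pattern: validate the output, retry on failure, escalate past a budget. The `flaky_model` below is a simulation, not a real API:

```python
import json

def call_with_retries(model, prompt, validate, max_tries=3):
    """Treat the LLM like a flaky dependency: retry until the output validates.

    `model` and `validate` are stand-ins; a real system would also log each
    failure and route to a human once max_tries is exhausted.
    """
    for attempt in range(max_tries):
        out = model(prompt, attempt)
        if validate(out):
            return out
    raise RuntimeError("model never produced valid output; escalate to a human")

# Simulated flaky model: fails the schema on the first try, passes on the second.
def flaky_model(prompt, attempt):
    return '{"amount": 42}' if attempt >= 1 else "about forty-two"

def is_valid(out):
    try:
        return isinstance(json.loads(out).get("amount"), int)
    except json.JSONDecodeError:
        return False

result = call_with_retries(flaky_model, "extract the amount", is_valid)
```

    Constrained decoding and schema-enforced outputs narrow the distribution before this loop ever runs; the loop is the backstop, not the plan.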

    The discipline of evals deserves special emphasis, because it is the discipline that most determines whether an LLM-powered system can be operated at scale. An eval is a test that measures, against a defined set of inputs, whether the LLM’s outputs meet a defined standard. The companies that run hundreds of evals per release catch regressions before they reach customers. The companies that run none discover regressions after customers notice. The cost of building evals is real and the cost of not building them is larger, but the second cost is invisible until it suddenly is not. We push our portfolio companies, hard, to invest in evals well before they feel necessary, because the alternative is to operate the system blind for as long as the team will tolerate, which is always longer than is wise.
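    A minimal eval harness is only a few lines. The toy classifier below stands in for an LLM-backed system, and the deliberately failing case shows what the harness is for — surfacing regressions before customers do:

```python
def run_evals(system, cases):
    """Minimal eval harness: run (input, checker) pairs, report the pass rate.

    `cases` is a list of (input, checker) where the checker inspects the
    system's output. Names and the toy system are illustrative.
    """
    results = [(inp, chk(system(inp))) for inp, chk in cases]
    passed = sum(ok for _, ok in results)
    return passed / len(results), [inp for inp, ok in results if not ok]

# Toy "system" standing in for an LLM-backed classifier.
def classify(text):
    return "invoice" if "invoice" in text.lower() else "other"

cases = [
    ("Invoice #8841 attached", lambda out: out == "invoice"),
    ("Please see the attached invoice", lambda out: out == "invoice"),
    ("Bill for services rendered", lambda out: out == "invoice"),  # this one fails
    ("Meeting notes from Tuesday", lambda out: out == "other"),
]
pass_rate, failures = run_evals(classify, cases)
```

    Real eval suites run hundreds of cases per release and track the pass rate over time; the structure is the same.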

    Build, Buy, or Wrap

    A recurring question in our boardrooms is what to build, what to buy, and what to wrap. Our working answer: do not build foundation models, do not buy thin wrappers, and wrap aggressively wherever the underlying model is a commodity. The durable value at the small-company scale is almost never in the model itself. It is in the proprietary data, the workflow integration, the deterministic guardrails, and the trust the company has earned with its customers. Those are the assets that compound. The model underneath will be replaced, probably more than once, before the decade is out.

    The most reliable way for a small company to lose money in this market is to fall in love with a particular model provider and design the product around the provider’s specific capabilities. The model market is moving too fast, and the provider that is on top this quarter will not necessarily be on top next quarter. The companies that abstract the model behind their own interface — that can swap providers as the price-performance curve moves — preserve optionality that the companies that hardwire a particular API quietly lose. Optionality is not a feature anyone buys directly, but it is a feature that, over five years, separates the companies that compound from the companies that have to keep rebuilding.
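    The abstraction is ordinary software engineering. A sketch, with hypothetical provider classes standing in for real vendor clients:

```python
from typing import Protocol

class CompletionProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    """Stand-in for one vendor's API client."""
    def complete(self, prompt: str) -> str:
        return f"[A] {prompt}"

class ProviderB:
    """Stand-in for a competing vendor: same interface, different backend."""
    def complete(self, prompt: str) -> str:
        return f"[B] {prompt}"

class LLMGateway:
    """The product talks only to this gateway, never to a vendor directly,
    so swapping providers is a configuration change, not a rewrite."""
    def __init__(self, provider: CompletionProvider):
        self.provider = provider

    def complete(self, prompt: str) -> str:
        return self.provider.complete(prompt)

gateway = LLMGateway(ProviderA())
first = gateway.complete("summarize the ticket")
gateway.provider = ProviderB()  # the price-performance curve moved
second = gateway.complete("summarize the ticket")
```

    The gateway is where prompt versioning, logging, and evals attach as well, which is why the indirection pays for itself even before a provider swap.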

    What We Are Watching

    Three things are on our radar going into the next stretch. First, the slow professionalization of LLM operations: evals, observability, prompt versioning, and the rest of the unglamorous plumbing that turns a demo into a system. Second, the migration of agentic patterns out of the lab and into real workflows, which will force a much more serious conversation about permissions, identity, and liability than the industry has had so far. Third, the quiet consolidation of the deterministic layer itself, as workflow engines, orchestrators, and policy systems learn to host LLM steps as first-class citizens rather than bolt-ons.

    Of these three, agentic patterns are the one most likely to be misjudged, in both directions. The optimists will claim that agents are the future of all enterprise work, and they will be partially right and mostly early. The pessimists will claim that agents are a parlor trick that does not scale, and they will be partially right about the current generation and entirely wrong about the trajectory. The truth is that agents will work, but only in narrow, well-instrumented contexts, with clear permissions, careful identity controls, and explicit human accountability. The companies that build that scaffolding before they ship agents will be fine. The companies that ship agents without it will produce the next round of cautionary tales.

    The Real Question

    The interesting question, then, is not whether deterministic or nondeterministic systems are better. That framing belongs to a debate that has already been settled by the operators actually shipping software. The interesting question is where, inside your specific business, the line between the two should sit, and who in your organization has the authority and the technical judgment to draw it.

    Get that line right, and you get a business that is both reliable and adaptive: the audit trail your regulators expect, paired with the flexibility your customers have quietly started to demand. Get it wrong in either direction, and you end up with a system that is either too brittle to serve the market or too loose to be trusted with it. The companies that figure this out early will spend the next decade compounding the advantage. The ones that do not will spend it explaining themselves.

    What to Do Monday Morning

    Map your existing system into two columns. Determinism on one side; tolerated nondeterminism on the other. Be honest about which side each step actually belongs on, not which side it currently sits on. Most companies, when they do this exercise for the first time, discover that they have placed several deterministic-by-nature operations inside an LLM call, and several LLM-appropriate operations inside rigid rule engines. The map is the diagnosis. The work that follows is the cure.

    Build the contract between the two layers with the same rigor you would use for an API contract with an external vendor. Schema validation, allow-listed tool calls, structured outputs, retries, fallback paths, human-in-the-loop review for edge cases. Treat the LLM as if it were a contractor whose work product had to be checked before it could be accepted into the system. Because that is what it is.

    And finally, invest in evals before you feel ready. The companies that ship LLM-powered systems without evals are not saving time — they are deferring the cost of measurement, and the cost compounds. Evals are not a sign of maturity; they are a prerequisite for it. Build them early, run them often, and treat the eval suite as a first-class artifact of the codebase. The companies that do this will look, in five years, dramatically more competent than the companies that did not, and the gap will be visible in their products, their reliability, and their margins.