Four Questions Every Cost Transformation Leadership Team Must Answer

The house lights dim slightly over a vast conference hall, where tiered rows of chairs are filled with people still buzzing from the last round of high‑energy presentations and live demos. Faces glow in the wash of stage lighting and the reflection of laptop screens, side conversations hum in every aisle, and you can feel that mix of exhaustion and adrenaline that only comes after a day of big ideas and bold claims about the future. 

Standing at the intersection of factory floors and boardrooms, Paola Neba is a senior operations and cost‑transformation leader who blends deep manufacturing roots with top‑tier strategy experience. She began her career as a mechanical engineer at Kimberly‑Clark, progressing through engineering and project management roles across Europe before moving into consulting with firms such as Booz Allen Hamilton, Booz & Company/Strategy&, and Accenture. Over the last decade, she has held director and managing director roles in North America and Europe, leading large‑scale supply chain, operations, and AI‑enabled cost‑transformation programs for consumer, retail, and industrial clients, with a particular focus on turning complex analytics into sustainable P&L impact and real change on the ground.

Please join me in welcoming our keynote speaker for the Jamestown AI‑Powered Cost Transformation Conference, Paola Neba.

Friends, colleagues, innovators of Jamestown, thank you for being here at the 2026 Conference on AI‑Powered Cost Transformation. Today’s agenda is full of models, platforms, and pilots, but before we dive into more tools, this conversation needs to start with something more fundamental: what problem are we actually trying to solve when we talk about “cost transformation”?

Cost transformation, in its simplest form, is the discipline of structurally and sustainably reshaping a company’s cost base so that every dollar spent is tightly aligned with strategy, generates acceptable returns, is agile enough to react to dynamic market conditions, and creates room to reinvest in growth and long‑term resilience. It is not a glorified budgeting exercise, not a start‑stop‑monitor exercise, and it is not a once‑a‑decade crash diet. It means redefining purpose, redesigning work, simplifying the operating model, reallocating resources, adjusting the culture, reshaping partnerships, and institutionalizing new ways of working so that the economics of the business remain stronger long after the fixers, the project team, the consultants, and yes, even the AI squads, have moved on. Cost transformation involves strategically selecting and aligning people to execute value‑driving processes, underpinned by effective decision loops that foster a culture conducive to long‑term success. Put simply, culture is the strongest predictor of performance.

And yet, many of the cost programs in this room, even the smart ones with beautiful dashboards and LLM copilots, will stall or under‑deliver. Not because the algorithms mis‑forecasted, not because the opportunities were mis‑sized, but because leadership never got fully explicit about four foundational questions that should govern how the program is set up and what success truly means. Those questions are: what is the true expectation of success for this project; which structural choices will make or break it; what future state success will unlock; and which execution model we will deliberately choose, and why.

Let me walk through each of these, not as theory, but as the backbone of how transformations can actually stick.

Question 1: What is the true expectation of success for this project?

The first question is one many organizations skip because it feels obvious: “We want savings. Isn’t that enough?” It is not. The first, and most neglected, question is: what is the true expectation of success for this project?

That question is really about the lens through which the program’s outcomes will be judged, and therefore which results will be considered sustainable versus cosmetic. If you put ten executives in a room and ask, “What does success look like for this cost program?”, you often get ten different answers. Finance is thinking about net P&L impact and guidance. Operations is thinking about throughput and service. The group or investors might be thinking about capital rotation and valuation multiples. Without a shared definition, people work hard, AI produces insights, initiatives “go green”, and still everyone is dissatisfied.

In practice, organizations tend to frame success in at least four distinct ways:

– The benefit case: This frame looks at the positive contributions of the project: headcount reductions, procurement savings, productivity gains. It focuses on identifying and tracking gross benefits, often without systematically netting out the cost of achieving them.

– The business case: Here the frame is broader. Success explicitly includes project costs, investments, and benefits. The emphasis shifts from gross to net economics: consulting fees, severance, system investments, backfill, and overtime all come into the equation.

– The value‑creation exercise: This lens recognizes both tangible and intangible benefits. P&L impact matters, of course, but so do new capabilities, cultural shifts, risk reduction, brand strength, and resilience, especially in transformations tied to digital or AI where future competitiveness and license to operate are on the line.

– The EBITDA play: Here, the test of success is sustainable uplift in the income statement. In‑scope and out‑of‑scope performance are netted to show whether the underlying economics of the business have structurally improved, not just whether isolated initiatives hit their local targets.
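To make the difference between these lenses concrete, here is a minimal sketch with entirely hypothetical, invented numbers showing how the same program would be scored under three of them:

```python
# Illustrative only: hypothetical figures ($M, annualized) showing how
# one program scores under three of the framing lenses described above.

gross_benefits = {             # "benefit case": gross contributions only
    "headcount_reduction": 6.0,
    "procurement_savings": 3.0,
    "productivity_gains": 2.0,
}
costs_to_achieve = {           # netted out in the "business case"
    "consulting_fees": 1.5,
    "severance": 2.0,
    "system_investment": 1.0,
}
out_of_scope_headwinds = 1.5   # e.g. inflation eroding savings elsewhere

benefit_case = sum(gross_benefits.values())                    # 11.0
business_case = benefit_case - sum(costs_to_achieve.values())  # 6.5
ebitda_play = business_case - out_of_scope_headwinds           # 5.0

print(benefit_case, business_case, ebitda_play)
```

The same initiatives can look like an $11M success, a $6.5M success, or a $5M success depending on the lens, which is exactly why leadership must pick the lens explicitly before the program starts.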

You can see how AI fits differently into each of these. In a benefit‑case world, AI is a productivity enabler: more automation, fewer hours, lower unit costs. In a value‑creation frame, AI is also a capability builder: better decisions, smarter pricing, richer customer insights. In an EBITDA frame, AI becomes part of the operating model, baked into how the P&L is managed day to day.

Answering this question explicitly forces leaders to decide whether they are simply aggregating initiative‑level benefits, or whether they are committing to a broader, P&L‑anchored, value‑anchored definition of success that accounts for offsets, headwinds, and leakage. It determines how rigorously initiatives must be tied to financial baselines, how performance will be measured after the project ends, and how finance and the business will jointly validate that improvements are real and durable. This is the question that defines what “sustainable results” actually mean for your AI‑powered cost journey.

Once that expectation is settled, you can design the program management structure, and the AI stack, to support it. A benefit‑case program might emphasize initiative tracking and simple dashboards. An EBITDA‑anchored, AI‑enabled transformation usually demands integrated financial baselining, algorithmically enhanced forecasting, joint ownership between finance and operations, and a disciplined value‑realization engine that looks beyond initiative status to total P&L impact. When this alignment is missing, you see the familiar pattern: a dashboard full of green checks, but a P&L that stubbornly refuses to move, eroding credibility in both the program and its sponsors.

Question 2: Which structural choices will make or break this program, and how will we take them?

Once success is defined, AI can start telling you where the money is, where the inefficiencies sit, where the cycle times are broken. That is powerful, but it is not sufficient. The second foundational question is: which few structural choices will make or break this program, and how will we take them?

Large cost transformations, especially those fueled by digital and AI analytics, generate thousands of actions. You get lists of SKUs to rationalize, branches to consolidate, tasks to automate, contracts to renegotiate. But their overall fate rarely hinges on the long tail. It hinges on a surprisingly small number of structural decisions that shape the playing field for everything else.

These structural choices are context‑specific, but in most companies they fall into recurring categories:

– Operating model and organizational shape: How many layers will you really run with? Which activities will you centralize, offshore, or move into shared services or AI‑assisted hubs? What will spans of control look like in a world where AI handles more routine work?

– Portfolio and scope: Which businesses, products, sites, or channels are truly core, and which are candidates for reduction, sale, or a different ownership structure?

– Relationship with group or above‑market entities: How responsibilities and costs are shared or shifted; what you will insource versus rely on global platforms for; how much local experimentation with AI you will allow versus mandating global solutions.

– Leadership and change‑management approach: How visibly senior leaders will sponsor the AI‑enabled ways of working, what behaviors are expected to change, and how those new behaviors will be modeled, reinforced, and measured.

The heart of this question is not just identifying these pivotal choices, but deciding how they will be made. Who has decision rights? What data, including AI‑generated insights, must be on the table? What criteria will govern trade‑offs: pure savings, speed, capability building, risk, customer experience? Which stakeholders must be consulted, and by when must decisions be taken to avoid slippage?

When you make this explicit, a few things happen. The steering committee calendar and agenda stop being about generic status updates and start being about specific decisions. Analytical effort, human and machine, is concentrated where it matters most, such as a potential divestiture or a radical simplification of the operating model, rather than being spread across dozens of marginal initiatives. And you reduce “decision drift”: those critical calls that are discussed endlessly but never truly decided, undermining both timelines and confidence.

If this discipline is absent, program governance often becomes a highly digital but fundamentally weak ritual. You see beautiful RAG dashboards, AI‑generated risk heat maps, and initiative trackers, but the handful of structural choices the program depends on remain unresolved. When those choices are clearly identified, sequenced, and owned, the program’s management structure, including its AI tooling, can be tuned to drive them to conclusion, reinforcing the definition of success established in the first question.

Question 3: If we succeed, what future state will this program unlock for our business and our people?

The third question is the one leaders sometimes dismiss as “fluffy,” especially in a cost conversation. It is not. It is: if we succeed, what future state will this program unlock for the business and our people?

Let’s be honest: cost transformation hurts. People leave. Teams are reconfigured. Processes are redesigned. Long‑standing habits are challenged. Layer AI on top, and there is also anxiety about automation, new skills, new roles. If the only story you tell is “we must hit this savings number” or “group told us to take out X million,” enthusiasm will erode quickly, especially when progress is uneven, or when early benefits are modest compared with the long‑term ambition.

Articulating a compelling future state reframes the program as an investment in a better business, not just an exercise in subtraction. That future state should be described across multiple dimensions:

– Economic: A structurally lower and more flexible cost base, improved margins, and more capital freed up for innovation, AI investments, and strategic bets rather than maintenance of inefficiency.

– Strategic: A portfolio and footprint aligned with where your market is going, not where it was five years ago, so the company competes from a position of strength, not just survival.

– Organizational: A simpler, faster, more accountable organization where roles are clearer, hand‑offs are fewer, and AI augments teams so they can spend more time on value‑creating work and less on low‑value grind.

– Human: A leadership culture that is more decisive and transparent; teams that feel they have a voice in how work is redesigned; and an employee experience that becomes more coherent and less bureaucratic, even under cost pressure.

In an AI context, this future state might include things like: “We will be able to make pricing decisions in hours instead of weeks.” “We will predict and prevent failures instead of reacting to them.” “We will redeploy talent from repetitive tasks to customer‑facing and innovation roles.” These are not science fiction; they are design choices.

A strong description of the future state is specific enough that people can see themselves in it, “this is what my function, my team, my day‑to‑day will look like”, yet broad enough to integrate different streams of the transformation. It acts as a design constraint, steering leaders toward options that move the organization toward that future, rather than just chasing the fastest savings. And it becomes the backbone of your communication strategy, especially when AI is involved and people are understandably wary.

This is particularly important in multi‑year journeys where the economic impact is back‑loaded. When year‑one or year‑two savings are small relative to the 2027 or 2028 ambition, people need a reason to stay engaged after the first wave of excitement, and fear, subsides. The future‑state narrative provides that reason, linking today’s painful decisions to a tomorrow that feels both credible and worth the sacrifice.

Question 4: What execution model will we choose, and why?

Only after you have clarified what success means, which structural choices matter most, and what future state you are building toward does it make sense to ask the fourth question: what execution model will we deliberately choose, and why?

This question determines where power, accountability, and discipline sit in your AI‑powered cost transformation, and how the program management structure should be designed in very practical terms. At its core, you are choosing between a more centralized model and a more business‑integrated (or decentralized) one, with many hybrids in between.

– In a centralized model, a strong transformation office or task force, often with a dedicated data and AI unit, is embedded across functions, owns a single integrated plan, manages interdependencies, and drives a disciplined cadence of reviews, sprints, and course corrections. Headcount moves, process redesigns, AI deployments, and value‑realization milestones are centrally orchestrated and closely linked to the agreed definition of success, with direct visibility to the C‑suite and the board.

– In a business‑integrated model, responsibility and ownership sit primarily with line leaders. They receive clear targets and guardrails, and they embed initiatives, including AI use cases, into their normal business rhythm: monthly performance reviews, operational routines, talent decisions. The central team plays the role of orchestrator, capability builder, and challenger rather than controller.

Each model carries trade‑offs. Centralization can accelerate decision‑making, ensure consistency, and protect cross‑functional priorities, but it risks creating a “shadow organization” and undermining local ownership. Business‑integrated approaches can drive stronger line accountability and increase the odds that changes stick, but they risk fragmentation, uneven standards, and slow resolution of cross‑cutting issues.

By asking what execution model will we choose, and why, you force leadership to confront these trade‑offs in the context of the earlier answers. A program framed as an EBITDA play, with complex structural decisions and a bold future‑state ambition, may require a more assertive central engine, at least in the early phases, to ensure coherence and pace. A more focused value‑creation program in a mature, execution‑strong organization might succeed better with a lighter central team and greater reliance on line management for AI adoption and cost discipline.

The answer drives very concrete design decisions: how big the transformation office should be, what skills it needs (data science, change, finance, operations), how often steering meets and what is on the agenda, how risks and issues are escalated, and how AI squads and business teams work together. Without this clarity, organizations drift into an unhelpful hybrid where the central team is held responsible for outcomes but lacks authority, and line leaders are nominally accountable but feel that “the program” is something happening to them, not with them.

Why do these four questions matter, especially in an AI era?

These four questions do not replace the hard analytical and operational work of cost transformation, things like baselining, opportunity identification, process redesign, AI model development, and disciplined execution. Rather, they provide the scaffolding that allows all that work to translate into sustainable results.

– Expectation of success defines the economic and performance standard your program must meet to be truly successful.

– Structural choices define the few big decisions your governance must be built to take, rather than merely tracking activity.

– Future state defines the “why” and ensures design and communication are anchored in a compelling narrative, not just a savings target or a tech promise.

– Execution model defines the “how” of day‑to‑day management, ownership, cadence, and the role of AI in driving and sustaining change.

When senior teams address all four explicitly at the outset, and revisit them as the program and the technology evolve, they avoid many of the familiar pitfalls: misaligned expectations between finance and operations, governance that is busy but not decisive, exhausted teams that cannot see the purpose, and PMOs that either overreach or under‑power. Instead, your AI‑powered cost transformation stands a much better chance of delivering not only the target numbers, but a more resilient, effective, and future‑ready organization on the other side.

As we gather here in Jamestown, surrounded by extraordinary innovation, it is tempting to believe that AI alone will save our cost base. It will not. AI is an amplifier. It will amplify clarity, and it will amplify confusion. It will amplify discipline, and it will amplify drift. These four questions help ensure that what AI amplifies in your organization is the right thing: a coherent definition of success, focused decisions, a meaningful future state, and an execution model designed on purpose, not by accident.

There are many other traps in cost transformation: underestimating change fatigue, failing to engage middle management, under‑resourcing implementation, letting scope creep, and being blindsided by external shocks. But if you use these four questions as your compass, they will continually pull you back to the fundamentals that matter most. 

And in a world where AI is reshaping what is possible, nothing will matter more than leaders who know not just what they can do with this technology, but what they intend to achieve, and how they will prove that the value is real, durable, and worth the journey.

MB

