## Intro
> [!summary] Who actually keeps a project on track
> Every project starts with high hopes and ambitious goals. Yet, only a select few truly succeed. More often than not, teams begin to slow down, struggling to deliver as expected. Frustration builds—both on the client’s side, as deadlines slip, and within the team, as momentum and motivation wane. And when frustration sets in, blame starts to spread in all directions.
>
> Sound familiar? Or perhaps you'd rather avoid such situations altogether?
>
> Let’s dive into a real case study and uncover:
> - How to spot early warning signs before a project spirals out of control
> - How to break free from stagnation and regain momentum
> - How to create an environment where real value is consistently delivered—keeping clients happy and teams engaged
>
> Let’s uncover who truly keeps a project on track—and how you can ensure yours stays on course.
^intro
You can find the recordings here:
![[Events.base#Presented on]]
## Origins
![[Putt's Law#^definition]]
The story begins with a product company that, to put it simply, lost control. They struggled to deliver effectively, and people were far from happy. Worse still, no one knew how to fix the situation:
- Failing to **deliver value** \[[[Predictability|predictably]]\]
- The team is losing motivation and **satisfaction**
- No one with the **expertise** to step in and help ^expertise
^lost-control
Fortunately, an opportunity arose to conduct a [[PMaaS]] audit for them. [[Martyna Gola]] was responsible not only for carrying it out but also for recommending improvements—both in project management and at the technical team level. ^pmaas-intro
![[PMaaS#^advantages]]
As a [[Project manager]], she had a broad understanding of [[Product management]], project delivery, and people [[Leadership]], particularly in technical teams. However, like most PMs, she couldn't identify issues stemming from technical decisions due to a lack of expertise in that area:
> [!tldr] [[T-POP]] - [[Project manager]] perspective
> - [[T-POP#Product|Product]]
> - [[T-POP#Operation|Operation]]
> - [[T-POP#People|People]]
> - ~~*[[T-POP#Tech|Tech]] [decisions]*~~
^pm-coverage
Since a PM doesn't have full visibility into the technical side—why certain things are done the way they are—Martyna reached out to me for a consultation on the technical team’s setup. ^tech-intro
And so, our story began with:
> [!quote] [[Martyna Gola]]
> Hey, would you have 15 minutes to share some insights with me on the topic of **separating frontend and backend team work**?
> [...] I’d like to make sure if there are cases **where this makes perfect sense** and how to identify them
^fe-vs-be
And you can just imagine the look on my face at that moment...

^fun
### A Tragedy in Five Acts
I had a strong feeling that this project would reveal numerous underlying issues—organizational, technical, and competency-related. However, what intrigued me the most was identifying the original, fundamental cause that had led the project into its current state.
> [!question] The very question
> What was the **primary failure** in this project?
^the-very-question
In the following sections, I’ll describe various aspects of the team's dysfunctions—framing them as a **tragedy in five acts**. This structure not only reflects the chronology of how these issues unfolded but also mirrors the way I analyzed the project alongside [[Martyna Gola]].
- [[#1. Structure]]
- [[#2. Communication]]
- [[#3. Impact]]
- [[#4. Ownership]]
- [[#5. Leadership]]
^agenda
As always, let me emphasize—there is **no universal method** for diagnosing or fixing project issues. The aspects I discuss here represent **my approach** to uncovering the root cause in this particular case. However, in a different project, under different circumstances, I might take an entirely different path.
![[There is no universal method#^disclaimer]]
What I expect from the reader is not to blindly follow my steps, but to understand the reasoning behind my choices—so they can apply the right methods in their own context when the situation demands it.
## 1. Structure
![[Conway's Law#^definition]]
I begin this section with one of my favorite laws—one that perfectly illustrates how most project issues are ultimately communication issues, which in turn are reflected in the organization’s structure. And structure is the first and easiest thing to observe.
This was particularly important because, at the start, structure was the only thing I had access to. Since this was Martyna’s audit, my role was to analyze what she had observed, without direct interaction with their team—essentially making it a **black box** situation.

^black-box
At first, this might seem like an impossible challenge, but Martyna had described their problems with remarkable precision. In fact, I wasn’t even the one asking the questions—she was. And her questions were spot on.
### PM + team split
She opened with her first questions concerning the frontend and backend team split:
> [!quote] [[Martyna Gola]]
> *It's not about a **technical** separation of FE and BE (!),*
> *but about **separate meetings, plannings, and siloing of both parts**.*
^fe-be-communication
This was the perfect remark—because it didn’t just focus on the technical differences between frontend and backend teams but rather on why they had become siloed in the first place. What’s more, task-level coordination between developers was being handled by the Project Manager, which, as you can imagine, led to potential problems.
![[Team vertical split.png|400]]
So the obvious next question was: **Why was the team structured this way? What led to this setup?** And the answer? Well, it wasn’t hard to guess...

^but-why-nobody-knows
### Lattice team
The key issue wasn’t just the theory behind why organizations make such splits—it was whether anyone in the team actually knew why this decision had been made. Did the team understand the reason behind it? Because this directly shaped how they communicated. And that led to the next critical question: HOW does the FE team communicate with the BE team?
> [!quote] [[Martyna Gola]]
> *They **don’t even know what their [BE] structure looks like**...*
>
> *[...] One developer works on **FE dashboards** and has one assigned BE developer for **BE dashboards**—that’s how they coordinate.*
^vertical-division
![[Team lattice.png|400]]
As mentioned earlier, task synchronization between both teams was managed by their PM. But an even more interesting pattern emerged:
- Some individuals **did** communicate across teams,
- But **only within specific feature silos**,
- And barely with others inside their own team.
^communication-patterns
### Silos
This created a network of isolated feature silos rather than an integrated team. The frontend and backend weren’t just technically separated—they operated as distinct units, each with their own meetings, plannings, and internal flows. The division wasn’t driven by architecture, but by communication boundaries.
![[Silo#^definition]]
![[Lattice team.png|400]]
In practice, this meant that some individuals did collaborate—but only within the context of specific features. They rarely communicated across silos and, surprisingly, not even consistently within their own. The result was a fragmented web of micro-teams, each solving its own problems in isolation.
The consequences were predictable:
![[Silo#^consequences]]
Silos don’t just slow you down—they compound risk. Decisions made in isolation tend to optimize locally rather than globally. And when issues arise, they’re harder to diagnose because no one has the full picture.
### [[Brooks’s Law]]
Meanwhile, Martyna and I noticed something else—their **backend team was significantly larger than their frontend team**. This might have made sense, but we wanted to confirm if **they actually knew why**.
![[Team scalability.png|600]]
> [!quote] [[Martyna Gola]]
> *[...] Things were moving too slowly.*
> *So they hired more backend developers, but now there’s **no budget left for FE**.*
^be-bigger
This wasn’t just a management failure—it also highlighted a fundamental misconception: **scaling a team does not immediately scale delivery**. Hiring more developers doesn’t increase output proportionally—in fact, it can slow things down before it speeds them up, as new people need onboarding, which cuts into the productivity of the existing team.
![[Brooks’s Law#^definition]]
This principle clearly explains how [[Onboarding|onboarding]] new people into a project requires time from existing team members—especially in the absence of proper documentation. As a result, their productivity temporarily drops, which impacts the delivery pace of the entire team.
^brooks-law
![[Brooks’s Law#^visualized]]
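A toy back-of-the-envelope model makes the effect visible. Every number below is invented purely for illustration; the shape of the result, not the values, is the point:

```typescript
// Toy model of Brooks's Law. All rates are made-up assumptions:
// each new hire produces little at first AND consumes veteran capacity
// through mentoring and onboarding.
function teamOutput(veterans: number, newHires: number): number {
  const veteranRate = 10;    // output per sprint per veteran (illustrative)
  const newHireRate = 3;     // new hires ramp up slowly (illustrative)
  const mentoringCost = 2.5; // veteran capacity consumed per new hire (illustrative)
  const drag = Math.min(veterans * veteranRate, newHires * mentoringCost);
  return veterans * veteranRate - drag + newHires * newHireRate;
}

console.log(teamOutput(4, 0)); // 40: the baseline team
console.log(teamOutput(4, 4)); // 42: doubling headcount adds almost nothing
console.log(teamOutput(4, 8)); // 44: tripling it barely helps
```

Under these assumptions, doubling the team buys a 5% gain in the short term—exactly the kind of outcome that leaves "no budget left for FE."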
It ultimately exposed poor management and a lack of understanding that weak or missing processes become even more visible during scaling. Some gaps simply can't be shortcut—without the right foundations, growing the team only makes things worse.
### Overscaled siloed monolith
What we were looking at wasn’t just a communication failure. It was a structural one:
- **Siloed [[Monolith|monolith]]**,
- A structure that no one fully understood,
- Teams that barely knew each other,
- And severe communication breakdowns.

In short, the organization had scaled in the wrong places, without the right foundations. And it was now paying the price in delivery speed, cohesion, and morale.
So, when looking at this through the lens of [[Conway's Law]], the picture becomes crystal clear: The architecture mirrored the org chart. And the org chart mirrored their communication failures.
## 2. Communication
Now that we understand their structure clearly reflects communication gaps, let’s dive deeper into this issue—and reverse [[Conway's Law]] for a moment.
![[Conway's Law#^me-about-structure-problems]]
Given how their setup looks, we can pinpoint areas of potential risk. Since we’re dealing with a critically siloed structure, the problems are most likely to impact their ability to deliver as a team. This affects every stage—from product vision, through development, reviews, and testing, all the way to the deployment process itself.
All of these stages require strong team communication—which in turn depends on having shared processes, standards, and collaboration practices in place.
- Deployment
- Testing
- Review
- Design & Development [Decision making]
- Vision
^development-cycle
Let’s walk through these elements—starting from the end: the deployed product and the way their **deployment procedure** is handled.
### Deploy
The first—and simplest—question: do they have automated deployment?

^automated-deploy
So, how do they deploy?
> [!quote] [[Martyna Gola]]
> There are two dedicated people for deploying to [[Production environment]].
> As far as I understand, **only these two people have the skills to do it**.
^deploy
This immediately raises two serious concerns:
1. **Only two people can deploy**, which creates a massive risk if either of them becomes unavailable.
2. **There’s no single, consistent version of the deployment process**—more on that in a moment.
The first issue is fairly straightforward: if something were to happen to these two individuals, the entire project would be at risk. No one else knows how to perform a production deployment. In such a case, the result would likely be significant delays, mistakes, and operational chaos.
![[Bus factor#^definition]]
So yes, the [[Bus factor]] here is very real. But it also reveals a second issue—**consistency**. When two separate people are responsible for the same task, without automation or a shared, enforced procedure, you’re almost guaranteed to have deviations.
![[Segal’s Law#^definition]]
Without standardization, there’s no guarantee that deployments are being done the same way. Something that could be unified in a single document is now split between two people, introducing unnecessary variability and risk.
Now, to be fair, they _do_ have some form of documentation—but it leaves a lot to be desired:
> [!example] They _do_ have some form of documentation
> They have a document, which is **a copy** of Atlassian's [[Gitflow]] documentation, with an added suggestion at the start:
>
> `1. Do not merge into develop`
> `2. Do not deploy on Friday`
> ...
^documented-workflow
Not only could they have simply linked to the article instead of copying it, but these added rules (e.g., “don’t deploy on Fridays” or “don’t merge into develop”) are the kind of checks that should be enforced by CI/CD pipelines—not left to human memory or [[Tribal knowledge]].
In reality, there is **no standardized deployment procedure**. The process is **manual**, known only to two people, and lacks safeguards to prevent human error or absence. Which means, in practical terms, **this risk is still very much alive**.
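Rules like these are cheap to automate. As a minimal sketch (the function and branch names below are hypothetical, not taken from their actual setup), a required CI step could encode both rules and fail the build, instead of relying on memory:

```typescript
// Hypothetical CI guard: turns the two rules from their copied document
// into an automated check, so the pipeline, not tribal knowledge, enforces them.
type Verdict = { ok: boolean; reason?: string };

// isoWeekday uses ISO numbering: 1 = Monday ... 5 = Friday ... 7 = Sunday
function deployAllowed(isoWeekday: number, targetBranch: string): Verdict {
  if (isoWeekday === 5) {
    return { ok: false, reason: "Do not deploy on Friday" };   // rule 2
  }
  if (targetBranch === "develop") {
    return { ok: false, reason: "Do not merge into develop" }; // rule 1
  }
  return { ok: true };
}

// Example: a Friday deploy is rejected by the check itself, not by a person.
console.log(deployAllowed(5, "main").reason); // "Do not deploy on Friday"
```

In a real pipeline this would run as a required status check; the exact mechanism (GitHub Actions, GitLab CI, a pre-deploy script) matters far less than the fact that the rule lives in code.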
> [!question] Where can you find that **[written]** information?
> Assuming ***you don’t have to ask*** someone else—because they might just... miss their bus.
^right-documentation
### Tests
The next logical area to explore—also closely tied to automation—is **testing**. And when asked whether they had tests in place, the answer was... painfully obvious.

^tests
Why is this such a big deal?
Because tests aren’t just about catching bugs. Well-written tests immediately provide living documentation of what the system is supposed to do. They show that the team knows what should be delivered, and they serve as proof that it actually was.
With proper automation, tests also protect the integrity of previous work—ensuring that new changes don’t break what’s already in place. Just like deployment procedures, automated tests are also one of the most effective onboarding tools. They show how the system behaves and what its constraints are, allowing new team members to ramp up faster.
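To make this concrete, here is a minimal sketch—the domain and every name in it are hypothetical, not from their codebase. The point is that a test's name alone states the business rule:

```typescript
// Hypothetical example of "tests as living documentation":
// a reader learns the expected behaviour from the test names alone.
function applyDiscount(total: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error("percent must be between 0 and 100");
  }
  return total - (total * percent) / 100;
}

// A tiny Jest-style runner, inlined so the sketch is self-contained:
function it(rule: string, check: () => void): void {
  check(); // throws (fails) if the documented rule is violated
  console.log(`PASS: ${rule}`);
}

it("a 10% discount reduces 200 to 180", () => {
  if (applyDiscount(200, 10) !== 180) throw new Error("unexpected total");
});
it("a discount above 100% is rejected", () => {
  let threw = false;
  try { applyDiscount(200, 150); } catch { threw = true; }
  if (!threw) throw new Error("expected a rejection");
});
```

A new team member reading only the `it(...)` lines already knows what the system promises, without asking anyone.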
Ultimately, in any modern development team, tests become the primary tool of communication:
![[Tests as a Documentation#^aspects]]
So what happens when there are no tests?
You not only lose confidence in what’s being delivered—you also lose one of the most powerful signals of shared understanding in the team.
> [!question] And it raises a crucial question
> How can you perform proper code review if you don’t even know what the task was supposed to deliver?
Without tests, reviews become guesswork. Reviewers are left to judge code without a clear benchmark or agreed-upon expectations. And in that situation, both quality and communication inevitably suffer.
### Reviews
Third time’s the charm – they _do_ code reviews!
> [!quote] [[Martyna Gola]]
> Since last month, CRs have been a must-have—one guy pushed for it and made the rest comply. **Two people are automatically assigned, but only one approval is needed**.
^code-review
It’s not exactly how I’d like it to work, but it’s a start.
Unfortunately, in many cases, code review ends up being reduced to just finding someone to click `Approve`:
![[Code Review#^just-apoprove]]
But a proper code review should be so much more than that. It carries a whole range of responsibilities and expectations, including:
![[Code Review#^aspects]]
At its core, **code review is yet another form of communication**. It exists not just to catch bugs, but to align the team around shared decisions, uphold quality, prevent critical mistakes, and reinforce standards.
Which brings us to the next area worth examining: [[Decision making]].
### Decision making
It turned out that some time ago, the team had made one of their bigger decisions:
> [!quote] [[Martyna Gola]]
> The project is a few years old. They recently **updated Angular from 15 to 18**, and whenever they have a moment, **they patch up their old code**.
^angular-upgrade
So naturally, the follow-up question was: Why?
And as expected—**no one could answer**.
There was no source of truth for the decision. No documentation, no person, no rationale.
This kind of decision can have serious consequences. You end up cleaning up after something without fully understanding the value it was supposed to bring. Sure, you can dig into Angular’s release notes and learn what changed between versions—but that won't tell you **WHY** the upgrade made sense for **this** project. That’s where a [[Decision Log]] becomes crucial:
![[Decision Log#^definition]]
And to be clear—this isn’t about documenting a decision after the fact.
It’s about making the decision consciously, by writing it down before the work starts, as evidence of understanding:
![[Document-driven development#^definition]]
From a technical team’s perspective, [[Architecture Decision Record]]s are especially important. And if they’re aligned with [[Business-oriented implementation decisions]], they become a powerful resource that captures not only _what_ was built, but _why_—from both a technical and business perspective:
![[Architecture Decision Record#^definition]]
![[Business-oriented implementation decisions#^parts]]
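Such a record doesn't need to be heavyweight. Here is a hypothetical sketch of what an ADR for their Angular upgrade could have looked like—every detail below is invented; the point is that the *why* gets written down before the work starts:

```md
<!-- Hypothetical example: no such record existed in the audited project -->
# ADR: Upgrade Angular from 15 to 18

- **Status:** Accepted
- **Context:** The project is several years old; staying on Angular 15 blocks
  dependency updates and security patches.
- **Decision:** Upgrade incrementally, one major version at a time, patching
  legacy code as modules are touched.
- **Business rationale:** Remaining on a supported version keeps the product
  maintainable and reduces long-term delivery risk.
- **Consequences:** Temporary slowdown of feature work during the migration.
```

Half a page, and the question "Why did we upgrade?" would have had an answer with a source of truth.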
Finally, decision-making like this should be closely aligned with the product vision. Only then can we ensure that the team is building the right solution to the right problem.
And that brings us to the final point...
### Vision
Does anyone from tech actually know the big picture?
> [!quote] [[Martyna Gola]]
> There is no one who knows the whole system because it's too complex. Developers are siloed per module, which leads to the problem that **they don’t know the entire application**.
^no-one-knows-everything
This wasn’t quite the answer I was expecting. Not because I don’t understand the complexity of the system—but because the real issue isn’t that they don’t grasp how the application works.
It’s that they don’t understand **why** the product exists in the first place.
Sure, there’s likely some high-level awareness. But the level of focus on individual modules is so strong that the overarching purpose—the product vision—fades into the background. Decisions start being made in isolation, disconnected from the [[Vision]]:
![[Vision#^definition]]
But modules don’t live in isolation. They exist to serve the product as a whole. And that’s why I always rely on a simple test to assess the health of a development team:
![[Vision#^ultimate-question]]
### Overall
Sadly, this question—like many others—went mostly unanswered. And there were plenty of areas in need of improvement:
- Teams should work **together**, not in silos
- Code reviews should **add value**, not just grant approvals
- Changes should be made with **confidence**, backed by proper tests
- Decisions should have a **clear history**, through documented ADRs
- Knowledge should exist in **one reliable place**, not scattered across people and platforms
- There should be **no key-person risk**—work shouldn’t grind to a halt if someone disappears
- Deployments should be **faster and safer**, with automation in place
- ...
^improvements
Which led me to a new, deeper concern:
> [!quote] Me
> *Why does no one **feel the urge** to fix these things?*
^the-question
## 3. [[Impact]]
![[Hawthorne effect#^definition]]
This final question wasn’t about the organization, the team, or even communication—it was about the **individual**. Or rather, individuals.
Specifically: **Why weren’t people having as much positive impact on the project as they potentially could?**
This line of thinking naturally brought us closer to one of the most sensitive topics: **assessing team members.**
### Assessment
And, as if on cue, the question appeared—powerful and direct:
> [!quote] [[Martyna Gola]]
> How can we efficiently assess the impact of developers already in the client's team?
^assessment
It wasn’t hard to guess where this question came from. And the reason wasn’t surprising:
> The client needs to **downsize** the team.
^team-reduction
Given everything we’ve seen so far—especially the oversized backend team and the siloed, inefficient structure—it was only a matter of time before someone questioned the effectiveness of the setup. The delivery wasn’t meeting expectations, and eventually, that leads to difficult decisions.
![[Team scalability.png]]
So now the core question becomes:
> **How do you fairly assess each individual and their actual contribution to the team?**
This is where technical, organizational, and interpersonal dimensions collide.
And it’s not just about performance—it’s about context, collaboration, ownership, and alignment with the product’s goals.
### Where is [[Leadership]]?
In a situation like this, my natural instinct is to turn to **team leadership** for insight. But then came the realization:
> **There is no leadership.**
Let me explain.
- The **Project Manager has no insight into the technical team’s skills or capabilities**
- While they _can_ try to assess someone’s impact, the work delivered has been task-based, not outcome- or scenario-based, making meaningful evaluation even harder
- There are **no technical leaders** within the team
> [!quote] [[Martyna Gola]]
> *They have **no clarity** about their seniority within the team, and there’s **no one responsible** or making technical decisions.*
> ***No one tracks** their quality, progress, or growth, and there’s no mentoring.*
^lack-of-tech-leader
Without leadership, there’s **no sense of ownership**. And without ownership, there’s **no reliable way to evaluate contributions**. Which, unfortunately, leaves us clinging to the weakest source of truth: **metrics**.
But how do you actually **measure impact**?
- number of ***commits***?
- number of ***tasks***?
- number of ***story points***?
- ...
^productivity-metrics
None of these truly reflect real value, especially in a context lacking vision, process, or [[Team collaboration]]. Without leadership to guide, support, and evaluate—**everything becomes guesswork**, and decisions start being made based on surface-level numbers rather than meaningful outcomes.
### Metrics trap
Here’s a story that perfectly illustrates a point made by [[Dan North]] in [[The worst programmer]]:
![[The worst programmer#The story of Tim]]
This is the trap many managers fall into. It’s tempting to search for "objective" metrics and optimize against them. But the problem is this: **Most metrics measure output, not outcome.**
![[Outcome over output#^difference]]
In Tim’s case, evaluating him based on individual metrics completely missed his real contribution—one that was **team-wide**, not task-bound.
Tim should no longer be judged by how much individual value he produces.
He should be evaluated as a **[[Tech Lead]]**: on how much value the **team delivers over time** under his guidance.
![[Purpose should be prioritized over metrics#^the-quote]]
And yet, this story also reveals something deeper: No one had established what impact was actually expected from them—not as individual contributors, not as mentors, not as leaders. And when expectations are unclear, metrics will always try to fill the void.
Usually… poorly.
### [[Goodhart's Law]]
![[Goodhart's Law#^definition]]
This is, without a doubt, one of my favorite laws—and one I’ve seen play out repeatedly in real projects. In fact, I’d even rephrase it like this:
![[Cobra effect#^definition]]
How does this play out in software development? Let’s revisit the kinds of metrics we’ve already mentioned:
![[#^productivity-metrics]]
Now watch what happens when they become targets:
- **Commits?** Developers start making unnecessary micro-commits, cluttering history and slowing down long-term development.
- **Tasks?** The team begins decomposing work into tiny tasks just to inflate the numbers—wasting time and slowing down real delivery.
- **Story points?** Developers start **overestimating** tasks to appear more productive, while actual delivery speed declines.
Do you see what’s happening here?
These are all **[[Output]]-based metrics**—and when used as performance indicators, they distort reality. They **create incentives to game the system** rather than improve it. The only meaningful metrics are those that measure **[[Outcome]]**. And in this case, the outcomes were painfully clear:
![[#^lost-control]]
Let me put it another way:
> Failing to deliver value affects both **the client's business** and **the team's morale**.
The client loses confidence. The team loses purpose.
### Where is Tim?
> [!quote] [[Martyna Gola]]
> *Why doesn’t the PM expect the team to take ownership? – dunno yet*
^pm-ownership
The truth is, nothing in this audit was rocket science.
All the insights we surfaced—on silos, broken processes, missing documentation, lack of testing, poor communication, and unclear decision-making—are **not groundbreaking**. They’re foundational. Well-documented. Widely known. In fact, more experienced team members should have already started optimizing these areas themselves.
The real issue we began to uncover wasn’t poorly designed processes. It wasn’t a lack of best practices. It wasn’t even technical debt. The real problem was the absence of a driver for change. The problem was the absence of Tim.
> Why is there still no one like Tim in their team **who brings real impact**?
^where-is-tim
## 4. [[Ownership]]
![[Clean Code#^ownership]]
From the very beginning of this audit, I made a key assumption:
That the team _wanted_ to do things well—but they couldn’t.
Maybe they lacked the right process.
Maybe they were blocked by structural issues.
Maybe they just didn’t have the right tools or guidance.
In short, I assumed **good intent**.
And often, that assumption holds. In many organizations, skilled people are buried under broken systems. But there’s one thing that’s harder to fix than broken processes: **attitude**.
Because someone with limited experience but a clear sense of purpose can move mountains.
While someone with deep expertise but no desire to contribute will slowly **demoralize the team**.
### [[Ownership#Trio]]
The turning point of this entire audit came not from a diagram, metric, or documented gap—but from a single line, buried in a conversation:
> [!quote] [[Martyna Gola]]
> Still, my favorite insight from today is:
>
> *I don’t test [...], **because no one expects it from me**.*
^ownership
That sentence hit hard. Because, as countless sources will tell you:
![[Ownership#^lack-of-ownership]]
And suddenly, things started to click.
It aligned perfectly with the very first problems we identified in the product:
![[Who actually keeps a project on track#^lost-control]]
What about expertise?
I spent a lot of time questioning whether the team lacked expertise.
But even if they did—expertise is something you can build. You grow it through practice.
So in the end, that wasn’t the real issue. The real problem is this: Even an expert who doesn’t feel responsible for the outcome will make no meaningful difference.
~~![[Who actually keeps a project on track#^expertise]]~~
Which brings us full circle:
> There was no one who felt **[[Responsibility|responsible]]**
^lack-of-responsibility
![[Ownership#^knowledge]]
### [[Responsibility]] and the [[Teen ager]] team
Let’s go back to the two fundamental questions we kept circling around:
![[#^pm-ownership]]
![[#^the-question]]
In the end, it all came down to **a shared absence of [[Responsibility]] for [[Outcome]]s**—on _both_ sides:
- The **team** didn’t feel accountable.
- But the **PM** also didn’t feel accountable—neither for the overall delivery nor for shaping the team in a way that would foster a sense of [[Ownership]].
^lack-of-collaboration
![[Responsibility#^definition]]
This lack of responsibility revealed itself in a phrase that eventually started functioning like a mantra:
![[Teen ager#^mantra]]
And this was the _teenager team_:
A group that knows enough to act independently but doesn’t feel accountable when things go wrong.
![[Teen ager#^definition]]
If a team doesn’t feel responsible for what it delivers, no amount of processes, agile ceremonies, or DevOps tools will save it.
> **Self-organizing teams only work if the individuals inside them take ownership.**
> Without ownership, all you’re left with is chaos wearing a badge of autonomy.
^self-organizing-team
The fix isn’t more control.
It’s building a culture where **ownership is expected—and felt.**
### You build it, you own it
![[You build it, you run it#^werner-vogels]]
![[Ownership#^alex-ewerlof]]
This is the most important principle to remember:
> **If you build something, you're also responsible for what it does, how it works, and the impact it creates.**
Ownership isn’t a buzzword—it’s the foundation of trust, quality, and autonomy in any development team.
And true ownership requires **three essential components**:
![[Ownership#^trio]]
Without any one of these, ownership collapses. You might have people doing tasks—but no one will truly care about the outcome.
![[Ownership#^mastery]]
And that’s the key point: **Ownership is not a process. It’s a mindset.**
A mindset that says, _“This is mine. I care. I will make it better.”_
### Enable ownership culture
Ownership doesn’t happen automatically.
It doesn’t emerge from job titles, seniority, or process diagrams.
It’s something you **design for**—by creating the right environment, values, and expectations.
At its core, building an ownership culture means creating a space where **people care** about what they’re building, take **responsibility** for its outcome, and continuously **strive to make it better**.
The following principles—rooted in [[Extreme programming]] and deeply aligned with **Brainhub’s values**—offer a practical foundation:
![[Brainhub#^values]]
Ownership culture isn’t about making everyone a hero.
It’s about creating a system where **caring is expected**, supported, and rewarded.
In this audit, the ownership gap was clear—and it existed **on both sides**:
- The **organization** wasn’t talking to the team, wasn’t setting clear expectations, and wasn’t creating space for ownership to develop
- The **team**, in turn, didn’t act to take ownership either—they waited, hesitated, and operated more like passive participants than owners
As mentioned earlier, it felt like a classic case of a **“Teenager Team”**. That kind of environment can’t support autonomy, improvement, or high performance. And the team will never mature unless the culture around them does first.
## 5. [[Leadership]]
![[Putt's Law#^definition]]
This quote perfectly captures a common dysfunction in tech organizations:
[[Mandate]] (authority to make decisions) and [[Knowledge]] (understanding of the system) are often **split between different roles**.
But what makes the situation truly problematic is when **neither side carries [[Responsibility]]**.
When you have:
- PMs with mandate but no understanding,
- Tech leads with knowledge but no authority,
- And no one with ownership of the outcome...
…you’re not managing a team—you’re managing misalignment.
To better understand where leadership should live, let’s look at the system from the **T-POP** perspective:
![[T-POP#^definition]]
When [[Putt's Law]] plays out, the PM becomes responsible for **POP** (Product, Operation, People), and the Tech Lead is isolated to **Tech**.
But when these areas are treated as separate silos, and people retreat into their own “cubicles” of concern, leadership fractures.
What you get instead is:
- PMs optimizing delivery without understanding the technical implications
- Engineers optimizing architecture without considering business value
- Teams working in parallel but not together
Leadership breaks down precisely when people stay hidden in those “cubicles,” protecting their own area without building bridges to the others.
### Broken [[Business-oriented implementation decisions]]
> Those _who_ **understand** what _they_ **do not manage**
One of the most common dysfunctions is when technical roles isolate themselves within the technology layer, disconnected from actual business value delivery. In that mode, it becomes nearly impossible to make meaningful business-oriented implementation decisions.
![[T-POP#^tech-cubicle]]
Software isn't built for the sake of software.
It's built to solve **real product problems** (**WHY**), delivered through a defined process (**HOW**), by a team (**WHO**), using the right technology (**WHAT**).
> [!danger] You cannot make a proper **Tech** decision without knowing **POP**
> - ~~***WHY*** - Product~~
> - ~~***HOW*** - Operation~~
> - ~~***WHO*** - People~~
> - WHAT - Tech solution
^broken-pop
That’s what made **Tim**, as a Tech Lead, stand out—he covered the full **T-POP** spectrum:
> [!success] Tim
> - **Tech - Strong leadership**
> - People - Mentoring, [[Pair programming]], building capability
> - Operations - Advocating [[Extreme programming]] & improving delivery
> - Product - Driving business value through team delivery
^tim-and-tpop
### Broken [[Project governance]]
> Those _who_ **manage** what _they_ **do not understand**
Just as Tech Leads can isolate into tech-only thinking, **Project Managers** can fail when they manage **POP** (Product, Operation, People) without understanding the technical side.
> [!danger] You cannot manage a **Project** properly without knowing **Tech**
> - WHY - Product
> - HOW - Operation
> - WHO - People
> - ~~***WHAT*** - Tech solution~~
^broken-tech
But how can a PM understand tech if they’ve never been an engineer?
They don’t need to be technical experts—but they **must understand the impact** technical decisions have on delivery, on the team, and on the final product.
I’m reminded of a story involving [[Aleksandra Gepert]] (PM). I once overheard her casually chatting with a Solutions Architect about **Prisma (an ORM)**.
Can you imagine? A PM having a technical conversation about data access tools.
^pm-tech
But Ola didn’t care how Prisma worked.
She was focused on understanding its **impact**:
- Product - *How would it affect the product?*
- Operation - *How would it affect delivery?*
- People - *How would it affect the team?*
^tech-impact
You can’t manage a product effectively without understanding how its parts work together. You don’t have to know how to write the code—but you **do** have to care what it does.
### Broken [Continuous] Delivery
When Project Governance and Business-Oriented Implementation Decisions break down, the ultimate casualty is not just team morale or clean code—it’s Continuous Delivery itself. And when that falls, so does the ability to deliver value to the client.
This project exposed how tightly interwoven the three pillars of [[Software Delivery Excellence]] truly are:
![[Software Delivery Excellence#^pillars]]
Without one, the others begin to wobble.
Without two, delivery slows to a crawl.
Without all three, **everything collapses**.
And that’s exactly what we saw:
- No one felt **responsible** for outcomes
- No one stepped into **ownership**
- And no one took the **lead** to drive change
^leader-steps
At that point, **no process, methodology, or tool can save the project**.
Because what sits at the heart of Software Delivery Excellence isn't Agile. It isn't DevOps.
It’s **people who care enough to make things better.**
When the system is broken, the team is passive, and leadership is missing… what next?
### True Leadership
It might sound funny, but the answer is trivially simple: Just take responsibility.
Everyone complains—it’s management’s fault, the tools, or maybe the alignment of the planets. But maybe, just maybe, it’s worth looking inward and asking: Why didn’t I try to change something for the better?
![[Change and not complain]]
And once you do take responsibility, you’ve already assumed a large part of the **ownership** for the team. Sooner or later, your focus shifts from _just your work_ to the **outcome of the entire team**.
![[Most Leaders Don't Even Know the Game They're In#^leaders-forget-their-real-job]]
That's the hardest moment I've seen for most developers stepping into a leadership role: the moment when they stop just writing code and start being accountable for more.
In extreme cases, a Tech Lead may even stop contributing code entirely, because they're no longer responsible for **just the code**. They're responsible for **how the entire team delivers**.
![[Most Leaders Don't Even Know the Game They're In#^leader-transition]]
And you don’t get there by telling people what to do—
You get there by **mentoring** and **coaching**—
Exactly the way **Tim** did.
Tim was empathetic.
He understood that his job was to **create the right environment** for his people.
![[Most Leaders Don't Even Know the Game They're In#^leader-environment]]
And if you’re wondering how to become a “Tim” in your team, start by asking:
![[Most Leaders Don't Even Know the Game They're In#^the-ultimate-leader-question]]
And then, simply:
**Help them.**
### T-POP = TL + PM
As I mentioned earlier, when **both sides take responsibility for the whole**, something important starts to happen: **Boundaries begin to blur**, and collaboration forms around a **shared outcome**, across every domain of delivery.
![[T-POP#^definition]]
Of course, the **Tech Lead** will naturally focus more on the **Tech** side, and the **Project Manager** on **Product/Operations**—
But they’re no longer disconnected. That’s the key to **effective leadership**:
> You take responsibility for the **team’s outcome**, regardless of where your perspective starts.
^responsibility-for-tpop
Different leadership setups exist:
in larger organizations, one person might lead each T-POP area; in smaller ones, one person might cover them all.
But the rule doesn’t change:
> **You take responsibility for the whole.**
### So How Did We Help?
We didn’t fix it with a framework or magic template.
We helped by:
- Enabling an **ownership** and **collaboration** culture
- Supporting the development of real **leadership**
^action-steps
It wasn’t easy.
And no one expected instant results.
But that brings us to the natural question:
> **How did we know it worked?**
Simple:
We asked them how the original problem was resolved.
And the answer?
- **They** regained control:
	- **Started delivering** more predictably
	- **Became** more **satisfied** with their work
	- **Implemented** the same **PM + TL excellence** setup as mine with [[Martyna Gola]]
^prove
It still wasn’t a perfect state—but it was clear progress.
And ultimately, what validated this **TL + PM** setup was my own cooperation with [[Martyna Gola]].
That collaboration enabled us to **genuinely help** the team—
Not by enforcing roles,
But by **sharing responsibility**,
And aiming for one common goal:
> **Helping the team deliver better, together**
---
## There and Back Again
In this article, I walked through a journey across several key elements to understand **where the problems in the product team came from** - a team that originally looked like this:
![[Team scalability.png]]
We focused on the following areas:
- We started with **structure**, which immediately revealed likely communication issues.
- Then we dove deeper into **communication itself**, especially looking at whether their standards acted as a communication layer.
- That led us to assess their **impact—or rather, the lack of it—on the project**.
- Which ultimately revealed the **core issue**:
→ **A lack of responsibility for delivery** and a missing sense of **ownership**.
- Ownership, when activated, would have allowed **true leaders to emerge**—on both the project and technical sides.
- And with real leadership in place, the full **T-POP** spectrum would be covered, enabling the team to deliver and to regain **satisfaction in their work**.
### [[Conway's Law]] one more time
Let’s now consider—just for the sake of reflection—what would happen if we followed **Conway’s Law** in reverse:
![[Conway's Law#^definition]]
![[#^agenda]]
What if we flipped the process?
- What if the **right leaders were already there**?
- What if they not only had **ownership**, but actively **cultivated a culture of ownership** within the team?
- What if **everyone contributed with real impact**?
- What if that naturally enforced **strong collaboration and communication**?
^reverse-order
### The potato team
Then the structure that would emerge wouldn’t be siloed or fragmented. It would likely be what I call a **“potato team”** - a team that works together, collaborates, takes ownership, delivers real value,
and most importantly—**feels satisfaction in doing it**.
![[One team.png]]