Now that we understand their structure clearly reflects communication gaps, let’s dive deeper into this issue—and reverse [[Conway's Law]] for a moment.
![[Conway's Law#^me-about-structure-problems]]
Given how their setup looks, we can pinpoint areas of potential risk. Since we’re dealing with a critically siloed structure, the problems are most likely to impact their ability to deliver as a team. This affects every stage—from product vision, through development, reviews, and testing, all the way to the deployment process itself.
All of these stages require strong team communication—which in turn depends on having shared processes, standards, and collaboration practices in place.
- [[Deployment]]
- Testing
- Review
- Design & Development [Decision making]
- Vision
^development-cycle
Let’s walk through these elements—starting from the end: the deployed product and the way their **deployment procedure** is handled.
### Deploy
The first, and simplest, question: do they have automated deployment?

^automated-deploy
So, how do they deploy?
> There are two dedicated people for deploying to [[Production environment]].
> As far as I understand, **only these two people have the skills to do it**.
^deploy
This immediately raises two serious concerns:
1. **Only two people can deploy**, which creates a massive risk if either of them becomes unavailable.
2. **There’s no single, consistent version of the deployment process**—more on that in a moment.
The first issue is fairly straightforward: if something were to happen to these two individuals, the entire project would be at risk. No one else knows how to perform a production deployment. In such a case, the result would likely be significant delays, mistakes, and operational chaos.
![[Bus factor#^definition|Bus factor]]
So yes, the [[Bus factor]] here is very real. But it also reveals a second issue—**consistency**. When two separate people are responsible for the same task, without [[automation]] or a shared, enforced procedure, you’re almost guaranteed to have deviations.
![[Segal’s Law#^definition|Segal’s Law]]
Without standardization, there’s no guarantee that deployments are being done the same way. Something that could be unified in a single document is now split between two people, introducing unnecessary variability and risk.
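One way to remove that variability is to make the procedure itself executable, so that whoever deploys runs exactly the same steps in exactly the same order. A minimal sketch of the idea (the step names and commands below are placeholders, not their actual process):

```python
import subprocess

# Single, ordered source of truth for the deployment procedure.
# The commands are illustrative placeholders -- substitute the real ones.
DEPLOY_STEPS = [
    ("run test suite", ["echo", "npm test"]),
    ("build artifact", ["echo", "npm run build"]),
    ("upload artifact", ["echo", "scp -r dist/ prod:/var/www"]),
    ("restart service", ["echo", "ssh prod systemctl restart app"]),
]

def deploy(dry_run: bool = True) -> list[str]:
    """Execute every step in order, aborting on the first failure."""
    executed = []
    for name, cmd in DEPLOY_STEPS:
        if not dry_run:
            subprocess.run(cmd, check=True)  # raises on non-zero exit code
        executed.append(name)
    return executed
```

Once the steps live in a script (or a pipeline definition), Segal's Law stops applying: there is only one watch to look at, and a third person can run a deployment by reading the code.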
Now, to be fair, they _do_ have some form of documentation, but it leaves a lot to be desired.
They have a document which is **a copy** of Atlassian's [[Gitflow]] documentation, with a couple of rules added at the start:
`1. Do not merge into develop`
`2. Do not deploy on Friday`
...
^documented-workflow
Not only could they have simply linked to the article instead of copying it, but these added rules (e.g., “don’t deploy on Fridays” or “don’t merge into develop”) are the kind of checks that should be enforced by CI/CD pipelines—not left to human memory or [[Tribal knowledge]].
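Both of those rules are trivial to check mechanically. A sketch of such a guard, which a CI pipeline could run before allowing a merge or deployment (treating `main` as the production branch is my assumption here, not something from their document):

```python
import datetime

def merge_violations(target_branch: str, today: datetime.date) -> list[str]:
    """Return every documented rule a proposed merge or deploy would break."""
    problems = []
    # Rule 1 from their document: "Do not merge into develop".
    if target_branch == "develop":
        problems.append("direct merges into 'develop' are not allowed")
    # Rule 2: "Do not deploy on Friday" (weekday() == 4 is Friday).
    if target_branch == "main" and today.weekday() == 4:
        problems.append("production deployments are blocked on Fridays")
    return problems
```

A pipeline that fails whenever this list is non-empty turns [[Tribal knowledge]] into an automated gate that nobody has to remember.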
In reality, there is **no standardized deployment procedure**. The process is **manual**, known only to two people, and lacks safeguards to prevent human error or absence. Which means, in practical terms, **this risk is still very much alive**.
> [!question] Where can you find that **[written]** information?
> Assuming ***you don’t have to ask*** someone else—because they might just... miss their bus.
^right-documentation
### Tests
The next logical area to explore—also closely tied to automation—is **testing**. And when asked whether they had tests in place, the answer was... painfully obvious.

^tests
Why is this such a big deal?
Because tests aren’t just about catching bugs. Well-written tests immediately provide living documentation of what the system is supposed to do. They show that the team knows what should be delivered, and they serve as proof that it actually was.
With proper automation, tests also protect the integrity of previous work—ensuring that new changes don’t break what’s already in place. Just like deployment procedures, automated tests are also one of the most effective onboarding tools. They show how the system behaves and what its constraints are, allowing new team members to ramp up faster.
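To make the "living documentation" point concrete, here is a tiny, invented example (the business rule and all names are hypothetical): even without reading the implementation, the test names alone tell a newcomer what the system is supposed to do.

```python
# A hypothetical business rule and the tests that document it.
def order_discount(total: float, is_returning_customer: bool) -> float:
    """Orders over 100 get 10% off; returning customers get an extra 5%."""
    rate = 0.10 if total > 100 else 0.0
    if is_returning_customer:
        rate += 0.05
    return round(total * (1 - rate), 2)

# Each test name states an expectation, readable without the code above.
def test_small_orders_pay_full_price():
    assert order_discount(50, is_returning_customer=False) == 50.0

def test_large_orders_get_ten_percent_off():
    assert order_discount(200, is_returning_customer=False) == 180.0

def test_returning_customers_get_an_extra_five_percent():
    assert order_discount(200, is_returning_customer=True) == 170.0
```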
Ultimately, in any modern development team, tests become the primary tool of communication:
![[Tests as a Documentation#^aspects]]
So what happens when there are no tests?
You not only lose confidence in what’s being delivered—you also lose one of the most powerful signals of shared understanding in the team.
> [!question] And it raises a crucial question
> How can you perform proper code review if you don’t even know what the task was supposed to deliver?
Without tests, reviews become guesswork. Reviewers are left to judge code without a clear benchmark or agreed-upon expectations. And in that situation, both quality and communication inevitably suffer.
### Reviews
Third time's the charm: they _do_ code reviews!
> [!quote] [[Martyna Gola]]
> Since last month, CRs have been a must-have—one guy pushed for it and made the rest comply. **Two people are automatically assigned, but only one approval is needed**.
^code-review
It’s not exactly how I’d like it to work, but it’s a start.
Unfortunately, in many cases, code review ends up being reduced to just finding someone to click `Approve`:
![[Code Review#^just-apoprove]]
But a proper code review should be so much more than that. It carries a whole range of responsibilities and expectations, including:
![[Code Review#^aspects]]
At its core, **code review is yet another form of communication**. It exists not just to catch bugs, but to align the team around shared decisions, uphold quality, prevent critical mistakes, and reinforce standards.
Which brings us to the next area worth examining: [[Decision making]].
### Decision making
It turned out that some time ago, the team had made one of their bigger decisions:
> The project is a few years old. They recently **updated Angular from 15 to 18**, and whenever they have a moment, **they patch up their old code**.
^angular-upgrade
So naturally, the follow-up question was: Why?
And as expected—**no one could answer**.
> 
There was no source of truth for the decision. No documentation, no person, no rationale.
This kind of decision can have serious consequences. You end up cleaning up after something without fully understanding the value it was supposed to bring. Sure, you can dig into Angular’s release notes and learn what changed between versions—but that won't tell you **WHY** the upgrade made sense for **this** project. That’s where a [[Decision log]] becomes crucial:
![[Decision log#^definition]]
And to be clear—this isn’t about documenting a decision after the fact.
It’s about making the decision consciously, by writing it down before the work starts, as evidence of understanding:
![[Documentation-driven development#^definition]]
From a technical team's perspective, [[Architecture Decision Record]]s are especially important. And if they're aligned with [[Impact-oriented decision making]], they become a powerful resource that captures not only _what_ was built, but _why_, from both a technical and business perspective:
![[Architecture Decision Record#^definition]]
![[Impact-oriented decision making#^parts]]
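An ADR doesn't have to be heavyweight. A common skeleton (the field names follow a widely used convention; the number and title below are purely illustrative, borrowing their Angular upgrade as the example) looks like this:

```markdown
# ADR NNNN: Upgrade Angular from 15 to 18

- Status: accepted
- Date: YYYY-MM-DD

## Context
What problem or pressure prompted this decision?

## Decision
What we decided, stated as a full sentence.

## Consequences
What becomes easier, what becomes harder, and what we now maintain.
```

Had such a record existed, "Why did we upgrade?" would have an answer that survives any single person leaving the team.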
Finally, decision-making like this should be closely aligned with the product vision. Only then can we ensure that the team is building the right solution to the right problem.
And that brings us to the final point...
### Vision
Does anyone from tech actually know the big picture?
> [!quote] [[Martyna Gola]]
> There is no one who knows the whole system because it's too complex. Developers are siloed per module, which leads to the problem that **they don’t know the entire application**.
^no-one-knows-everything
This wasn’t quite the answer I was expecting. Not because I don’t understand the complexity of the system—but because the real issue isn’t that they don’t grasp how the application works.
It’s that they don’t understand **why** the product exists in the first place.
Sure, there’s likely some high-level awareness. But the level of focus on individual modules is so strong that the overarching purpose—the product vision—fades into the background. Decisions start being made in isolation, disconnected from the [[Vision]]:
![[Vision#^definition]]
But modules don’t live in isolation. They exist to serve the product as a whole. And that’s why I always rely on a simple test to assess the health of a development team:
![[Vision#^ultimate-question]]
### Overall
Sadly, this question—like many others—went mostly unanswered. And there were plenty of areas in need of improvement:
- Teams should work **together**, not in silos
- Code reviews should **add value**, not just grant approvals
- Changes should be made with **confidence**, backed by proper tests
- Decisions should have a **clear history**, through documented ADRs
- Knowledge should exist in **one reliable place**, not scattered across people and platforms
- There should be **no key-person risk**—work shouldn’t grind to a halt if someone disappears
- Deployments should be **faster and safer**, with [[automation]] in place
- ...
^improvements
Which led me to a new, deeper concern:
> *Why does no one **feel the urge** to fix these things?*
^the-question