Operations · Internal Tools · Process Automation

What we actually find when a project is ‘80% done’

Made Right Software

The conversation usually starts the same way. A founder or ops lead comes to a new team with a project that a previous developer left unfinished. The project is described as 80% complete. They need someone to take it over and wrap up the remaining work. The timeline they have in mind is a few weeks, and the budget is whatever remains after paying the previous developer.

Almost none of those assumptions survive first contact with the codebase.

What “80% done” actually measures

When a developer describes a project as 80% complete, they are typically measuring features that have code written for the happy path. A screen exists. A button does something. The demo works when you click through it in the right order, with clean data, on the developer’s machine.

That is not the same as 80% of the work required to ship a production-ready product. The gap between those two definitions is where most inherited project disasters live.

The missing pieces are predictable. Error handling and edge cases are never built until someone forces the issue. Tests were skipped to move faster. Deployment infrastructure does not exist. The developer worked with clean fixtures throughout, so real data handling was never addressed. Security and permission logic is half-built. Integration work that connects all the pieces was deferred to the end, because it cannot begin until the pieces exist.

Each of those categories represents weeks of real work. None of them shows up in a feature count.
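The gap is easiest to see in code. Here is a minimal sketch, using a hypothetical order-total function (the names and fields are invented for illustration): the first version is what “the demo works” usually means, and the second is what production actually requires.

```python
# Hypothetical example: the same "finished" feature, written two ways.

def order_total_happy(row):
    """The demo version: works with clean data, clicked through in the right order."""
    return float(row["total"]) * (1 + float(row["tax_rate"]))

def order_total(row):
    """The production version: the edge cases the happy path defers."""
    try:
        total = float(row["total"])
        tax_rate = float(row["tax_rate"])
    except KeyError as exc:
        raise ValueError(f"missing field: {exc}") from exc
    except (TypeError, ValueError) as exc:
        raise ValueError(f"non-numeric field in {row!r}") from exc
    if total < 0 or not 0 <= tax_rate < 1:
        raise ValueError(f"out-of-range values in {row!r}")
    return round(total * (1 + tax_rate), 2)
```

The second version is not harder to write. It is simply work that a feature count never sees, multiplied across every feature in the codebase.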

Why the next team needs to audit before they can quote

Any team that gives you a fixed price to finish an inherited project without first auditing the codebase is quoting blind. They are either padding the number heavily to cover unknown risk, or they are underquoting and will run into trouble when the hidden gaps surface.

A proper audit takes one to three weeks and costs real money. It covers what was actually built versus what was promised, what the deployment setup looks like, whether tests exist, where the security gaps are, and what undocumented decisions the previous developer made. The output is not a quote. It is a report that tells you where the project actually stands.

Clients who have been through a bad first engagement often resist this. They have already paid for a project that should be done. Paying again just to find out how much more they will pay feels unfair. That frustration is understandable. But the alternative is accepting a quote based on the previous developer’s self-assessment, and that is how the cycle repeats.

The hardest thing to replace is what the previous developer knew

Every codebase contains hundreds of decisions that look arbitrary until you understand why they were made. Why is this piece of logic here instead of there? Why does this database table have this structure? Why does this API call happen at this point in the workflow?

The previous developer knew the answers. They held those answers in their head. When they left, the answers left with them.

The incoming team has to reverse-engineer intent from code. That process is slower than writing code from intent. On a well-documented project it might cost two weeks of orientation. On an undocumented inherited project it can cost four to six weeks before the new team is moving at normal speed. That time is not wasted. It is the cost of not having documentation, and it is rarely in the client’s mental budget when they come looking for someone to finish the last 20%.

What a realistic timeline looks like

A project described as 20% from done, handed to a new team, will almost never finish in 20% of the original budget or timeline. Once audit, onboarding, and the gaps the previous developer did not count are factored in, the realistic additional cost is typically 50 to 100 percent of what the original project cost. Sometimes more.

That number lands hard. Clients push back. And the new team, if they are honest, holds the line. Not because they are trying to extract more money, but because the work exists regardless of whether anyone accounts for it.

In most inherited projects we have taken on, the first thing discovery surfaces is scope that was quietly deferred. The previous developer marked things done when the happy path worked. Error handling was left for later. Validation logic was skipped. A third-party integration was mocked rather than built against the real API. Each item looked small in isolation. Collectively they represented weeks of unaccounted work.
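The mocked integration is worth sketching, because it is the clearest case of deferred scope. The example below is hypothetical (the payment API, URL, fields, and client names are invented): the first client is what the inherited code often ships, the second is roughly what building against the real API entails.

```python
import json
import time
import urllib.error
import urllib.request

class MockPaymentClient:
    """What "done" often means: always succeeds, never touches a network."""
    def charge(self, amount_cents, token):
        return {"status": "ok", "charge_id": "mock-1"}

class PaymentClient:
    """What "done" actually requires: real HTTP, timeouts, retries, error mapping."""
    def __init__(self, base_url, api_key, retries=3):
        self.base_url = base_url
        self.api_key = api_key
        self.retries = retries

    def charge(self, amount_cents, token):
        payload = json.dumps({"amount": amount_cents, "source": token}).encode()
        for attempt in range(self.retries):
            req = urllib.request.Request(
                f"{self.base_url}/charges",
                data=payload,
                headers={"Authorization": f"Bearer {self.api_key}",
                         "Content-Type": "application/json"},
            )
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    return json.load(resp)
            except urllib.error.HTTPError as exc:
                if exc.code < 500:  # client error: retrying will not help
                    raise ValueError(f"payment rejected: HTTP {exc.code}") from exc
            except urllib.error.URLError:
                pass  # network failure: fall through to retry
            if attempt < self.retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff between attempts
        raise RuntimeError("payment API unreachable after retries")
```

Swapping the mock for the real client is not a one-line change. It drags in credentials, failure modes, and retry policy that were never estimated, and that is exactly the work the 80% figure omitted.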

When starting over makes more sense than continuing

Continuing from an existing codebase is almost always the right call, even when the code is messy. Rewrites take longer than expected and reintroduce every problem the original developer already solved. Working code, even imperfect code, encodes real decisions that have real value.

The cases where starting over is worth seriously considering are narrow. The previous stack cannot support what the product actually needs. The data model is wrong in ways that cannot be corrected without rebuilding. The audit reveals that more of the codebase will be replaced than kept. In those situations, continuing is not finishing. It is rebuilding piece by piece while pretending to continue, which is slower and more expensive than an honest rewrite.

That decision belongs in the discovery report, not in the initial conversation. Nobody can answer it honestly without looking at the code.

What to bring to the first conversation

The most useful thing a client can do before approaching a new team is to gather everything the previous developer produced. Not just the code, but any documentation, design files, requirement notes, and deployment credentials if they exist. The more context the incoming team has, the shorter and cheaper the audit.

The second most useful thing is to arrive without a fixed timeline. The timeline the client has in mind is based on the previous developer’s assessment of the remaining work, and that assessment is almost always wrong. The new team’s audit will produce a better number. Insisting on the old number before the new one exists does not help anyone.

The practical question for any client in this situation is whether the previous developer’s 80% claim was based on features built or on features ready to ship. Those are different questions with very different answers, and the distance between them is usually where the next several months of work are hiding.