How HTek Consultants Manage Bugs with Clients
- Hector Flores
- Aug 23, 2021
- 5 min read

Bugs... Bugs can cause the interaction between clients and consultants to get a little messy. There are several areas where bugs can throw a wrench in how consultants operate and run a project. It's important to lay down rules and guidelines early so definitions don't get skewed toward either the consultant's benefit or the client's benefit.
Step 1 - Communicate the Severity and Priority definitions
To start off with a common base, we need a documented agreement with the client on our definitions for the severity and the priority of a bug. These two metrics combined are the core of determining whether a bug fix is in scope and the timeline in which the fix should be expected.
Severity
Severity is a metric between 1-4 used to determine the amount of impact the bug has on the quality of the deliverable. In other words, depending on its value, the severity tells us whether the bug has a large or small impact on how well the product is performing. The Severity assigned to a bug is determined by agreed-upon criteria.
HTek Consultants use the following criteria:
S1 - Blocking the use of more than one feature or module with no workaround
S2 - Blocking the use of one feature or module with no workaround
S3 - Non-blocking incorrectness with a requirement of a feature or module
S4 - Cosmetic, nothing directly wrong with the core requirement
Examples:
The login page is not working - S1 (Blocks all features)
Payments are not going through - S2 (Blocking the whole feature of payments)
Reservations reserve the date 1 hour off - S3 (Not blocking, incorrect with the feature)
Calendar weekdays are wrapping on mobile - S4 (Cosmetic)
The Severity is where clients and consultants sometimes don't agree. This is where we need to be measured and follow the mapping defined above. We as consultants have the responsibility to give assurance to the client. To do that, we need to make it clear that the Severity does not directly determine whether a fix for the bug is done. Later in Step 2, we will see how that is true.
Priority
Priority is a metric between 1-4 used to determine how important a bug is to the client. This can sometimes get confused with Severity. In fact, it always gets confused with Severity. The main difference between Priority and Severity is in the way the value is determined and the role each one plays in deciding when the bug is going to be fixed. The Priority assigned to a bug is determined directly by the client.
There's no real example of Priority since this metric is directly determined by the client. It could be that a client says "...having the wrong color is a Priority 1..." which could be the case... That is still valid. This value cannot be disputed by the consultant; what the client says goes.
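To keep the two metrics from blurring together, it helps to record them as separate fields on every bug. Below is a minimal sketch of that idea in Python; the field names and sample bug are illustrative only, not HTek's actual tracker schema.

```python
# A minimal sketch: each bug carries Severity and Priority as separate fields,
# so one metric never overwrites the other. Names and values are illustrative.
SEVERITY_CRITERIA = {
    1: "Blocking more than one feature or module, no workaround",
    2: "Blocking one feature or module, no workaround",
    3: "Non-blocking incorrectness against a feature requirement",
    4: "Cosmetic, core requirement unaffected",
}

bug = {
    "title": "Reservations reserve the date 1 hour off",
    "severity": 3,   # assigned by the consultant against the agreed criteria
    "priority": 2,   # set by the client; the consultant never disputes it
    "state": "open",
}

print(f"S{bug['severity']}: {SEVERITY_CRITERIA[bug['severity']]} (P{bug['priority']})")
```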
Step 2 - Agree on criteria for deployments
This is where we will give all the control to the client (where it should be).
Here we define the different "gates" where we need criteria for success. Here are some examples:
Alpha Testing
Beta Testing
Production
For each one of the "gates," we need to define our criteria to move to those phases.
This exercise is meant to set in stone what is considered "quality" code for the client. To be successful as consultants, we need to provide the client a means of measuring our success. Without it, the client is left relying on emotions and intuition, which induces false perceptions of the actual quality of a deliverable.
Examples:
Alpha Testing - No S1s
Beta Testing - No S1s, S2s
Production - No S1s, S2s, or S3s, and no S4 + P1
This means we as consultants guarantee the quality of the deliverable by committing to the criteria above. It is possible the client has other aspects of the business that carry different importance, so this is the time to bring those up and add them to the criteria. This ensures that the client is protected against receiving a product that doesn't align with their expectations.
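These gates can be expressed as a simple rule check over the list of open bugs. The sketch below, assuming the same bug fields as the earlier example, shows one possible way to flag which open bugs block a given gate; the rules and sample data are illustrative, not HTek's actual tooling.

```python
# A bug "blocks" a gate if it violates that gate's criteria (per the examples above).
GATE_BLOCKERS = {
    "alpha":      lambda b: b["severity"] == 1,        # no S1s
    "beta":       lambda b: b["severity"] <= 2,        # no S1s or S2s
    "production": lambda b: b["severity"] <= 3         # no S1s, S2s, or S3s...
                            or (b["severity"] == 4 and b["priority"] == 1),  # ...and no S4 + P1
}

def blockers(open_bugs, gate):
    """Return the open bugs that prevent moving to the given gate."""
    return [b for b in open_bugs if GATE_BLOCKERS[gate](b)]

open_bugs = [{"title": "Calendar weekdays wrap on mobile", "severity": 4, "priority": 1}]
print(blockers(open_bugs, "beta"))        # [] -> Beta criteria met
print(blockers(open_bugs, "production"))  # the S4 + P1 bug blocks Production
```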
Step 4 - Include bug fixing in estimations
Consultants now need to account for fixing the bugs that are in scope as part of the estimate for implementing requirements. Some clients assume that if they already paid for the product, they shouldn't have to pay to fix bugs in it. In a classic waterfall development process this is true, but HTek Consultants follow an agile development process, so the cost is reduced. Paying only for the features being developed ensures the budget is used efficiently. Furthermore, it ensures the product is always at the level of quality the client expects.
Step 5 - Include a bug dashboard
Clients need a clear dashboard that shows the number of bugs in each state, broken down by priority and severity.
This will allow the client and consultant to always be aligned on the current state of the product and its quality.
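Under the hood, such a dashboard is just a count of bugs grouped by state, severity, and priority. Here is a minimal sketch with made-up sample data; any real implementation would pull these records from the team's bug tracker.

```python
from collections import Counter

# Illustrative sample data only; a real dashboard would read from the tracker.
bugs = [
    {"state": "open",     "severity": 2, "priority": 1},
    {"state": "open",     "severity": 4, "priority": 3},
    {"state": "resolved", "severity": 2, "priority": 1},
]

# Count bugs per (state, severity, priority) combination.
counts = Counter((b["state"], b["severity"], b["priority"]) for b in bugs)
for (state, sev, pri), n in sorted(counts.items()):
    print(f"{state:>8}  S{sev}/P{pri}: {n}")
```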

Step 6 - Define who can perform user acceptance testing
User Acceptance Testing is one of the many ways to test an application. I will not go over all the other ways of testing since that is not in scope for this article. User Acceptance Testing determines whether the application is accepted by the stakeholders (the client). This means we need a resource on the client's side to take responsibility for this task. The client's resource is then responsible for running through the application and determining whether the solution follows the agreed requirements.

It's important to make clear to the client that this is their responsibility. Through user acceptance testing, the client approves that the work they paid for is what they wanted. In other words, there is no way for that role to be assigned to the consultant or the consultant's staff, since their bias will not allow them to provide an accurate assessment.
Step 7 - Refine the process
There are some minor details that this article doesn't cover, but it should provide a base for how HTek Consultants manage bugs and how we interact with the client to facilitate bug fixes. Going forward, the consultant will iteratively refine the process. That includes modifying the severity definition table based on the client's feedback, sharing responsibility for User Acceptance Testing, or even adding some automation to cover part of the quality assurance testing. The adaptation will be determined by how well the client is able to handle the strict process.
In my experience, clients have trouble following such strict processes, more specifically the user acceptance testing portion. For that reason, it may be important to communicate the potential risks of having only the consultants perform testing.
What we don't want is the client expecting that what we deploy into production is exactly what they asked for without actually seeing the product beforehand in every iteration to provide feedback.