Four Kitchens

Testing and quality assurance

We use a continually refined mix of unit, functional, and visual regression testing to catch regressions early while minimizing the time—and budget—spent on test maintenance.

All Four Kitchens projects incorporate automated testing that includes code linting and functional tests of key workflows. All code is peer-reviewed prior to acceptance. Manual testing criteria are established on an as-needed basis. Emulsify, our open-source tool for creating design systems, includes automated accessibility testing.

Acceptance Criteria

Before a user story (feature) is ready for development, it must have clear acceptance criteria: steps that (1) define how to confirm the user story is complete and (2) are easily understood by our Developers and our internal Product Owner. The Product Owner is accountable for these criteria.

Developing Without Side Effects

Whenever practical, we use automated tools like code linters and static analysis as part of our deployment and local development environments. These tools are built into our open-source tools Emulsify and Sous.
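
As a simple illustration of what this kind of static analysis catches, here is a toy checker built on Python's `ast` module. It is a minimal sketch for illustration only, not our actual tooling (in practice we rely on established linters wired into Emulsify and Sous); it flags bare `except:` clauses, a common lint finding:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` clauses in the source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # An ExceptHandler with no exception type is a bare `except:`.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

snippet = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(snippet))  # → [4]
```

Because checks like this run automatically on every commit, problems are flagged before a human reviewer ever sees the code.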

Peer Review Isn’t Just Code Review

Our peer-review process is vital to consistently shipping excellent software. No stories ship without a review from a peer on the project. We typically call this step “code review,” but it’s only one part of our peer-review process. 

Our peer reviews include:

  1. Code review. A second developer reviews the code for best practices, correct use of APIs, and sound architecture.
  2. Validation of requirements. The reviewer then confirms the feature does everything it is intended to do.
  3. Functional testing. Finally, the reviewer walks through the steps to ensure the feature works as expected.

Balanced Automated Testing

Automated testing is a crucial part of our quality assurance (QA) process, and it must be applied strategically to minimize test maintenance and QA debt.

At a minimum, all projects must include:

  • Automated code linting
  • Test automation infrastructure for the testing tools the project requires

When appropriate, we may also implement:

  • Visual regression tests of representative site sections
  • Functional tests of key workflows
  • Unit tests for the project’s custom code
  • Automated accessibility testing
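
As an example of the last category of effort, a unit test for a project's custom code might look like the following. This is a hypothetical sketch (Python is used for illustration, and the helper `summarize_teaser` is an invented example, not code from one of our projects):

```python
def summarize_teaser(text: str, limit: int = 140) -> str:
    """Hypothetical custom helper: truncate teaser text at a word boundary."""
    if len(text) <= limit:
        return text
    # Cut at the limit, then back off to the last full word.
    truncated = text[:limit].rsplit(" ", 1)[0]
    return truncated + "…"

def test_short_text_is_unchanged():
    assert summarize_teaser("Hello world") == "Hello world"

def test_long_text_is_truncated_at_word_boundary():
    result = summarize_teaser("word " * 50, limit=20)
    assert len(result) <= 21
    assert result.endswith("…")
```

Small, focused tests like these are cheap to maintain, which is exactly the balance we aim for: enough coverage to catch regressions, without a brittle test suite that becomes QA debt itself.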

Client Review and Acceptance

The client’s Product Owner is accountable for the final review of all features and is empowered to reject a story if it doesn’t meet the requirements or project standards. The client’s Product Owner is not responsible for functionally testing each story; the entire project team is responsible for ensuring that proper testing is completed.

As a project nears launch, additional testing is always required. We use a standard launch checklist for every project, which the Tech Leads tailor to the project roughly two to four weeks before launch. The checklist ensures that commonly overlooked items, such as development and performance settings, are properly configured, and that site security and SEO are audited before launch.

We Test Throughout the Project—Not Just at the End

Apart from our launch checklists, our projects do not have a testing, quality assurance (QA), or user acceptance testing (UAT) “phase.” Although well-intentioned, separating testing into a later phase of a project—or worse, into a separate role—is an antiquated practice. Here’s why:

  • Separating testing into a later phase of a project is at odds with agile software development’s principles of iteration and “thin vertical slices” of functionality. A feature cannot truly be considered done until it is tested and meets the Product Owner’s definition of done—the agile equivalent of user acceptance testing.
  • Waiting until the end of a project to test adds unnecessary cost. When a problem is found weeks or months after it’s introduced, it takes much longer to uncover the root cause.
  • Bugs have a tendency to multiply. If you’re not testing regularly, you may be introducing faulty behavior that future code gets built on top of, creating yet more bugs. Untangling the mess at the end of a project is expensive and can delay launches.

Our approach is more modern, cost-effective, and reliable:

  • We perform testing on every code commit. This allows us to identify the root cause of issues within hours, not days or weeks, because the problems exist in the code that was just committed.
  • User acceptance testing happens every two weeks at the sprint demo. By completing and delivering “thin vertical slices” of functionality that are fully tested and approved, our client partners can begin learning how to use the site while we continue to build additional functionality.
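
The per-commit testing described above is typically wired into a continuous integration pipeline along these lines. This is a hypothetical sketch in the style of a GitHub Actions workflow; the job names and commands are illustrative assumptions, not our actual configuration:

```yaml
# Hypothetical CI sketch: run linting and tests on every commit.
name: test
on: [push, pull_request]
jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: npm run lint        # e.g. code linters and static analysis
      - name: Unit tests
        run: npm test            # unit tests for custom code
      - name: Functional tests
        run: npm run test:e2e    # functional tests of key workflows
```

Because every commit runs the same checks, a failing build points directly at the change that introduced the problem.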