From Capybara to Cypress: Successful Test Automation Transition in a React + Rails App

Artur Włodarczyk

Updated Oct 31, 2024 • 14 min read

Do you sometimes need to make difficult changes in your testing strategy without being entirely sure where it’s going to get you?

If so, you need to consider the overall impact on the product and how much effort such changes might require.

Changing the end-to-end testing framework from Capybara to Cypress in a commercial project was one such decision. We couldn’t have predicted how impactful it would become and how it would change our future work. Let’s step back in time to 2019 and see how it played out.

Ruby on Rails app with Capybara automated tests

The product was an accounting web application for a small business. It was built in Ruby on Rails and, at some point, migrated its frontend from classic server-rendered Rails views to React. The application lived in a monorepo with a PostgreSQL database, a Rails server, a Sidekiq background-job server, and a React frontend app.

When I joined the project, we used the Capybara testing library, written in Ruby, to run end-to-end integration tests. Capybara is a well-known tool in the Ruby community. It lets developers automate users' interactions with the app, writing tests for defined flows and running them inside the browser. It works out of the box with Rails apps and provides an extensive API for different kinds of interactions.

We used Capybara to automate the most critical test scenarios (110+ scenarios altogether). The test suite ran on the Continuous Integration server each time a new commit was pushed to a branch.

The reasons why we moved away from Capybara to Cypress

As the number of developers (both JavaScript/TypeScript and Ruby) grew and the product became more successful, we needed to deploy more often while minimizing the risk of introducing errors. Expectations from investors and managers were high. Once something was checked, why couldn’t we deploy it? I’d asked myself that question many times, and there was always the same sad answer: our automated test suite didn’t cover all major functionalities, and every single release had to be backed by manual regression tests that took ages. QAs didn’t have time for automation because they were busy with those manual regression tests – a vicious circle. The number of test cases kept growing, and it was not clear who should handle integration test automation and when it should be done.

There was also a more fundamental problem: the React developers (JavaScript/TypeScript) who built the whole UI were not experts in the Ruby-based Capybara framework. This caused issues. Let me highlight just a few of them:

  • Each new frontend developer in the team required one-on-one mentoring sessions about the basics of Capybara and its debugging techniques.
  • When a Capybara test failed, frontend developers had to debug the script and fix it. In some cases, that required support from backend developers, which resulted in frustration and a decrease in velocity.
  • Developers were not keen to write automated integration tests, since Ruby was not their primary coding language. I would consider this a major disadvantage of the previous setup, since we wasted a lot of human potential.

Another issue that we found while working with the Capybara test suite was that the tests were not stable on the Continuous Integration (CI) server. We had many flaky tests whose causes of failure remained unresolved. At some point, the test configuration was set to rerun a single Capybara scenario up to six times when it failed, just to make the CI build pass. As you might have guessed, such a test suite was slow: no fewer than 109 scenarios ran in parallel across ten containers, taking about nine minutes.

Our expectations for the new framework

We decided to research possible replacements for Capybara, so we created a list of requirements along with arguments for why each point was important to us.

Five major points from the list:

  • Tests need to run faster than Capybara – enlarging the test suite should not result in a significant increase in CI server costs.
  • Tests should be written in JavaScript/TypeScript to use the potential of our great Frontend Team.
  • Tests should be easy to debug.
  • Due to the complexity of the app, we required full access to the application's database state. Test data had to be created before test execution and needed to be easy to modify. The biggest advantage of Capybara was its integration with the Rails app and the ability to use the supported ORM to create test resources directly in the database. Access to resource creation and editing from inside the test code is a very powerful approach, and it allows even the most complex scenarios to be tested at the integration level. We definitely needed the new framework to have a similar mechanism.
  • No randomly failing tests on CI – the aim was to not disturb the team.

The final decision to move to Cypress tests

Cypress covered most of the points from the list. Back then, in 2019, when we finally decided to search for a new framework, it was not as popular as it is now, but it had a steadily growing community. It was fast, JavaScript-based, and offered the option to write tests in TypeScript. I was personally determined to use Cypress after my first debugging experience with the UI-friendly Cypress runner, which logged all important events, API calls, and errors, snapshotted the DOM, and let me follow the scenario step by step in an easy and relaxed way. That was a game changer at the time.

Still, we had an important challenge to solve: how to combine the JS code with the Ruby on Rails backend so that Cypress tests could use the same data-setup mechanism Capybara had given us. It was essential, as we needed direct access to backend code execution so we could create test data based on application models.

Test data management solution

One of our RoR backend developers, Jan Kamyk, suggested a very clever solution. We hid an endpoint inside our API which only worked in testing environments. This endpoint allowed us to execute a selected method of a selected Ruby class placed in a defined location inside our project structure. The aim of each such method was to create the data required for a single test.

The endpoint was wrapped in a Cypress custom command with two required arguments – the class name and the method name – that could be executed from inside a JS/TS test file. Almost every Cypress test file had a corresponding Ruby class inheriting from a base class (which held the most common data-creation methods), and that class contained the code to execute. Editing, deletion, creation, calculations, returning resource IDs or any other data you wanted – everything you could do from inside Ruby code was available to Cypress as well. It seemed like a perfect approach, but was it really so beneficial?
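
To make this more concrete, here is a minimal sketch of what such a custom command could look like. The endpoint path, payload keys, and response shape below are my assumptions for illustration, not the project's actual code.

```typescript
// cypress/support/commands.ts
// Hypothetical sketch of the data-seeding command described above.
// The endpoint path, payload keys and response shape are assumptions.

declare global {
  namespace Cypress {
    interface Chainable {
      /** Runs `klass.method(args)` via the Rails test-only endpoint and yields its JSON response. */
      seedData(klass: string, method: string, args?: Record<string, unknown>): Chainable<any>;
    }
  }
}

Cypress.Commands.add('seedData', (klass: string, method: string, args: Record<string, unknown> = {}) => {
  // cy.request talks to the backend directly, bypassing the UI,
  // so test data exists before the scenario starts interacting with the app.
  return cy
    .request('POST', '/api/test_only/execute', {
      class_name: klass,
      method_name: method,
      arguments: args,
    })
    .its('body'); // e.g. IDs of the resources created by the Ruby method
});

export {};
```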

Some disadvantages of the Cypress migration

It was difficult to work with at the beginning, since we needed some time to understand the relationships between models in order to generate the required resources. For every single test, we created separate Ruby classes and methods where we generated all the necessary resources. The methods were then called inside beforeEach hooks, so that Cypress correctly ordered the calls in its built-in command queue. Writing such tests required programming skills in Ruby, JavaScript, and later TypeScript. We often asked the backend team for support just to gain a better understanding of how things worked underneath. Soon we enlarged the base class for data creation, and writing tests became much easier.
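
As an illustration of that hook-based ordering, a test file might look roughly like this; the spec, class, and method names below are invented for the example:

```typescript
// cypress/e2e/invoices.cy.ts
// Illustrative usage only; the spec, class and method names are made up.

describe('Invoice creation', () => {
  beforeEach(() => {
    // Enqueued before anything in the test body runs, so the backend
    // data exists by the time the UI is exercised.
    cy.seedData('InvoiceSpecSetup', 'create_company_with_customers').then((body) => {
      cy.wrap(body.company_id).as('companyId');
    });
  });

  it('creates an invoice for an existing customer', function () {
    // `function` (not an arrow) so the alias is available as `this.companyId`
    cy.visit(`/companies/${this.companyId}/invoices/new`);
    cy.contains('New invoice').should('be.visible');
  });
});
```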

As our project evolved, we made some architecture changes that required model restructuring and data migration. This meant extra maintenance for our test suite: we had to refactor all the Cypress tests whose Ruby classes referenced the previous models and rewrite them against the new ones to make the tests pass again. That was unplanned, somewhat unexpected, and arduous.

Test automation transition and the overall impact on the project

Despite the initial difficulties with writing the first tests, we pushed the initiative further and soon realized that the new setup was starting to have a positive impact on the project's quality. We took an approach where Cypress end-to-end integration tests were written for every new feature delivered to the end user. At the same time, we kept our Capybara tests running for the critical scenarios (a temporary solution until we rewrote all of them in Cypress).

Frontend developers participated in the development of integration tests and wrote basic scenarios which were further extended by QAs to include additional assertions or new scenarios. Bugs were caught at the early stage of development, since the whole test suite was triggered on every commit pushed to the branch.

The list of manual test cases to check before a release was no longer growing, but there were still many tests to write – an automation debt to pay. We discussed it within the team and decided to include one additional automation task in each sprint. The task was taken by a QA or a developer – whoever had the capacity to deliver it.

Soon the list of unautomated test scenarios got shorter, and we decided to start rewriting the existing critical Capybara tests in Cypress. It was a good moment to evaluate those tests, think about missing scenarios, and add missing assertions. One year after introducing Cypress, we had 54 test files with 209 test scenarios and 830 assertions of different kinds, with 62 Capybara scenarios still left to rewrite.

Our QAs' daily work changed significantly. Instead of manual exploratory tests performed once a feature was ready in a deployed testing environment, we started writing automated tests locally, which gave us a much better understanding of the solution. In the process of writing them, we naturally spotted bugs that would otherwise have been found during manual testing, and fixes were made before any code reached a stable testing environment. We worked closely with the frontend developers to decide which scenarios should be covered by unit tests and which assertions should be included in the Cypress tests.

We consistently worked on the test suite to get rid of flaky scenarios. In the end, we had full confidence in it – once a test failed, it was clear that a real problem had come up.

Tests ran at the integration level against a local frontend, a local backend, and a fresh database. Both the backend and frontend teams also had their own unit tests. To make sure the application worked properly with already existing resources, we added another test layer that ran through the app in the deployed testing environment. Finally, we supported the project with automated visual regression tests. In the end, we had built a stable testing system ensuring high product quality in two important areas: backend-frontend integration and UI/visual consistency.
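
For readers curious how two end-to-end layers like these can coexist in one suite, here is a hedged sketch of a Cypress configuration that switches the target via an environment variable. It uses today's cypress.config.ts format rather than whatever we had back then, and the URLs and variable name are assumptions:

```typescript
// cypress.config.ts
// Hedged sketch of a two-target setup; the URLs and the TEST_TARGET
// variable are assumptions used only to illustrate the layers above.
import { defineConfig } from 'cypress';

const targets: Record<string, string> = {
  local: 'http://localhost:3000',          // local frontend + backend + fresh database
  staging: 'https://staging.example.com',  // deployed testing environment with existing data
};

export default defineConfig({
  e2e: {
    baseUrl: targets[process.env.TEST_TARGET ?? 'local'],
    setupNodeEvents(on, config) {
      // per-target plugins (e.g. a visual regression tool) could be registered here
      return config;
    },
  },
});
```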

With such a setup in place, we changed how we deployed the application to production. Once a feature was developed, the end-to-end integration tests were already there. We waited for final confirmation from stakeholders performing acceptance tests and, after that, the code was deployed to production. Production deployments took place every day.

Positive outcomes of Cypress implementation

Our relationship with the investors changed for the better. We were able to deliver planned features more predictably and had fewer regression bugs found at later stages of development. Once a bug was found, we wrote proper tests to ensure it wouldn't appear again. There were no longer any manual regression tests before releases, other than occasionally scheduled exploratory testing sessions.

One and a half years after introducing Cypress, we finally rewrote all the Capybara specs. Here are some important facts about that:

  • Our Cypress integration test suite consisted of over 400 test scenarios.
  • We decreased the CI build time from the initial nine minutes for 109 Capybara scenarios parallelized across ten containers to seven and a half minutes for 419 Cypress scenarios parallelized across the same ten containers – almost four times more tests and still visibly faster builds.
  • We reduced CI server costs.
  • We built confidence in the test suite by removing flaky tests.
  • We moved our workflow towards continuous delivery, able to deploy any fix or feature to production thanks to our powerful test automation solution.

My thoughts after the successful test automation transition

Over the course of one and a half years, I made over 1,100 commits to the project's repository. I spent hours addressing code review comments, learned how to collaborate on shared code, and contributed to others' work. I gained a much deeper understanding of the product and learned a lot from the developers I worked with.

The most important thing I would like to share is that whenever something is not working as intended, you should start a discussion within your team. Plan your tasks step by step, and do not be discouraged by the amount of work left to reach your goal. Do not give up when you face an issue: ask for support, talk to people. Think of automation as a strategic investment in both the project and your well-being – less stressful deployments will make the job easier.

Every major change needs a driving force – someone who has a sense of ownership, who monitors the progress, who is deeply convinced of the necessity to change, and who pushes the work further. Being the driving force behind such a change was a huge responsibility, but it was definitely one of the most rewarding achievements in my professional career so far.
