Friday, June 4 • 13:30 - 14:10
5 phases of API test automation

In my context we run a microservice architecture with a large number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross dependencies is challenging but very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting the response code or body against valid data / type definitions. This proved to be more and more challenging, as the push for CI and common data sources meant dependencies would go up and down per deployment, which led to flaky tests.
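
To make the traditional approach concrete, it amounts to a direct invoke-and-assert against a live endpoint. A minimal sketch of such a test (the base URL and /users endpoint are hypothetical):

```python
import requests

BASE_URL = "https://api.example.internal"  # hypothetical shared-environment host


def test_get_user_returns_valid_types():
    # Invoke the endpoint with the relevant query params...
    resp = requests.get(f"{BASE_URL}/users", params={"id": 42}, timeout=5)

    # ...then assert the response code and body for valid data / type definitions.
    assert resp.status_code == 200
    body = resp.json()
    assert isinstance(body["id"], int)
    assert isinstance(body["name"], str)
```

If a downstream dependency of /users happens to be mid-deployment, this test fails for reasons unrelated to the API under test, which is exactly the flakiness described above.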

I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit-integration tests):
  1. Mocked black box testing - where you start up the API (the same docker image version that would go to PROD) but mock out all of its surrounding dependencies. This gives you freedom to exercise any known data permutation, and you can simulate network or failure states of those dependencies (see the first sketch after this list).
  2. Temporary name-spaced API in your CI environment - here you start up your API as it would run in a normal integrated environment, but in a temporary space that can be completely destroyed if tests fail. It never gets to the deploy stage, so there is no need to roll back when errors or failures occur; we use Kubernetes and CI config to orchestrate these tests (see the namespace sketch below). These tests focus on the 80-20 functionality and confirm that the API will meet all the acceptance criteria.
  3. Post deployment tests - usually called smoke testing, to verify that an API is up and critical functionality is working in a fully integrated environment (see the smoke test sketch below). We should be happy by now, right? Fairly happy that the API does what it says on the box… but wait, there is more...
  4. Environment stability tests - tests that run every few minutes in an integrated environment and make sure all services are highly available, given the deployments that have completed successfully. Here we use GitLab to control the scheduling (a stability sweep sketch follows below).
  5. Data explorer tests - these are tests that run periodically but use some randomization to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding those edge cases that are usually missed: issues of low occurrence but generally high risk. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data (a simplified sketch is the last example below).
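
Level 1 in practice: the API container runs unchanged while its dependencies are replaced by stubs you control. A minimal sketch of the idea, assuming a hypothetical downstream "accounts" service that the API under test calls; the stub can be flipped into a failure state to simulate a dependency outage:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

API_URL = "http://localhost:8080"  # the dockerized API under test (hypothetical port)
FAIL_MODE = {"enabled": False}     # toggle to simulate the dependency failing


class AccountsStub(BaseHTTPRequestHandler):
    """Stands in for the real 'accounts' dependency."""

    def do_GET(self):
        if FAIL_MODE["enabled"]:
            self.send_response(503)  # simulate the dependency being down
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"account_id": 1, "status": "active"}')

    def log_message(self, *args):
        pass  # keep test output quiet


def start_stub(port=9090):
    server = HTTPServer(("localhost", port), AccountsStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


def test_api_degrades_gracefully_when_dependency_fails():
    # The API container is started pointing at the stub instead of the real
    # accounts service (e.g. via an env var passed to docker run).
    FAIL_MODE["enabled"] = True
    resp = requests.get(f"{API_URL}/balance", params={"account": 1}, timeout=5)
    # The API should fail in a controlled way rather than hang or crash.
    assert resp.status_code in (502, 503)
```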
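Level 2 is mostly orchestration rather than test code; in our setup the equivalent steps live in Kubernetes and CI config. A sketch of the namespace lifecycle using the official Kubernetes Python client (the naming scheme is illustrative):

```python
import uuid

from kubernetes import client, config


def create_test_namespace():
    """Spin up a throwaway namespace for one CI run."""
    config.load_kube_config()  # or load_incluster_config() inside CI
    name = f"api-test-{uuid.uuid4().hex[:8]}"
    v1 = client.CoreV1Api()
    v1.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=name))
    )
    return name


def destroy_test_namespace(name):
    """Tear everything down -- nothing ever reaches the deploy stage."""
    client.CoreV1Api().delete_namespace(name=name)
```

The API is deployed into the fresh namespace, the acceptance tests run against it, and the namespace is destroyed regardless of outcome.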
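Level 3 smoke tests are deliberately thin: a health probe plus one or two critical paths, run right after deployment. A minimal sketch (endpoint paths are hypothetical):

```python
import requests

BASE_URL = "https://api.example.internal"  # fully integrated environment


def test_service_is_up():
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200


def test_critical_path_works():
    # One representative end-to-end call, not a full regression suite.
    resp = requests.get(f"{BASE_URL}/users", params={"id": 1}, timeout=5)
    assert resp.status_code == 200
```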
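Level 4 is essentially the same kind of probe swept across every deployed service, triggered on a schedule (a GitLab scheduled pipeline in our case). A sketch of the sweep itself; the service registry is illustrative:

```python
import requests

# Illustrative registry of successfully deployed services to watch.
SERVICES = {
    "users": "https://users.example.internal/health",
    "accounts": "https://accounts.example.internal/health",
    "payments": "https://payments.example.internal/health",
}


def test_environment_stability():
    down = []
    for name, url in SERVICES.items():
        try:
            if requests.get(url, timeout=5).status_code != 200:
                down.append(name)
        except requests.RequestException:
            down.append(name)
    # A non-empty list fails the scheduled pipeline and raises the alarm.
    assert not down, f"Services unavailable: {down}"
```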
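Level 5 pairs random data selection with ordinary assertions; the custom extractor mentioned above works along these lines but actively hunts for unusual rows. A simplified sketch using a plain random sample (table, column, and endpoint names are hypothetical, with sqlite3 standing in for the real DB driver):

```python
import sqlite3

import requests

BASE_URL = "https://api.example.internal"


def sample_user_ids(db_path, n=20):
    """Pull a random sample of real IDs to replay against the API."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id FROM users ORDER BY RANDOM() LIMIT ?", (n,)
    ).fetchall()
    conn.close()
    return [row[0] for row in rows]


def test_api_with_random_real_data():
    for user_id in sample_user_ids("users.db"):
        resp = requests.get(f"{BASE_URL}/users", params={"id": user_id}, timeout=5)
        # Edge cases surface as 5xx errors or malformed bodies.
        assert resp.status_code == 200, f"Failed for id={user_id}"
```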
I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I will also touch on the tooling we use to achieve this and the pros/cons of this approach.

Speakers

Shekhar Ramphal

Quality assurance technical lead, Allan Gray
Shekhar is passionate about software testing and is a computer engineer by qualification. He has experience in full stack testing in all areas, from manual QA, system design and architecture, to performance and security, as well as automation in different languages.


Friday June 4, 2021 13:30 - 14:10 EEST
Puupakusaal