NTD has ended


Deep dive
Thursday, June 4

13:30 EEST

Visual Regression Testing. Check Your Pixels!
This workshop examines what visual regression testing actually is, how it works (using BackstopJS as an example) and what added value it brings. There are some applications that may surprise you! In any case, visual regression tests are worth considering as a supplement to your current test suite!

Practical examples are shown, tips and tricks on getting started are given, methods for daily use are shared, and integration with an existing (testing) process is proposed. In the workshop part, BackstopJS will be used to put the theory into practice and get hands-on experience in a sandbox environment (*exact assignments are in development).
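BackstopJS itself is a Node.js tool driven by a JSON config, so the workshop exercises will use its own CLI. Purely to illustrate the core idea behind visual regression testing (diffing a baseline screenshot against a fresh capture, pixel by pixel, against a mismatch threshold), here is a minimal language-agnostic sketch in Python; images are modeled as flat lists of RGB tuples, where a real tool would decode PNGs:

```python
# Sketch of the pixel-diff idea behind visual regression tools such as
# BackstopJS: compare a baseline capture with a new capture and fail
# when the fraction of changed pixels exceeds a threshold.

def mismatch_ratio(baseline, test):
    """Fraction of pixels that differ between two equal-sized captures."""
    if len(baseline) != len(test):
        raise ValueError("captures must have the same dimensions")
    differing = sum(1 for a, b in zip(baseline, test) if a != b)
    return differing / len(baseline)

def assert_visually_equal(baseline, test, threshold=0.001):
    """Fail when more than `threshold` of the pixels changed.
    (BackstopJS exposes a similar knob per scenario, `misMatchThreshold`.)"""
    ratio = mismatch_ratio(baseline, test)
    if ratio > threshold:
        raise AssertionError(f"visual regression: {ratio:.2%} of pixels differ")

# Example: a 4-pixel 'screenshot' in which one pixel changed colour.
base = [(255, 255, 255)] * 4
new = [(255, 255, 255)] * 3 + [(200, 0, 0)]
print(mismatch_ratio(base, new))  # 0.25
```

The threshold is what makes this practical: anti-aliasing and font rendering differ slightly between machines, so a strict pixel-for-pixel equality check would be permanently red.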

Note: This workshop is derived from the initial presentation on this topic. It was submitted to the CFP as a workshop, but it can also be delivered as a track presentation if desired.

Max 20 people

Key takeaways:
  • End-to-end testing not only WITH the GUI but also testing OF the GUI (pixel perfect).
  • Automate more with less effort (as opposed to a Selenium WebDriver implementation).
  • Not only check layout and design, but also content and input/output!


Mehmet Sahingoz

Senior Test Consultant, Triage-IT
Mehmet calls himself ‘a tenacious test automation professional’. He drives and thrives when he and his team embrace each other’s strengths and weaknesses to keep sprinting. Using his skills to distill information vital for the refinement of the product, to detect and prevent...

Thursday June 4, 2020 13:30 - 15:30 EEST
Väike Saal
Friday, June 5

13:30 EEST

5 phases of API test automation
In my context we run a microservice architecture with a large number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then assert the response code or body for valid data / type definitions. This proved more and more challenging as the push for CI and having common data sources meant dependencies would go up and down per deployment, which meant flaky tests.

I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit integration tests):
  1. Mocked black box testing - where you start up an API (as a Docker image) identical in version to the one that would go to PROD, but mock out all of its surrounding dependencies. This gives you the freedom to test any known data permutations, and you can simulate network failures or failure states of those dependencies.
  2. Temporary name-spaced API in your CI environment - here you start up the API as it would run in a normal integrated environment, but in a temporary space that can be completely destroyed if tests fail… it never gets to the deploy stage, and there is no need to roll back if errors/failures occur. Here we use Kubernetes and CI configuration to orchestrate these tests. The tests focus on checking the 80-20 functionality and confirming that the API will meet all the acceptance criteria.
  3. Post deployment tests - usually called smoke testing, to verify that an API is up and critical functionality is working in a fully integrated environment. We should be happy by now, right? Fairly happy that the API does what it says on the box… but wait, there is more...
  4. Environment stability tests - tests that run every few minutes in an integrated environment and make sure all services are highly available, given the deployments that have completed successfully. Here we use GitLab to control the scheduling.
  5. Data explorer tests - these are tests that run periodically but use some randomization to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding those edge cases that are usually missed: often low-occurrence but generally high-risk issues. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data.
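The first level can be sketched compactly. In the talk's setup the API under test runs as a Docker image; purely as an illustration (all names here are hypothetical), the sketch below scripts a downstream dependency with a local stub HTTP server, which is what gives level 1 its freedom to stage any data permutation or failure state:

```python
# Sketch of level 1 (mocked black-box testing): the API under test is
# real, but its downstream dependency is replaced by a local stub whose
# responses the test scripts. Names and routes are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubDependency(BaseHTTPRequestHandler):
    # Scripted responses: known paths return canned data, anything else
    # simulates the dependency being down.
    responses = {"/price": (200, {"amount": 42})}

    def do_GET(self):
        status, body = self.responses.get(self.path, (500, {"error": "down"}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubDependency)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

def query(path):
    """In a real level-1 test the API container would be configured to
    call `base` instead of the real dependency; here we assert directly
    against the stub to keep the sketch self-contained."""
    with urllib.request.urlopen(base + path) as resp:
        return resp.status, json.load(resp)

status, body = query("/price")
assert status == 200 and body == {"amount": 42}
print("stubbed dependency returned", body)
```

Because the stub's responses are just data, simulating a timeout, a malformed body, or a 500 from the dependency is a one-line change rather than an exercise in breaking a shared environment.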
I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I would also touch on the tooling we use to achieve this and the pros/cons of this approach.
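Level 5 hinges on randomized inputs. As a hedged sketch of that idea (the function names and the validation rules are invented; the talk's real version extracts strange data sets from production DBs and invokes live endpoints), a seeded generator mixes awkward values into otherwise ordinary payloads:

```python
# Sketch of level 5 (data-explorer tests): generate randomized request
# payloads to probe for edge cases that hand-written cases miss.
# `validate_payload` is a hypothetical stand-in for the API call; the
# real test would POST each payload and assert on the response.
import random
import string

def random_payload(rng):
    """Build a randomized request body, deliberately mixing in awkward
    values (empty strings, control characters, emoji, huge numbers)."""
    awkward = ["", " ", "\u0000", "💥", "9" * 1000]
    name = rng.choice(awkward + ["".join(rng.choices(string.ascii_letters, k=8))])
    return {"name": name, "quantity": rng.choice([-1, 0, 1, 2**31])}

def validate_payload(payload):
    """Hypothetical input validation for the endpoint under test."""
    return bool(payload["name"].strip()) and 0 < payload["quantity"] < 10_000

rng = random.Random(42)  # seeded, so any failure is reproducible
rejected = sum(1 for _ in range(100) if not validate_payload(random_payload(rng)))
print(f"{rejected}/100 randomized payloads rejected")
```

Seeding the generator is the important design choice: the exploration is random, but a failing run can be replayed exactly, which is what turns a lucky find into a regression test.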


Shekhar Ramphal

Quality assurance technical lead, Allan Gray
Shekhar is passionate about software testing and is a computer engineer by qualification. He has experience in full-stack testing in all areas, from manual QA, system design and architecture, to performance and security, as well as automation in different languages.

Friday June 5, 2020 13:30 - 14:10 EEST
