Forming an API Test Strategy - Where to Start


Introduction

When beginning an API program it is important to form a strategy around testing.

Agreeing on the approach and tooling to be used for API testing is key both in terms of confidence in the APIs being developed and reducing maintenance costs of the API catalogue.

This article will cover some of the principles behind a sound test strategy but will not be a deep dive into technical details, there are references at the bottom of the article where the technical details are discussed in more depth.

Objectives

The main goal of any test strategy is to agree on how to automate both an integration-level and, if necessary, a unit-level test harness in order to:

  1. Prove the implementation is working correctly, i.e. find bugs
  2. Ensure the implementation is working as specified, i.e. according to the requirements specification
  3. Prevent regressions between releases

The key here is automation: irrespective of the various techniques employed by QA teams, humans cannot execute complex test scenarios as consistently, as frequently or as quickly as machines can.

  • There are tools available to continuously run unit tests whenever a code file is saved on disk
  • There are continuous integration systems to run unit and integration tests when code is committed to a local branch or to the central repo
  • Test scripts can be scheduled to run every x minutes, continuously checking for certain conditions 24x7

These test harnesses can be reused to build monitoring checks for health-checking purposes, e.g. "make a hotel room reservation, then cancel the booking" repeated every 2 minutes.
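
For example, an integration-test step could be reused as a scheduled health check along these lines (a rough sketch; the host, endpoints, status codes and interval below are hypothetical placeholders):

// health-check.js - reuse a "book then cancel" flow as a repeating health check
var request = require('request');

function bookAndCancel(done) {
  // Hypothetical booking API: create a reservation, then cancel it
  request.post({ url: 'https://api.example.com/v1/reservations', json: { roomId: 101 } },
    function (err, res, body) {
      if (err || res.statusCode !== 201) return done(new Error('booking failed'));
      request.del('https://api.example.com/v1/reservations/' + body.id,
        function (err2, res2) {
          if (err2 || res2.statusCode !== 204) return done(new Error('cancellation failed'));
          done();
        });
    });
}

// Repeat every 2 minutes; in practice the result would feed a monitoring/alerting system
setInterval(function () {
  bookAndCancel(function (err) {
    console.log(new Date().toISOString(), err ? 'FAIL: ' + err.message : 'OK');
  });
}, 2 * 60 * 1000);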

Monitoring Strategies will be the subject of a separate article.

Questions

There are a number of key questions to ask when formulating your strategy in order to make sure it reflects your current level of testing and Agile maturity.

  • What is the current testing strategy being used? Understanding the strengths and weaknesses of your existing testing practice, and building on it, is key to forming your API testing strategy.
  • Are you familiar with testing automation? Are tests currently automated? Do you have the tools available to automate new tests as they are built?
  • Are you familiar with Test Driven Development (TDD) and Behaviour Driven Development (BDD) concepts and principles? Bringing together the development and test phases of any API build gives a more coordinated approach to building APIs and can bring significant benefits, which we'll discuss later in this article.
  • How stable and reliable are the non-production versions of the target systems being exposed? This will inform decisions around how much to mock the target systems.
  • How mature is your Agile methodology in general? See http://community.apigee.com/articles/2935/agile-assurance-advice-for-starting-the-agile-jour.html for some advice on assessing your Agile maturity.

Integration Testing

Arguably the most important testing for an API layer is integration testing. These tests simulate an API client sending certain request combinations and assert the response received from the API.

For Apigee integration testing, the objective is to test the client-to-Apigee, Apigee-to-target and target-to-backend integrations, plus all integrations in between, e.g. to other third-party APIs.

Policy Coverage

The objective for integration testing is to build a test harness that executes each and every policy - perhaps with many requests running multiple scenarios. We need to use our judgement and the Trace tool to understand what percentage of policies is covered by the test scenarios.

Apigee does not have any built-in tooling to report test coverage.

Target Mocking

Mocking target systems may help automate testing and is recommended in the following scenarios:

  • When target APIs are not mature or reliable enough
    • Availability of target APIs - deployment, migrations, lifecycle-impedance
    • When target APIs are being developed at the same time as Apigee proxies - create independent and parallel development streams for target APIs and Apigee proxies
    • There are network, systems, data stability or maturity issues
  • When data is constantly changing, or the nature of the data is such that it cannot be asserted consistently using automated testing
  • When it is not possible (or very difficult) to simulate certain scenarios for testing purposes
    • 5xx errors from target
    • timeouts
    • data collisions, conflicts
  • When tests rely on previous data population, e.g. change password, password reset, duplicate email and forgotten password cases
  • When the target API has poor response times, e.g. an API that responds in 2 minutes; tests need to be fast and relatively cheap to execute

Building mocks for target systems should be seen as a long-term strategy; the cost of keeping these mocks up to date, and the time invested in building them, should be considered as part of the testing strategy.
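
As a minimal sketch of what a target mock might look like (assuming Node.js with Express; the routes and payloads are hypothetical placeholders), covering some of the hard-to-reproduce scenarios above:

// mock-target.js - a throwaway mock of the target system for integration tests
var express = require('express');
var app = express();

// Happy path: stable, predictable canned data so assertions never flake
app.get('/reservations/123', function (req, res) {
  res.json({ id: '123', status: 'CONFIRMED' });
});

// Simulate a 5xx from the target, which is hard to trigger on demand against a real system
app.get('/reservations/boom', function (req, res) {
  res.status(503).json({ error: 'backend unavailable' });
});

// Simulate a slow target to exercise Apigee timeout handling
app.get('/reservations/slow', function (req, res) {
  setTimeout(function () {
    res.json({ id: 'slow', status: 'CONFIRMED' });
  }, 60000);
});

app.listen(9000, function () {
  console.log('Mock target listening on port 9000');
});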

Assertion Points

Integration tests generally assert the following points:

  • Target request: assert parts of the HTTP request Apigee makes to the target, e.g. URL, headers, body, etc. This is generally done by defining a special route that hits a request-mirroring mock which echoes Apigee's request back; httpbin.org is good for that purpose (see the sketch below).
  • Apigee response: assert parts of the HTTP response sent by Apigee to the client, e.g. status code, headers, body, etc.
  • Data processed/output by policies that is not visible in the target request or proxy response - this is generally done by collecting such data separately and echoing it back in the Apigee response as part of test headers.
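
As a small, hedged example of the first assertion point (assuming mocha, chai and the request library, and an Apigee proxy route that forwards to httpbin.org; the proxy URL and expected header are hypothetical):

// target-request.test.js - assert what Apigee actually sends to the target via an echoing mock
var request = require('request');
var expect = require('chai').expect;

describe('Target request assertions', function() {
  it('should add the x-correlation-id header to the target request', function(done) {
    // The proxy is assumed to route this call to https://httpbin.org/anything,
    // which echoes the request it received back in the response body.
    request('https://testmyapi-test.apigee.net/echo/anything', function(error, response, body) {
      if (error) return done(error);
      var echoed = JSON.parse(body);
      expect(echoed.headers).to.have.property('X-Correlation-Id'); // header assumed to be injected by an Apigee policy
      done();
    });
  });
});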

Using BDD for integration testing

Integration testing using low-level code that only developers can read, understand and run is useful, but its reach is limited.

BDD introduces a business-centric vocabulary (e.g. Gherkin) that is specifically built to improve understandability and participation by a business audience.

By writing specifications in plain text with vocabulary that is common to the entire organisation, we are documenting the scenarios, expected outcomes and our understanding of what the software should do all in one go.

This will then improve communication between teams - technical and non-technical.

Compare the test scripts below and assess their intended audience, usefulness and reach within the organisation:

// Example 1: plain mocha/chai integration test (most technical audience)
var async = require('async');
var request = require('request');
var chai = require('chai');
var expect = chai.expect;
var assert = chai.assert;
var weatherData = require('./weatherData'); // test data provider (path assumed)

describe('Check weather in cities', function() {
  async.each(weatherData.simpleWeatherArray(), function(cityData, callback) {
    it('you should be able to get forecast weather for ' + cityData.name + ' from this API Proxy.', function(done) {
      var options = {
        url: cityData.url, // e.g. 'https://testmyapi-test.apigee.net/weathergrunt/apigee/forecastrss?w=2502265'
        headers: {
          'User-Agent': 'request'
        }
      };
      request(options, function(error, response, body) {
        expect(body).to.contain(cityData.name); // e.g. Sunnyvale, Madrid
        assert.equal(cityData.responseCode, response.statusCode);
        done();
      });
    });
    callback(); // tests are registered synchronously; async.each just iterates the data set
  });
});

# Example 2: Gherkin with low-level HTTP steps
Scenario: Check weather in cities
	Given I set "User-Agent" header to request
	When I GET /weathergrunt/apigee/forecastrss?w=2502265
	Then response body should contain "Sunnyvale, Madrid"
	And response code should be 200

# Example 3: Gherkin with business-level steps
Scenario: Check weather in cities
	Given I am looking for information about Sunnyvale, Madrid
	When I make a request for weather information
	Then response should be for "Sunnyvale, Madrid"
	And response should be valid

The further down the list you progress, the more 'human readable' the test cases become and the less technical knowledge is required to write them, but additional 'wiring' will be required to get the tests to run.
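
As a rough illustration of that wiring, the business-level steps from Example 3 could be backed by step definitions along these lines (a hedged sketch assuming cucumber-js with chai and the request library; the step wording, endpoint and response shape are hypothetical):

// step_definitions/weather.js - hypothetical step definitions wiring Gherkin to HTTP calls
var cucumber = require('@cucumber/cucumber');
var Given = cucumber.Given, When = cucumber.When, Then = cucumber.Then;
var request = require('request');
var expect = require('chai').expect;

Given(/^I am looking for information about (.+)$/, function (cities) {
  this.cities = cities; // remembered on the scenario's World object
});

When(/^I make a request for weather information$/, function (callback) {
  var self = this;
  request('https://testmyapi-test.apigee.net/weathergrunt/apigee/forecastrss?w=2502265',
    function (error, response, body) {
      self.response = response;
      self.body = body;
      callback(error);
    });
});

Then(/^response should be for "([^"]*)"$/, function (cities) {
  expect(this.body).to.contain(cities.split(',')[0].trim());
});

Then(/^response should be valid$/, function () {
  expect(this.response.statusCode).to.equal(200);
});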

When assessing how far to go down the BDD route a few questions worth asking are

  1. Who will be writing the original use cases?
  2. Who will be converting these into test cases?
  3. How technical are the different members of the API Team?
  4. What audience needs to understand the test cases?

Whatever approach you decide on needs to maximise the skills of the team you have and minimise communication barriers between technical and non-technical members of the team, from the Product Owner through to developers and testers.

If everyone has a clear, common understanding of what the team is trying to deliver, this will clearly improve the quality of what is produced and will also minimise the need for rework and the likelihood of misunderstanding requirements during development.

Unit Testing

With an Apigee implementation, when we talk about unit testing we are mostly referring to "unit testing" custom code written in JavaScript, Java, Python or Node.js as extension policies.

Scenarios where unit testing is useful are:

  • Operations that our integration testing cannot intercept and therefore cannot assert. One example is service callouts to external APIs, such as a Loggly integration somewhere between client and target. Another example might be testing JavaScript code that performs IP blacklisting/whitelisting, where you need to simulate different client IP addresses arriving at Apigee, possibly through several proxies in the middle (a sketch of unit testing such code appears at the end of this section).
  • Very important code - e.g. security code, encryption code, signature generators/validators.
  • Where coverage is extremely important, e.g. security code (again)

Unit testing has nice-to-have advantages over integration testing:

  • Code can be tested locally without the need to deploy it to Apigee first
  • This enables us to create hooks (e.g. git pre-commit hooks) to enforce testing with coverage before we commit or deploy
  • Much faster to execute than integration testing (no network activity, etc)

Any test strategy needs to balance the amount of unit vs integration testing performed. Overlap between the two is inefficient and requires test maintenance in two places, so consider carefully what you're looking to test and the most efficient way of testing it.
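
As a minimal sketch of what a local unit test for the IP filtering example above might look like (assuming mocha, chai and sinon; ipFilter.js, its function signature and the variable names are hypothetical stand-ins for your own extension code):

// test/ipFilter.test.js - unit test for a hypothetical JavaScript policy, run locally without deployment
var sinon = require('sinon');
var expect = require('chai').expect;
var ipFilter = require('../src/ipFilter'); // hypothetical module under test

describe('ipFilter', function () {
  it('flags requests coming from a blacklisted IP', function () {
    // Stub the Apigee flow context so we can simulate any client IP we like
    var context = {
      getVariable: sinon.stub().returns('10.0.0.1'),
      setVariable: sinon.spy()
    };
    ipFilter.check(context, ['10.0.0.1']); // hypothetical signature: (context, blacklist)
    expect(context.setVariable.calledWith('ipfilter.blocked', true)).to.equal(true);
  });
});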

Apigee environments and testing

It is important to analyse what level of testing will be performed in each Apigee environment as a bundle is promoted on the path to production.

In general, 'moving testing to the left' is the suggested approach: as much testing as possible is done as early as possible, shifting towards smoke testing only in the environments further to the right, i.e. those closer to production.

This can be used as a suggested starting point:

  • DEV: unit testing and integration testing. Target mocked if possible in order to simulate every condition easily and quickly.
  • INTG: unit testing and integration testing. Target pointing to an instance of the non-prod API, using a different dataset than the DEV environment. Some tests might be disabled, e.g. target 500 handling.
  • Other non-prod environments: unit testing and smoke testing. Mostly testing to ensure successful deployment and configuration, with very simple assertions.
  • PROD: smoke testing. Mostly testing to ensure successful deployment and configuration.

Performance Testing

The main objective is to find the capacity limit point of the whole system: the uppermost load at which the system still behaves within the acceptable range in terms of response latency and number of successful transactions. This point is then used for capacity planning and to define SLAs.

Up until we hit the capacity limit point, average response times stay stable and the relative increase in response times is very small. When we hit this point:

  • the relative rate of increase in response times grows exponentially
  • even though we continue to increase the load on the system, the rate of successful responses stays more or less the same
  • the error rate starts to increase

We may not see all of those at the same time. For example, it is possible to see an increase in error rate without degradation in latency.

The way we usually conduct performance testing:

  1. Come up with a test plan, document this and agree with the customer
  2. Select a performance testing tool and implement test scripts that execute the test plan
    1. Depending on the tool you choose, you can borrow/reuse functional test scripts and modify them accordingly
  3. Extract reports from testing tool and record them in a child page of the test plan

The readings we are interested in for classic REST API testing are TPS values and response times. If we are testing on a private cloud, then hardware readings from each node are also important, e.g. the classic memory, CPU and IO trio.
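
As a minimal, hedged sketch of how such readings could be collected (the endpoint, concurrency and request count below are hypothetical placeholders; in practice a dedicated load testing tool would normally be used):

// load-sketch.js - fire a fixed number of concurrent requests and report TPS and latency
var request = require('request');

var URL = 'https://testmyapi-test.apigee.net/weathergrunt/apigee/forecastrss?w=2502265'; // hypothetical
var TOTAL = 500;       // total requests to send
var CONCURRENCY = 25;  // requests in flight at any time

var latencies = [], errors = 0, sent = 0, completed = 0;
var startTime = Date.now();

function fireOne() {
  if (sent >= TOTAL) return;
  sent++;
  var t0 = Date.now();
  request(URL, function (err, res) {
    completed++;
    if (err || res.statusCode >= 500) errors++;
    else latencies.push(Date.now() - t0);
    if (completed === TOTAL) return report();
    fireOne(); // keep the concurrency level topped up
  });
}

function report() {
  var elapsedSec = (Date.now() - startTime) / 1000;
  latencies.sort(function (a, b) { return a - b; });
  console.log('TPS:', (completed / elapsedSec).toFixed(1));
  console.log('p50 latency (ms):', latencies[Math.floor(latencies.length * 0.5)]);
  console.log('p95 latency (ms):', latencies[Math.floor(latencies.length * 0.95)]);
  console.log('errors:', errors);
}

for (var i = 0; i < CONCURRENCY; i++) fireOne();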

Mocking during performance testing

If we are testing Apigee proxy performance only - independent of performance characteristics of target systems, we can choose to:

  1. Mock all external systems - it is important to configure the mocks to respond with the average response times of the respective endpoints to make the test as realistic as possible.
  2. Disable callouts to external systems, e.g. add a new route rule with an empty target and pass the requests through it.

This is important if you want the readings to be isolated to the Apigee infrastructure only.
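
A hedged sketch of option 1, assuming an Express-based mock whose delay comes from the measured average response time of the real endpoint (the route, payload and timing here are hypothetical):

// perf-mock-target.js - respond with canned data after a delay matching the real target's average latency
var express = require('express');
var app = express();

var AVERAGE_LATENCY_MS = 180; // hypothetical: measured average response time of the real endpoint

app.get('/forecastrss', function (req, res) {
  setTimeout(function () {
    res.json({ city: 'Sunnyvale', forecast: 'sunny' }); // canned payload
  }, AVERAGE_LATENCY_MS);
});

app.listen(9001, function () {
  console.log('Mock target listening on port 9001');
});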

Summary

Forming a test strategy that reflects the skills of your team, the complexity of your APIs and the maturity of your Agile process should be a key part of planning your API Program.

If the strategy is too lightweight then quality will suffer and your APIs will become costly to maintain as too much time will have to be spent on bug fixing and manual regression testing.

If the strategy is over-engineered then the cost of testing may outweigh the benefits and more time can be spent setting up testing than developing APIs.

The testing strategy should ensure that the Product Owner, Developers and Testers have a clear idea of the functionality being developed and the test coverage required.

References

Agile Maturity

http://community.apigee.com/articles/2935/agile-assurance-advice-for-starting-the-agile-jour.html

Unit Testing

https://community.apigee.com/articles/3964/unit-testing-javascript-code-with-mocha-sinon-and.html

https://community.apigee.com/articles/4188/unit-testing-code-coverage.html

BDD

https://community.apigee.com/articles/2685/apickli-rest-api-integration-testing-framework-bas.html
