What do you think is the role of QA in an API Program?

I was discussing in the office what we (as API Proxy Developers) expect from the QA team and came up with a wish list:

1) Functionality

Making sure the proxy fulfills the business requirements. Ideally this is incorporated into some kind of test-driven development, where tests are fed into the development cycle, the proxy developers automate their execution, and passing them forms part of the definition of done. In this scenario the QA team would contribute to the test scenarios that the development team is actively building during the sprint.
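
For illustration, here is a minimal sketch of the kind of automated functional test that could feed that definition of done. Everything in it is an assumption for illustration: the api.example.com base URL, the /customers resource, and the contract fields are all hypothetical, not a real API. It would run with pytest and the requests library.

    # Hypothetical happy-path test for an API proxy (pytest + requests).
    # BASE_URL and the /customers resource are illustrative assumptions.
    import requests

    BASE_URL = "https://api.example.com/v1"

    def test_get_customer_returns_expected_fields():
        """Happy path: a known customer is returned with the agreed contract fields."""
        resp = requests.get(f"{BASE_URL}/customers/42", timeout=5)
        assert resp.status_code == 200
        body = resp.json()
        for field in ("id", "name", "status"):  # assumed contract fields
            assert field in body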

2) Performance

This is likely outside the standard development cycle but critical to overall API delivery. The QA team would be responsible for building and automating this testing, with support from the development team as required.
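
To make that concrete, here is a minimal latency probe against the same illustrative endpoint as above. A real performance suite would use a dedicated load tool (JMeter, Gatling, etc.); this only sketches the shape of what the QA team would automate.

    # Naive concurrent latency measurement against an assumed endpoint.
    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://api.example.com/v1/customers/42"  # illustrative endpoint

    def timed_call(_):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=20) as pool:
        latencies = sorted(pool.map(timed_call, range(200)))

    print(f"p50={statistics.median(latencies) * 1000:.0f}ms "
          f"p95={latencies[int(len(latencies) * 0.95)] * 1000:.0f}ms")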

3) API Experience

This is not something the QA team traditionally has responsibility for, but it would provide a significant benefit. Even if the API fulfills the functional and performance requirements, how easy is it to use? How easy is the related documentation to understand? How easy is it to take advantage of new API functionality or move to another version of the API? These kinds of API consumer experiences can be the difference between an API being widely adopted and one withering on the vine, so shouldn't they count as part of the QA process? One idea would be for the QA team to develop a simple client application to measure "developer experience", acting as a member of the external/internal developer community. This would measure "speed of development", "ease of use", "documentation quality", etc.
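
One way to start putting a number on "developer experience": script the first call a brand-new consumer would make by following only the published docs, and time it. The endpoint and auth scheme below are hypothetical examples of such a probe; trending this metric across releases would give "speed of development" and "ease of use" something measurable.

    # Hypothetical "developer experience" probe: time a naive consumer's
    # first scripted call, written purely from the published documentation.
    import time

    import requests

    def time_first_successful_call(api_key: str) -> float:
        """Seconds from starting the scripted call to the first 2xx response."""
        start = time.perf_counter()
        resp = requests.get(
            "https://api.example.com/v1/customers",  # assumed entry-point resource
            headers={"Authorization": f"Bearer {api_key}"},  # assumed auth scheme
            timeout=10,
        )
        resp.raise_for_status()
        return time.perf_counter() - start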

What do you think? Would it be useful to expand the definition of the QA role in this way, and is this a fair description of the remit of the QA function?


This is a big topic and I use many words, especially when it's something I am passionate about!

tl;dr: QA is a process executed throughout the lifecycle, not a function executed by a single team; APIs are going to eat the world; transform traditional development and test teams into MicroServices DevOps and Test Enablement teams.

To start I think we need to separate the process of QA from the function of a specific QA team. Please indulge me as I generalize a bit before returning to some specifics.

QA is the process of assuring that something meets some criteria. You need to define these elements before you can do QA:

  • What are the criteria that your product is expected to meet?
  • How can the product's achievement against those criteria be measured?
  • Where/when are the appropriate points through the product lifecycle to measure?

A holistic approach to QA of any software product/program should consider:

WHAT:

  • Functionality: does it work the way it's expected to, both happy path and negative (see the sketch after this list)?
  • Performance and Scalability: does it scale and handle volumes the way it's expected?
  • Reliability: is it stable and predictable?
  • Security: does it protect sensitive data, and does it resist and detect attempts to exploit it?
  • Compliance: does it meet legal/regulatory requirements and company standards?
  • Usability/Experience: is it discoverable, intuitive, consistent and efficient?
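
On the negative-path point above, a couple of examples using the same hypothetical endpoint as earlier. The exact 404/401 behaviours shown are assumptions about the contract, not documented guarantees:

    # Negative-path counterparts to the earlier happy-path test.
    import requests

    BASE_URL = "https://api.example.com/v1"  # illustrative, as before

    def test_unknown_customer_returns_404():
        resp = requests.get(f"{BASE_URL}/customers/does-not-exist", timeout=5)
        assert resp.status_code == 404

    def test_unauthenticated_write_is_rejected():
        resp = requests.post(f"{BASE_URL}/customers", json={}, timeout=5)
        assert resp.status_code in (401, 403)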

HOW:

  • Techniques: define the most appropriate approaches/experiments (tests) to validate that the product meets the criteria.
  • Tools: identify existing tools or requirements to create new tools that make it as easy as possible to execute the experiments in a repeatable and efficient manner.

WHEN:

  • Maturity: identify the points along the product lifecycle where it's most appropriate to perform the experiments, and keep each experiment as close as possible to its point so you identify issues as early as possible.

WHO:

  • Team(s): identify who is responsible for carrying out the above evaluations.

It's a paradox that a combination of art and science, experience and naivety, familiarity and ignorance is needed in order to comprehensively execute QA.

You need to know enough about how it's supposed to work to know what needs to be tested. But you also need to know nothing about how it's supposed to work in order to free yourself from your own assumptions and discover the problems that consumers of the product will discover if (when) you don't.

In a traditional software development process, this is implicitly part of the reason for the separation and specialization of roles into developer and tester: it's hard for the same team to do both effectively.

The power of APIs is only just starting to be realized.

APIs are ushering in a world of unimaginable complexity from a testing perspective. The number of permutations and combinations enabled through APIs will far outpace the ability to execute tests to cover all of those. But it's this very complexity that enables extraordinary innovation and opportunity.

The scale at which APIs will operate, a world of trillions of API calls, will also reveal obscure problems more frequently. For example, if a given bug manifested itself once in a billion requests and you were processing 1 million requests a day, you could go roughly 3 years between occurrences. At API scale, processing 1 billion or more requests per day, you would see it every day.
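
The arithmetic behind that claim, as a quick sanity check:

    # Expected time between occurrences of a 1-in-a-billion bug.
    bug_rate = 1 / 1_000_000_000

    for daily_requests in (1_000_000, 1_000_000_000):
        days_between = 1 / (bug_rate * daily_requests)
        print(f"{daily_requests:>13,} req/day -> one occurrence every "
              f"{days_between:,.0f} day(s) (~{days_between / 365:.1f} years)")
    # 1,000,000 req/day     -> every 1,000 days (~2.7 years)
    # 1,000,000,000 req/day -> every day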

This is Digital Transformation kicking into high gear and accelerating the fundamental restructuring of software development already underway. The separation between Developers and Testers needs to transform. The world of MicroServices and DevOps is beckoning.

Decomposing into MicroServices enables smaller, more agile development teams that can take full responsibility for the whole life of their service. Developers cannot develop in isolation from the real production consumption of their code. However, developers cannot truly do it all either. To embrace this new world, instead of a traditional test team, invest in a test enablement team. The difference seems small but is profound.

  1. Test enablement teams are responsible for creating and running production-like environments: understanding production use cases and traffic, creating synthetic traffic patterns based on this knowledge (a sketch follows this list), and operating these environments so that they provide a grueling proving ground. They supplement that by providing tools the MicroServices teams can use to do their own testing, along with transparent visibility of test coverage and results.
  2. MicroServices teams deploy their services into that shared integration environment and, using the provided tools, execute service-specific tests, monitor the health of their services, and investigate and resolve service issues before releasing to production.
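
As promised above, a sketch of what synthetic traffic generation might look like. The request mix and endpoints are invented; a real enablement team would derive the weights from production access logs and run the generator continuously at a target rate.

    # Replay a weighted mix of request types modelled on production usage.
    import random

    import requests

    TRAFFIC_MIX = [  # (method, url, weight) - hypothetical values
        ("GET",  "https://api.example.com/v1/customers/42", 0.70),
        ("GET",  "https://api.example.com/v1/orders?page=1", 0.25),
        ("POST", "https://api.example.com/v1/orders", 0.05),
    ]

    def fire_one():
        method, url, _ = random.choices(
            TRAFFIC_MIX, weights=[w for *_, w in TRAFFIC_MIX]
        )[0]
        requests.request(method, url, timeout=10)

    for _ in range(1000):  # a real generator would run continuously
        fire_one()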

So in summary, the traditional boundaries between "developer" and "tester" need to be challenged, broken, and remade. Developers need to own the whole life of their services and need to be enabled to perform their own QA rather than relying on an external test team to do it for them.

More thoughts on forming a general testing strategy here: https://community.apigee.com/articles/16169/forming-an-api-test-strategy-where-to-start.html

Summary

Forming a test strategy that reflects the skills of your team, the complexity of your APIs and the maturity of your Agile process should be a key part of planning your API Program.

If the strategy is too lightweight, quality will suffer and your APIs will become costly to maintain, as too much time will have to be spent on bug fixing and manual regression testing.

If the strategy is over-engineered, the cost of testing may outweigh the benefits, and more time can be spent setting up testing than developing APIs.

The testing strategy should ensure that the Product Owner, Developers and Testers have a clear idea of the functionality being developed and the test coverage required.