Continuous integration is a software development practice where members of a team integrate their work frequently and each integration is verified by an automated build. This results in multiple integrations a day, allowing the team to develop cohesive software more rapidly. Therefore, it is an essential practice for any agile team.
Continuous integration reduces integration headaches (conflicts, merge issues) and increases the speed of finding and fixing code errors. Its automated build verification step ensures the system still works as expected after many integrations and helps build confidence in the code.
The objective of this article is to explain the continuous integration practice with respect to API proxy development and to detail some of the techniques and tools used for achieving it within the Apigee platform.
This article mentions some of the tools used by the Apigee Customer Success team; however, it doesn't necessarily recommend those tools over others. The emphasis should be on the capabilities of the tools rather than the tools themselves.
The continuous integration cycle consists of the following steps:
Source control is an integral part of any software development project. It is where all source files, configuration, commits and history live - therefore it is where most of the integration happens between multiple developers in a team. All commits for a particular project must be made against a single mainline branch within a repository - we call this the master branch.
Source control is the single source of truth for all code and configuration, even if an artifact repository (such as an Artifactory-type system) is also being used to store packaged project artifacts. Once the artifact package is constructed from SCM, it can be reused in subsequent build iterations as a ready-made package, bypassing the initial package creation from SCM.
More information on source control, especially for API proxy development can be found here: https://community.apigee.com/articles/34868/source-control-for-api-proxy-development.html
So our virtuous cycle starts with developers within a team pushing their feature branches, containing a number of commits, to the repository (origin).
Apigee has the functionality to execute custom code written in Java, JavaScript and Python placed in any flow (request or response). It can also execute Node.js code placed as a target of the API proxy. It is recommended that you run static code analysis on this custom code to promote consistency and language-specific best practices within the team.
There are various tools available to do this for all of those languages. Most of them allow editor integration and source control hooks that can be adopted in the development environment. Most of the popular task runners offer watchers that can be used to trigger such analysis on changed files only. So there is really no excuse not to run quality analysis on code during local development.
When it comes to integration, though, we shouldn't rely solely on local analysis; we need to integrate such tools into our continuous integration process so that it automatically catches any quality issues that are missed in the local development environment. Our recommendation is to choose a CI tool that can run your code quality analysis and fail the build fast if it finds any problems.
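As an illustration, a minimal CI quality gate can parse the linter's machine-readable report and fail the build as soon as any error is found. The Python sketch below assumes an ESLint-style JSON report (a list of per-file results with an `errorCount` field, as produced by ESLint's `--format json`); the file names are hypothetical:

```python
import json


def lint_gate(report_json):
    """Given an ESLint-style JSON report (a list of per-file results,
    each with an "errorCount" field), return True when the build may
    proceed and False when it should fail fast."""
    results = json.loads(report_json)
    return sum(f.get("errorCount", 0) for f in results) == 0


# A clean report lets the build continue; any error count fails it.
clean = json.dumps([{"filePath": "apiproxy/resources/jsc/set-headers.js",
                     "errorCount": 0, "messages": []}])
dirty = json.dumps([{"filePath": "apiproxy/resources/jsc/validate.js",
                     "errorCount": 2, "messages": []}])
```

In a real pipeline the report would come from running the linter over the proxy's `resources` folder, and the CI job would exit non-zero when the gate returns False.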
Code analysis tools come with predefined rules that check various aspects of the code. Your team may not agree with some of the rules or conventions the tool is enforcing, but nowadays most tools have the option of disabling individual rules so you can configure the tool according to your team's conventions.
Running such analysis is especially useful if the whole team, or some members, are new to the language, as you will be warned of common pitfalls, e.g. using == where === is intended in JavaScript.
There are two main areas where it is recommended to write unit tests for Apigee API proxies:
Test coverage is especially important for the second area mentioned above. Integration testing on its own will not give you any coverage indicators, so if you rely on code coverage for such complex code, unit testing is the way to go.
Other advantages of unit testing compared to integration testing for Apigee API proxies are:
It is important to write unit tests for the custom code that your team is developing rather than for the out-of-the-box policies built by Apigee, which have already passed through extensive testing before being made available in the product. So the recommendation is to set the testing boundary to Java Callout, JavaScript Callout and Python Script policies and Node.js targets only.
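For example, keeping the logic of a custom policy in a plain, testable function makes unit testing straightforward. The helper below is hypothetical, standing in for the kind of logic you might place in a Python Script policy, with a unit test alongside it:

```python
import unittest


def mask_pan(pan):
    """Hypothetical helper extracted from a Python Script policy: mask
    all but the last four digits of a card number before logging it."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 4:
        return "****"
    return "*" * (len(digits) - 4) + digits[-4:]


class MaskPanTest(unittest.TestCase):
    def test_masks_all_but_last_four(self):
        # 16 digits in, 12 masked characters plus the last 4 out.
        self.assertEqual(mask_pan("4111 1111 1111 1234"), "************1234")

    def test_short_input_fully_masked(self):
        self.assertEqual(mask_pan("12"), "****")
```

Because the function has no dependency on the Apigee runtime context, tests like these run in milliseconds in CI, long before any deployment happens.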
More information and implementation tricks on API proxy unit testing can be found:
The basic means of deploying an API proxy is the Apigee Management API. All other tools, including the Apigee Management UI, use these APIs to deploy a proxy.
This page from Apigee product documentation explains how API proxies are deployed to Apigee: http://docs.apigee.com/api-services/content/understanding-deployment
For deployment via the Apigee Management API, see http://docs.apigee.com/api-services/content/deploy-api-proxies-using-management-api and http://docs.apigee.com/management/apis/post/organizations/%7Borg_name%7D/environments/%7Benv_name%7D...
For deployment via the command line, see http://docs.apigee.com/api-services/content/deploying-proxies-command-line.
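As a minimal sketch of driving deployment from CI, the Python helpers below build the classic Edge Management API URLs for importing a bundle and deploying a revision. The organization, environment and proxy names are placeholders, and you should verify the exact endpoints against the documentation linked above:

```python
# Classic Apigee Edge management endpoint; adjust for your deployment.
MGMT_BASE = "https://api.enterprise.apigee.com/v1"


def import_url(org, proxy_name):
    """URL to POST a proxy bundle zip to, creating a new revision."""
    return "%s/organizations/%s/apis?action=import&name=%s" % (
        MGMT_BASE, org, proxy_name)


def deploy_url(org, env, proxy_name, revision):
    """URL to POST to in order to deploy a specific revision to an
    environment."""
    return "%s/organizations/%s/environments/%s/apis/%s/revisions/%s/deployments" % (
        MGMT_BASE, org, env, proxy_name, revision)
```

A CI job would POST the zipped bundle to `import_url(...)`, read the revision number from the response, and then POST to `deploy_url(...)` with appropriate credentials.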
There are also various tools that are built within the community as open source projects to help with deployment:
The recommendation is to integrate one of the above deployment tools with your CI to perform API proxy deployment.
Integration testing is one of the most obvious and important types of testing for API proxies. The general idea is to have your integration testing tool of choice simulate user requests hitting the API proxy, which will in turn hit your target APIs.
We should design the integration tests such that they execute requests against each endpoint and assert both behaviour and data. Examples of behaviour in API proxies are traffic management, the OAuth handshake or any particular behaviour exposed by the target or 3rd-party APIs. Examples of data are error codes, response payload values and structure.
Please note that an API proxy must be deployed to an Apigee environment before it can serve HTTP requests from the integration testing tool. Therefore the integration testing step must be executed after the deployment step in the CI configuration.
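One simple way to assert both behaviour and data consistently across such tests is a shared helper that checks the status code and the payload structure of each response. A sketch in Python, with illustrative names only:

```python
def check_response(status, payload, expected_status=200, required_fields=()):
    """Check behaviour (HTTP status code) and data (payload structure)
    of an API proxy response. Returns a list of failure descriptions;
    an empty list means the response passed."""
    failures = []
    if status != expected_status:
        failures.append("expected status %d, got %d" % (expected_status, status))
    for field in required_fields:
        if field not in payload:
            failures.append("missing field: %s" % field)
    return failures


# Behaviour example: a quota policy should answer 429 once exhausted,
# so a traffic-management test would call with expected_status=429.
ok = check_response(200, {"id": 1, "name": "a"}, required_fields=("id",))
```

The integration tool of your choice would feed real HTTP responses into a helper like this and fail the CI build whenever the failure list is non-empty.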
Please see the following Apigee community articles for testing strategies and implementation guidance:
As API proxy interfaces evolve over time, their documentation must also be kept in sync. Apigee Developer Services Portal has a feature called SmartDocs which lets you document your APIs in a way that makes the API documentation fully interactive. Interactive documentation means portal users can:
You can represent your APIs by creating a model using a WADL or OpenAPI (formerly Swagger) specification, which can be modified during development and pushed to the Apigee Developer Services Portal from CI using APIs, so that your documentation is kept up to date with the API proxy interface.
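Before pushing a specification from CI, it is worth failing the build on an obviously malformed spec. The sketch below (Python) runs a few hypothetical sanity checks on an OpenAPI 2.0 document represented as a dict; a real pipeline would use a full spec validator instead:

```python
def validate_openapi_stub(spec):
    """Minimal pre-push sanity checks on an OpenAPI 2.0 spec dict, so
    the documentation push step can fail fast on a broken spec.
    Returns a list of problems; empty means the spec looks sane."""
    problems = []
    if spec.get("swagger") != "2.0":
        problems.append("expected swagger version 2.0")
    if not spec.get("paths"):
        problems.append("spec declares no paths")
    if "title" not in spec.get("info", {}):
        problems.append("spec is missing info.title")
    return problems


good_spec = {
    "swagger": "2.0",
    "info": {"title": "Weather API"},
    "paths": {"/forecast": {}},
}
```

If the list comes back empty, the CI job proceeds to push the spec to the portal; otherwise it stops before publishing out-of-date or broken documentation.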
Please refer to the following Apigee community articles for documentation strategies and automating API documentation:
A typical Apigee deployment includes modifications to environment configuration together with policies and custom code. These include changes to KVM, cache resources, target servers, products, applications, keystores, truststores, etc. Apigee exposes management API resources that can be used to manage environment configurations.
The recommendation is to automate modifications to Apigee environment configuration during CI builds using Management API resources or the deployment tool of your choice.
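For example, a CI step that creates an environment-scoped key value map could build the Management API request body as below. This Python sketch assumes the classic Edge KVM payload shape (`{"name": ..., "entry": [...]}`); verify the exact format against the Management API reference for your version:

```python
def kvm_payload(name, entries, encrypted=False):
    """Build the JSON body for creating an environment-scoped key value
    map via the Management API. "entries" is a plain dict of key/value
    pairs; sorting keeps the payload deterministic across builds."""
    return {
        "name": name,
        "encrypted": encrypted,
        "entry": [{"name": k, "value": v} for k, v in sorted(entries.items())],
    }


payload = kvm_payload("backend-config", {"timeout": "30", "retries": "2"})
```

Keeping such configuration in source control and applying it through payloads like this, rather than hand-editing environments, is what makes the environment changes repeatable across test and production.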
The benefits include:
Refer to the following resources for implementation:
The overall process for continuous integration for Apigee proxies looks like the following:
The following general best practices for continuous integration also apply to Apigee proxy development:
On the whole, the main objective of CI is to reduce the risk of breaking changes reaching, and being deployed to, the production environment. Relying on an automated process that validates and deploys changes as they happen increases trust in the code and improves the speed at which new features and improvements are delivered to your target audience.
Awesome post @Ozan Seymen. Very insightful and great ideas for proxy/api development.
With Apigee API proxies being developed (created, configured and managed) in the cloud directly in the Edge management UI,
1. Can I assume these proxies are not included in the CI process? If not, do you expect the developer to export these proxies (manually or through management APIs) and check-in into git repo?
2. Are only custom plugin developments like custom policies, node.js / java code built outside (of the Edge UI) etc. considered for this CI process?
Thanks.
Yes, we can use Management API calls to manually push the proxy source to SCM. Within the CI/CD pipeline we can deploy to Apigee using a wide range of plugins, like the Apigee Maven Plugin.
Please check the Apigee Maven Plugin; it can do many things.
https://community.apigee.com/articles/8729/apigee-tools-plugins-apigee-development-made-easy.html
Hi @Vet D
1. Edge Management UI doesn't have any integration with source control at the moment. Therefore the only way is to download the revision and push it to source control manually. That is why I recommend editing proxy configuration and custom code locally in an editor of your choice instead of using the management UI, if it is your intention to do CI/CD.
2. Yes, a deployment "bundle" for Apigee contains all custom code under the /resources folder. The root folder ("apiproxy"), which contains all the code, is zipped.
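As an illustration of that structure, a stdlib-only Python sketch that zips a local `apiproxy` folder into a bundle (keeping paths relative so the archive root is `apiproxy/`, as the Management API import expects) could look like:

```python
import io
import os
import zipfile


def make_bundle(proxy_root):
    """Zip the "apiproxy" folder found under proxy_root into an
    in-memory deployment bundle. Archive entries are kept relative to
    proxy_root so every path in the zip starts with "apiproxy/"."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _dirs, files in os.walk(os.path.join(proxy_root, "apiproxy")):
            for fname in files:
                full = os.path.join(dirpath, fname)
                zf.write(full, os.path.relpath(full, proxy_root))
    return buf.getvalue()
```

The resulting bytes can then be uploaded to the Management API import endpoint from the CI job.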
Excellent post!
I see there is a case not covered: keeping the development pipeline of the backend system and the API itself in sync.
What is the best way to ensure changes in both components are aligned?
Thanks
Indeed a great post, thanks for sharing.
Timeless guidance, still valuable and relevant. Thank you!