This article discusses the methods and artefacts that can be shared or re-used in the API proxy development process. It also provides links to a set of tools that complement or support the reuse of these artefacts and that can be incorporated into the proxy development lifecycle.
Logic that is common to multiple proxies can be shared using files that contain partial proxy flows, called flow fragments. This enables re-use and avoids replicating the same policy steps (flows) in each proxy that needs to implement them.
Steps to implement are listed below:
<Request>
  <Step>
    <Name>validateInput</Name>
  </Step>
  #validate_token#
  <Step>
    <Name>XmlToJson</Name>
  </Step>
</Request>
...
<plugins>
  <plugin>
    ...
    <executions>
      <execution>
        ...
        <configuration>
          <proxyRefs>
            <proxyRef>../CommonProxy</proxyRef>
          </proxyRefs>
        </configuration>
      </execution>
    </executions>
  </plugin>
</plugins>
This method of proxy re-use involves developing a standalone API proxy on Edge and using it as either a target endpoint or in a Service Callout policy of another proxy. The second proxy chains to the first: a request sent to the second proxy flows through to the chained proxy over a local connection. The first (chained) proxy must reside in the same Organization and be deployed to the same Environment as the second proxy. Chaining to a local proxy in this manner is highly efficient, since the connection bypasses network components such as load balancers, routers, and message processors.
More details on proxy chaining are documented here.
Steps to implement are listed below:
<TargetEndpoint name="datamanager">
  ...
  <LocalTargetConnection>
    <APIProxy>data-manager</APIProxy>
    <ProxyEndpoint>default</ProxyEndpoint>
  </LocalTargetConnection>
</TargetEndpoint>
<TargetEndpoint name="datamanager">
  ...
  <LocalTargetConnection>
    <Path>/v1/streetcarts/foodcarts/data-manager</Path>
  </LocalTargetConnection>
</TargetEndpoint>
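When chaining via a Service Callout instead of a target endpoint, the same LocalTargetConnection element can be used inside the policy. A minimal sketch, assuming a chained proxy named data-manager and illustrative request/response variable names:

```xml
<ServiceCallout name="Call-Data-Manager">
  <Request variable="myRequest"/>
  <Response>calloutResponse</Response>
  <LocalTargetConnection>
    <APIProxy>data-manager</APIProxy>
    <ProxyEndpoint>default</ProxyEndpoint>
  </LocalTargetConnection>
</ServiceCallout>
```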
API proxies can share common code written in JavaScript, Java, Python, or Node.js, deployed to Apigee Edge as resources. These resources can be deployed at the Organization, Environment, or Proxy level. Depending on this scope, a resource is available to the specific proxy, to any proxy deployed to that Environment, or to all proxies in a given Organization. By deploying a resource at the Org or Env level, it can be shared by multiple proxies. In addition to code resources, .wsdl, .xsd, and .xsl resources are also supported. More details can be found here.
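As an illustration, a proxy can reference a shared JavaScript resource through the jsc:// resource URL scheme in a JavaScript policy; Edge resolves the reference by searching the proxy, then Environment, then Organization scope. A minimal sketch with a hypothetical file name:

```xml
<Javascript name="Common-Logging" timeLimit="200">
  <ResourceURL>jsc://logToSyslog-v1.js</ResourceURL>
</Javascript>
```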
Steps to deploy resources:
Using Git's submodule feature, you can keep shared code in a separate common repository that other API proxies use. The common repository is checked out into a subdirectory of the main API proxy's directory and referenced as a dependent project. The change history and version control of the two repositories remain independent of each other. Using Maven or a similar build tool, the correct version of the common repository can then be checked out and built, and the main API proxy deployed along with the necessary dependencies.
Please refer to additional documentation on this feature in the Tools section below.
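The workflow can be sketched with plain git commands. The repository names and paths below are hypothetical, and local paths stand in for the remote URLs a real setup would use:

```shell
set -e
rm -rf /tmp/submodule-demo && mkdir -p /tmp/submodule-demo
cd /tmp/submodule-demo

# 1. A standalone repository that holds the shared proxy code.
git init -q common-lib
git -C common-lib -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "shared policies and scripts"

# 2. The main API proxy repository references it as a submodule
#    kept in a subdirectory; the two histories stay independent.
git init -q main-proxy
cd main-proxy
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "main proxy skeleton"
git -c protocol.file.allow=always submodule add ../common-lib common
git -c user.email=demo@example.com -c user.name=demo \
  commit -q -m "pin common-lib"

# The submodule mapping (path + URL) is recorded here; the pinned
# commit is stored as a gitlink entry in the main repo's tree:
cat .gitmodules
```

A build job can then run `git submodule update --init` to check out the exact pinned commit of the shared code before packaging the proxy.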
If you are developing API proxies using Node.js and want to leverage common Node modules as dependencies, consider creating and using NPM private repositories. NPM allows the dependencies of a Node.js application to be sourced from various locations, such as the local file system, a Git repository over the network, or other private registries.
More documentation on NPM private repositories can be found in this tutorial.
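For instance, entries in package.json can point at a Git URL, a local path, or a private registry package; the names and URLs below are placeholders:

```json
{
  "name": "my-proxy-app",
  "version": "1.0.0",
  "dependencies": {
    "common-utils": "git+https://example.com/acme/common-utils.git#v1.2.0",
    "local-helpers": "file:../shared/local-helpers",
    "@acme/private-lib": "^2.0.0"
  }
}
```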
Apigee provides various objects to enable sharing of configuration data across API proxies. These include:
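One such object is the key value map (KVM), which can hold configuration entries shared across proxies at Organization, Environment, or proxy scope. A minimal sketch of a KeyValueMapOperations policy that reads an entry into a flow variable (the map, key, and variable names are hypothetical):

```xml
<KeyValueMapOperations name="KVM-Get-Settings" mapIdentifier="sharedSettings">
  <Scope>environment</Scope>
  <Get assignTo="config.backend.url">
    <Key>
      <Parameter>backend-url</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
```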
Hi @Sai Saran Vaidyanathan - Please review this article on artefact reuse for Accelerator Methodology. thanks
@Hansel Miranda - one thing to note - especially in section "Compare vs Global Resources" - is that a change in a global resource will not be immediately or automatically picked up by all proxies referencing it. Such changes are only picked up when a referencing proxy is redeployed.
I'd also suggest adding our recommendations on why and how to version global resources. We should note that all global resources of a given type are stored in one place, so I use file naming conventions, e.g. logToLoggly-v1.js. The version number changes only when there is a backwards-incompatible change. A CI job needs to be configured to trigger builds for all dependent proxies in certain environments. This will also lead to further advice on how to version-control global resources.
Some advice on when to choose global resources managed through Apigee vs. shared resources injected via git submodules would be useful. For example, I would go with shared resources injected via submodules for small artefacts that wouldn't adversely affect deployment times. If I had a big JAR file with certificates, external libraries, etc., I would go with global resources. The main reason is that, as the paragraph above suggests, global resources need a high degree of maintenance that is not natural to many development processes, i.e. parts of the code are separately installed, maintained, and managed.
Thanks for the feedback @oseymen@apigee.com. I've updated the article accordingly.
Nice piece, @Hansel Miranda. I wanted to add an important note: encrypted KVMs are here. Details are in our documentation: http://docs.apigee.com/api-services/reference/key-value-map-operations-policy . You now have an option for encrypted data without having to use Node.js.