Brief summary of this article:
Integration service access security
We use OAuth2 to secure our internal APIs. Each service call must contain an OAuth token with the specific set of scopes required by the called service.
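For illustration only, a call to an internal API might look like the sketch below; the endpoint, scope name, and helper are hypothetical and not the actual internal API.

```python
# Hypothetical sketch: endpoint, scope name, and helper are illustrative only.
import requests

def call_internal_service(base_url: str, access_token: str) -> dict:
    """Call an internal integration API with an OAuth2 bearer token.

    The token must carry the scopes the target service requires
    (e.g. a hypothetical 'entity-sync.read'); otherwise the call is rejected.
    """
    response = requests.get(
        f"{base_url}/profiles",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```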
Data isolation
Native integrations use Postgres as the underlying storage. Each Postgres table has an account column, which is used to separate one account's data from another during operation. A set of integration tests verifies on a regular basis that data is isolated. Note: this does not apply to Private Cloud / On-premises instances.
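A minimal sketch of what that isolation looks like in practice, assuming a hypothetical entity_share table: every query is constrained by the account column.

```python
# Minimal sketch of per-account isolation; the table and column names
# other than "account" are assumptions, not the actual schema.
import psycopg2

def load_entity_shares(dsn: str, account: str):
    """Return entity shares belonging to a single account only."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT source_entity_id, target_work_item_id "
            "FROM entity_share WHERE account = %s",   # isolation by the account column
            (account,),
        )
        return cur.fetchall()
```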
Supported authentication methods
- Basic Authentication via Personal Access Token
Data storage format
Database
We use a Postgres database as the main storage for the integration.
What is stored in Postgres:
- Integration profile
- Credentials. Credentials are encrypted with the AES-256 algorithm (see the "Credentials encryption" section below)
- Routings. Targetprocess and Azure DevOps projects (their IDs and names)
- Type mappings: type IDs and names, type fields and field names
- Entity shares. Pairs of a Targetprocess entity and the related work item in Azure DevOps. For entities we do not store any data except their issue key / entity ID and their entity type (a sketch of such a record is shown below).
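As an illustration of how little is persisted per share, here is a hypothetical shape of such a record; the field names are assumptions, not the actual schema.

```python
# Hypothetical shape of a stored entity share: only identifiers and types
# are persisted, no field values or content from either tool.
from dataclasses import dataclass

@dataclass
class EntityShare:
    account: str                 # owning account (see "Data isolation")
    tp_entity_id: int            # Targetprocess entity ID
    tp_entity_type: str          # e.g. "UserStory"
    ado_work_item_id: int        # Azure DevOps work item ID
    ado_work_item_type: str      # e.g. "Product Backlog Item"
```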
The database is hosted on the same node inside the Kubernetes cluster.
Caching
We use an in-memory cache (up to 30 min) for some nearly static information (a simple sketch follows the list):
- Azure DevOps work item types
- Azure DevOps work item fields
- Configured profiles info
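A simple sketch of such a cache with a 30-minute TTL; the key and loader names are illustrative, not the service's real API.

```python
# Rough sketch of a 30-minute in-memory cache for nearly static data.
import time

class TtlCache:
    def __init__(self, ttl_seconds: int = 30 * 60):
        self._ttl = ttl_seconds
        self._items: dict[str, tuple[float, object]] = {}

    def get_or_load(self, key: str, loader):
        """Return a cached value, reloading it once the TTL expires."""
        cached = self._items.get(key)
        if cached and time.monotonic() - cached[0] < self._ttl:
            return cached[1]
        value = loader()   # e.g. fetch work item types via the Azure DevOps API
        self._items[key] = (time.monotonic(), value)
        return value
```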
Logs
We use Elastic Cloud for logging; no sensitive data is stored in the logs. The Elastic Cloud instance in Ireland is used, and logging can be disabled per request. For on-premises installations there is an option to use a separate dedicated Elastic instance.
RabbitMQ
We use RabbitMQ for underlying service communication. RabbitMQ messages contain changes made in one tool and changes that need to be applied to the target tool.
RabbitMQ is located on the same Kubernetes nodes as the Azure DevOps integration service.
Credentials encryption
All credentials are stored in the PostgreSQL database along with other integration data, encrypted with the AES-256 algorithm. An overview diagram and the detailed flow are shown below (an illustrative sketch follows the list):
- A public/private key pair is generated during creation of the cluster (PKCS#7 with RSA 2048 bit); the private key is stored locally.
- A "secrets" git repository is created per cluster; it contains encrypted or non-encrypted secrets depending on the sensitivity of the data.
- The Azure DevOps integration token encryption key (used by the AES-256 algorithm) is generated automatically (by the encryptor service) or manually (by the DevOps team in our case) and encrypted with the public part of the cluster key.
- The Azure DevOps integration service invokes the Secrets / Encryptor services and sends the data encryption key.
- The data encryption key is decrypted using the private part of the cluster key, returned, and used to encrypt the Azure DevOps integration token.
- The same process is used for key retrieval: the Azure DevOps integration service invokes the Secrets / Encryptor services to retrieve the Azure DevOps integration token decryption key and caches it for the duration of the container lifecycle.
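An illustrative envelope-encryption sketch of this flow using the Python 'cryptography' package. The key sizes match the description (RSA 2048-bit cluster key, AES-256 data key); the padding and AES mode choices, variable names, and single-process layout are assumptions made for brevity.

```python
# Illustrative sketch only: in the real flow the private cluster key never
# leaves the encryptor/secrets services; here everything is in one process.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Cluster key pair (generated once per cluster; private part kept locally).
cluster_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cluster_public_key = cluster_private_key.public_key()

# Data encryption key for Azure DevOps tokens (AES-256), wrapped with the public cluster key.
data_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_data_key = cluster_public_key.encrypt(data_key, oaep)

# The integration service asks the encryptor to unwrap the key, then encrypts the token.
unwrapped_key = cluster_private_key.decrypt(wrapped_data_key, oaep)
nonce = os.urandom(12)
encrypted_token = AESGCM(unwrapped_key).encrypt(nonce, b"ado-personal-access-token", None)
```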
Processing flow
The processing flow is generic for all native integrations; every tool integration uses its own adapter.
From Azure DevOps to Apptio Targetprocess
The Azure DevOps adapter service listens for webhooks from the Azure DevOps account. When an update occurs in Azure DevOps, the adapter converts it to a generic format; it can query additional data via the API if that is needed for the conversion. Once the event is converted, the adapter sends it to the generic events queue.
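A simplified sketch of that adapter step, converting a webhook payload into a generic event and publishing it to RabbitMQ; the queue name and event shape are hypothetical.

```python
# Simplified sketch: convert an Azure DevOps webhook payload into a generic
# event and publish it to a RabbitMQ queue. Queue name and event fields are
# assumptions, not the service's real message contract.
import json
import pika

def handle_ado_webhook(payload: dict) -> None:
    resource = payload.get("resource", {})
    generic_event = {
        "source_tool": "azure-devops",
        "event_type": payload.get("eventType"),           # e.g. "workitem.updated"
        "work_item_id": resource.get("workItemId") or resource.get("id"),
        "fields": resource.get("fields", {}),
    }
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="generic-events", durable=True)   # hypothetical queue name
    channel.basic_publish(exchange="", routing_key="generic-events",
                          body=json.dumps(generic_event))
    connection.close()
```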
The entity sync service listens for generic events. When a generic event is received and there are configured rules that match it (for example, a work item was updated in Azure DevOps and that item is already shared with a Targetprocess entity), entity sync applies the configured rules and generates one or many commands that need to be applied to the target entity. Entity sync publishes these commands to the specific adapter's commands RabbitMQ queue. While applying mappings, entity sync may call specific tools via the adapters' API to perform conversions (attachment, comment, user, state, etc.).
The Targetprocess adapter receives the commands that need to be applied to the target entity and applies the changes via the tool's API. If a command contains multiple field updates but not all of them are valid, the adapter splits the initial update into several smaller updates and applies all valid updates, ignoring the invalid ones.
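A sketch of the split-and-apply behaviour described above; is_valid and apply_update stand in for the adapter's real per-tool validation and API calls.

```python
# Sketch of splitting a multi-field update: try the full batch first, and if
# some fields are invalid, apply the valid fields one by one and skip the rest.
def apply_command(entity_id, field_updates: dict, is_valid, apply_update) -> None:
    if all(is_valid(field, value) for field, value in field_updates.items()):
        apply_update(entity_id, field_updates)        # everything valid: single update
        return
    for field, value in field_updates.items():
        if is_valid(field, value):
            apply_update(entity_id, {field: value})   # invalid fields are ignored
```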
From Targetprocess to Azure DevOps
The flow from Targetprocess to Azure DevOps mirrors the flow described in the paragraph above, just in the opposite direction. The updates are generated on the Targetprocess side and applied to Azure DevOps by the same rules.
Entity share flow
The share flows are also similar, regardless of the tool. Let's take a look at the share flow from Targetprocess to Azure DevOps (a condensed sketch follows the list).
When you try to share an entity to Azure DevOps:
- Entity sync checks via the Targetprocess adapter API whether the entity matches a configured route: whether it is in the proper project, assigned to the proper team, matches WIQL filters, etc. If it does, entity sync resolves the share's target destination (the target project).
- Entity sync checks whether the profile has a type mapping for the source entity and resolves the target entity type.
- If the target scope (project) and type are resolved, the adapter creates a target entity stub in Azure DevOps and tries to fill all required fields with stub values.
- If the target entity is created, entity sync sets up sync rules between the source entity and the target entity.
- Entity sync transfers the Targetprocess entity state (the values of all fields configured in the profile).
- Entity sync applies the configured mappings to calculate the target entity state and sends it as one or a few update commands to the Azure DevOps adapter.
- The Azure DevOps adapter applies the commands to the Azure DevOps work item, bringing it to the state expected by the configured mappings.
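A condensed sketch of the steps above, where every helper call is a placeholder for the corresponding adapter or entity sync API (all names are assumptions):

```python
# Condensed, illustrative version of the share flow; every method called on
# the adapters / entity sync objects is a placeholder, not a real API.
def share_entity_to_ado(entity, profile, tp_adapter, ado_adapter, entity_sync):
    route = tp_adapter.resolve_route(entity, profile)           # project / team / filter check
    if route is None:
        return None
    target_type = profile.resolve_type_mapping(entity.type)     # type mapping lookup
    if target_type is None:
        return None
    stub = ado_adapter.create_stub(route.target_project, target_type)  # required fields get stub values
    entity_sync.create_share(entity, stub)                      # sync rules between the pair
    state = tp_adapter.read_state(entity, profile.fields)       # values of all configured fields
    for command in entity_sync.apply_mappings(state, profile):  # one or a few update commands
        ado_adapter.apply(stub.id, command)
    return stub
```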
Network connectivity overview
There are two sets of ACLs and Security Groups used to limit access to Targetprocess instances from the Internet:
- For the application: inbound rules allow only HTTPS access to the proxy, which then routes traffic to the corresponding server or service, such as the Azure DevOps integration.
- For management: access to the bastion host is allowed only from the Targetprocess office or when connected to the office via VPN.
Azure DevOps Webhooks
To receive updates from Azure DevOps, Targetprocess uses webhooks. Webhooks can be set up manually or automatically.
To receive all the necessary updates from Azure DevOps, the integration needs at least 5 webhooks in every project, one for each of the following events:
- Work item created
- Work item restored
- Work item updated
- Work item deleted
- Work item commented on
Considering performance and the potentially large number of projects, webhooks are created with no additional filters by Area Path, Work Item Type, etc. (a sketch of automatic webhook creation is shown below).
If you have updated webhooks manually to apply some filters, you need to switch the webhooks into manual setup mode; otherwise all filters will be reset on every profile save.
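A hedged sketch of automatic webhook setup through the Azure DevOps Service Hooks API, one subscription per event type and no extra filters; the receiver URL is a placeholder, and the api-version and field values may need adjusting for your organization.

```python
# Illustrative sketch of creating the five webhook subscriptions via the
# Azure DevOps Service Hooks API. The receiver URL is a placeholder.
import requests

EVENT_TYPES = [
    "workitem.created",
    "workitem.restored",
    "workitem.updated",
    "workitem.deleted",
    "workitem.commented",
]

def create_webhooks(organization: str, project_id: str, pat: str, receiver_url: str) -> None:
    url = f"https://dev.azure.com/{organization}/_apis/hooks/subscriptions?api-version=7.1"
    for event_type in EVENT_TYPES:
        body = {
            "publisherId": "tfs",
            "eventType": event_type,
            "consumerId": "webHooks",
            "consumerActionId": "httpRequest",
            "publisherInputs": {"projectId": project_id},   # no Area Path / type filters
            "consumerInputs": {"url": receiver_url},
        }
        requests.post(url, json=body, auth=("", pat), timeout=30).raise_for_status()
```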
Azure DevOps Service User Permissions
The Personal Access Token must be granted the following access levels:
Work Items (Read, write & manage), Project and Teams (Read), Identity (Read)
If you want to start with one-way sync and only pull data from Azure DevOps to Targetprocess, select only the 'Read' scope access.
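A minimal sketch of how the Personal Access Token is used for Basic authentication against the Azure DevOps REST API: the PAT is passed as the password with an empty username. The organization and project values are placeholders.

```python
# Minimal sketch of Basic authentication with a Personal Access Token.
import requests

def list_work_item_types(organization: str, project: str, pat: str) -> dict:
    url = f"https://dev.azure.com/{organization}/{project}/_apis/wit/workItemTypes?api-version=7.1"
    response = requests.get(url, auth=("", pat), timeout=30)   # empty username, PAT as password
    response.raise_for_status()
    return response.json()
```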
Azure DevOps APIs Used by Targetprocess
- GET /_apis/identities
- POST/DELETE /_apis/hooks/subscriptionsQuery
- GET /_apis/work/processes/lists/{listId}
- GET /_apis/ConnectionData
- GET /_apis/projects
- GET /_apis/projects/{projectName}
- GET /_apis/wit/fields
- POST /_apis/wit/wiql
- GET /_apis/wit/workItemRelationTypes
- POST /_apis/wit/workItemsBatch
- GET /_apis/wit/workItems
- GET/DELETE/PATCH /_apis/wit/workItems/{workItemId}
- POST /{projectName}/_apis/wit/workItems/$
- GET /_apis/wit/workItems/{workItemId}/updates/{updateId}
- GET /{projectName}/_apis/wit/workItems/{workItemId}/comments
- GET/DELETE/PATCH /{projectName}/_apis/wit/workItems/{workItemId}/comments/{commentId}
- POST /{projectName}/_apis/wit/workItems/{workItemId}/comments
- GET /{projectName}/_apis/wit/workitemtypes/states
- GET /{projectName}/_apis/wit/workItemTypes
Still have a question?
We're here to help! Just contact our friendly support team.