You are viewing the documentation for Interana version 2. For documentation on the most recent version of Interana, go to Scuba Docs.

What you should know about Interana security in AWS


This document describes Interana Managed Edition deployments in AWS. It details when and how the cluster communicates with the outside world, how access to the cluster is managed and controlled, and the application configurations that can affect security.


Network architecture overview

Interana is deployed in a Virtual Private Cloud (VPC) that is created within your company's AWS account. Interana establishes two subnets within the VPC:

  • Public subnet—contains API nodes (web application servers), and a single admin node
  • Private subnet—contains data, string, config, and import nodes (data persists here)

In the Interana cluster, the admin node is the primary host used for cluster management. The API nodes serve requests made by users of the Interana web (UI) application. Cluster import nodes access data in a designated S3 bucket where access is controlled with a bucket policy. Communication between Interana nodes happens over SSL.
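The two-subnet layout described above can be sketched as a CloudFormation-style template. All CIDR blocks and logical resource names here are illustrative assumptions, not the values Interana actually provisions.

```python
import json

def vpc_template(vpc_cidr="10.0.0.0/16",
                 public_cidr="10.0.0.0/24",
                 private_cidr="10.0.1.0/24"):
    """Build a minimal CloudFormation-style description of the VPC layout."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "InteranaVPC": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": vpc_cidr},
            },
            # Public subnet: API nodes (web application servers) and the
            # single admin node.
            "PublicSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "VpcId": {"Ref": "InteranaVPC"},
                    "CidrBlock": public_cidr,
                    "MapPublicIpOnLaunch": True,
                },
            },
            # Private subnet: data, string, config, and import nodes.
            "PrivateSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "VpcId": {"Ref": "InteranaVPC"},
                    "CidrBlock": private_cidr,
                    "MapPublicIpOnLaunch": False,
                },
            },
        },
    }

template_json = json.dumps(vpc_template(), indent=2)
```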


Usage Tracking logs can be configured to stay within the VPC rather than being uploaded to Interana.

Communication with the Interana cluster (within the VPC) consists of the following:

  • Incoming web requests from application users
  • Files downloaded from the customer-owned data source S3 bucket
  • Communication with the customer network for cluster access
  • Outbound communication to Datadog for cluster monitoring
  • (Optional) log upload to an Interana-owned S3 bucket


AWS configuration

This section provides an overview of the AWS configuration for an Interana cluster.


Subnets

Interana creates two subnets within the VPC, one public and one private. The public subnet contains the admin and API nodes; external inbound traffic is routed to the public subnet. The private subnet contains the config, import, data, and string nodes. Each subnet is assigned its own CIDR block, and a Network Access Control List (ACL) controls communication between the two subnets.
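As an illustration of how a network ACL can gate traffic between the subnets, the sketch below expresses a default-deny rule set for the private subnet. The CIDR blocks and rule numbers are assumptions for illustration only, not Interana's actual configuration.

```python
PUBLIC_CIDR = "10.0.0.0/24"   # admin + API nodes (assumed value)
PRIVATE_CIDR = "10.0.1.0/24"  # config, import, data, string nodes (assumed value)

def private_subnet_acl():
    """Allow inbound traffic to the private subnet only from the public subnet."""
    return [
        {"RuleNumber": 100, "Protocol": "tcp", "RuleAction": "allow",
         "Egress": False, "CidrBlock": PUBLIC_CIDR,
         "PortRange": {"From": 0, "To": 65535}},
        # Anything not matched above falls through to the catch-all deny,
        # mirroring the implicit default rule (32767) in an AWS network ACL.
        {"RuleNumber": 32767, "Protocol": "-1", "RuleAction": "deny",
         "Egress": False, "CidrBlock": "0.0.0.0/0"},
    ]
```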

Security groups

Interana creates a default security group that controls ingress to and egress from the cluster and is applied to the API nodes on the public subnet. Additionally, Interana creates a private security group that controls access to nodes on the private subnet, and an admin security group that controls access to the admin node.
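The three security groups can be summarized as data. The rules below are a plausible reading of the description above, not Interana's actual rule set; the approved Interana CIDR is a placeholder.

```python
APPROVED_INTERANA_CIDR = "203.0.113.0/24"  # placeholder (TEST-NET-3 range)

SECURITY_GROUPS = {
    # Default group: web ingress to the API nodes on the public subnet.
    "default": [
        {"port": 443, "protocol": "tcp", "source": "0.0.0.0/0"},
    ],
    # Admin group: SSH to the admin node from approved Interana addresses only.
    "admin": [
        {"port": 22, "protocol": "tcp", "source": APPROVED_INTERANA_CIDR},
    ],
    # Private group: node-to-node traffic originating within the VPC only.
    "private": [
        {"port": "all", "protocol": "tcp", "source": "10.0.0.0/16"},
    ],
}
```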

AWS Identity and Access Management

Interana must have AWS credentials in order to provision, deploy, and manage an Interana cluster. This requires that you create an Interana user account with the necessary permissions, restricted to your target AWS account (see Policies below). Interana stores these credentials securely and accesses them to deploy the cluster and perform cluster upgrades and maintenance.

Users and Groups

Interana requires a single privileged user account for administration purposes. As such, there are no hard requirements on group membership for the chosen user. The configuration is left to your discretion.


Roles

Interana creates a role that is applied to the EC2 instances created in your AWS account. This role allows all actions for the EC2 instances and is attached to them with an instance profile.


Policies

Interana defines the policy for the EC2 instances created in the account (described in the section above). This policy enables each instance to take snapshots for backup purposes.

Additionally, the customer must define a policy for the user that Interana then uses to administer the account. The policy should allow the following actions:

Cluster Standup role:

  • cloudformation:CreateStack
  • cloudformation:DescribeAccountLimits
  • ec2:AssociateAddress
  • ec2:AssociateDhcpOptions
  • ec2:AssociateRouteTable
  • ec2:AttachInternetGateway
  • ec2:AttachVolume
  • ec2:AuthorizeSecurityGroupEgress
  • ec2:AuthorizeSecurityGroupIngress
  • ec2:CreateDhcpOptions
  • ec2:CreateInternetGateway
  • ec2:CreateKeyPair
  • ec2:CreateNetworkAcl
  • ec2:CreateNetworkAclEntry
  • ec2:CreateRoute
  • ec2:CreateRouteTable
  • ec2:CreateSecurityGroup
  • ec2:CreateSnapshot
  • ec2:CreateSubnet
  • ec2:CreateTags
  • ec2:CreateVolume
  • ec2:CreateVpc

Cluster Maintenance role:

  • ec2:AttachVolume
  • ec2:CreateKeyPair
  • ec2:CreateSnapshot
  • ec2:DeleteSnapshot
  • ec2:DescribeSnapshots
  • ec2:CreateTags
  • ec2:CreateVolume
  • ec2:DeleteVolume
  • ec2:DetachVolume
  • ec2:ModifyInstanceAttribute
  • ec2:RebootInstances
  • ec2:ReplaceNetworkAclAssociation
  • ec2:RunInstances
  • ec2:StartInstances
  • ec2:StopInstances
  • ec2:TerminateInstances

Cluster Teardown role:

  • cloudformation:DeleteStack
  • ec2:DeleteDhcpOptions
  • ec2:DeleteInternetGateway
  • ec2:DeleteNetworkAcl
  • ec2:DeleteNetworkAclEntry
  • ec2:DeleteRoute
  • ec2:DeleteRouteTable
  • ec2:DeleteSecurityGroup
  • ec2:DeleteSubnet
  • ec2:DeleteVolume
  • ec2:DeleteVpc
  • ec2:DeleteVpcEndpoints
  • ec2:DetachInternetGateway
  • ec2:DetachVolume
  • ec2:DisassociateRouteTable
  • ec2:RevokeSecurityGroupEgress
  • ec2:RevokeSecurityGroupIngress
  • ec2:TerminateInstances
  • iam:DeleteInstanceProfile
  • iam:DeleteRole
  • iam:DeleteRolePolicy
  • iam:RemoveRoleFromInstanceProfile
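The action lists for each role can be assembled into a standard IAM policy document. The sketch below is abridged (it shows only the maintenance actions), and scoping Resource to "*" is an assumption; restrict the resource scope further where your account allows.

```python
import json

def policy_document(actions):
    """Wrap a list of allowed actions in an IAM policy document."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": sorted(actions), "Resource": "*"}
        ],
    }

# Abridged: the Cluster Maintenance role actions from the lists above.
MAINTENANCE_ACTIONS = [
    "ec2:AttachVolume", "ec2:CreateKeyPair", "ec2:CreateSnapshot",
    "ec2:DeleteSnapshot", "ec2:DescribeSnapshots", "ec2:CreateTags",
    "ec2:CreateVolume", "ec2:DeleteVolume", "ec2:DetachVolume",
    "ec2:ModifyInstanceAttribute", "ec2:RebootInstances",
    "ec2:ReplaceNetworkAclAssociation", "ec2:RunInstances",
    "ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances",
]

maintenance_policy_json = json.dumps(policy_document(MAINTENANCE_ACTIONS),
                                     indent=2)
```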

Key storage

Interana stores AWS credentials in an encrypted database, and utilizes a key vault solution to encrypt and restrict access to the keys exclusively to Interana. All AWS API requests are made over SSL. Interana supports rotating AWS credentials at the customer's request.

Access control

Security groups on the cluster restrict access to approved Interana IP addresses. Additionally, each Interana Customer Support Engineer who accesses your system must have a unique login with key access. Interana uses a third-party cloud LDAP service that syncs with Interana's internal LDAP servers for account management. Each Customer Support Engineer must have a unique SSH key, and every login attempt is validated, ensuring that only an authorized account and SSH key can access your cluster. Access is granted based on group membership that is approved by you.

Interana collects logs that record SSH access to the cluster for auditing purposes. Access logs are uploaded to a host in the Interana AWS environment for analysis. Customers can request a report of access to their cluster.


Cluster provisioning

Interana provisions the cluster, installs Interana software and dependencies on each node in the cluster, then configures monitoring and scheduled backups. Interana deploys the VPC and sets up the subnets, security groups, and other AWS requirements, then invokes the AWS API to create the cluster nodes. After the nodes are created, Interana performs system updates and configures access control.

System maintenance

Interana automates the monitoring and upgrading of cluster nodes.

Monitoring a cluster

Interana monitors the health of Interana cluster nodes and application processes. Each node in the cluster is configured to send metrics to a web API on a secure server. The nodes authenticate with an API key that is authorized for reporting telemetry only (no read access, no account configuration access, and no access to data stored on the cluster).

The following metrics are collected:

  • Node availability
  • Node reboot
  • CPU
  • Thread count per process
  • Memory - RSS, VM, Malloc, MMAP
  • File descriptors
  • TCP ports
  • Disk space (root, data, log, backup volumes)
  • Disk I/O and throughput
  • Application disk space (import, config/database, cache)
  • Crons
  • Bytes sent and received

Within the cluster, the following ports are used for collecting and transmitting cluster metrics:

Port Type  Description
8125 udp Agent for collecting metrics
17123 tcp Agent forwarder for buffering traffic
123 udp NTP
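A node-local process reporting a metric to the collection agent on udp/8125 might look like the sketch below, which uses the statsd-style wire format that agents listening on that port conventionally accept. The metric name is a hypothetical example, not one of Interana's actual metric names.

```python
import socket

def send_metric(name, value, metric_type="g", host="127.0.0.1", port=8125):
    """Format a statsd-style gauge line and send it over UDP (fire-and-forget)."""
    line = f"{name}:{value}|{metric_type}"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("ascii"), (host, port))
    sock.close()
    return line

# Example: report free disk space on the data volume (hypothetical name).
send_metric("node.disk.free_bytes", 123456789)
```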

Automating cluster maintenance 

Interana manages the state of the cluster with an orchestration and configuration management tool. Processes run on each node and communicate to the server in the Interana AWS environment on ports 4505 and 4506.

Upgrading a cluster

Software upgrades are similar to the initial Interana software installation. Interana manages upgrades, connecting to the admin node and using it as a proxy to run upgrades on the other nodes in the cluster. 

Application configuration

This section covers the ways in which you can configure the Interana cluster environment.

Ingesting data

The Interana import nodes manage a pipeline for ingesting data into the cluster. Import nodes are configured to pull files from a designated S3 bucket. After the files are downloaded, they are processed and deleted.

Access to the source data bucket is configured through a bucket policy that is set by you. Read-only access must be granted for the Interana IAM user. The bucket policy should allow the following actions on the data source bucket and resources within the bucket:

  • s3:GetBucketLocation
  • s3:ListBucket
  • s3:GetObject
  • s3:RestoreObject
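A bucket policy granting those four actions to the Interana IAM user can be written as follows. The bucket name and the user ARN are placeholders you would replace with your own values.

```python
import json

BUCKET = "example-interana-source"                             # placeholder
INTERANA_USER_ARN = "arn:aws:iam::111122223333:user/interana"  # placeholder

def source_bucket_policy():
    """Read-only access to the data source bucket for the Interana IAM user."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Bucket-level actions apply to the bucket ARN itself.
                "Effect": "Allow",
                "Principal": {"AWS": INTERANA_USER_ARN},
                "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{BUCKET}",
            },
            {   # Object-level actions apply to the objects inside the bucket.
                "Effect": "Allow",
                "Principal": {"AWS": INTERANA_USER_ARN},
                "Action": ["s3:GetObject", "s3:RestoreObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
            },
        ],
    }

policy_json = json.dumps(source_bucket_policy(), indent=2)
```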

User authentication

This section provides an overview of the methods Interana supports for authenticating users through the Interana web user interface (UI). For more information, see the article on Using other forms of authentication.

Username and password

The default authentication method is with a unique username and password as login credentials. Usernames are typically email addresses. An Interana admin can configure authorized email domains with which to register accounts. An email is sent to verify ownership of the domain before a user account is created. An admin can also create accounts for arbitrary domains, as needed. User account passwords are encrypted and stored on the config node of the Interana cluster.
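The authorized-domain check described above amounts to the following sketch. The domain set is an example; an Interana admin configures the real set in the application.

```python
AUTHORIZED_DOMAINS = {"example.com", "corp.example.com"}  # example values

def registration_allowed(email, authorized=AUTHORIZED_DOMAINS):
    """Allow self-registration only for addresses under an authorized domain."""
    _, _, domain = email.rpartition("@")
    return bool(domain) and domain.lower() in authorized
```

Accounts outside these domains are not rejected outright; as noted above, an admin can still create them manually as needed.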

Third-party authentication

Interana supports authentication with third parties that implement the SAML 2.0 or OAuth 2.0 protocols. In this scheme, Interana routes authentication requests to the configured third-party service provider. Supported third-party authentication and single sign-on providers include the following:

  • Okta
  • Azure Active Directory
  • ADFS
  • OneLogin
  • Google
  • AppleConnect (using SAML 2.0)

API access

An Interana admin can generate API tokens for authenticating requests to the Interana external API. API tokens are always associated with an existing Interana user account. An admin can invalidate issued tokens at any time. The customer is responsible for managing token rotation and revocation policies.
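The token lifecycle (issue, validate, invalidate) can be sketched as below. Storing only a hash of each token is our assumption of good practice, not a description of Interana's actual implementation.

```python
import hashlib
import secrets

_tokens = {}  # token hash -> associated user account

def _digest(token):
    return hashlib.sha256(token.encode()).hexdigest()

def issue_token(username):
    """Generate a token tied to an existing user account."""
    token = secrets.token_urlsafe(32)
    _tokens[_digest(token)] = username  # only the hash is kept server-side
    return token  # shown once at creation time

def validate_token(token):
    """Return the associated username, or None if the token is not valid."""
    return _tokens.get(_digest(token))

def invalidate_token(token):
    """Revoke a token so subsequent API requests with it are rejected."""
    _tokens.pop(_digest(token), None)
```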

Email notifications

The Interana application sends emails for account creation and management (if user/password authentication is enabled), as well as for scheduled dashboard reports. The service used to send emails is configurable, with the basic requirement of an SMTP interface (host, port, username, and password). Interana supports the following email services:

  • SendGrid—Interana default email service. Emails are sent from the Interana domain.

  • Amazon Simple Email Service (SES)—Amazon SES can be configured to send Interana emails from the Interana domain. If Amazon SES is configured, DomainKeys Identified Mail (DKIM) signing of emails is enabled.

  • Customer mail server—A customer-owned mail server may be used to send Interana emails. For this, the customer needs to provide SMTP configuration information. Emails are then sent from the customer domain.
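The basic SMTP requirement (host, port, username, password) can be sketched as below. The server settings and addresses are placeholders for illustration, not real endpoints.

```python
import smtplib
from email.message import EmailMessage

SMTP_CONFIG = {
    "host": "smtp.example.com",     # placeholder
    "port": 587,
    "username": "interana-mailer",  # placeholder
    "password": "change-me",        # placeholder
}

def build_report_email(recipient, dashboard_name):
    """Assemble a scheduled dashboard report email."""
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"  # a sender under the configured domain
    msg["To"] = recipient
    msg["Subject"] = f"Scheduled dashboard report: {dashboard_name}"
    msg.set_content("Your scheduled Interana dashboard report is ready.")
    return msg

def send_report(msg, config=SMTP_CONFIG):
    """Deliver over the configured SMTP relay (STARTTLS on the given port)."""
    with smtplib.SMTP(config["host"], config["port"]) as server:
        server.starttls()
        server.login(config["username"], config["password"])
        server.send_message(msg)
```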


Usage tracking logs

Interana processes log system usage and performance information locally on each node. These logs may contain information about Interana users, such as email addresses and information about queries run on the system, including query parameters. Query parameters may include information about data stored in the system, for example a filter that narrows to a specific actor ID. Interana does not log the results of queries that are run in the system.

By default, Interana collects these logs and uploads them to an S3 bucket in the Interana AWS environment for internal use. This behavior can be disabled. Alternatively, logs can be ingested directly into the cluster for analysis.
