AWS CloudShell

Getting started with CloudShell

CloudShell is a browser-based terminal in the AWS Management Console. You can run CLI commands to set up your resources across different services.

Log in to your AWS Management Console, open CloudShell, and follow the instructions below to set up your resources.

1. Set shell variables

Replace the placeholder values below with the values provided by Scanner, your AWS account ID, and the names of the buckets in your account that you want indexed.

# These values will be provided by Scanner
REGION="<INSERT_VALUE_HERE>"
SCANNER_AWS_ACCOUNT_ID="<INSERT_VALUE_HERE>"
STS_EXTERNAL_ID="<INSERT_VALUE_HERE>"
SCANNER_SQS_INDEX_QUEUE_ARN="<INSERT_VALUE_HERE>"
S3_INDEX_FILES_BUCKET_NAME="<INSERT_VALUE_HERE>"

# Insert your AWS account ID here
YOUR_AWS_ACCOUNT_ID="<INSERT_VALUE_HERE>"

# List your buckets here (enclosed in parentheses and whitespace-separated)
S3_LOG_FILES_BUCKET_NAMES=("<BUCKET_1>" "<BUCKET_2>" "<BUCKET_3>")

# These are default names for resources to be created
IAM_SCANNER_ROLE_NAME="scnr-ScannerRole"
IAM_SCANNER_ROLE_POLICY_NAME="scnr-ScannerRolePolicy"
SNS_NOTIFICATION_TOPIC_NAME="scnr-LogFilesBucketEventNotificationTopic"
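Before moving on, a quick sanity check can confirm that no placeholders remain. This is a minimal sketch in plain bash (no AWS calls); `check_vars` is a helper name introduced here for illustration, not part of any Scanner tooling:

```shell
# Helper (hypothetical name): warn about variables that are unset or still placeholders
check_vars() {
  local status=0
  for var in "$@"; do
    # ${!var} is bash indirect expansion: the value of the variable named in $var
    if [[ -z "${!var}" || "${!var}" == *"INSERT_VALUE_HERE"* ]]; then
      echo "ERROR: $var is not set" >&2
      status=1
    fi
  done
  return $status
}

check_vars REGION SCANNER_AWS_ACCOUNT_ID STS_EXTERNAL_ID \
  SCANNER_SQS_INDEX_QUEUE_ARN S3_INDEX_FILES_BUCKET_NAME YOUR_AWS_ACCOUNT_ID \
  || echo "Fix the variables above before continuing." >&2
```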

2. Create S3 index files bucket

This bucket is where Scanner stores index files, keeping all log data within your AWS account.

Please ensure this bucket is used exclusively for Scanner indexing. Avoid adding any unrelated files to maintain optimal performance.

# Create bucket
# Note: if REGION is us-east-1, omit the --create-bucket-configuration flag
aws s3api create-bucket \
  --region $REGION \
  --bucket $S3_INDEX_FILES_BUCKET_NAME \
  --acl "private" \
  --create-bucket-configuration "LocationConstraint=$REGION"
  
# Set public access block
aws s3api put-public-access-block \
  --bucket $S3_INDEX_FILES_BUCKET_NAME \
  --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
  
# Check public access block
aws s3api get-public-access-block --bucket $S3_INDEX_FILES_BUCKET_NAME

# Set bucket encryption
aws s3api put-bucket-encryption \
  --bucket $S3_INDEX_FILES_BUCKET_NAME \
  --server-side-encryption-configuration '{
    "Rules": [
      {
        "BucketKeyEnabled": true,
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms"
        }
      }
    ]
  }'

# Check bucket encryption
aws s3api get-bucket-encryption --bucket $S3_INDEX_FILES_BUCKET_NAME

# Set lifecycle configuration
aws s3api put-bucket-lifecycle-configuration \
  --bucket $S3_INDEX_FILES_BUCKET_NAME \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "ExpireTagging",
        "Filter": {
          "Tag": {
            "Key": "Scnr-Lifecycle",
            "Value": "expire"
          }
        },
        "Status": "Enabled",
        "Expiration": {
          "Days": 1
        }
      },
      {
        "ID": "AbortIncompleteMultiPartUploads",
        "Filter": {},
        "Status": "Enabled",
        "AbortIncompleteMultipartUpload": {
          "DaysAfterInitiation": 1
        }
      }
    ]
  }'

# Check lifecycle configuration
aws s3api get-bucket-lifecycle-configuration --bucket $S3_INDEX_FILES_BUCKET_NAME

3. Create SNS notification topic

When new log files appear in your S3 log files buckets, your SNS topic notifies Scanner through a subscription from the Scanner SQS index queue.

If you already have an SNS topic for S3 (object-created) event notifications, you can skip this section and use the existing topic for creating the subscription in the next section.
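If you need to look up an existing topic's ARN, you can list the topics in your region:

```shell
# List SNS topic ARNs in the region (pick the one for your existing topic)
aws sns list-topics --region $REGION --query "Topics[].TopicArn"
```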

# Create SNS topic (name set in step 1)
aws sns create-topic \
  --region $REGION \
  --name $SNS_NOTIFICATION_TOPIC_NAME

# Set topic ARN
SNS_TOPIC_ARN="arn:aws:sns:${REGION}:${YOUR_AWS_ACCOUNT_ID}:${SNS_NOTIFICATION_TOPIC_NAME}"

# Create policy to allow S3 event notifications
aws sns set-topic-attributes \
  --region $REGION \
  --topic-arn $SNS_TOPIC_ARN \
  --attribute-name "Policy" \
  --attribute-value '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "s3.amazonaws.com" },
        "Action": "sns:Publish",
        "Resource": "'${SNS_TOPIC_ARN}'"
      }
    ]
  }'
  
# Check policy
aws sns get-topic-attributes \
  --region $REGION \
  --topic-arn $SNS_TOPIC_ARN \
  --query "Attributes.Policy" | jq -r | jq

4. Create SNS -> Scanner SQS queue subscription

Before creating this subscription, be sure to link your AWS account in the Scanner app. Scanner needs to update the queue's permission to receive the subscription confirmation request.

If you haven't done so, the subscription will remain in the "Pending confirmation" state. After linking your account, choose "Request confirmation" in the AWS console to resend the confirmation request.

If you are using an existing SNS topic, replace the ARN below.

# Create subscription
aws sns subscribe \
  --region $REGION \
  --topic-arn $SNS_TOPIC_ARN \
  --protocol "sqs" \
  --notification-endpoint $SCANNER_SQS_INDEX_QUEUE_ARN \
  --attributes '{ "RawMessageDelivery": "true" }'
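After subscribing, you can verify the subscription's status; until it is confirmed, its SubscriptionArn shows as "PendingConfirmation":

```shell
# Check the subscription; "PendingConfirmation" means the confirmation has not gone through yet
aws sns list-subscriptions-by-topic \
  --region $REGION \
  --topic-arn $SNS_TOPIC_ARN
```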
  

5. Create S3 -> SNS event notifications

These notifications tell S3 to publish to the SNS topic whenever a new file is created in your S3 log files buckets.

S3 allows only one notification destination per event trigger. If any of these buckets already have SQS or Lambda notifications for object-created events, follow the instructions below to migrate them first.

Migrate existing SQS/Lambda notifications (Optional)

An S3 event notification can only have one destination per trigger, whereas an SNS topic can fan out to multiple subscribers. We will therefore change the existing S3 -> SQS/Lambda notification to S3 -> SNS -> SQS/Lambda:

  1. If you want to keep the notifications separate, create a new SNS topic. If not, use the same SNS topic as above.

  2. Create SNS -> your SQS queue/Lambda function subscription(s).

  3. Create SNS -> Scanner SQS index queue subscription.

  4. Replace existing S3 -> SQS/Lambda event notification(s) with S3 -> SNS event notifications.
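Before replacing a bucket's notifications in step 4 of the migration, it is prudent to save the existing configuration so the current SQS/Lambda destinations can be recreated as SNS subscriptions; a sketch (the output file name is arbitrary):

```shell
# Save the current notification configuration for reference before overwriting it
aws s3api get-bucket-notification-configuration \
  --bucket "<BUCKET_1>" > bucket-notifications-backup.json
```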

If you are using an existing SNS topic, replace the ARN below.

# Create event notification for each bucket
# Warning: this replaces any existing notification configuration on the bucket
for S3_LOG_FILES_BUCKET_NAME in "${S3_LOG_FILES_BUCKET_NAMES[@]}"
do
  aws s3api put-bucket-notification-configuration \
    --bucket $S3_LOG_FILES_BUCKET_NAME \
    --notification-configuration '{
      "TopicConfigurations": [
        {
          "TopicArn": "'${SNS_TOPIC_ARN}'",
          "Events": ["s3:ObjectCreated:*"]
        }
      ]
    }'
done

# Check event notification (for each bucket)
for S3_LOG_FILES_BUCKET_NAME in "${S3_LOG_FILES_BUCKET_NAMES[@]}"
do
  aws s3api get-bucket-notification-configuration --bucket $S3_LOG_FILES_BUCKET_NAME
done

6. Create IAM Scanner Role

Scanner will assume this IAM role to perform actions in your AWS account, such as reading and writing log files.

# Create role
aws iam create-role \
  --role-name $IAM_SCANNER_ROLE_NAME \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::'${SCANNER_AWS_ACCOUNT_ID}':root"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
          "StringEquals": {
            "sts:ExternalId": "'${STS_EXTERNAL_ID}'"
          }
        }
      }
    ]
  }'

# Set S3 log files bucket ARNs
S3_LOG_FILES_BUCKET_ARNS=$(printf "%s\n" "${S3_LOG_FILES_BUCKET_NAMES[@]}" | awk '{print "\"arn:aws:s3:::" $0 "\",\"arn:aws:s3:::" $0 "/*\""}' | paste -sd, -)
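The printf/awk/paste pipeline above expands each bucket name into two comma-joined, quoted ARNs (one for the bucket itself and one for its objects). A demonstration with hypothetical bucket names:

```shell
# Hypothetical bucket names, for illustration only
DEMO_BUCKETS=("logs-a" "logs-b")
DEMO_ARNS=$(printf "%s\n" "${DEMO_BUCKETS[@]}" \
  | awk '{print "\"arn:aws:s3:::" $0 "\",\"arn:aws:s3:::" $0 "/*\""}' \
  | paste -sd, -)
echo "$DEMO_ARNS"
# → "arn:aws:s3:::logs-a","arn:aws:s3:::logs-a/*","arn:aws:s3:::logs-b","arn:aws:s3:::logs-b/*"
```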

# Create policy
aws iam put-role-policy \
  --role-name $IAM_SCANNER_ROLE_NAME \
  --policy-name $IAM_SCANNER_ROLE_POLICY_NAME \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:ListAllMyBuckets",
          "s3:GetBucketLocation",
          "s3:GetBucketTagging"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetBucketNotification",
          "s3:ListBucket",
          "s3:GetObject",
          "s3:GetObjectTagging"
        ],
        "Resource": ['$S3_LOG_FILES_BUCKET_ARNS']
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetLifecycleConfiguration",
          "s3:ListBucket",
          "s3:GetObject",
          "s3:GetObjectTagging",
          "s3:PutObject",
          "s3:PutObjectTagging",
          "s3:DeleteObject",
          "s3:DeleteObjectTagging",
          "s3:DeleteObjectVersion",
          "s3:DeleteObjectVersionTagging"
        ],
        "Resource": [
          "arn:aws:s3:::'$S3_INDEX_FILES_BUCKET_NAME'",
          "arn:aws:s3:::'$S3_INDEX_FILES_BUCKET_NAME'/*"
        ]
      }
    ]
  }'
  
# Check policy
aws iam get-role-policy \
  --role-name $IAM_SCANNER_ROLE_NAME \
  --policy-name $IAM_SCANNER_ROLE_POLICY_NAME

Adding more S3 log files buckets

If you need to add more S3 log files buckets from an existing account after the initial setup, you need to do the following:

  • Create an S3 -> SNS event notification for each bucket.

  • Update the Scanner IAM role policy:

    1. Go to AWS Console -> IAM -> <Scanner Role>

    2. Permissions -> <Scanner Role Policy> -> Edit

    3. For each new bucket, add two entries to the Resource array that lists the log files buckets: one for arn:aws:s3:::<bucket_name> and one for arn:aws:s3:::<bucket_name>/*.
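The console steps above can also be done from CloudShell by re-running the commands from sections 5 and 6 with the new bucket name. A sketch, assuming the shell variables from step 1 (and SNS_TOPIC_ARN) are still set, and with <NEW_BUCKET_NAME> as a placeholder:

```shell
# Placeholder for the bucket being added
NEW_BUCKET="<NEW_BUCKET_NAME>"

# S3 -> SNS event notification for the new bucket
# Warning: this replaces the bucket's existing notification configuration
aws s3api put-bucket-notification-configuration \
  --bucket $NEW_BUCKET \
  --notification-configuration '{
    "TopicConfigurations": [
      {
        "TopicArn": "'${SNS_TOPIC_ARN}'",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'
```

The role policy can then be updated by appending the new bucket to S3_LOG_FILES_BUCKET_NAMES, regenerating S3_LOG_FILES_BUCKET_ARNS, and re-running the put-role-policy command from section 6.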
