Package: github.com/corymhall/cdk-constructs-go/cdkmonitoringconstructs (module)
Version: 0.0.0
Repository: https://github.com/corymhall/cdk-constructs-go.git
Documentation: pkg.go.dev

# README

CDK Monitoring Constructs


Easy-to-use CDK constructs for monitoring your AWS infrastructure with Amazon CloudWatch.

  • Easily add commonly-used alarms using predefined properties
  • Generate concise CloudWatch dashboards that indicate your alarms
  • Extend the library with your own extensions or custom metrics
  • Consume the library in multiple supported languages

Installation

TypeScript

https://www.npmjs.com/package/cdk-monitoring-constructs

In your package.json:

{
  "dependencies": {
    "cdk-monitoring-constructs": "^7.0.0",

    // peer dependencies of cdk-monitoring-constructs
    "@aws-cdk/aws-redshift-alpha": "^2.112.0-alpha.0",
    "aws-cdk-lib": "^2.112.0",
    "constructs": "^10.0.5"

    // ...your other dependencies...
  }
}
Java

See https://mvnrepository.com/artifact/io.github.cdklabs/cdkmonitoringconstructs

Python

See https://pypi.org/project/cdk-monitoring-constructs/

C#

See https://www.nuget.org/packages/Cdklabs.CdkMonitoringConstructs/
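
Go

This module is the generated Go binding of the library; its module path is github.com/corymhall/cdk-constructs-go/cdkmonitoringconstructs (add it to your project with go get, as with any Go module).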

Features

You can browse the documentation at https://constructs.dev/packages/cdk-monitoring-constructs/

| Item | Monitoring | Alarms | Notes |
| --- | --- | --- | --- |
| AWS API Gateway (REST API) (.monitorApiGateway()) | TPS, latency, errors | Latency, error count/rate, low/high TPS | To see metrics, you have to enable Advanced Monitoring |
| AWS API Gateway V2 (HTTP API) (.monitorApiGatewayV2HttpApi()) | TPS, latency, errors | Latency, error count/rate, low/high TPS | To see route level metrics, you have to enable Advanced Monitoring |
| AWS AppSync (GraphQL API) (.monitorAppSyncApi()) | TPS, latency, errors | Latency, error count/rate, low/high TPS | |
| Amazon Aurora (.monitorAuroraCluster()) | Query duration, connections, latency, CPU usage, Serverless Database Capacity | Connections, Serverless Database Capacity and CPU usage | |
| AWS Billing (.monitorBilling()) | AWS account cost | Total cost (anomaly) | Requires enabling the Receive Billing Alerts option in AWS Console / Billing Preferences |
| AWS Certificate Manager (.monitorCertificate()) | Certificate expiration | Days until expiration | |
| AWS CloudFront (.monitorCloudFrontDistribution()) | TPS, traffic, latency, errors | Error rate, low/high TPS | |
| AWS CloudWatch Logs (.monitorLog()) | Patterns present in the log group | Minimum incoming logs | |
| AWS CloudWatch Synthetics Canary (.monitorSyntheticsCanary()) | Latency, error count/rate | Error count/rate, latency | |
| AWS CodeBuild (.monitorCodeBuildProject()) | Build counts (total, successful, failed), failed rate, duration | Failed build count/rate, duration | |
| AWS DocumentDB (.monitorDocumentDbCluster()) | CPU, throttling, read/write latency, transactions, cursors | CPU | |
| AWS DynamoDB (.monitorDynamoTable()) | Read and write capacity provisioned / used | Consumed capacity, throttling, latency, errors | |
| AWS DynamoDB Global Secondary Index (.monitorDynamoTableGlobalSecondaryIndex()) | Read and write capacity, indexing progress, throttled events | | |
| AWS EC2 (.monitorEC2Instances()) | CPU, disk operations, network | | |
| AWS EC2 Auto Scaling Groups (.monitorAutoScalingGroup()) | Group size, instance status | | |
| AWS ECS (.monitorFargateService(), .monitorEc2Service(), .monitorSimpleFargateService(), .monitorSimpleEc2Service(), .monitorQueueProcessingFargateService(), .monitorQueueProcessingEc2Service()) | System resources and task health | Unhealthy task count, running tasks count, CPU/memory usage, and bytes processed by load balancer (if any) | Use for ecs-patterns load balanced ec2/fargate constructs (NetworkLoadBalancedEc2Service, NetworkLoadBalancedFargateService, ApplicationLoadBalancedEc2Service, ApplicationLoadBalancedFargateService) |
| AWS ElastiCache (.monitorElastiCacheCluster()) | CPU/memory usage, evictions and connections | CPU, memory, items count | |
| AWS Glue (.monitorGlueJob()) | Traffic, job status, memory/CPU usage | Failed/killed task count/rate | |
| AWS Kinesis Data Analytics (.monitorKinesisDataAnalytics()) | Up/Downtime, CPU/memory usage, KPU usage, checkpoint metrics, and garbage collection metrics | Downtime, full restart count | |
| AWS Kinesis Data Stream (.monitorKinesisDataStream()) | Put/Get/Incoming Record/s and Throttling | Throttling, throughput, iterator max age | |
| AWS Kinesis Firehose (.monitorKinesisFirehose()) | Number of records, requests, latency, throttling | Throttling | |
| AWS Lambda (.monitorLambdaFunction()) | Latency, errors, iterator max age | Latency, errors, throttles, iterator max age | Optional Lambda Insights metrics (opt-in) support |
| AWS Load Balancing (.monitorNetworkLoadBalancer(), .monitorFargateApplicationLoadBalancer(), .monitorFargateNetworkLoadBalancer(), .monitorEc2ApplicationLoadBalancer(), .monitorEc2NetworkLoadBalancer()) | System resources and task health | Unhealthy task count, running tasks count, (for Fargate/Ec2 apps) CPU/memory usage | Use for FargateService or Ec2Service backed by a NetworkLoadBalancer or ApplicationLoadBalancer |
| AWS OpenSearch/Elasticsearch (.monitorOpenSearchCluster(), .monitorElasticsearchCluster()) | Indexing and search latency, disk/memory/CPU usage | Indexing and search latency, disk/memory/CPU usage, cluster status, KMS keys | |
| AWS RDS (.monitorRdsCluster()) | Query duration, connections, latency, disk/CPU usage | Connections, disk and CPU usage | |
| AWS Redshift (.monitorRedshiftCluster()) | Query duration, connections, latency, disk/CPU usage | Query duration, connections, disk and CPU usage | |
| AWS S3 Bucket (.monitorS3Bucket()) | Bucket size and number of objects | | |
| AWS SecretsManager (.monitorSecretsManager()) | Max secret count, min secret count, secret count change | Min/max secret count or change in secret count | |
| AWS SecretsManager Secret (.monitorSecretsManagerSecret()) | Days since last rotation | Days since last change or rotation | |
| AWS SNS Topic (.monitorSnsTopic()) | Message count, size, failed notifications | Failed notifications, min/max published messages | |
| AWS SQS Queue (.monitorSqsQueue(), .monitorSqsQueueWithDlq()) | Message count, age, size | Message count, age, DLQ incoming messages | |
| AWS Step Functions (.monitorStepFunction(), .monitorStepFunctionActivity(), .monitorStepFunctionLambdaIntegration(), .monitorStepFunctionServiceIntegration()) | Execution count and breakdown per state | Duration, failed, failed rate, aborted, throttled, timed out executions | |
| AWS Web Application Firewall (.monitorWebApplicationFirewallAclV2()) | Allowed/blocked requests | Blocked requests count/rate | |
| Custom metrics (.monitorCustom()) | Addition of custom metrics into the dashboard (each group is a widget) | Supports anomaly detection | |

Getting started

Create a facade

Important note: please do NOT import anything from the /dist/lib package. This is unsupported and might break at any time.

  1. Create an instance of MonitoringFacade, which is the main entrypoint.
  2. Call methods on the facade like .monitorLambdaFunction() and chain them together to define your monitors. You can also use methods to add your own widgets, headers of various sizes, and more.

For examples of monitoring different resources, refer to the unit tests.

export interface MonitoringStackProps extends DeploymentStackProps {
  // ...
}

// This could be in the same stack as your resources, as a nested stack, or a separate stack as you see fit
export class MonitoringStack extends DeploymentStack {
  constructor(parent: App, name: string, props: MonitoringStackProps) {
    super(parent, name, props);

    const monitoring = new MonitoringFacade(this, "Monitoring", {
      // Defaults are provided for these, but they can be customized as desired
      metricFactoryDefaults: { ... },
      alarmFactoryDefaults: { ... },
      dashboardFactory: { ... },
    });

    // Monitor your resources
    monitoring
      .addLargeHeader("Storage")
      .monitorDynamoTable({ /* Monitor a DynamoDB table */ })
      .monitorDynamoTable({ /* and a different table */ })
      .monitorLambdaFunction({ /* and a Lambda function */ })
      .monitorCustom({ /* and some arbitrary metrics in CloudWatch */ })
      // ... etc.
  }
}
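
For a more concrete starting point, the defaults above could look roughly like the following. This is a hedged sketch: DefaultDashboardFactory and the property names namespace, alarmNamePrefix, actionsEnabled, and dashboardNamePrefix are assumptions based on the library's factory types, so verify them against the version you use.

const monitoring = new MonitoringFacade(this, "Monitoring", {
  metricFactoryDefaults: {
    // default namespace for metrics created through the metric factory
    namespace: "MyService",
  },
  alarmFactoryDefaults: {
    // prefix used when composing alarm names, and a global switch for alarm actions
    alarmNamePrefix: "MyService",
    actionsEnabled: true,
  },
  dashboardFactory: new DefaultDashboardFactory(this, "Dashboards", {
    dashboardNamePrefix: "MyService",
  }),
});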

Customize actions

Alarms should have an action set up; otherwise, they are not very useful. Currently, we support notifying an SNS topic.

const onAlarmTopic = new Topic(this, "AlarmTopic");

const monitoring = new MonitoringFacade(this, "Monitoring", {
  // ...other props
  alarmFactoryDefaults: {
    // ....other props
    action: new SnsAlarmActionStrategy({ onAlarmTopic }),
  },
});

You can override the default topic for any alarm like this:

monitoring
  .monitorSomething(something, {
    addSomeAlarm: {
      Warning: {
        // ...other props
        threshold: 42,
        actionOverride: new SnsAlarmActionStrategy({ onAlarmTopic }),
      }
    }
  });

Custom metrics

For simply adding some custom metrics, you can use .monitorCustom() and specify your own title and metric groups. Each metric group will be rendered as a single graph widget, and all widgets will be placed next to each other. All the widgets will have the same size, which is chosen based on the number of groups to maximize dashboard space usage.

Custom metric monitoring can be created for simple metrics, simple metrics with anomaly detection, and search metrics. The first two also support alarming.

Below are a couple of examples. Let us assume there are three existing metric variables: m1, m2, m3. They can be created either by hand (new Metric({...})) or (preferably) by using the metricFactory (which can be obtained from the facade). The advantage of using the shared metricFactory is that you do not need to worry about period and other defaults.

// Option 1: create metrics manually
const m1 = new Metric(/* ... */);

// Option 2 (preferred): create metrics using the metric factory
const metricFactory = monitoringFacade.createMetricFactory();
const m2 = metricFactory.createMetric(/* ... */);
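
Example: simple metrics

Plain metrics can be listed directly in a group. A minimal sketch that mirrors the shape of the examples below (no alarms, no anomaly detection):

monitorCustom({
  title: "Simple metrics",
  metrics: [m1, m2]
})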

Example: metric with anomaly detection

In this case, only one metric is supported. Multiple metrics cannot be rendered with anomaly detection in a single widget due to a CloudWatch limitation.

monitorCustom({
  title: "Metric with anomaly detection",
  metrics: [
    {
      metric: m1,
      anomalyDetectionStandardDeviationToRender: 3
    }
  ]
})

Adding an alarm:

monitorCustom({
  title: "Metric with anomaly detection and alarm",
  metrics: [
    {
      metric: m1,
      alarmFriendlyName: "MetricWithAnomalyDetectionAlarm",
      anomalyDetectionStandardDeviationToRender: 3,
      addAlarmOnAnomaly: {
        Warning: {
          standardDeviationForAlarm: 4,
          alarmWhenAboveTheBand: true,
          alarmWhenBelowTheBand: true
        }
      }
    }
  ]
})

Example: search metrics

monitorCustom({
  title: "Metric search",
  metrics: [
    {
      searchQuery: "My.Prefix.",
      dimensionsMap: {
        FirstDimension: "FirstDimensionValue",
        // Allow any value for the given dimension (pardon the weird typing to satisfy DimensionsMap)
        SecondDimension: undefined as unknown as string
      },
      statistic: MetricStatistic.SUM,
    }
  ]
})

Search metrics do not support setting an alarm, which is a CloudWatch limitation.

Route53 Health Checks

Route53 has strict requirements as to which alarms are allowed to be referenced in Health Checks. You can adjust the metric for an alarm so that it can be used in a Route53 Health Check as follows:

monitoring
  .monitorSomething(something, {
    addSomeAlarm: {
      Warning: {
        // ...other props
        metricAdjuster: Route53HealthCheckMetricAdjuster.INSTANCE,
      }
    }
  });

This will ensure the alarm can be used in a Route53 Health Check, or otherwise throw an Error indicating why the alarm can't be used. To easily find your Route53 Health Check alarms later on, you can apply a custom tag to them as follows:

import { CfnHealthCheck } from "aws-cdk-lib/aws-route53";

monitoring
  .monitorSomething(something, {
    addSomeAlarm: {
      Warning: {
        // ...other props
        customTags: ["route53-health-check"],
        metricAdjuster: Route53HealthCheckMetricAdjuster.INSTANCE,
      }
    }
  });

const alarms = monitoring.createdAlarmsWithTag("route53-health-check");

const healthChecks = alarms.map(({ alarm }) => {
  const id = getHealthCheckConstructId(alarm);

  return new CfnHealthCheck(scope, id, {
    healthCheckConfig: {
      // ...other props
      type: "CLOUDWATCH_METRIC",
      alarmIdentifier: {
        name: alarm.alarmName,
        region: alarm.stack.region,
      },
    },
  });
});

Custom monitoring segments

If you want even more flexibility, you can create your own segment.

This is a general procedure on how to do it:

  1. Extend the Monitoring class
  2. Override the widgets() method (and/or similar ones)
  3. Leverage the metric factory and alarm factory provided by the base class (you can create additional factories, if you will)
  4. Register all alarms via .addAlarm() so they are visible to the user and placed on the alarm summary dashboard

Monitoring subclasses are dashboard segments, so you can add them to your monitoring by calling .addSegment() on the MonitoringFacade.
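
A minimal sketch of such a segment follows. The class name and widget title are made up, and it assumes the Monitoring base class takes a MonitoringScope and exposes createMetricFactory(), createAlarmFactory(), and addAlarm() as the steps above describe; check the exact signatures for your version.

import { GraphWidget, IMetric, IWidget } from "aws-cdk-lib/aws-cloudwatch";
import { Monitoring, MonitoringScope } from "cdk-monitoring-constructs";

export class MyAppMonitoring extends Monitoring {
  private readonly metrics: IMetric[];

  constructor(scope: MonitoringScope) {
    super(scope, {});

    // metric factory provided by the base class (step 3)
    const metricFactory = this.createMetricFactory();
    this.metrics = [metricFactory.createMetric(/* ... */)];

    // const alarmFactory = this.createAlarmFactory("MyApp");
    // create alarms with the alarm factory and register each one with
    // this.addAlarm(...) so it shows up on the alarm summary dashboard (step 4)
  }

  widgets(): IWidget[] {
    return [new GraphWidget({ title: "MyApp", left: this.metrics })];
  }
}

// add the segment to the facade
monitoring.addSegment(new MyAppMonitoring(monitoring));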

Modifying or omitting widgets from default dashboard segments

While the dashboard widgets defined in the library are meant to cover most use cases, they might not be what you're looking for.

To modify the widgets:

  1. Extend the appropriate Monitoring class (e.g., LambdaFunctionMonitoring for monitorLambdaFunction) and override the relevant methods (e.g., widgets):

    export class MyCustomizedLambdaFunctionMonitoring extends LambdaFunctionMonitoring {
      widgets(): IWidget[] {
        return [
          // Whatever widgets you want instead of what LambdaFunctionMonitoring has
        ];
      }
    }
    
  2. Use the facade's addSegment method with your custom class:

    declare const facade: MonitoringFacade;
    
    facade.addSegment(new MyCustomizedLambdaFunctionMonitoring(facade, {
      // Props for LambdaFunctionMonitoring
    }));
    

Custom dashboards

If you want even more flexibility, you can take complete control over dashboard generation by leveraging dynamic dashboarding features. This allows you to create an arbitrary number of dashboards while configuring each of them separately. You can do this in three simple steps:

  1. Create a dynamic dashboard factory
  2. Create IDynamicDashboardSegment implementations
  3. Add Dynamic Segments to your MonitoringFacade

Create a dynamic dashboard factory

The below code sample will generate two dashboards with the following names:

  • ExampleDashboards-HostedService
  • ExampleDashboards-Infrastructure

// create the dynamic dashboard factory.
const factory = new DynamicDashboardFactory(stack, "DynamicDashboards", {
  dashboardNamePrefix: "ExampleDashboards",
  dashboardConfigs: [
    // 'name' is the minimum required configuration
    { name: "HostedService" },
    // below is an example of additional dashboard-specific config options
    {
      name: "Infrastructure",
      range: Duration.hours(3),
      periodOverride: PeriodOverride.AUTO,
      renderingPreference: DashboardRenderingPreference.BITMAP_ONLY
    },
  ],
});

Create IDynamicDashboardSegment implementations

For each construct you want monitored, you will need to create an implementation of an IDynamicDashboardSegment. The following is a basic reference implementation as an example:

export enum DashboardTypes {
  HostedService = "HostedService",
  Infrastructure = "Infrastructure",
}

class ExampleSegment implements IDynamicDashboardSegment {
  widgetsForDashboard(name: string): IWidget[] {
    // this logic is what's responsible for allowing your dynamic segment to return
    // different widgets for different dashboards
    switch (name) {
      case DashboardTypes.HostedService:
        return [new TextWidget({ markdown: "This shows metrics for your service hosted on AWS Infrastructure" })];
      case DashboardTypes.Infrastructure:
        return [new TextWidget({ markdown: "This shows metrics for the AWS Infrastructure supporting your hosted service" })];
      default:
        throw new Error("Unexpected dashboard name!");
    }
  }
}

Add Dynamic Segments to MonitoringFacade

When you have instances of an IDynamicDashboardSegment to use, they can be added to your dashboard like this:

monitoring.addDynamicSegment(new ExampleSegment());

Now, this widget will be added to both dashboards and will show different content depending on the dashboard. Using the above example code, two dashboards will be generated with the following content:

  • Dashboard Name: "ExampleDashboards-HostedService"

    • Content: "This shows metrics for your service hosted on AWS Infrastructure"
  • Dashboard Name: "ExampleDashboards-Infrastructure"

    • Content: "This shows metrics for the AWS Infrastructure supporting your hosted service"

Monitoring scopes

You can monitor complete CDK construct scopes using an aspect. It will automatically discover all monitorable resources within the scope recursively and add them to your dashboard.

monitoring.monitorScope(stack, {
  // With optional configuration
  lambda: {
    props: {
      addLatencyP50Alarm: {
        Critical: { maxLatency: Duration.seconds(10) },
      },
    },
  },

  // Some resources that aren't dependent on nodes (e.g. general metrics across instances/account) may be included
  // by default, which can be explicitly disabled.
  billing: { enabled: false },
  ec2: { enabled: false },
  elasticCache: { enabled: false },
});

Contributing

See CONTRIBUTING for more information.

Security policy

See SECURITY for more information.

License

This project is licensed under the Apache-2.0 License.

# Packages

Package jsii contains the functionality needed for jsii packages to initialize their dependencies and themselves.

# Functions

Checks if `x` is a construct.
Returns true if the construct was created by CDK, and false otherwise.
Check whether the given construct is a Resource.
Creates a new construct node.
Create a dashboard segment representing a single widget.
Deprecated: Use MathExpression from aws-cdk-lib/aws-cloudwatch instead.
(The remaining generated functions are marked Experimental or have no description.)

# Constants

trigger only if all the alarms are triggered.
trigger if any of the alarms is triggered.
Create standard set of dashboards with bitmap widgets only.
Create two sets of dashboards: a standard set (interactive) and a copy (bitmap).
Create standard set of dashboards with interactive widgets only.
average of all datapoints.
maximum of all datapoints.
minimum of all datapoints.
number of datapoints.
100th percentile of all datapoints.
50th percentile of all datapoints.
70th percentile of all datapoints.
90th percentile of all datapoints.
95th percentile of all datapoints.
99th percentile of all datapoints.
99.9th percentile of all datapoints.
99.99th percentile of all datapoints.
sum of all datapoints.
trimmed mean; calculates the average after removing the 50% of data points with the highest values.
trimmed mean; calculates the average after removing the 30% of data points with the highest values.
trimmed mean; calculates the average after removing the 30% lowest data points and the 30% highest data points.
trimmed mean; calculates the average after removing the 25% lowest data points and the 25% highest data points.
trimmed mean; calculates the average after removing the 20% lowest data points and the 20% highest data points.
trimmed mean; calculates the average after removing the 15% lowest data points and the 15% highest data points.
trimmed mean; calculates the average after removing the 10% of data points with the highest values.
trimmed mean; calculates the average after removing the 10% lowest data points and the 10% highest data points.
trimmed mean; calculates the average after removing the 5% of data points with the highest values.
trimmed mean; calculates the average after removing the 5% lowest data points and the 5% highest data points.
trimmed mean; calculates the average after removing the 95% lowest data points.
trimmed mean; calculates the average after removing the 1% of data points with the highest values.
trimmed mean; calculates the average after removing the 1% lowest data points and the 1% highest data points.
trimmed mean; calculates the average after removing the 99% lowest data points.
trimmed mean; calculates the average after removing the 0.1% of data points with the highest values.
trimmed mean; calculates the average after removing the 99.9% lowest data points.
trimmed mean; calculates the average after removing the 0.01% of data points with the highest values.
trimmed mean; calculates the average after removing the 99.99% lowest data points.
winsorized mean; calculates the average while treating the 50% of the highest values to be equal to the value at the 50th percentile.
winsorized mean; calculates the average while treating the 30% of the highest values to be equal to the value at the 70th percentile.
winsorized mean; calculates the average while treating the highest 30% of data points to be the value of the 70% boundary, and treating the lowest 30% of data points to be the value of the 30% boundary.
winsorized mean; calculates the average while treating the highest 25% of data points to be the value of the 75% boundary, and treating the lowest 25% of data points to be the value of the 25% boundary.
winsorized mean; calculates the average while treating the highest 20% of data points to be the value of the 80% boundary, and treating the lowest 20% of data points to be the value of the 20% boundary.
winsorized mean; calculates the average while treating the highest 15% of data points to be the value of the 85% boundary, and treating the lowest 15% of data points to be the value of the 15% boundary.
winsorized mean; calculates the average while treating the 10% of the highest values to be equal to the value at the 90th percentile.
winsorized mean; calculates the average while treating the highest 10% of data points to be the value of the 90% boundary, and treating the lowest 10% of data points to be the value of the 10% boundary.
winsorized mean; calculates the average while treating the 5% of the highest values to be equal to the value at the 95th percentile.
winsorized mean; calculates the average while treating the highest 5% of data points to be the value of the 95% boundary, and treating the lowest 5% of data points to be the value of the 5% boundary.
winsorized mean; calculates the average while treating the 1% of the highest values to be equal to the value at the 99th percentile.
winsorized mean; calculates the average while treating the highest 1% of data points to be the value of the 99% boundary, and treating the lowest 1% of data points to be the value of the 1% boundary.
winsorized mean; calculates the average while treating the 0.1% of the highest values to be equal to the value at the 99.9th percentile.
winsorized mean; calculates the average while treating the 0.01% of the highest values to be equal to the value at the 99.99th percentile.
Number of occurrences relative to requests.
Number of occurrences per day.
Number of occurrences per hour.
Number of occurrences per minute.
Number of occurrences per second.
(All remaining constants are marked Experimental.)

# Structs

Properties necessary to create a single alarm and configure it.
Properties necessary to create a composite alarm and configure it.
Properties necessary to append actions to an alarm.
Metadata of an alarm.
Representation of an alarm with additional information.
Props to create ApplicationLoadBalancerMetricFactory.
Base of Monitoring props for load-balancer metric factories.
Base class for properties passed to each monitoring construct.
Props to create BaseServiceMetricFactory.
Monitoring props for CodeBuild projects.
Common customization that can be attached to each alarm.
Custom metric group represents a single widget.
Custom metric search.
Custom metric with an alarm defined.
Custom metric with anomaly detection.
Properties of a custom widget.
Monitoring props for EC2 service with application load balancer and plain service.
Monitoring props for EC2 service with network load balancer and plain service.
Monitoring props for load-balanced EC2 service.
Monitoring props for Fargate service with application load balancer and plain service.
Monitoring props for Fargate service with network load balancer and plain service.
Monitoring props for load-balanced Fargate service.
These are the globals used for each metric, unless there is some kind of override.
Props to create NetworkLoadBalancerMetricFactory.
Monitoring props for Secrets Manager secrets.
Monitoring props for Simple EC2 service.
Monitoring props for Simple Fargate service.
Custom wrapper class for MathExpressionProps that supports account and region customization.
(All remaining structs are marked Experimental.)

# Interfaces

Wrapper of Alarm Status Widget which auto-calculates height based on the number of alarms.
Captures specific MathExpression for anomaly detection, for which alarm generation is different.
Metric factory to create metrics for application load-balanced service.
Metric factory for a base service.
Specific subtype of dashboard that renders supported widgets as bitmaps, while preserving the overall layout.
Support for rendering bitmap widgets on the server side.
To get the CloudFront metrics from the CloudWatch API, you must use the US East (N. Virginia) Region.
Allows applying a collection of {@link IMetricAdjuster} to a metric.
Custom monitoring is a construct allowing you to monitor your own custom metrics.
A dashboard widget that can be customized using a Lambda.
Composite dashboard which keeps a normal dashboard with its bitmap copy.
Default annotation strategy that returns the built-in alarm annotation.
Applies the default metric adjustments.
Default dedupe strategy - does not add any prefix or suffix.
Line graph widget with both left and right axes.
See: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/CacheMetrics.html
Dedupe string processor that adds prefix and/or suffix to the dedupe string.
Annotation strategy that fills the annotation provided, using the input and user requirements.
An object that appends actions to alarms.
Helper class for creating annotations for alarms.
Strategy used to finalize dedupe string.
Strategy used to name alarms, their widgets, and their dedupe strings.
This dashboard factory interface provides for dynamic dashboard generation through IDynamicDashboard segments which will return different content depending on the dashboard type.
Common interface for load-balancer based service metric factories.
Adjusts a metric before an alarm is created from it.
Strategy for creating widgets.
See: https://docs.aws.amazon.com/kinesisanalytics/latest/java/metrics-dimensions.html
See: https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-cloudwatch.html
See: https://docs.aws.amazon.com/firehose/latest/dev/monitoring-with-cloudwatch-metrics.html
Monitors a CloudWatch log group for various patterns.
An independent unit of monitoring.
A CDK aspect that adds support for monitoring all resources within scope.
An implementation of a {@link MonitoringScope}.
Utility class to unify approach to naming monitoring sections.
A scope where all monitored constructs are managed from (i.e., alarms, dashboards, etc.).
Alarm action strategy that combines multiple actions in the same order as they were given.
Metric factory to create metrics for network load-balanced service.
Alarm action strategy that does not add any actions.
Backported set of metric functions added in @aws-cdk/[email protected].
Alarm action strategy that creates an AWS OpsCenter OpsItem.
Adjusts a metric so that alarms created from it can be used in Route53 Health Checks.
Line graph widget with one axis only (left).
Alarm action strategy that sends a notification to the specified SNS topic.
Monitoring for CloudWatch Synthetics Canaries.
See: https://docs.aws.amazon.com/waf/latest/developerguide/monitoring-cloudwatch.html
Monitoring for AWS Web Application Firewall.
Custom wrapper class for MathExpression that supports account and region specification.
(All remaining interfaces are marked Experimental.)

# Type aliases

Preferred way of rendering dashboard widgets.
Level of a given log.
Metric aggregation statistic to be used with the IMetric objects.
Enumeration of different rate computation methods.
(All remaining type aliases are marked Experimental.)