Amazon Kinesis Data Firehose Developer Guide
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What Is Amazon Kinesis Data Firehose? ................................................................................................ 1
Key Concepts ............................................................................................................................. 1
Data Flow ................................................................................................................................. 2
Setting Up ........................................................................................................................................ 4
Sign Up for AWS ........................................................................................................................ 4
Optional: Download Libraries and Tools ........................................................................................ 4
Creating a Kinesis Data Firehose Delivery Stream ................................................................................... 5
Source, Destination, and Name .................................................................................................... 5
Record Transformation and Record Format Conversion .................................................................... 6
Destination Settings ................................................................................................................... 6
Choose Amazon S3 for Your Destination ............................................................................... 7
Choose Amazon Redshift for Your Destination ....................................................................... 9
Choose OpenSearch Service for Your Destination ................................................................. 10
Choose HTTP Endpoint for Your Destination ........................................................................ 11
Choose Datadog for Your Destination ................................................................................. 12
Choose Dynatrace for Your Destination ............................................................................... 13
Choose LogicMonitor for Your Destination ........................................................................... 14
Choose MongoDB Cloud for Your Destination ...................................................................... 15
Choose New Relic for Your Destination ............................................................................... 16
Choose Splunk for Your Destination ................................................................................... 17
Choose Sumo Logic for Your Destination ............................................................................. 18
Backup and Advanced Settings .................................................................................................. 19
Backup Settings ............................................................................................................... 19
Advanced Settings ............................................................................................................ 20
Testing Your Delivery Stream ............................................................................................................. 21
Prerequisites ............................................................................................................................ 21
Test Using Amazon S3 as the Destination .................................................................................... 21
Test Using Amazon Redshift as the Destination ............................................................................ 21
Test Using OpenSearch Service as the Destination ........................................................................ 22
Test Using Splunk as the Destination .......................................................................................... 22
Sending Data to a Kinesis Data Firehose Delivery Stream ...................................................................... 24
Writing Using Kinesis Data Streams ............................................................................................ 24
Writing Using the Kinesis Data Firehose Agent ............................................................................. 25
Prerequisites .................................................................................................................... 26
Credentials ...................................................................................................................... 26
Custom Credential Providers .............................................................................................. 26
Download and Install the Agent ......................................................................................... 27
Configure and Start the Agent ........................................................................................... 28
Agent Configuration Settings ............................................................................................. 29
Monitor Multiple File Directories and Write to Multiple Streams .............................................. 31
Use the agent to Preprocess Data ...................................................................................... 32
agent CLI Commands ........................................................................................................ 35
Writing Using the AWS SDK ....................................................................................................... 35
Single Write Operations Using PutRecord ............................................................................ 36
Batch Write Operations Using PutRecordBatch ..................................................................... 36
Writing Using CloudWatch Logs ................................................................................................. 36
Writing Using CloudWatch Events ............................................................................................... 37
Writing Using AWS IoT .............................................................................................................. 37
Security ........................................................................................................................................... 38
Data Protection ........................................................................................................................ 38
Server-Side Encryption with Kinesis Data Streams as the Data Source ...................................... 38
Server-Side Encryption with Direct PUT or Other Data Sources ............................................... 39
Controlling Access .................................................................................................................... 39
Grant Your Application Access to Your Kinesis Data Firehose Resources .................................... 40
Delivery Across AWS Accounts and Across AWS Regions for HTTP Endpoint Destinations .................... 77
Duplicated Records ................................................................................................................... 78
Monitoring ....................................................................................................................................... 79
Monitoring with CloudWatch Metrics .......................................................................................... 79
Dynamic Partitioning CloudWatch Metrics ........................................................................... 80
Data Delivery CloudWatch Metrics ...................................................................................... 80
Data Ingestion Metrics ...................................................................................................... 85
API-Level CloudWatch Metrics ............................................................................................ 88
Data Transformation CloudWatch Metrics ............................................................................ 90
Format Conversion CloudWatch Metrics .............................................................................. 90
Server-Side Encryption (SSE) CloudWatch Metrics ................................................................. 91
Dimensions for Kinesis Data Firehose .................................................................................. 91
Kinesis Data Firehose Usage Metrics ................................................................................... 91
Accessing CloudWatch Metrics for Kinesis Data Firehose ........................................................ 92
Best Practices with CloudWatch Alarms ............................................................................... 93
Monitoring with CloudWatch Logs ...................................................................................... 93
Monitoring Agent Health ................................................................................................... 99
Logging Kinesis Data Firehose API Calls with AWS CloudTrail ................................................ 100
Custom Amazon S3 Prefixes ............................................................................................................ 105
The timestamp namespace ..................................................................................................... 105
The firehose namespace ....................................................................................................... 105
partitionKeyFromLambda and partitionKeyFromQuery namespaces ................................... 106
Semantic rules ....................................................................................................................... 106
Example prefixes .................................................................................................................... 107
Using Kinesis Data Firehose with AWS PrivateLink .............................................................................. 109
Interface VPC endpoints (AWS PrivateLink) for Kinesis Data Firehose ............................................. 109
Using interface VPC endpoints (AWS PrivateLink) for Kinesis Data Firehose .................................... 109
Availability ............................................................................................................................. 111
Tagging Your Delivery Streams ......................................................................................................... 113
Tag Basics .............................................................................................................................. 113
Tracking Costs Using Tagging ................................................................................................... 113
Tag Restrictions ...................................................................................................................... 114
Tagging Delivery Streams Using the Amazon Kinesis Data Firehose API .......................................... 114
Tutorial: Sending VPC Flow Logs to Splunk ........................................................................................ 115
Step 1: Send Log Data to CloudWatch ...................................................................................... 116
Step 2: Create the Delivery Stream ........................................................................................... 118
Step 3: Send Data to the Delivery Stream ................................................................................. 121
Step 4: Check the Results ........................................................................................................ 122
Troubleshooting ............................................................................................................................. 123
Data Not Delivered to Amazon S3 ............................................................................................ 123
Data Not Delivered to Amazon Redshift .................................................................................... 124
Data Not Delivered to Amazon OpenSearch Service .................................................................... 125
Data Not Delivered to Splunk .................................................................................................. 125
Delivery Stream Not Available as a Target for CloudWatch Logs, CloudWatch Events, or AWS IoT Action ......................................................................... 126
Data Freshness Metric Increasing or Not Emitted ........................................................................ 126
Record Format Conversion to Apache Parquet Fails ..................................................................... 127
No Data at Destination Despite Good Metrics ............................................................................. 128
Troubleshooting HTTP Endpoints .............................................................................................. 128
CloudWatch Logs ............................................................................................................ 128
Quota ........................................................................................................................................... 131
Appendix - HTTP Endpoint Delivery Request and Response Specifications .............................................. 133
Request Format ...................................................................................................................... 133
Response Format .................................................................................................................... 136
Examples ............................................................................................................................... 137
Document History .......................................................................................................................... 139
AWS glossary ................................................................................................................................. 141
What Is Amazon Kinesis Data Firehose?
For more information about AWS big data solutions, see Big Data on AWS. For more information about
AWS streaming data solutions, see What is Streaming Data?
Note
See the latest AWS Streaming Data Solution for Amazon MSK, which provides AWS CloudFormation templates where data flows through producers, streaming storage, consumers, and destinations.
Key Concepts
As you get started with Kinesis Data Firehose, you can benefit from understanding the following
concepts:
Kinesis Data Firehose delivery stream
The underlying entity of Kinesis Data Firehose. You use Kinesis Data Firehose by creating a Kinesis
Data Firehose delivery stream and then sending data to it. For more information, see Creating an
Amazon Kinesis Data Firehose Delivery Stream (p. 5) and Sending Data to an Amazon Kinesis
Data Firehose Delivery Stream (p. 24).
record
The data of interest that your data producer sends to a Kinesis Data Firehose delivery stream. A
record can be as large as 1,000 KB.
data producer
Producers send records to Kinesis Data Firehose delivery streams. For example, a web server that
sends log data to a delivery stream is a data producer. You can also configure your Kinesis Data
Firehose delivery stream to automatically read data from an existing Kinesis data stream, and load
it into destinations. For more information, see Sending Data to an Amazon Kinesis Data Firehose
Delivery Stream (p. 24).
buffer size and buffer interval
Kinesis Data Firehose buffers incoming streaming data to a certain size or for a certain period of time
before delivering it to destinations. Buffer Size is in MBs and Buffer Interval is in seconds.
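The buffer size and buffer interval map to the BufferingHints setting of the CreateDeliveryStream API. The following is a minimal sketch using the AWS SDK for Python (Boto3); the stream name, role ARN, and bucket ARN are placeholders, not values from this guide.

import boto3

firehose = boto3.client("firehose")

# Create a delivery stream that buffers up to 5 MiB or 300 seconds,
# whichever is reached first, before delivering to Amazon S3.
firehose.create_delivery_stream(
    DeliveryStreamName="example-stream",  # placeholder name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/example-firehose-role",  # placeholder
        "BucketARN": "arn:aws:s3:::example-bucket",                          # placeholder
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    },
)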
Data Flow
For Amazon S3 destinations, streaming data is delivered to your S3 bucket. If data transformation is
enabled, you can optionally back up source data to another Amazon S3 bucket.
For Amazon Redshift destinations, streaming data is delivered to your S3 bucket first. Kinesis Data
Firehose then issues an Amazon Redshift COPY command to load data from your S3 bucket to your
Amazon Redshift cluster. If data transformation is enabled, you can optionally back up source data to
another Amazon S3 bucket.
For OpenSearch Service destinations, streaming data is delivered to your OpenSearch Service cluster, and
it can optionally be backed up to your S3 bucket concurrently.
For Splunk destinations, streaming data is delivered to Splunk, and it can optionally be backed up to your
S3 bucket concurrently.
Setting Up
Tasks
• Sign Up for AWS (p. 4)
• Optional: Download Libraries and Tools (p. 4)
Sign Up for AWS
If you have an AWS account already, skip to the next task. If you don't have an AWS account, use the
following procedure to create one.
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
Optional: Download Libraries and Tools
The following libraries and tools help you work with Kinesis Data Firehose programmatically and from the command line:
• The Amazon Kinesis Data Firehose API Reference describes the basic set of operations that Kinesis Data Firehose supports.
• The AWS SDKs for Go, Java, .NET, Node.js, Python, and Ruby include Kinesis Data Firehose support and
samples.
If your version of the AWS SDK for Java does not include samples for Kinesis Data Firehose, you can
also download the latest AWS SDK from GitHub.
• The AWS Command Line Interface supports Kinesis Data Firehose. The AWS CLI enables you to control
multiple AWS services from the command line and automate them through scripts.
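As a quick check that the SDK is set up, the following sketch uses the AWS SDK for Python (Boto3) to list delivery streams and read the status of one of them. The stream name is a placeholder.

import boto3

firehose = boto3.client("firehose")

# List up to 10 delivery streams in the current account and Region.
response = firehose.list_delivery_streams(Limit=10)
print(response["DeliveryStreamNames"])

# Describe one delivery stream (placeholder name) and print its status.
details = firehose.describe_delivery_stream(DeliveryStreamName="example-stream")
print(details["DeliveryStreamDescription"]["DeliveryStreamStatus"])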
Creating a Kinesis Data Firehose Delivery Stream
You can update the configuration of your delivery stream at any time after it’s created, using the Kinesis
Data Firehose console or UpdateDestination. Your Kinesis Data Firehose delivery stream remains in the
ACTIVE state while your configuration is updated, and you can continue to send data. The updated
configuration normally takes effect within a few minutes. The version number of a Kinesis Data Firehose
delivery stream is increased by a value of 1 after you update the configuration. It is reflected in the
delivered Amazon S3 object name. For more information, see Amazon S3 Object Name Format (p. 76).
The following topics describe how to create a Kinesis Data Firehose delivery stream:
Topics
• Source, Destination, and Name (p. 5)
• Record Transformation and Record Format Conversion (p. 6)
• Destination Settings (p. 6)
• Backup and Advanced Settings (p. 19)
Source, Destination, and Name
Source
• Direct PUT or other sources: Choose this option to create a Kinesis Data Firehose delivery
stream that producer applications write to directly.
• Kinesis stream: Choose this option to configure a Kinesis Data Firehose delivery stream
that uses a Kinesis data stream as a data source. You can then use Kinesis Data Firehose to
read data easily from an existing Kinesis data stream and load it into destinations. For more
information about using Kinesis Data Streams as your data source, see Writing to Amazon
Kinesis Data Firehose Using Kinesis Data Streams.
Delivery stream destination
The destination of your Kinesis Data Firehose delivery stream. Kinesis Data Firehose can send
data records to various destinations, including Amazon Simple Storage Service (Amazon S3),
Amazon Redshift, Amazon OpenSearch Service, and any HTTP endpoint that is owned by you or
any of your third-party service providers. The following are the supported destinations:
• Amazon OpenSearch Service
• Amazon S3
• Datadog
• Dynatrace
• HTTP Endpoint
• LogicMonitor
• MongoDB Cloud
• New Relic
• Splunk
• Sumo Logic
Delivery stream name
The name of your Kinesis Data Firehose delivery stream.
Record Transformation and Record Format Conversion
1. In the Transform source records with AWS Lambda section, provide values for the following field:
Data transformation
To create a Kinesis Data Firehose delivery stream that doesn't transform incoming data, choose
Disabled.
To specify a Lambda function for Kinesis Data Firehose to invoke and use to transform incoming
data before delivering it, choose Enabled. You can configure a new Lambda function using one
of the Lambda blueprints or choose an existing Lambda function. Your Lambda function must
contain the status model that is required by Kinesis Data Firehose. For more information, see Amazon Kinesis Data Firehose Data Transformation (p. 58). A minimal handler sketch appears after these steps.
2. In the Convert record format section, provide values for the following field:
Record format conversion
To create a Kinesis Data Firehose delivery stream that doesn't convert the format of the
incoming data records, choose Disabled.
To convert the format of the incoming records, choose Enabled, then specify the output format
you want. You need to specify an AWS Glue table that holds the schema that you want Kinesis
Data Firehose to use to convert your record format. For more information, see Record Format
Conversion (p. 68).
For an example of how to set up record format conversion with AWS CloudFormation, see
AWS::KinesisFirehose::DeliveryStream.
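The following is a minimal sketch of a transformation Lambda function written in Python. It shows the output model that Kinesis Data Firehose expects from a transformation function: each returned record carries the original recordId, a result of Ok, Dropped, or ProcessingFailed, and base64-encoded data. The transformation itself (uppercasing the payload) is only a placeholder.

import base64

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        # Incoming record data is base64 encoded.
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()  # placeholder transformation
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}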
Destination Settings
This topic describes the destination settings for your delivery stream.
Topics
• Choose Amazon S3 for Your Destination (p. 7)
• Choose Amazon Redshift for Your Destination (p. 9)
• Choose OpenSearch Service for Your Destination (p. 10)
Choose Amazon S3 for Your Destination
S3 bucket
Choose an S3 bucket that you own where the streaming data should be delivered. You can
create a new S3 bucket or choose an existing one.
S3 bucket prefix - optional
If you don't enable dynamic partitioning, this is an optional field. If you choose to enable
dynamic partitioning, you must specify an S3 error bucket prefix for Kinesis Data Firehose
to use when delivering data to Amazon S3 in error conditions. If Kinesis Data Firehose fails
to dynamically partition your incoming data, those data records are delivered to this S3 error
bucket prefix. For more information, see Amazon S3 Object Name Format (p. 76) and Custom
Amazon S3 Prefixes (p. 105)
Dynamic partitioning
Choose Enabled to enable and configure dynamic partitioning.
Multi record deaggregation
This is the process of parsing through the records in the delivery stream and separating them based either on valid JSON or on the specified new line delimiter.
If you aggregate multiple events, logs, or records into a single PutRecord and PutRecordBatch
API call, you can still enable and configure dynamic partitioning. With aggregated data, when
you enable dynamic partitioning, Kinesis Data Firehose parses the records and looks for multiple
valid JSON objects within each API call. When the delivery stream is configured with Kinesis
Data Stream as a source, you can also use the built-in aggregation in the Kinesis Producer
Library (KPL). Data partition functionality is executed after data is de-aggregated. Therefore,
each record in each API call can be delivered to different Amazon S3 prefixes. You can also
leverage the Lambda function integration to perform any other de-aggregation or any other
transformation before the data partitioning functionality.
Important
If your data is aggregated, dynamic partitioning can be applied only after data
deaggregation is performed. So if you enable dynamic partitioning for your aggregated
data, you must choose Enabled to enable multi record deaggregation.
A Kinesis Data Firehose delivery stream performs the following processing steps in the following
order: KPL (protobuf) de-aggregation, JSON or delimiter de-aggregation, Lambda processing,
data partitioning, data format conversion, and Amazon S3 delivery.
If you enabled multi record deaggregation, you must specify the method for Kinesis Data
Firehose to deaggregate your data. Use the drop-down menu to choose either JSON or
Delimited.
New line delimiter
When you enable dynamic partitioning, you can configure your delivery stream to add a new
line delimiter between records in objects that are delivered to Amazon S3. To do so, choose
Enabled. To not add a new line delimiter between records in objects that are delivered to
Amazon S3, choose Disabled.
Inline parsing
This is one of the supported mechanisms to dynamically partition your data that is bound
for Amazon S3. To use inline parsing for dynamic partitioning of your data, you must specify
data record parameters to be used as partitioning keys and provide a value for each specified
partitioning key. Choose Enabled to enable and configure inline parsing.
Important
If you specified an AWS Lambda function in the steps above for transforming your
source records, you can use this function to dynamically partition your data that is bound for Amazon S3, and you can still create your partitioning keys with inline parsing. With
dynamic partitioning, you can use either inline parsing or your AWS Lambda function to
create your partitioning keys. Or you can use both inline parsing and your AWS Lambda
function at the same time to create your partitioning keys.
Dynamic partitioning keys
You can use the Key and Value fields to specify the data record parameters to be used as
dynamic partitioning keys and jq queries to generate dynamic partitioning key values. Kinesis
Data Firehose supports jq 1.6 only. You can specify up to 50 dynamic partitioning keys. You
must enter valid jq expressions for your dynamic partitioning key values in order to successfully
configure dynamic partitioning for your delivery stream.
S3 bucket prefix
When you enable and configure dynamic partitioning, you must specify the S3 bucket prefixes
to which Kinesis Data Firehose is to deliver partitioned data.
In order for dynamic partitioning to be configured correctly, the number of the S3 bucket
prefixes must be identical to the number of the specified partitioning keys.
You can partition your source data with inline parsing or with your specified AWS Lambda
function. If you specified an AWS Lambda function to create partitioning keys for your source
data, you must manually type in the S3 bucket prefix value(s) using the following format:
"partitionKeyFromLambda:keyID". If you are using inline parsing to specify the partitioning
keys for your source data, you can either manually type in the S3 bucket prefix values using
the following format: "partitionKeyFromQuery:keyID" or you can choose the Apply dynamic
partitioning keys button to use your dynamic partitioning key/value pairs to auto-generate
your S3 bucket prefixes. While partitioning your data with either inline parsing or AWS Lambda,
you can also use the following expression forms in your S3 bucket prefix: !{namespace:value},
where namespace can be either partitionKeyFromQuery or partitionKeyFromLambda. A configuration sketch showing these settings appears at the end of this Amazon S3 section.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
S3 compression and encryption
Kinesis Data Firehose supports Amazon S3 server-side encryption with AWS Key Management
Service (AWS KMS) for encrypting delivered data in Amazon S3. You can choose to not encrypt
the data or to encrypt with a key from the list of AWS KMS keys that you own. For more
information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys
(SSE-KMS).
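The inline-parsing settings described above (jq partitioning keys, the partitionKeyFromQuery namespace in the S3 bucket prefix, and the S3 error bucket prefix) correspond to the dynamic partitioning and metadata-extraction parameters of the CreateDeliveryStream API. The following Boto3 sketch is illustrative only, under the assumption that each record is a JSON object with a customer_id field; the stream name, ARNs, prefixes, and key name are placeholders.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="example-partitioned-stream",  # placeholder
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/example-firehose-role",  # placeholder
        "BucketARN": "arn:aws:s3:::example-bucket",                          # placeholder
        "DynamicPartitioningConfiguration": {"Enabled": True},
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                # Inline parsing: extract customer_id from each JSON record with jq 1.6.
                "Type": "MetadataExtraction",
                "Parameters": [
                    {"ParameterName": "MetadataExtractionQuery",
                     "ParameterValue": "{customer_id: .customer_id}"},
                    {"ParameterName": "JsonParsingEngine", "ParameterValue": "JQ-1.6"},
                ],
            }],
        },
        # One prefix expression per partitioning key, plus the required error prefix.
        "Prefix": "data/customer_id=!{partitionKeyFromQuery:customer_id}/",
        "ErrorOutputPrefix": "errors/",
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
    },
)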
Choose Amazon Redshift for Your Destination
Cluster
The Amazon Redshift cluster to which S3 bucket data is copied. Configure the Amazon Redshift
cluster to be publicly accessible and unblock Kinesis Data Firehose IP addresses. For more
information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination
(p. 43).
User name
An Amazon Redshift user with permissions to access the Amazon Redshift cluster. This user
must have the Amazon Redshift INSERT permission for copying data from the S3 bucket to the
Amazon Redshift cluster.
Password
The password for the user who has permissions to access the cluster.
Database
The Amazon Redshift database to which the data is copied.
Table
The Amazon Redshift table to which the data is copied.
Columns
(Optional) The specific columns of the table to which the data is copied. Use this option if the number of columns defined in your Amazon S3 objects is less than the number of columns within the Amazon Redshift table.
Intermediate S3 destination
Kinesis Data Firehose delivers your data to your S3 bucket first and then issues an Amazon
Redshift COPY command to load the data into your Amazon Redshift cluster. Specify an S3
bucket that you own where the streaming data should be delivered. Create a new S3 bucket, or
choose an existing bucket that you own.
Kinesis Data Firehose doesn't delete the data from your S3 bucket after loading it to your
Amazon Redshift cluster. You can manage the data in your S3 bucket using a lifecycle
configuration. For more information, see Object Lifecycle Management in the Amazon Simple
Storage Service User Guide.
Intermediate S3 prefix
(Optional) To use the default prefix for Amazon S3 objects, leave this option blank. Kinesis
Data Firehose automatically uses a prefix in "YYYY/MM/dd/HH" UTC time format for delivered
Amazon S3 objects. You can add to the start of this prefix. For more information, see Amazon S3
Object Name Format (p. 76).
COPY options
Parameters that you can specify in the Amazon Redshift COPY command. These might be
required for your configuration. For example, "GZIP" is required if Amazon S3 data compression
is enabled. "REGION" is required if your S3 bucket isn't in the same AWS Region as your Amazon
Redshift cluster. For more information, see COPY in the Amazon Redshift Database Developer
Guide.
COPY command
The Amazon Redshift COPY command. For more information, see COPY in the Amazon Redshift
Database Developer Guide.
Retry duration
Time duration (0–7200 seconds) for Kinesis Data Firehose to retry if data COPY to your Amazon
Redshift cluster fails. Kinesis Data Firehose retries every 5 minutes until the retry duration ends.
If you set the retry duration to 0 (zero) seconds, Kinesis Data Firehose does not retry upon a
COPY command failure.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
S3 compression and encryption
Kinesis Data Firehose supports Amazon S3 server-side encryption with AWS Key Management
Service (AWS KMS) for encrypting delivered data in Amazon S3. You can choose to not encrypt
the data or to encrypt with a key from the list of AWS KMS keys that you own. For more
information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys
(SSE-KMS).
Choose OpenSearch Service for Your Destination
Index
The OpenSearch Service index name to be used when indexing data to your OpenSearch Service
cluster.
Index rotation
Choose whether and how often the OpenSearch Service index should be rotated. If index
rotation is enabled, Kinesis Data Firehose appends the corresponding timestamp to the specified
index name and rotates. For more information, see Index Rotation for the OpenSearch Service
Destination (p. 77).
Type
The OpenSearch Service type name to be used when indexing data to your OpenSearch Service
cluster. For Elasticsearch 7.x and OpenSearch 1.x, there can be only one type per index. If you try
to specify a new type for an existing index that already has another type, Kinesis Data Firehose
returns an error during runtime.
Retry duration
Time duration (0–7200 seconds) for Kinesis Data Firehose to retry if an index request to your
OpenSearch Service cluster fails. Kinesis Data Firehose retries every 5 minutes until the retry
duration ends. If you set the retry duration to 0 (zero) seconds, Kinesis Data Firehose does not
retry upon an index request failure.
Destination VPC connectivity
If your OpenSearch Service domain is in a private VPC, use this section to specify that VPC. Also
specify the subnets and security groups that you want Kinesis Data Firehose to use when it sends data to your OpenSearch Service domain. You can use the same security group that the OpenSearch Service domain uses or different ones. If you specify different security groups, ensure
that they allow outbound HTTPS traffic to the OpenSearch Service domain's security group.
Also ensure that the OpenSearch Service domain's security group allows HTTPS traffic from
the security groups that you specified when you configured your delivery stream. If you use the
same security group for both your delivery stream and the OpenSearch Service domain, make
sure the security group's inbound rule allows HTTPS traffic. For more information about security
group rules, see Security group rules in the Amazon VPC documentation.
Buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Choose HTTP Endpoint for Your Destination
HTTP endpoint name
Specify a user-friendly name for the HTTP endpoint. For example, My HTTP Endpoint Destination.
HTTP endpoint URL
Specify the URL for the HTTP endpoint in the following format: https://xyz.httpendpoint.com. The URL must be an HTTPS URL.
Access key - optional
Contact the endpoint owner to obtain the access key (if it is required) to enable data delivery to
their endpoint from Kinesis Data Firehose.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending
it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your
request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to the selected HTTP endpoint.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Important
For the HTTP endpoint destinations, if you are seeing 413 response codes from the
destination endpoint in CloudWatch Logs, lower the buffering hint size on your delivery
stream and try again.
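The HTTP endpoint settings above (endpoint name and URL, access key, content encoding, retry duration, parameters, and S3 backup) map to the HttpEndpointDestinationConfiguration parameter of the CreateDeliveryStream API. The following Boto3 sketch is illustrative only; the endpoint URL, access key, ARNs, and parameter values are placeholders, not values from this guide.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="example-http-stream",  # placeholder
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://xyz.httpendpoint.com",   # placeholder endpoint
            "Name": "My HTTP Endpoint Destination",
            "AccessKey": "EXAMPLE-ACCESS-KEY",   # placeholder key
        },
        "RequestConfiguration": {
            "ContentEncoding": "GZIP",  # or "NONE" to disable content encoding
            "CommonAttributes": [
                {"AttributeName": "environment", "AttributeValue": "test"},  # placeholder parameter
            ],
        },
        "RetryOptions": {"DurationInSeconds": 300},
        "RoleARN": "arn:aws:iam::111122223333:role/example-firehose-role",  # placeholder
        "S3BackupMode": "FailedDataOnly",  # or "AllData"
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/example-firehose-role",  # placeholder
            "BucketARN": "arn:aws:s3:::example-backup-bucket",                   # placeholder
        },
    },
)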
Choose Datadog for Your Destination
HTTP endpoint URL
Choose the HTTP endpoint URL from the following options in the drop-down menu:
• Datadog logs - US
• Datadog logs - EU
• Datadog logs - GOV
• Datadog metrics - US
• Datadog metrics - EU
API key
Contact Datadog to obtain the API key required to enable data delivery to this endpoint from
Kinesis Data Firehose.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending
it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your
request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to the selected HTTP endpoint.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Choose Dynatrace for Your Destination
HTTP endpoint URL
Choose the HTTP endpoint URL (Dynatrace US, Dynatrace EU, or Dynatrace Global) from the drop-down menu.
API token
Generate the Dynatrace API token required for data delivery from Kinesis Data Firehose. For
more information, see https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/.
API URL
The API URL of your Dynatrace environment.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to the selected HTTP endpoint.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
Important
When using Dynatrace as your specified destination, you must specify at least
one parameter key-value pair. You must name this key dt-url and set its
value to the URL of your Dynatrace environment (for example, https://xyzab123456.dynatrace.live.com). You can then optionally specify additional
parameter key-value pairs and set them to custom names and values of your choosing.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Choose LogicMonitor for Your Destination
HTTP endpoint URL
Specify the URL for the HTTP endpoint in the following format: https://ACCOUNT.logicmonitor.com.
API key
Contact LogicMonitor to obtain the API key required to enable data delivery to this endpoint
from Kinesis Data Firehose.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending
it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your
request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to the selected HTTP endpoint.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Choose MongoDB Cloud for Your Destination
HTTP endpoint URL
Specify the URL for the HTTP endpoint in the following format: https://webhooks.mongodb-realm.com. The URL must be an HTTPS URL.
API key
Contact MongoDB Cloud to obtain the API key required to enable data delivery to this endpoint
from Kinesis Data Firehose.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending
it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your
request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to the selected third-party provider.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
Choose New Relic for Your Destination
HTTP endpoint URL
Choose the HTTP endpoint URL from the following options in the drop-down menu:
• New Relic logs - US
• New Relic metrics - US
• New Relic metrics - EU
API key
Enter your License Key (40-character hexadecimal string) from your New Relic One Account
settings. This API key is required to enable data delivery to this endpoint from Kinesis Data
Firehose.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending
it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your
request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to the New Relic HTTP endpoint.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Choose Splunk for Your Destination
Splunk cluster endpoint
To determine the endpoint, see Configure Amazon Kinesis Firehose to Send Data to the Splunk
Platform in the Splunk documentation.
Splunk endpoint type
Choose Raw endpoint in most cases. Choose Event endpoint if you preprocessed your data
using AWS Lambda to send data to different indexes by event type. For information about what
endpoint to use, see Configure Amazon Kinesis Firehose to send data to the Splunk platform in
the Splunk documentation.
Authentication token
To set up a Splunk endpoint that can receive data from Kinesis Data Firehose, see Installation
and configuration overview for the Splunk Add-on for Amazon Kinesis Firehose in the Splunk
documentation. Save the token that you get from Splunk when you set up the endpoint for this
delivery stream, and add it here.
HEC acknowledgement timeout
Specify how long Kinesis Data Firehose waits for the index acknowledgement from Splunk. If
Splunk doesn’t send the acknowledgment before the timeout is reached, Kinesis Data Firehose
considers it a data delivery failure. Kinesis Data Firehose then either retries or backs up the data
to your Amazon S3 bucket, depending on the retry duration value that you set.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to Splunk.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from Splunk. If an
error occurs or the acknowledgment doesn’t arrive within the acknowledgment timeout period,
Kinesis Data Firehose starts the retry duration counter. It keeps retrying until the retry duration
expires. After that, Kinesis Data Firehose considers it a data delivery failure and backs up the
data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to Splunk (either the initial attempt or a retry),
it restarts the acknowledgement timeout counter and waits for an acknowledgement from
Splunk.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Choose Sumo Logic for Your Destination
HTTP endpoint URL
Specify the URL for the HTTP endpoint in the following format: https://deployment name.sumologic.net/receiver/v1/kinesis/dataType/access token. The URL must be an HTTPS URL.
Content encoding
Kinesis Data Firehose uses content encoding to compress the body of a request before sending
it to the destination. Choose GZIP or Disabled to enable/disable content encoding of your
request.
Retry duration
Specify how long Kinesis Data Firehose retries sending data to Sumo Logic.
After sending data, Kinesis Data Firehose first waits for an acknowledgment from the HTTP
endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment
timeout period, Kinesis Data Firehose starts the retry duration counter. It keeps retrying until
the retry duration expires. After that, Kinesis Data Firehose considers it a data delivery failure
and backs up the data to your Amazon S3 bucket.
Every time that Kinesis Data Firehose sends data to the HTTP endpoint (either the initial
attempt or a retry), it restarts the acknowledgement timeout counter and waits for an
acknowledgement from the HTTP endpoint.
Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout period is reached. If the acknowledgment
times out, Kinesis Data Firehose determines whether there's time left in the retry counter. If
there is time left, it retries again and repeats the logic until it receives an acknowledgment or
determines that the retry time has expired.
If you don't want Kinesis Data Firehose to retry sending data, set this value to 0.
Parameters - optional
Kinesis Data Firehose includes these key-value pairs in each HTTP call. These parameters can
help you identify and organize your destinations.
S3 buffer hints
Kinesis Data Firehose buffers incoming data before delivering it to the specified destination. The
recommended buffer size for the destination varies from service provider to service provider.
Backup and Advanced Settings
Backup Settings
Kinesis Data Firehose uses Amazon S3 to back up all of the data, or only the data that failed delivery, that it attempts to deliver to your chosen destination. You can specify the S3 backup settings for your Kinesis Data Firehose delivery stream if you made one of the following choices:
• If you set Amazon S3 as the destination for your Kinesis Data Firehose delivery stream and you choose
to specify an AWS Lambda function to transform data records or if you choose to convert data record
formats for your delivery stream.
• If you set Amazon Redshift as the destination for your Kinesis Data Firehose delivery stream and you
choose to specify an AWS Lambda function to transform data records.
• If you set any of the following services as the destination for your Kinesis Data Firehose delivery
stream: Amazon OpenSearch Service, Datadog, Dynatrace, HTTP Endpoint, LogicMonitor, MongoDB
Cloud, New Relic, Splunk, or Sumo Logic.
The following are the backup settings for your Kinesis Data Firehose delivery stream:
• Source record backup in Amazon S3 - if S3 or Amazon Redshift is your selected destination, this
setting indicates whether you want to enable source data backup or keep it disabled. If any other
supported service (other than S3 or Amazon Redshift) is set as your selected destination, then this
setting indicates if you want to back up all your source data or failed data only.
• S3 backup bucket - this is the S3 bucket where Kinesis Data Firehose backs up your data.
• S3 backup bucket prefix - this is the prefix where Kinesis Data Firehose backs up your data.
• S3 backup bucket error output prefix - all failed data is backed up in this S3 bucket error output
prefix.
• Buffer hints, compression and encryption for backup - Kinesis Data Firehose uses Amazon S3 to back up all of the data, or only the data that failed delivery, that it attempts to deliver to your chosen destination. Kinesis Data Firehose
buffers incoming data before delivering it (backing it up) to Amazon S3. You can choose a buffer size
of 1–128 MiBs and a buffer interval of 60–900 seconds. The condition that is satisfied first triggers
data delivery to Amazon S3. If you enable data transformation, the buffer interval applies from the
time transformed data is received by Kinesis Data Firehose to the data delivery to Amazon S3. If data
delivery to the destination falls behind data writing to the delivery stream, Kinesis Data Firehose
raises the buffer size dynamically to catch up. This action helps ensure that all data is delivered to the
destination.
• S3 compression and encryption - choose GZIP, Snappy, Zip, or Hadoop-Compatible Snappy data
compression, or no data compression. Snappy, Zip, and Hadoop-Compatible Snappy compression is
not available for delivery streams with Amazon Redshift as the destination.
Kinesis Data Firehose supports Amazon S3 server-side encryption with AWS Key Management Service
(AWS KMS) for encrypting delivered data in Amazon S3. You can choose to not encrypt the data or to
encrypt with a key from the list of AWS KMS keys that you own. For more information, see Protecting
Data Using Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).
Advanced Settings
The following are the advanced settings for your Kinesis Data Firehose delivery stream:
• Server-side encryption - Kinesis Data Firehose supports Amazon S3 server-side encryption with
AWS Key Management Service (AWS KMS) for encrypting delivered data in Amazon S3. For more
information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-
KMS).
• Error logging - If data transformation is enabled, Kinesis Data Firehose can log the Lambda invocation,
and send data delivery errors to CloudWatch Logs. Then you can view the specific error logs if the
Lambda invocation or data delivery fails. For more information, see Monitoring Kinesis Data Firehose
Using CloudWatch Logs.
• Permissions - Kinesis Data Firehose uses IAM roles for all the permissions that the delivery stream
needs. You can choose to create a new role where required permissions are assigned automatically,
or choose an existing role created for Kinesis Data Firehose. The role is used to grant Kinesis Data
Firehose access to various services, including your S3 bucket, AWS KMS key (if data encryption is
enabled), and Lambda function (if data transformation is enabled). The console might create a role
with placeholders. For more information, see What is IAM?.
• Tags - You can add tags to organize your AWS resources, track costs, and control access. (A tagging sketch follows this list.)
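Tags are also available through the API. As a small, assumption-based sketch (the stream name and tag values are placeholders), the following Boto3 calls add a tag to an existing delivery stream and then list its tags:

import boto3

firehose = boto3.client('firehose')

# Add a tag to an existing delivery stream (names and values are placeholders).
firehose.tag_delivery_stream(
    DeliveryStreamName='my-delivery-stream',
    Tags=[{'Key': 'CostCenter', 'Value': 'Analytics'}],
)

# Verify the tags attached to the delivery stream.
response = firehose.list_tags_for_delivery_stream(DeliveryStreamName='my-delivery-stream')
print(response['Tags'])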
Once you've chosen your backup and advanced settings, review your choices, and then choose Create
delivery stream.
The new Kinesis Data Firehose delivery stream takes a few moments in the Creating state before it is
available. After your Kinesis Data Firehose delivery stream is in an Active state, you can start sending
data to it from your producer.
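As a rough illustration of this lifecycle (not taken from the guide's sample code), the following Boto3 sketch waits for the delivery stream to become Active and then sends a single record from a producer; the stream name is a placeholder, and the record matches the sample format shown later in this guide.

import time
import boto3

firehose = boto3.client('firehose')
stream_name = 'my-delivery-stream'  # placeholder

# Poll until the delivery stream leaves the Creating state.
while True:
    description = firehose.describe_delivery_stream(DeliveryStreamName=stream_name)
    if description['DeliveryStreamDescription']['DeliveryStreamStatus'] == 'ACTIVE':
        break
    time.sleep(10)

# Send one record. Records are delivered as-is, so add a newline delimiter yourself
# if your destination expects one record per line.
firehose.put_record(
    DeliveryStreamName=stream_name,
    Record={'Data': b'{"TICKER_SYMBOL":"QXZ","SECTOR":"HEALTHCARE","CHANGE":-0.05,"PRICE":84.51}\n'},
)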
You can test your delivery stream from the console by sending it demo data, which consists of simulated stock ticker records in the following format:
{"TICKER_SYMBOL":"QXZ","SECTOR":"HEALTHCARE","CHANGE":-0.05,"PRICE":84.51}
Note that standard Amazon Kinesis Data Firehose charges apply when your delivery stream transmits the
data, but there is no charge when the data is generated. To stop incurring these charges, you can stop
the sample stream from the console at any time.
Contents
• Prerequisites (p. 21)
• Test Using Amazon S3 as the Destination (p. 21)
• Test Using Amazon Redshift as the Destination (p. 21)
• Test Using OpenSearch Service as the Destination (p. 22)
• Test Using Splunk as the Destination (p. 22)
Prerequisites
Before you begin, create a delivery stream. For more information, see Creating an Amazon Kinesis Data
Firehose Delivery Stream (p. 5).
Test Using Amazon Redshift as the Destination
1. Your delivery stream expects a table to be present in your Amazon Redshift cluster. Connect to
Amazon Redshift through a SQL interface and run the following statement to create a table that
accepts the sample data.
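The guide's exact CREATE TABLE statement is not reproduced here. As an assumption-based sketch, the following Python script (using the psycopg2 package) creates a table whose columns match the sample record format shown earlier; the connection parameters and table name are placeholders.

import psycopg2

# Placeholders: point these at your own cluster endpoint, database, and credentials.
conn = psycopg2.connect(
    host='your-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com',
    port=5439,
    dbname='dev',
    user='awsuser',
    password='your-password',
)

create_table_sql = """
CREATE TABLE firehose_test_table (
    TICKER_SYMBOL VARCHAR(4),
    SECTOR        VARCHAR(16),
    CHANGE        FLOAT,
    PRICE         FLOAT
);
"""

with conn.cursor() as cur:
    cur.execute(create_table_sql)
conn.commit()
conn.close()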
Test Using Splunk as the Destination
4. Check whether the data is being delivered to your Splunk index. Example search terms in Splunk
are sourcetype="aws:firehose:json" and index="name-of-your-splunk-index". For
more information about how to search for events in Splunk, see Search Manual in the Splunk
documentation.
If the test data doesn't appear in your Splunk index, check your Amazon S3 bucket for failed events.
Also see Data Not Delivered to Splunk.
5. When you finish testing, choose Stop sending demo data to stop incurring usage charges.
Writing Using Kinesis Data Streams
Topics
• Writing to Kinesis Data Firehose Using Kinesis Data Streams (p. 24)
• Writing to Kinesis Data Firehose Using Kinesis Agent (p. 25)
• Writing to Kinesis Data Firehose Using the AWS SDK (p. 35)
• Writing to Kinesis Data Firehose Using CloudWatch Logs (p. 36)
• Writing to Kinesis Data Firehose Using CloudWatch Events (p. 37)
• Writing to Kinesis Data Firehose Using AWS IoT (p. 37)
1. Sign in to the AWS Management Console and open the Kinesis Data Firehose console at https://
console.aws.amazon.com/firehose/.
2. Choose Create Delivery Stream. On the Name and source page, provide values for the following
fields:
Source
Choose Kinesis stream to configure a Kinesis Data Firehose delivery stream that uses a Kinesis
data stream as a data source. You can then use Kinesis Data Firehose to read data easily from an
existing data stream and load it into destinations.
To use a Kinesis data stream as a source, choose an existing stream in the Kinesis stream list, or
choose Create new to create a new Kinesis data stream. After you create a new stream, choose
Refresh to update the Kinesis stream list. If you have a large number of streams, filter the list
using Filter by name.
Note
When you configure a Kinesis data stream as the source of a Kinesis Data Firehose
delivery stream, the Kinesis Data Firehose PutRecord and PutRecordBatch
operations are disabled. To add data to your Kinesis Data Firehose delivery stream in
this case, use the Kinesis Data Streams PutRecord and PutRecords operations.
Kinesis Data Firehose starts reading data from the LATEST position of your Kinesis stream. For
more information about Kinesis Data Streams positions, see GetShardIterator. Kinesis Data
Firehose calls the Kinesis Data Streams GetRecords operation once per second for each shard.
More than one Kinesis Data Firehose delivery stream can read from the same Kinesis stream.
Other Kinesis applications (consumers) can also read from the same stream. Each call from any
Kinesis Data Firehose delivery stream or other consumer application counts against the overall
throttling limit for the shard. To avoid getting throttled, plan your applications carefully. For
more information about Kinesis Data Streams limits, see Amazon Kinesis Streams Limits.
3. Choose Next to advance to the Record Transformation and Record Format Conversion (p. 6) page.
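As the note above explains, when a Kinesis data stream is the source, producers write to the data stream rather than to the delivery stream. The following Boto3 sketch (with a placeholder stream name and partition key) writes one record to the source Kinesis data stream:

import boto3

kinesis = boto3.client('kinesis')

# Write to the source data stream; Kinesis Data Firehose reads the data from there.
kinesis.put_record(
    StreamName='my-source-data-stream',   # placeholder
    Data=b'{"TICKER_SYMBOL":"QXZ","SECTOR":"HEALTHCARE","CHANGE":-0.05,"PRICE":84.51}',
    PartitionKey='QXZ',
)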
Writing to Kinesis Data Firehose Using Kinesis Agent
By default, records are parsed from each file based on the newline ('\n') character. However, the agent can also be configured to parse multi-line records (see Agent Configuration Settings (p. 29)).
You can install the agent on Linux-based server environments such as web servers, log servers, and
database servers. After installing the agent, configure it by specifying the files to monitor and the
delivery stream for the data. After the agent is configured, it durably collects data from the files and
reliably sends it to the delivery stream.
Topics
• Prerequisites (p. 26)
• Credentials (p. 26)
• Custom Credential Providers (p. 26)
• Download and Install the Agent (p. 27)
• Configure and Start the Agent (p. 28)
• Agent Configuration Settings (p. 29)
• Monitor Multiple File Directories and Write to Multiple Streams (p. 31)
Prerequisites
• Your operating system must be Amazon Linux, or Red Hat Enterprise Linux version 7 or later.
• Agent version 2.0.0 or later runs using JRE version 1.8 or later. Agent version 1.1.x runs using JRE 1.7
or later.
• If you are using Amazon EC2 to run your agent, launch your EC2 instance.
• The IAM role or AWS credentials that you specify must have permission to perform the Kinesis
Data Firehose PutRecordBatch operation for the agent to send data to your delivery stream. If you
enable CloudWatch monitoring for the agent, permission to perform the CloudWatch PutMetricData
operation is also needed. For more information, see Controlling Access with Amazon Kinesis Data
Firehose (p. 39), Monitoring Kinesis Agent Health (p. 99), and Authentication and Access Control
for Amazon CloudWatch.
Credentials
Manage your AWS credentials using one of the following methods:
• Create a custom credentials provider. For details, see the section called “Custom Credential
Providers” (p. 26).
• Specify an IAM role when you launch your EC2 instance.
• Specify AWS credentials when you configure the agent (see the entries for awsAccessKeyId and
awsSecretAccessKey in the configuration table under the section called “Agent Configuration
Settings” (p. 29)).
• Edit /etc/sysconfig/aws-kinesis-agent to specify your AWS Region and AWS access keys.
• If your EC2 instance is in a different AWS account, create an IAM role to provide access to the Kinesis
Data Firehose service. Specify that role when you configure the agent (see assumeRoleARN and assumeRoleExternalId under Agent Configuration Settings (p. 29)). Use one of the previous methods to specify the AWS credentials of a user in the other account who has permission to assume this role.
To create a custom credentials provider, define a class that implements the AWSCredentialsProvider interface, like the one in the following sketch. The class name and the hardcoded credential values are placeholders; supply your own lookup logic.

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;

public class YourClassName implements AWSCredentialsProvider {
    public AWSCredentials getCredentials() { return new BasicAWSCredentials("your-access-key-id", "your-secret-access-key"); }
    public void refresh() { } // reload credentials here if they can change over the provider's lifetime
}
AWS invokes the refresh method periodically to get updated credentials. If you want your credentials
provider to provide different credentials throughout its lifetime, include code to refresh the credentials
in this method. Alternatively, you can leave this method empty if you want a credentials provider that
vends static (non-changing) credentials.
Download and Install the Agent

• To set up the agent using the Amazon Linux repository

This method works only for Amazon Linux instances. Use the following command:

Agent v2.0.0 or later is installed on computers with operating system Amazon Linux 2 (AL2). This agent version requires Java 1.8 or later. If the required Java version is not yet present, the agent installation process installs it. For more information about Amazon Linux 2, see https://aws.amazon.com/amazon-linux-2/.
• To set up the agent from the Amazon S3 repository
This method works for Red Hat Enterprise Linux as well as Amazon Linux 2 instances, because it installs the agent from the publicly available repository. Use the following command to download and install the latest version of agent 2.x.x:
To install a specific version of the agent, specify the version number in the command. For example, the following command installs agent v2.0.1.
If you have Java 1.7 and you don’t want to upgrade it, you can download agent version 1.x.x, which is
compatible with Java 1.7. For example, to download agent v1.1.6, you can use the following command:
The latest agent v1.x.x can be downloaded using the following command:
• To set up the agent using the GitHub repo
1. First, make sure that you have the required Java version installed, depending on the agent version.
2. Download the agent from the awslabs/amazon-kinesis-agent GitHub repo.
3. Install the agent by navigating to the download directory and running the following command:
Configure and Start the Agent
1. Open and edit the configuration file (as superuser if using default file access permissions): /etc/aws-kinesis/agent.json
In this configuration file, specify the files ( "filePattern" ) from which the agent collects data,
and the name of the delivery stream ( "deliveryStream" ) to which the agent sends data. The file
name is a pattern, and the agent recognizes file rotations. You can rotate files or create new files no
more than once per second. The agent uses the file creation time stamp to determine which files
to track and tail into your delivery stream. Creating new files or rotating files more frequently than
once per second does not allow the agent to differentiate properly between them.
{
"flows": [
{
"filePattern": "/tmp/app.log*",
"deliveryStream": "yourdeliverystream"
}
]
}
The default AWS Region is us-east-1. If you are using a different Region, add the
firehose.endpoint setting to the configuration file, specifying the endpoint for your Region. For
more information, see Agent Configuration Settings (p. 29).
2. Start the agent manually:
The agent is now running as a system service in the background. It continuously monitors the specified
files and sends data to the specified delivery stream. Agent activity is logged in /var/log/aws-
kinesis-agent/aws-kinesis-agent.log.
Whenever you change the configuration file, you must stop and start the agent, using the following
commands:
Agent Configuration Settings

assumeRoleARN
The Amazon Resource Name (ARN) of the role to be assumed by the user. For more information, see Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide.
assumeRoleExternalId
An optional identifier that determines who can assume the role. For more information, see How to Use an External ID in the IAM User Guide.
awsAccessKeyId
AWS access key ID that overrides the default credentials. This setting takes precedence over all other credential providers.
awsSecretAccessKey
AWS secret key that overrides the default credentials. This setting takes precedence over all other credential providers.
cloudwatch.emitMetrics
Enables the agent to emit metrics to CloudWatch when set to true.
Default: true
cloudwatch.endpoint
The regional endpoint for CloudWatch.
Default: monitoring.us-east-1.amazonaws.com
firehose.endpoint
The regional endpoint for Kinesis Data Firehose.
Default: firehose.us-east-1.amazonaws.com
sts.endpoint
The regional endpoint for the AWS Security Token Service.
Default: https://sts.amazonaws.com
userDefinedCredentialsProvider.location
If you define a custom credentials provider, use this setting to specify the absolute path of the JAR that contains the custom credentials provider.
aggregatedRecordSizeBytes
To make the agent aggregate records and then put them to the delivery stream in one operation, specify this setting. Set it to the size that you want the aggregate record to have before the agent puts it to the delivery stream.
filePattern
[Required] A glob for the files that need to be monitored by the agent. Any file that matches this pattern is picked up by the agent automatically and monitored. For all files matching this pattern, grant read permission to aws-kinesis-agent-user. For the directory containing the files, grant read and execute permissions to aws-kinesis-agent-user.
Important
The agent picks up any file that matches this pattern. To ensure that the agent doesn't pick up unintended records, choose this pattern carefully.
initialPosition
The initial position from which the file started to be parsed. Valid values are START_OF_FILE and END_OF_FILE.
Default: END_OF_FILE
maxBufferAgeMillis
The maximum time, in milliseconds, for which the agent buffers data before sending it to the delivery stream.
maxBufferSizeBytes
The maximum size, in bytes, for which the agent buffers data before sending it to the delivery stream.
maxBufferSizeRecords
The maximum number of records for which the agent buffers data before sending it to the delivery stream.
Default: 500
minTimeBetweenFilePollsMillis
The time interval, in milliseconds, at which the agent polls and parses the monitored files for new data.
Default: 100
multiLineStartPattern
The pattern for identifying the start of a record. A record is made of a line that matches the pattern and any following lines that don't match the pattern. The valid values are regular expressions. By default, each new line in the log files is parsed as one record.
skipHeaderLines
The number of lines for the agent to skip parsing at the beginning of monitored files.
Default: 0 (zero)
truncatedRecordTerminator
The string that the agent uses to truncate a parsed record when the record size exceeds the Kinesis Data Firehose record size limit (1,000 KB).
Monitor Multiple File Directories and Write to Multiple Streams
By specifying multiple flow configuration settings, you can configure the agent to monitor multiple file directories and send data to multiple streams. In the following configuration example, the agent sends data from one file pattern to a Kinesis data stream and from another file pattern to a Kinesis Data Firehose delivery stream:
{
"cloudwatch.emitMetrics": true,
"kinesis.endpoint": "https://your/kinesis/endpoint",
"firehose.endpoint": "https://your/firehose/endpoint",
"flows": [
{
"filePattern": "/tmp/app1.log*",
"kinesisStream": "yourkinesisstream"
},
{
"filePattern": "/tmp/app2.log*",
"deliveryStream": "yourfirehosedeliverystream"
}
]
}
For more detailed information about using the agent with Amazon Kinesis Data Streams, see Writing to
Amazon Kinesis Data Streams with Kinesis Agent.
Use the agent to Preprocess Data
The agent supports the following processing options. Because the agent is open source, you can further
develop and extend its processing options. You can download the agent from Kinesis Agent.
Processing Options
SINGLELINE
Converts a multi-line record to a single-line record by removing newline characters, leading spaces,
and trailing spaces.
{
"optionName": "SINGLELINE"
}
CSVTOJSON
Converts a record from delimiter-separated format to JSON format.
{
"optionName": "CSVTOJSON",
"customFieldNames": [ "field1", "field2", ... ],
"delimiter": "yourdelimiter"
}
customFieldNames
[Required] The field names used as keys in each JSON key value pair. For example, if you specify
["f1", "f2"], the record "v1, v2" is converted to {"f1":"v1","f2":"v2"}.
delimiter
The string used as the delimiter in the record. The default is a comma (,).
LOGTOJSON
Converts a record from a log format to JSON format. The supported log formats are Apache
Common Log, Apache Combined Log, Apache Error Log, and RFC3164 Syslog.
{
"optionName": "LOGTOJSON",
"logFormat": "logformat",
"matchPattern": "yourregexpattern",
"customFieldNames": [ "field1", "field2", … ]
}
logFormat
[Required] The log entry format. The following are possible values:
• COMMONAPACHELOG — The Apache Common Log format. Each log entry has the
following pattern by default: "%{host} %{ident} %{authuser} [%{datetime}]
\"%{request}\" %{response} %{bytes}".
• COMBINEDAPACHELOG — The Apache Combined Log format. Each log entry has the
following pattern by default: "%{host} %{ident} %{authuser} [%{datetime}]
\"%{request}\" %{response} %{bytes} %{referrer} %{agent}".
• APACHEERRORLOG — The Apache Error Log format. Each log entry has the following pattern
by default: "[%{timestamp}] [%{module}:%{severity}] [pid %{processid}:tid
%{threadid}] [client: %{client}] %{message}".
• SYSLOG — The RFC3164 Syslog format. Each log entry has the following pattern by default:
"%{timestamp} %{hostname} %{program}[%{processid}]: %{message}".
matchPattern
Overrides the default pattern for the specified log format. Use this setting to extract values
from log entries if they use a custom format. If you specify matchPattern, you must also
specify customFieldNames.
customFieldNames
The custom field names used as keys in each JSON key value pair. You can use this setting to
define field names for values extracted from matchPattern, or override the default field
names of predefined log formats.
The following example configuration converts Apache Common Log entries to JSON format using the default field names:
{
"optionName": "LOGTOJSON",
"logFormat": "COMMONAPACHELOG"
}
Before conversion:

64.242.88.10 - - [07/Mar/2004:16:10:02 -0800] "GET /mailman/listinfo/hsdivision HTTP/1.1" 200 6291
After conversion:
{"host":"64.242.88.10","ident":null,"authuser":null,"datetime":"07/
Mar/2004:16:10:02 -0800","request":"GET /mailman/listinfo/hsdivision
HTTP/1.1","response":"200","bytes":"6291"}
To use custom field names instead of the defaults, specify customFieldNames in the configuration:
{
"optionName": "LOGTOJSON",
"logFormat": "COMMONAPACHELOG",
"customFieldNames": ["f1", "f2", "f3", "f4", "f5", "f6", "f7"]
}
With this configuration setting, the same Apache Common Log entry from the previous example is
converted to JSON format as follows:
{"f1":"64.242.88.10","f2":null,"f3":null,"f4":"07/Mar/2004:16:10:02 -0800","f5":"GET /
mailman/listinfo/hsdivision HTTP/1.1","f6":"200","f7":"6291"}
The following example agent configuration applies the LOGTOJSON option to Apache Common Log records before sending them to a delivery stream:
{
"flows": [
{
"filePattern": "/tmp/app.log*",
"deliveryStream": "my-delivery-stream",
"dataProcessingOptions": [
{
"optionName": "LOGTOJSON",
"logFormat": "COMMONAPACHELOG"
}
]
}
]
}
The following example configuration parses multi-line records whose first line starts with "[SEQUENCE=", converts each record to a single line, and then converts it from tab-delimited format to JSON:
{
"flows": [
{
"filePattern": "/tmp/app.log*",
"deliveryStream": "my-delivery-stream",
"multiLineStartPattern": "\\[SEQUENCE=",
"dataProcessingOptions": [
{
"optionName": "SINGLELINE"
},
{
"optionName": "CSVTOJSON",
"customFieldNames": [ "field1", "field2", "field3" ],
"delimiter": "\\t"
}
]
}
]
}
The following example settings use a custom match pattern to extract six fields from Apache Common Log entries and assign custom field names to them:
{
"optionName": "LOGTOJSON",
"logFormat": "COMMONAPACHELOG",
"matchPattern": "^([\\d.]+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\
\d{3})",
"customFieldNames": ["host", "ident", "authuser", "datetime", "request", "response"]
}
Before conversion:
After conversion:
{"host":"123.45.67.89","ident":null,"authuser":null,"datetime":"27/Oct/2000:09:27:09
-0400","request":"GET /java/javaResources.html HTTP/1.0","response":"200"}
The agent writes its activity log to /var/log/aws-kinesis-agent/aws-kinesis-agent.log.
Writing to Kinesis Data Firehose Using the AWS SDK
These examples do not represent production-ready code, in that they do not check for all possible exceptions or account for all possible security or performance considerations.
The Kinesis Data Firehose API offers two operations for sending data to your delivery stream: PutRecord
and PutRecordBatch. PutRecord() sends one data record within one call and PutRecordBatch() can
send multiple data records within one call.
Topics
• Single Write Operations Using PutRecord (p. 36)
Single Write Operations Using PutRecord
For more code context, see the sample code included in the AWS SDK. For information about request and
response syntax, see the relevant topic in Amazon Kinesis Data Firehose API Operations.
Batch Write Operations Using PutRecordBatch
recordList.clear(); // reset the accumulated batch after calling PutRecordBatch
For more code context, see the sample code included in the AWS SDK. For information about request and
response syntax, see the relevant topic in Amazon Kinesis Data Firehose API Operations.
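The guide's batch sample uses the AWS SDK for Java. As a non-authoritative sketch of the same idea in Python (Boto3), the following sends a small batch and inspects FailedPutCount to find records that must be retried; the stream name is a placeholder.

import boto3

firehose = boto3.client('firehose')

records = [{'Data': ('{"sequence": %d}\n' % i).encode('utf-8')} for i in range(10)]

response = firehose.put_record_batch(
    DeliveryStreamName='my-delivery-stream',   # placeholder
    Records=records,
)

# PutRecordBatch can partially fail; retry only the records that were rejected.
if response['FailedPutCount'] > 0:
    retries = [rec for rec, result in zip(records, response['RequestResponses'])
               if 'ErrorCode' in result]
    firehose.put_record_batch(DeliveryStreamName='my-delivery-stream', Records=retries)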
Writing Using CloudWatch Events
To create a target for a CloudWatch Events rule that sends events to an existing delivery
stream
1. Sign in to the AWS Management Console and open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
2. Choose Create rule.
3. On the Step 1: Create rule page, for Targets, choose Add target, and then choose Firehose delivery
stream.
4. For Delivery stream, choose an existing Kinesis Data Firehose delivery stream.
For more information about creating CloudWatch Events rules, see Getting Started with Amazon
CloudWatch Events.
Writing to Kinesis Data Firehose Using AWS IoT
To create an action that sends events to an existing Kinesis Data Firehose delivery stream
1. When creating a rule in the AWS IoT console, on the Create a rule page, under Set one or more
actions, choose Add action.
2. Choose Send messages to an Amazon Kinesis Firehose stream.
3. Choose Configure action.
4. For Stream name, choose an existing Kinesis Data Firehose delivery stream.
5. For Separator, choose a separator character to be inserted between records.
6. For IAM role name, choose an existing IAM role or choose Create a new role.
7. Choose Add action.
For more information about creating AWS IoT rules, see AWS IoT Rule Tutorials.
Security in Amazon Kinesis Data Firehose
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness
of our security is regularly tested and verified by third-party auditors as part of the AWS compliance
programs. To learn about the compliance programs that apply to Kinesis Data Firehose, see AWS
Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also
responsible for other factors including the sensitivity of your data, your organization’s requirements,
and applicable laws and regulations.
This documentation helps you understand how to apply the shared responsibility model when using
Kinesis Data Firehose. The following topics show you how to configure Kinesis Data Firehose to meet
your security and compliance objectives. You'll also learn how to use other AWS services that can help
you to monitor and secure your Kinesis Data Firehose resources.
Topics
• Data Protection in Amazon Kinesis Data Firehose (p. 38)
• Controlling Access with Amazon Kinesis Data Firehose (p. 39)
• Monitoring Amazon Kinesis Data Firehose (p. 54)
• Compliance Validation for Amazon Kinesis Data Firehose (p. 55)
• Resilience in Amazon Kinesis Data Firehose (p. 55)
• Infrastructure Security in Kinesis Data Firehose (p. 56)
• Security Best Practices for Kinesis Data Firehose (p. 56)
Server-Side Encryption with Kinesis Data Streams as the Data Source
When you send data from your data producers to your data stream, Kinesis Data Streams encrypts your
data using an AWS Key Management Service (AWS KMS) key before storing the data at rest. When your
Kinesis Data Firehose delivery stream reads the data from your data stream, Kinesis Data Streams first
decrypts the data and then sends it to Kinesis Data Firehose. Kinesis Data Firehose buffers the data in
memory based on the buffering hints that you specify. It then delivers it to your destinations without
storing the unencrypted data at rest.
For information about how to enable server-side encryption for Kinesis Data Streams, see Using Server-
Side Encryption in the Amazon Kinesis Data Streams Developer Guide.
Server-Side Encryption with Direct PUT or Other Data Sources
You can also enable SSE when you create the delivery stream. To do that, specify DeliveryStreamEncryptionConfigurationInput when you invoke CreateDeliveryStream.
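A hedged Boto3 sketch of this call follows; the destination configuration is reduced to a minimum and the ARNs are placeholders, so treat it as an outline rather than a complete example.

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='my-encrypted-stream',          # placeholder
    DeliveryStreamType='DirectPut',
    # Enable SSE at creation time with a customer managed key (CMK).
    DeliveryStreamEncryptionConfigurationInput={
        'KeyType': 'CUSTOMER_MANAGED_CMK',
        'KeyARN': 'arn:aws:kms:us-east-1:111122223333:key/key-id',            # placeholder
    },
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::111122223333:role/firehose-delivery-role',   # placeholder
        'BucketARN': 'arn:aws:s3:::my-destination-bucket',                    # placeholder
    },
)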
When the CMK is of type CUSTOMER_MANAGED_CMK, if the Amazon Kinesis Data Firehose service is
unable to decrypt records because of a KMSNotFoundException, a KMSInvalidStateException, a
KMSDisabledException, or a KMSAccessDeniedException, the service waits up to 24 hours (the
retention period) for you to resolve the problem. If the problem persists beyond the retention period, the
service skips those records that have passed the retention period and couldn't be decrypted, and then
discards the data. Amazon Kinesis Data Firehose provides the following four CloudWatch metrics that
you can use to track the four AWS KMS exceptions:
• KMSKeyAccessDenied
• KMSKeyDisabled
• KMSKeyInvalidState
• KMSKeyNotFound
For more information about these four metrics, see the section called “Monitoring with CloudWatch
Metrics” (p. 79).
Important
To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support
asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About
Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
Controlling Access with Amazon Kinesis Data Firehose
The following sections describe how to control access to and from your Kinesis Data Firehose resources, including how to deliver data to a destination that belongs to a different AWS account. The technology for managing all these forms of access is AWS Identity and Access Management (IAM). For more information about IAM, see What is IAM?.
Contents
• Grant Your Application Access to Your Kinesis Data Firehose Resources (p. 40)
• Allow Kinesis Data Firehose to Assume an IAM Role (p. 40)
• Grant Kinesis Data Firehose Access to AWS Glue for Data Format Conversion (p. 41)
• Grant Kinesis Data Firehose Access to an Amazon S3 Destination (p. 41)
• Grant Kinesis Data Firehose Access to an Amazon Redshift Destination (p. 43)
• Grant Kinesis Data Firehose Access to a Public OpenSearch Service Destination (p. 45)
• Grant Kinesis Data Firehose Access to an OpenSearch Service Destination in a VPC (p. 47)
• Grant Kinesis Data Firehose Access to a Splunk Destination (p. 47)
• Access to Splunk in VPC (p. 49)
• Grant Kinesis Data Firehose Access to an HTTP Endpoint Destination (p. 50)
• Cross-Account Delivery to an Amazon S3 Destination (p. 51)
• Cross-Account Delivery to an OpenSearch Service Destination (p. 52)
• Using Tags to Control Access (p. 53)
Grant Your Application Access to Your Kinesis Data Firehose Resources
To give your application access to your Kinesis Data Firehose delivery stream, use a policy similar to the following example. You can adjust the individual API operations to which you grant access by modifying the Action section.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"firehose:DeleteDeliveryStream",
"firehose:PutRecord",
"firehose:PutRecordBatch",
"firehose:UpdateDestination"
],
"Resource": [
"arn:aws:firehose:region:account-id:deliverystream/delivery-stream-name"
]
}
]
}
Allow Kinesis Data Firehose to Assume an IAM Role
Attach the following trust policy to the delivery stream's IAM role so that Kinesis Data Firehose can assume it:
{
    "Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
For information about how to modify the trust relationship of a role, see Modifying a Role.
Grant Kinesis Data Firehose Access to AWS Glue for Data Format Conversion
If your delivery stream performs data format conversion, Kinesis Data Firehose references table definitions stored in AWS Glue. To give Kinesis Data Firehose the necessary access to AWS Glue, add the following statement to your policy:
{
"Effect": "Allow",
"Action": [
"glue:GetTable",
"glue:GetTableVersion",
"glue:GetTableVersions"
],
"Resource": "table-arn"
}
Grant Kinesis Data Firehose Access to an Amazon S3 Destination
Use the following access policy to enable Kinesis Data Firehose to access your S3 bucket and AWS KMS
key. If you don't own the S3 bucket, add s3:PutObjectAcl to the list of Amazon S3 actions. This grants
the bucket owner full access to the objects delivered by Kinesis Data Firehose. This policy also has a
statement that allows access to Amazon Kinesis Data Streams. If you don't use Kinesis Data Streams as
your data source, you can remove that statement.
{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListShards"
],
"Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:region:account-id:key/key-id"
],
"Condition": {
"StringEquals": {
"kms:ViaService": "s3.region.amazonaws.com"
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*"
}
}
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:log-
stream-name"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
"arn:aws:lambda:region:account-id:function:function-name:function-version"
]
}
]
}
For more information about allowing other AWS services to access your AWS resources, see Creating a
Role to Delegate Permissions to an AWS Service in the IAM User Guide.
To learn how to grant Kinesis Data Firehose access to an Amazon S3 destination in another account, see
the section called “Cross-Account Delivery to an Amazon S3 Destination” (p. 51).
Grant Kinesis Data Firehose Access to an Amazon Redshift Destination
Topics
• IAM Role and Access Policy (p. 43)
• VPC Access to an Amazon Redshift Cluster (p. 44)
IAM Role and Access Policy
Use the following access policy to enable Kinesis Data Firehose to access your S3 bucket and AWS KMS
key. If you don't own the S3 bucket, add s3:PutObjectAcl to the list of Amazon S3 actions, which
grants the bucket owner full access to the objects delivered by Kinesis Data Firehose. This policy also has
a statement that allows access to Amazon Kinesis Data Streams. If you don't use Kinesis Data Streams as
your data source, you can remove that statement.
{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:region:account-id:key/key-id"
],
"Condition": {
"StringEquals": {
"kms:ViaService": "s3.region.amazonaws.com"
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*"
}
}
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListShards"
],
"Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:log-
stream-name"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
"arn:aws:lambda:region:account-id:function:function-name:function-version"
]
}
]
}
For more information about allowing other AWS services to access your AWS resources, see Creating a
Role to Delegate Permissions to an AWS Service in the IAM User Guide.
VPC Access to an Amazon Redshift Cluster
For more information about how to unblock IP addresses, see the step Authorize Access to the Cluster in the Amazon Redshift Getting Started Guide.
Grant Kinesis Data Firehose Access to a Public OpenSearch Service Destination
Use the following access policy to enable Kinesis Data Firehose to access your S3 bucket, OpenSearch
Service domain, and AWS KMS key. If you do not own the S3 bucket, add s3:PutObjectAcl to the list
of Amazon S3 actions, which grants the bucket owner full access to the objects delivered by Kinesis Data
Firehose. This policy also has a statement that allows access to Amazon Kinesis Data Streams. If you
don't use Kinesis Data Streams as your data source, you can remove that statement.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:region:account-id:key/key-id"
],
"Condition": {
"StringEquals": {
"kms:ViaService": "s3.region.amazonaws.com"
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*"
}
}
},
{
"Effect": "Allow",
"Action": [
"es:DescribeDomain",
"es:DescribeDomains",
"es:DescribeDomainConfig",
"es:ESHttpPost",
"es:ESHttpPut"
],
"Resource": [
"arn:aws:es:region:account-id:domain/domain-name",
"arn:aws:es:region:account-id:domain/domain-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"es:ESHttpGet"
],
"Resource": [
"arn:aws:es:region:account-id:domain/domain-name/_all/_settings",
"arn:aws:es:region:account-id:domain/domain-name/_cluster/stats",
"arn:aws:es:region:account-id:domain/domain-name/index-name*/_mapping/type-
name",
"arn:aws:es:region:account-id:domain/domain-name/_nodes",
"arn:aws:es:region:account-id:domain/domain-name/_nodes/stats",
"arn:aws:es:region:account-id:domain/domain-name/_nodes/*/stats",
"arn:aws:es:region:account-id:domain/domain-name/_stats",
"arn:aws:es:region:account-id:domain/domain-name/index-name*/_stats"
]
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListShards"
],
"Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:log-
stream-name"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
"arn:aws:lambda:region:account-id:function:function-name:function-version"
]
}
]
}
For more information about allowing other AWS services to access your AWS resources, see Creating a
Role to Delegate Permissions to an AWS Service in the IAM User Guide.
To learn how to grant Kinesis Data Firehose access to an OpenSearch Service cluster in another account,
see the section called “Cross-Account Delivery to an OpenSearch Service Destination” (p. 52).
Grant Kinesis Data Firehose Access to an OpenSearch Service Destination in a VPC
If your OpenSearch Service domain is in a VPC, give Kinesis Data Firehose the permissions that are described in the previous section, and also the following EC2 permissions:
• ec2:DescribeVpcs
• ec2:DescribeVpcAttribute
• ec2:DescribeSubnets
• ec2:DescribeSecurityGroups
• ec2:DescribeNetworkInterfaces
• ec2:CreateNetworkInterface
• ec2:CreateNetworkInterfacePermission
• ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out
by creating more ENIs when necessary. You might therefore see a degradation in performance.
When you create or update your delivery stream, you specify a security group for Kinesis Data Firehose
to use when it sends data to your OpenSearch Service domain. You can use the same security group that
the OpenSearch Service domain uses or a different one. If you specify a different security group, ensure
that it allows outbound HTTPS traffic to the OpenSearch Service domain's security group. Also ensure
that the OpenSearch Service domain's security group allows HTTPS traffic from the security group you
specified when you configured your delivery stream. If you use the same security group for both your
delivery stream and the OpenSearch Service domain, make sure the security group inbound rule allows
HTTPS traffic. For more information about security group rules, see Security group rules in the Amazon
VPC documentation.
Grant Kinesis Data Firehose Access to a Splunk Destination
Kinesis Data Firehose can optionally use an AWS KMS key that you own for Amazon S3 server-side encryption. If error logging is enabled, Kinesis Data Firehose sends data delivery errors to your CloudWatch log streams. You can also use AWS Lambda for data transformation. If you use an AWS load balancer, make sure that it is a Classic Load Balancer. Kinesis Data Firehose supports neither Application Load Balancers nor Network Load Balancers. Also, enable duration-based sticky sessions with cookie expiration disabled. For information about how to do this, see Duration-Based Session Stickiness.
You are required to have an IAM role when creating a delivery stream. Kinesis Data Firehose assumes that
IAM role and gains access to the specified bucket, key, and CloudWatch log group and streams.
Use the following access policy to enable Kinesis Data Firehose to access your S3 bucket. If you don't own
the S3 bucket, add s3:PutObjectAcl to the list of Amazon S3 actions, which grants the bucket owner
full access to the objects delivered by Kinesis Data Firehose. This policy also grants Kinesis Data Firehose
access to CloudWatch for error logging and to AWS Lambda for data transformation. The policy also has
a statement that allows access to Amazon Kinesis Data Streams. If you don't use Kinesis Data Streams as
your data source, you can remove that statement. Kinesis Data Firehose doesn't use IAM to access Splunk.
For accessing Splunk, it uses your HEC token.
{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:region:account-id:key/key-id"
],
"Condition": {
"StringEquals": {
"kms:ViaService": "s3.region.amazonaws.com"
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*"
}
}
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListShards"
],
"Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:*"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
"arn:aws:lambda:region:account-id:function:function-name:function-version"
]
}
]
}
For more information about allowing other AWS services to access your AWS resources, see Creating a
Role to Delegate Permissions to an AWS Service in the IAM User Guide.
Grant Kinesis Data Firehose Access to an HTTP Endpoint Destination
You are required to have an IAM role when creating a delivery stream. Kinesis Data Firehose assumes that
IAM role and gains access to the specified bucket, key, and CloudWatch log group and streams.
Use the following access policy to enable Kinesis Data Firehose to access the S3 bucket that you specified
for data backup. If you don't own the S3 bucket, add s3:PutObjectAcl to the list of Amazon S3
actions, which grants the bucket owner full access to the objects delivered by Kinesis Data Firehose. This
policy also grants Kinesis Data Firehose access to CloudWatch for error logging and to AWS Lambda for
data transformation. The policy also has a statement that allows access to Amazon Kinesis Data Streams.
If you don't use Kinesis Data Streams as your data source, you can remove that statement.
Important
Kinesis Data Firehose doesn't use IAM to access HTTP endpoint destinations owned by
supported third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB,
New Relic, Splunk, or Sumo Logic. For accessing a specified HTTP endpoint destination owned
by a supported third-party service provider, contact that service provider to obtain the API
key or the access key that is required to enable data delivery to that service from Kinesis Data
Firehose.
{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:region:account-id:key/key-id"
],
"Condition": {
"StringEquals": {
"kms:ViaService": "s3.region.amazonaws.com"
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*"
}
}
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListShards"
],
"Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:*"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
"arn:aws:lambda:region:account-id:function:function-name:function-version"
]
}
]
}
For more information about allowing other AWS services to access your AWS resources, see Creating a
Role to Delegate Permissions to an AWS Service in the IAM User Guide.
Important
Currently Kinesis Data Firehose does NOT support data delivery to HTTP endpoints in a VPC.
Cross-Account Delivery to an Amazon S3 Destination
You can use the AWS CLI or the Kinesis Data Firehose APIs to create a delivery stream in one AWS account with an Amazon S3 destination in a different account. The following procedure shows an example of configuring a delivery stream owned by account A to deliver data to an Amazon S3 bucket owned by account B.
1. Create an IAM role under account A using the steps described in Grant Kinesis Data Firehose Access to an Amazon S3 Destination.
Note
The Amazon S3 bucket specified in the access policy is owned by account B in this case.
Make sure you add s3:PutObjectAcl to the list of Amazon S3 actions in the access policy,
which grants account B full access to the objects delivered by Amazon Kinesis Data Firehose.
This permission is required for cross account delivery. Kinesis Data Firehose sets the "x-amz-
acl" header on the request to "bucket-owner-full-control".
2. To allow access from the IAM role previously created, create an S3 bucket policy under account
B. The following code is an example of the bucket policy. For more information, see Using Bucket
Policies and User Policies.
"Version": "2012-10-17",
"Id": "PolicyID",
"Statement": [
{
"Sid": "StmtID",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::accountA-id:role/iam-role-name"
},
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
}
]
}
3. Create a Kinesis Data Firehose delivery stream under account A using the IAM role that you created
in step 1.
Cross-Account Delivery to an OpenSearch Service Destination
1. Create an IAM role under account A using the steps described in the section called “Grant Kinesis Data Firehose Access to a Public OpenSearch Service Destination” (p. 45).
2. To allow access from the IAM role that you created in the previous step, create an OpenSearch
Service policy under account B. The following JSON is an example.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
3. Create a Kinesis Data Firehose delivery stream under account A using the IAM role that you created
in step 1. When you create the delivery stream, use the AWS CLI or the Kinesis Data Firehose APIs
and specify the ClusterEndpoint field instead of DomainARN for OpenSearch Service.
Note
To create a delivery stream in one AWS account with an OpenSearch Service destination in a
different account, you must use the AWS CLI or the Kinesis Data Firehose APIs. You can't use the
AWS Management Console to create this kind of cross-account configuration.
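The following Boto3 sketch illustrates the point about ClusterEndpoint. It is an assumption-based outline rather than the guide's own example: the configuration is trimmed to a few fields, and all ARNs, endpoints, and names are placeholders.

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='cross-account-opensearch-stream',                     # placeholder
    DeliveryStreamType='DirectPut',
    AmazonopensearchserviceDestinationConfiguration={
        'RoleARN': 'arn:aws:iam::accountA-id:role/firehose-delivery-role',    # role created in step 1
        # Specify ClusterEndpoint instead of DomainARN for the domain owned by account B.
        'ClusterEndpoint': 'https://search-domain-name-abc123.us-east-1.es.amazonaws.com',
        'IndexName': 'my-index',                                              # placeholder
        'S3Configuration': {
            'RoleARN': 'arn:aws:iam::accountA-id:role/firehose-delivery-role',
            'BucketARN': 'arn:aws:s3:::my-backup-bucket',                     # placeholder
        },
    },
)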
Using Tags to Control Access
You can use tag conditions in IAM policies to control access to Kinesis Data Firehose operations and resources.
CreateDeliveryStream
For the CreateDeliveryStream operation, use the aws:RequestTag condition key. In the following example, MyKey and MyValue represent the key and corresponding value for a tag.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "firehose:CreateDeliveryStream",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestTag/MyKey": "MyValue"
}
}
}
]
}
UntagDeliveryStream
For the UntagDeliveryStream operation, use the aws:TagKeys condition key. In the following
example, MyKey is an example tag key.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "firehose:UntagDeliveryStream",
"Resource": "*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:TagKeys": "MyKey"
}
}
}
]
}
ListDeliveryStreams
You can't use tag-based access control with ListDeliveryStreams.
For other Kinesis Data Firehose operations, such as DescribeDeliveryStream, you can use the firehose:ResourceTag condition key to control access based on the tags on the delivery stream, as in the following example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "firehose:DescribeDeliveryStream",
"Resource": "*",
"Condition": {
"Null": {
"firehose:ResourceTag/MyKey": "MyValue"
}
            }
        }
    ]
}
Compliance Validation
For a list of AWS services in scope of specific compliance programs, see AWS Services in Scope by
Compliance Program. For general information, see AWS Compliance Programs.
You can download third-party audit reports using AWS Artifact. For more information, see Downloading
Reports in AWS Artifact.
Your compliance responsibility when using Kinesis Data Firehose is determined by the sensitivity of your
data, your company's compliance objectives, and applicable laws and regulations. If your use of Kinesis
Data Firehose is subject to compliance with standards such as HIPAA, PCI, or FedRAMP, AWS provides
resources to help:
• Security and Compliance Quick Start Guides – These deployment guides discuss architectural
considerations and provide steps for deploying security- and compliance-focused baseline
environments on AWS.
• Architecting for HIPAA Security and Compliance Whitepaper – This whitepaper describes how
companies can use AWS to create HIPAA-compliant applications.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry
and location.
• AWS Config – This AWS service assesses how well your resource configurations comply with internal
practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS
that helps you check your compliance with security industry standards and best practices.
Resilience in Amazon Kinesis Data Firehose
For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.
In addition to the AWS global infrastructure, Kinesis Data Firehose offers several features to help support
your data resiliency and backup needs.
Disaster Recovery
Kinesis Data Firehose runs in a serverless mode and takes care of host degradations, Availability Zone availability, and other infrastructure-related issues by performing automatic migration. When this happens, Kinesis Data Firehose ensures that the delivery stream is migrated without any loss of data.
Infrastructure Security
You use AWS published API calls to access Kinesis Data Firehose through the network. Clients must
support Transport Layer Security (TLS) 1.0 or later. We recommend TLS 1.2 or later. Clients must also
support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or
Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support
these modes.
Additionally, requests must be signed by using an access key ID and a secret access key that is associated
with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary
security credentials to sign requests.
Security Best Practices for Kinesis Data Firehose
Use an IAM role to manage temporary credentials for your producer and client applications that access Kinesis Data Firehose delivery streams. When you use a role, you don't have to use long-term credentials (such as a user name and password or access keys) to access other resources.
For more information, see the following topics in the IAM User Guide:
• IAM Roles
• Common Scenarios for Roles: Users, Applications, and Services
Kinesis Data Firehose is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in Kinesis Data Firehose. Using the information collected by CloudTrail, you can determine the request that was made to Kinesis Data Firehose, the IP address from which the request was made, who made the request, when it was made, and additional details.
Data Firehose, the IP address from which the request was made, who made the request, when it was
made, and additional details.
For more information, see the section called “Logging Kinesis Data Firehose API Calls with AWS
CloudTrail” (p. 100).
Data Transformation Flow
When you enable Kinesis Data Firehose data transformation, Kinesis Data Firehose buffers incoming data and invokes the specified Lambda function with each buffered batch. All transformed records returned from Lambda must contain the following parameters:
recordId
The record ID is passed from Kinesis Data Firehose to Lambda during the invocation. The
transformed record must contain the same record ID. Any mismatch between the ID of the original
record and the ID of the transformed record is treated as a data transformation failure.
result
The status of the data transformation of the record. The possible values are: Ok (the record was
transformed successfully), Dropped (the record was dropped intentionally by your processing logic),
and ProcessingFailed (the record could not be transformed). If a record has a status of Ok or
Dropped, Kinesis Data Firehose considers it successfully processed. Otherwise, Kinesis Data Firehose
considers it unsuccessfully processed.
data
The transformed data payload, after base64-encoding.
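To make these parameters concrete, here is a minimal, assumption-based Python sketch of a transformation Lambda function that echoes each record back with the required recordId, result, and data fields; any real transformation logic would replace the pass-through step.

import base64

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        # The incoming payload is base64-encoded.
        payload = base64.b64decode(record['data'])

        # Transform the payload here; this sketch passes it through unchanged.
        transformed = payload

        output.append({
            'recordId': record['recordId'],                        # must match the incoming record ID
            'result': 'Ok',                                        # Ok, Dropped, or ProcessingFailed
            'data': base64.b64encode(transformed).decode('utf-8'), # re-encode the transformed payload
        })
    return {'records': output}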
Lambda Blueprints
There are blueprints that you can use to create a Lambda function for data transformation. Some
of these blueprints are in the AWS Lambda console and some are in the AWS Serverless Application
Repository.
To see the blueprints that are available in the AWS Lambda console
1. Sign in to the AWS Management Console and open the AWS Lambda console at https://
console.aws.amazon.com/lambda/.
2. Choose Create function, and then choose Use a blueprint.
3. In the Blueprints field, search for the keyword firehose to find the Kinesis Data Firehose Lambda
blueprints.
To see the blueprints that are available in the AWS Serverless Application Repository
You can also create a Lambda function without using a blueprint. See Getting Started with AWS Lambda.
Data Transformation Failure Handling
If the status of the data transformation of a record is ProcessingFailed, Kinesis Data Firehose treats
the record as unsuccessfully processed. For this type of failure, you can emit error logs to Amazon
CloudWatch Logs from your Lambda function. For more information, see Accessing Amazon CloudWatch
Logs for AWS Lambda in the AWS Lambda Developer Guide.
If data transformation fails, the unsuccessfully processed records are delivered to your S3 bucket in the
processing-failed folder. The records have the following format:
{
"attemptsMade": "count",
"arrivalTimestamp": "timestamp",
"errorCode": "code",
"errorMessage": "message",
"attemptEndingTimestamp": "timestamp",
"rawData": "data",
"lambdaArn": "arn"
}
attemptsMade
The number of invocation requests attempted.
arrivalTimestamp
The time that the record was received by Kinesis Data Firehose.
errorCode
The HTTP error code returned by Lambda.
errorMessage
The error message returned by Lambda.
attemptEndingTimestamp
The time that Kinesis Data Firehose stopped attempting Lambda invocations.
rawData
The base64-encoded record data.
lambdaArn
The Amazon Resource Name (ARN) of the Lambda function.
Duration of a Lambda Invocation
For information about what Kinesis Data Firehose does if such an error occurs, see the section called
“Data Transformation Failure Handling” (p. 59).
Dynamic Partitioning in Kinesis Data Firehose
Partitioning your data minimizes the amount of data scanned, optimizes performance, and reduces the cost of your analytics queries on Amazon S3. It also increases granular access to your data. Kinesis Data Firehose delivery streams are traditionally used to capture and load data into Amazon S3. To partition a streaming data set for Amazon S3-based analytics, you would need to run partitioning applications between Amazon S3 buckets before making the data available for analysis, which could become complicated or costly.
With dynamic partitioning, Kinesis Data Firehose continuously groups in-transit data using dynamically
or statically defined data keys, and delivers the data to individual Amazon S3 prefixes by key. This
reduces time-to-insight by minutes or hours. It also reduces costs and simplifies architectures.
Topics
• Partitioning keys (p. 61)
• Amazon S3 Bucket Prefix for Dynamic Partitioning (p. 65)
• Dynamic partitioning of aggregated data (p. 66)
• Adding a new line delimiter when delivering data to S3 (p. 66)
• How to enable dynamic partitioning (p. 66)
• Dynamic Partitioning Error Handling (p. 67)
• Data buffering and dynamic partitioning (p. 67)
Partitioning keys
With dynamic partitioning, you create targeted data sets from the streaming S3 data by partitioning
the data based on partitioning keys. Partitioning keys enable you to filter your streaming data based on
specific values. For example, if you need to filter your data based on customer ID and country, you can
specify the data field of customer_id as one partitioning key and the data field of country as another
partitioning key. Then, you specify the expressions (using the supported formats) to define the S3 bucket
prefixes to which the dynamically partitioned data records are to be delivered.
You can use one of the following methods to create your partitioning keys:
• Inline parsing - this method uses the Amazon Kinesis Data Firehose built-in support mechanism, a jq parser, to extract the keys for partitioning from data records that are in JSON format.
• AWS Lambda function - this method uses a specified AWS Lambda function to extract and return the data fields needed for partitioning.
Creating partitioning keys with inline parsing
Important
When you enable dynamic partitioning, you must configure at least one of these methods to
partition your data. You can configure either of these methods to specify your partitioning keys
or both of them at the same time.
Let's look at the following sample data record and see how you can define partitioning keys for it with inline parsing:
{
"type": {
"device": "mobile",
"event": "user_clicked_submit_button"
},
"customer_id": "1234567890",
"event_timestamp": 1565382027, #epoch timestamp
"region": "sample_region"
}
For example, you can choose to partition your data based on the customer_id parameter or the
event_timestamp parameter. This means that you want the value of the customer_id parameter
or the event_timestamp parameter in each record to be used in determining the S3 prefix to which
the record is to be delivered. You can also choose a nested parameter, like device with an expression
.type.device. Your dynamic partitioning logic can depend on multiple parameters.
After selecting data parameters for your partitioning keys, you then map each parameter to a valid jq
expression. The following table shows such a mapping of parameters to jq expressions:
Parameter jq expression
customer_id .customer_id
device .type.device
At runtime, Kinesis Data Firehose uses the right column above to evaluate the parameters based on the
data in each record.
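In the CreateDeliveryStream API, these jq expressions are supplied through a MetadataExtraction processor. The following Python fragment is a hedged sketch of the relevant part of an ExtendedS3DestinationConfiguration; other required fields, such as the role and bucket ARNs, are omitted.

# Partial configuration only; merge into a full ExtendedS3DestinationConfiguration.
dynamic_partitioning_fragment = {
    'DynamicPartitioningConfiguration': {'Enabled': True},
    'ProcessingConfiguration': {
        'Enabled': True,
        'Processors': [{
            'Type': 'MetadataExtraction',
            'Parameters': [
                # The jq expressions from the table above, keyed by partition key name.
                {'ParameterName': 'MetadataExtractionQuery',
                 'ParameterValue': '{customer_id: .customer_id, device: .type.device}'},
                {'ParameterName': 'JsonParsingEngine', 'ParameterValue': 'JQ-1.6'},
            ],
        }],
    },
}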
Creating partitioning keys with an AWS Lambda function
For data records that are not in JSON format, or when you need more complex processing, you can use a specified AWS Lambda function with your own custom code to transform, parse, and return the data fields that you can then use for dynamic partitioning.
The following is an example Amazon Kinesis Data Firehose delivery stream processing Lambda function in Python that replays every read record from input to output and extracts partitioning keys from the records.
# Create output Firehose record and add modified payload and record ID to it.
firehose_record_output = {'recordId': firehose_record_input['recordId'],
'data': firehose_record_input['data'],
'result': 'Ok',
'metadata': { 'partitionKeys': partition_keys }}
firehose_records_output['records'].append(firehose_record_output)
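Because only a fragment of the Python function appears above, here is a fuller, assumption-based sketch of a handler that returns the same metadata structure. The customer_id field and the time-based keys are illustrative choices that follow the sample record and prefix expressions used in this chapter.

import base64
import json
from datetime import datetime, timezone

def lambda_handler(event, context):
    output = {'records': []}
    for record in event['records']:
        payload = json.loads(base64.b64decode(record['data']))
        now = datetime.now(timezone.utc)

        # Partition keys are returned to Kinesis Data Firehose through the metadata field.
        partition_keys = {
            'customer_id': payload['customer_id'],   # assumes the record contains customer_id
            'year': str(now.year),
            'month': '{:02d}'.format(now.month),
            'day': '{:02d}'.format(now.day),
            'hour': '{:02d}'.format(now.hour),
        }

        output['records'].append({
            'recordId': record['recordId'],
            'data': record['data'],                  # payload is passed through unchanged
            'result': 'Ok',
            'metadata': {'partitionKeys': partition_keys},
        })
    return output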
The following is an example Amazon Kinesis Data Firehose delivery stream processing Lambda function in Go that replays every read record from input to output and extracts partitioning keys from the records.
package main

import (
    "fmt"
    "encoding/json"
    "time"
    "strconv"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// The statements below are taken from the body of the handleRequest handler,
// which loops over the records in the incoming Firehose event. For each
// record, the payload is parsed and the partitioning keys are built:

    currentTime := time.Now()
    json.Unmarshal(record.Data, &recordData)

    partitionKeys["customerId"] = recordData.CustomerId
    partitionKeys["year"] = strconv.Itoa(currentTime.Year())
    partitionKeys["month"] = strconv.Itoa(int(currentTime.Month()))
    partitionKeys["date"] = strconv.Itoa(currentTime.Day())
    partitionKeys["hour"] = strconv.Itoa(currentTime.Hour())
    partitionKeys["minute"] = strconv.Itoa(currentTime.Minute())

    // The partitioning keys are attached to the metadata of the output record.
    metaData.PartitionKeys = partitionKeys
    transformedRecord.Metadata = metaData

func main() {
    lambda.Start(handleRequest)
}
Amazon S3 Bucket Prefix for Dynamic Partitioning
With dynamic partitioning, your partitioned data is delivered into the specified Amazon S3 prefixes.
If you don't enable dynamic partitioning, specifying an S3 bucket prefix for your delivery stream is
optional. However, if you choose to enable dynamic partitioning, you MUST specify the S3 bucket
prefixes to which Kinesis Data Firehose is to deliver partitioned data.
In every delivery stream where you enable dynamic partitioning, the S3 bucket prefix value consists
of expressions based on the specified partitioning keys for that delivery stream. Using the above data
record example again, you can build the following S3 prefix value that consists of expressions based on
the partitioning keys defined above:
"ExtendedS3DestinationConfiguration": {
"BucketARN": "arn:aws:s3:::my-logs-prod",
"Prefix": "customer_id=!{partitionKeyFromQuery:customer_id}/
device=!{partitionKeyFromQuery:device}/
year=!{partitionKeyFromQuery:year}/
month=!{partitionKeyFromQuery:month}/
day=!{partitionKeyFromQuery:day}/
hour=!{partitionKeyFromQuery:hour}/"
}
Kinesis Data Firehose evaluates the above expression at runtime. It groups records that match the same
evaluated S3 prefix expression into a single data set. Kinesis Data Firehose then delivers each data set to
the evaluated S3 prefix. The frequency of data set delivery to S3 is determined by the delivery stream
buffer setting. As a result, the record in this example is delivered to the following S3 object key:
s3://my-logs-prod/customer_id=1234567890/device=mobile/year=2019/month=08/day=09/hour=20/
my-delivery-stream-2019-08-09-23-55-09-a9fa96af-e4e4-409f-bac3-1f804714faaa
For dynamic partitioning, you must use the following expression format in your S3 bucket
prefix: !{namespace:value}, where namespace can be either partitionKeyFromQuery or
partitionKeyFromLambda, or both. If you are using inline parsing to create the partitioning keys for
your source data, you must specify an S3 bucket prefix value that consists of expressions specified in the
following format: "partitionKeyFromQuery:keyID". If you are using an AWS Lambda function to
create partitioning keys for your source data, you must specify an S3 bucket prefix value that consists of
expressions specified in the following format: "partitionKeyFromLambda:keyID".
Note
You can also specify the S3 bucket prefix value using the Hive-style format, for example
customer_id=!{partitionKeyFromQuery:customer_id}.
For more information, see the "Choose Amazon S3 for Your Destination" in Creating an Amazon Kinesis
Data Firehose Delivery Stream and Custom Prefixes for Amazon S3 Objects.
Dynamic partitioning of aggregated data
With aggregated data, when you enable dynamic partitioning, Kinesis Data Firehose parses the records
and looks for either valid JSON objects or delimited records within each API call based on the specified
multi record deaggregation type.
Important
If your data is aggregated, dynamic partitioning can only be applied if your data is first
deaggregated.
For detailed steps on how to enable and configure dynamic partitioning through the Amazon Kinesis
Data Firehose management console while creating a new delivery stream, see Creating an Amazon
Kinesis Data Firehose Delivery Stream. When you get to the task of specifying the destination for your
delivery stream, make sure to follow the steps in the Choose Amazon S3 for Your Destination section,
since currently, dynamic partitioning is only supported for delivery streams that use Amazon S3 as the
destination.
Once dynamic partitioning is enabled on an active delivery stream, you can update the configuration
by adding new partitioning keys or by removing or updating existing partitioning keys and S3 prefix
expressions. Once updated, Amazon Kinesis Data Firehose starts using the new keys and the new S3
prefix expressions.
Important
Once you enable dynamic partitioning on a delivery stream, it cannot be disabled on this
delivery stream.
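You can also enable dynamic partitioning when you create or update a delivery stream through the API or the AWS SDKs. The following is a minimal boto3 sketch rather than a complete configuration: the stream name, role ARN, and bucket ARN are placeholders, and only the settings related to dynamic partitioning and inline parsing are shown.

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='my-partitioned-stream',
    DeliveryStreamType='DirectPut',
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::111122223333:role/firehose-delivery-role',
        'BucketARN': 'arn:aws:s3:::my-logs-prod',
        'Prefix': 'customer_id=!{partitionKeyFromQuery:customer_id}/',
        'ErrorOutputPrefix': 'errors/!{firehose:error-output-type}/',
        'BufferingHints': {'SizeInMBs': 128, 'IntervalInSeconds': 300},
        'DynamicPartitioningConfiguration': {'Enabled': True},
        'ProcessingConfiguration': {
            'Enabled': True,
            'Processors': [{
                # Inline parsing: a jq query that maps partition key names
                # to jq expressions evaluated against each record.
                'Type': 'MetadataExtraction',
                'Parameters': [
                    {'ParameterName': 'MetadataExtractionQuery',
                     'ParameterValue': '{customer_id: .customer_id}'},
                    {'ParameterName': 'JsonParsingEngine',
                     'ParameterValue': 'JQ-1.6'},
                ],
            }],
        },
    },
)

If you extract the partitioning keys with an AWS Lambda function instead, the function is configured as another processor in the same Processors list.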
Dynamic Partitioning Error Handling
You MUST specify an S3 error bucket prefix for a delivery stream if you want to enable dynamic
partitioning for this delivery stream. If you don't want to enable dynamic partitioning for a delivery
stream, specifying an S3 error bucket prefix is optional.
For a delivery stream where data partitioning is enabled, Kinesis Data Firehose creates one buffer per
partition at runtime, based on the record payload. For a delivery stream where data partitioning
is enabled, the buffer size ranges from 64 MB to 128 MB, with the default set to 128 MB, and the buffer
interval ranges from 60 seconds to 900 seconds. A maximum throughput of 25 MB per second is supported for
each active partition.
The active partition count is the total number of active partitions within the delivery buffer. For
example, if the dynamic partitioning query constructs 3 partitions per second and you have a buffer
hint configuration triggering delivery every 60 seconds, then on average you would have 180 active
partitions. If Kinesis Data Firehose cannot deliver the data in a partition to a destination, this partition is
counted as active in the delivery buffer until it can be delivered.
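The estimate in the preceding example is simple arithmetic; the following helper is illustrative only and is not part of the service. The 500-partition limit referenced below is the default quota described later in this section.

def estimate_active_partitions(partitions_per_second, buffer_interval_seconds,
                               active_partition_limit=500):
    """Rough estimate of concurrently active partitions for a delivery stream."""
    estimate = partitions_per_second * buffer_interval_seconds
    return estimate, estimate <= active_partition_limit

# 3 new partitions per second with delivery triggered every 60 seconds gives
# roughly 180 active partitions, which is under the default limit of 500.
print(estimate_active_partitions(3, 60))   # (180, True)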
When dynamic partitioning on a delivery stream is enabled, there is a limit of 500 active partitions that
can be created for that delivery stream. You can use the Amazon Kinesis Data Firehose Limits form to
request an increase of this quota. A new partition is created when an S3 prefix is evaluated to a new
value based on the record data fields and the S3 prefix expressions. A new buffer is created for each
active partition. Every subsequent record with the same evaluated S3 prefix is delivered to that buffer.
Once the buffer meets the buffer size limit or the buffer time interval, Amazon Kinesis Data Firehose
creates an object with the buffer data and delivers it to the specified Amazon S3 prefix. Once the object
is delivered, the buffer for that partition and the partition itself are deleted and removed from the active
partitions count. Amazon Kinesis Data Firehose delivers each buffer data as a single object once the
buffer size or interval are met for each partition separately. Once the number of active partitions reaches
the limit of 500 per delivery stream, the rest of the records in the delivery stream are delivered to the
specified S3 error bucket prefix.
Record Format Conversion Requirements
Topics
• Record Format Conversion Requirements (p. 68)
• Choosing the JSON Deserializer (p. 69)
• Choosing the Serializer (p. 69)
• Converting Input Record Format (Console) (p. 69)
• Converting Input Record Format (API) (p. 70)
• Record Format Conversion Error Handling (p. 70)
• Record Format Conversion Example (p. 71)
To convert the format of your record data, Kinesis Data Firehose requires the following three elements:
• A deserializer to read the JSON of your input data – You can choose one of two types of deserializers:
Apache Hive JSON SerDe or OpenX JSON SerDe.
Note
When combining multiple JSON documents into the same record, make sure that your input
is still presented in the supported JSON format. An array of JSON documents is NOT a valid
input.
For example, this is the correct input: {"a":1}{"a":2}
And this is the INCORRECT input: [{"a":1}, {"a":2}]
• A schema to determine how to interpret that data – Use AWS Glue to create a schema in the AWS
Glue Data Catalog. Kinesis Data Firehose then references that schema and uses it to interpret your
input data. You can use the same schema to configure both Kinesis Data Firehose and your analytics
software. For more information, see Populating the AWS Glue Data Catalog in the AWS Glue Developer
Guide.
• A serializer to convert the data to the target columnar storage format (Parquet or ORC) – You can
choose one of two types of serializers: ORC SerDe or Parquet SerDe.
Important
If you enable record format conversion, you can't set your Kinesis Data Firehose destination to
be Amazon OpenSearch Service, Amazon Redshift, or Splunk. With format conversion enabled,
Amazon S3 is the only destination that you can use for your Kinesis Data Firehose delivery
stream.
You can convert the format of your data even if you aggregate your records before sending them to
Kinesis Data Firehose.
Choosing the JSON Deserializer
The OpenX JSON SerDe can convert periods (.) to underscores (_). It can also convert JSON keys to
lowercase before deserializing them. For more information about the options that are available with this
deserializer through Kinesis Data Firehose, see OpenXJsonSerDe.
If you're not sure which deserializer to choose, use the OpenX JSON SerDe, unless you have time stamps
that it doesn't support.
If you have time stamps in formats other than those listed previously, use the Apache Hive JSON SerDe.
When you choose this deserializer, you can specify the time stamp formats to use. To do this, follow
the pattern syntax of the Joda-Time DateTimeFormat format strings. For more information, see Class
DateTimeFormat.
You can also use the special value millis to parse time stamps in epoch milliseconds. If you don't
specify a format, Kinesis Data Firehose uses java.sql.Timestamp::valueOf by default.
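For illustration, the following sketch shows how the Hive JSON SerDe's time stamp formats might be expressed in the input format portion of a delivery stream configuration; the Joda-Time pattern shown is an example value, not a required one.

# A sketch of an InputFormatConfiguration fragment for the Hive JSON SerDe;
# 'millis' parses epoch-millisecond time stamps, and the other entry is an
# example Joda-Time pattern.
input_format_configuration = {
    'Deserializer': {
        'HiveJsonSerDe': {
            'TimestampFormats': ["yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", 'millis']
        }
    }
}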
The Hive SerDe doesn't convert nested JSON into strings. For example, if you have {"a":
{"inner":1}}, it doesn't treat {"inner":1} as a string.
Converting Input Record Format (Console)
1. Sign in to the AWS Management Console, and open the Kinesis Data Firehose console at https://
console.aws.amazon.com/firehose/.
2. Choose a Kinesis Data Firehose delivery stream to update, or create a new delivery stream by
following the steps in Creating an Amazon Kinesis Data Firehose Delivery Stream (p. 5).
3. Under Convert record format, set Record format conversion to Enabled.
4. Choose the output format that you want. For more information about the two options, see Apache
Parquet and Apache ORC.
5. Choose an AWS Glue table to specify a schema for your source records. Set the Region, database,
table, and table version.
Converting Input Record Format (API)
• In BufferingHints, you can't set SizeInMBs to a value less than 64 if you enable record format
conversion. Also, when format conversion isn't enabled, the default value is 5. The value becomes 128
when you enable it.
• You must set CompressionFormat in ExtendedS3DestinationConfiguration or in
ExtendedS3DestinationUpdate to UNCOMPRESSED. The default value for CompressionFormat is
UNCOMPRESSED. Therefore, you can also leave it unspecified in ExtendedS3DestinationConfiguration.
The data still gets compressed as part of the serialization process, using Snappy compression by
default. The framing format for Snappy that Kinesis Data Firehose uses in this case is compatible
with Hadoop. This means that you can use the results of the Snappy compression and run
queries on this data in Athena. For the Snappy framing format that Hadoop relies on, see
BlockCompressorStream.java. When you configure the serializer, you can choose other types of
compression.
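The settings above can also be applied programmatically. The following is a minimal boto3 sketch of enabling record format conversion on an existing delivery stream; the stream name, version ID, destination ID, role ARN, and AWS Glue names are placeholders, and the OpenX JSON SerDe could be swapped for the Hive JSON SerDe fragment shown earlier.

import boto3

firehose = boto3.client('firehose')

firehose.update_destination(
    DeliveryStreamName='my-conversion-stream',
    CurrentDeliveryStreamVersionId='1',
    DestinationId='destinationId-000000000001',
    ExtendedS3DestinationUpdate={
        # SizeInMBs must be at least 64 when format conversion is enabled.
        'BufferingHints': {'SizeInMBs': 128, 'IntervalInSeconds': 300},
        'CompressionFormat': 'UNCOMPRESSED',
        'DataFormatConversionConfiguration': {
            'Enabled': True,
            'InputFormatConfiguration': {
                'Deserializer': {'OpenXJsonSerDe': {}}
            },
            'OutputFormatConfiguration': {
                'Serializer': {'ParquetSerDe': {}}
            },
            'SchemaConfiguration': {
                'RoleARN': 'arn:aws:iam::111122223333:role/firehose-delivery-role',
                'DatabaseName': 'my_glue_database',
                'TableName': 'my_glue_table',
                'Region': 'us-east-1',
                'VersionId': 'LATEST',
            },
        },
    },
)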
Record Format Conversion Error Handling
Records for which format conversion fails are delivered to your S3 bucket with error information in the following JSON format:
{
    "attemptsMade": long,
    "arrivalTimestamp": long,
    "lastErrorCode": string,
    "lastErrorMessage": string,
    "attemptEndingTimestamp": long,
    "rawData": string,
    "sequenceNumber": string,
    "subSequenceNumber": long,
    "dataCatalogTable": {
        "catalogId": string,
        "databaseName": string,
        "tableName": string,
        "region": string,
        "versionId": string,
        "catalogArn": string
    }
}
Create a Kinesis Data Analytics Application That Reads from a Delivery Stream
Data Delivery Format
Topics
• Data Delivery Format (p. 73)
• Data Delivery Frequency (p. 74)
• Data Delivery Failure Handling (p. 74)
• Amazon S3 Object Name Format (p. 76)
• Index Rotation for the OpenSearch Service Destination (p. 77)
• Delivery Across AWS Accounts and Across AWS Regions for HTTP Endpoint Destinations (p. 77)
• Duplicated Records (p. 78)
For data delivery to Amazon Redshift, Kinesis Data Firehose first delivers incoming data to your S3
bucket in the format described earlier. Kinesis Data Firehose then issues an Amazon Redshift COPY
command to load the data from your S3 bucket to your Amazon Redshift cluster. Ensure that after
Kinesis Data Firehose concatenates multiple incoming records to an Amazon S3 object, the Amazon S3
object can be copied to your Amazon Redshift cluster. For more information, see Amazon Redshift COPY
Command Data Format Parameters.
For data delivery to OpenSearch Service, Kinesis Data Firehose buffers incoming records based on the
buffering configuration of your delivery stream. It then generates an OpenSearch Service bulk request
to index multiple records to your OpenSearch Service cluster. Make sure that your record is UTF-8
encoded and flattened to a single-line JSON object before you send it to Kinesis Data Firehose. Also,
the rest.action.multi.allow_explicit_index option for your OpenSearch Service cluster
must be set to true (default) to take bulk requests with an explicit index that is set per record. For more
information, see OpenSearch Service Configure Advanced Options in the Amazon OpenSearch Service
Developer Guide.
For data delivery to Splunk, Kinesis Data Firehose concatenates the bytes that you send. If you want
delimiters in your data, such as a new line character, you must insert them yourself. Make sure that
Splunk is configured to parse any such delimiters.
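For example, a producer can append the delimiter itself before handing the record to Kinesis Data Firehose. The following is a minimal boto3 sketch; the delivery stream name and the event payload are placeholders.

import json
import boto3

firehose = boto3.client('firehose')

event = {'source': 'my-app', 'message': 'user logged in'}

# Append a newline so that Splunk (or any downstream consumer) can split the
# concatenated records back apart.
firehose.put_record(
    DeliveryStreamName='my-splunk-stream',
    Record={'Data': (json.dumps(event) + '\n').encode('utf-8')},
)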
When delivering data to an HTTP endpoint owned by a supported third-party service provider, you can
use the integrated AWS Lambda service to create a function to transform the incoming record(s) into
the format that the service provider's integration expects. Contact the third-party service provider
whose HTTP endpoint you've chosen for your destination to learn more about their accepted record
format.
Data Delivery Frequency
Amazon S3
The frequency of data delivery to Amazon S3 is determined by the Amazon S3 Buffer size and
Buffer interval value that you configured for your delivery stream. Kinesis Data Firehose buffers
incoming data before it delivers it to Amazon S3. You can configure the values for Amazon S3
Buffer size (1–128 MB) or Buffer interval (60–900 seconds). The condition satisfied first triggers
data delivery to Amazon S3. When data delivery to the destination falls behind data writing to the
delivery stream, Kinesis Data Firehose raises the buffer size dynamically. It can then catch up and
ensure that all data is delivered to the destination.
Amazon Redshift
The frequency of data COPY operations from Amazon S3 to Amazon Redshift is determined by how
fast your Amazon Redshift cluster can finish the COPY command. If there is still data to copy, Kinesis
Data Firehose issues a new COPY command as soon as the previous COPY command is successfully
finished by Amazon Redshift.
Amazon OpenSearch Service
The frequency of data delivery to OpenSearch Service is determined by the OpenSearch Service
Buffer size and Buffer interval values that you configured for your delivery stream. Kinesis Data
Firehose buffers incoming data before delivering it to OpenSearch Service. You can configure the
values for OpenSearch Service Buffer size (1–100 MB) or Buffer interval (60–900 seconds), and the
condition satisfied first triggers data delivery to OpenSearch Service.
Splunk
Kinesis Data Firehose buffers incoming data before delivering it to Splunk. The buffer size is 5 MB,
and the buffer interval is 60 seconds. The condition satisfied first triggers data delivery to Splunk.
The buffer size and interval aren't configurable. These numbers are optimal.
HTTP endpoint destination
Kinesis Data Firehose buffers incoming data before delivering it to the specified HTTP endpoint
destination. The recommended buffer size for the destination varies from service provider to service
provider. For example, the recommended buffer size for Datadog is 4 MiBs and the recommended
buffer size for New Relic and Sumo Logic is 1 MiB. Contact the third-party service provider whose
endpoint you've chosen as your data destination for more information about their recommended
buffer size.
Data Delivery Failure Handling
Amazon S3
Data delivery to your S3 bucket might fail for various reasons. For example, the bucket might not
exist anymore, the IAM role that Kinesis Data Firehose assumes might not have access to the bucket,
the network failed, or similar events. Under these conditions, Kinesis Data Firehose keeps retrying
for up to 24 hours until the delivery succeeds. The maximum data storage time of Kinesis Data
Firehose is 24 hours. If data delivery fails for more than 24 hours, your data is lost.
Amazon Redshift
For an Amazon Redshift destination, you can specify a retry duration (0–7200 seconds) when
creating a delivery stream.
Data delivery to your Amazon Redshift cluster might fail for several reasons. For example, you might
have an incorrect cluster configuration of your delivery stream, a cluster under maintenance, or a
network failure. Under these conditions, Kinesis Data Firehose retries for the specified time duration
and skips that particular batch of Amazon S3 objects. The skipped objects' information is delivered
to your S3 bucket as a manifest file in the errors/ folder, which you can use for manual backfill.
For information about how to COPY data manually with manifest files, see Using a Manifest to
Specify Data Files.
Amazon OpenSearch Service
For the OpenSearch Service destination, you can specify a retry duration (0–7200 seconds) when
creating a delivery stream.
Data delivery to your OpenSearch Service cluster might fail for several reasons. For example,
you might have an incorrect OpenSearch Service cluster configuration of your delivery stream,
an OpenSearch Service cluster under maintenance, a network failure, or similar events. Under
these conditions, Kinesis Data Firehose retries for the specified time duration and then skips
that particular index request. The skipped documents are delivered to your S3 bucket in the
AmazonOpenSearchService_failed/ folder, which you can use for manual backfill. Each
document has the following JSON format:
{
    "attemptsMade": "(number of index requests attempted)",
    "arrivalTimestamp": "(the time when the document was received by Firehose)",
    "errorCode": "(http error code returned by OpenSearch Service)",
    "errorMessage": "(error message returned by OpenSearch Service)",
    "attemptEndingTimestamp": "(the time when Firehose stopped attempting index request)",
    "esDocumentId": "(intended OpenSearch Service document ID)",
    "esIndexName": "(intended OpenSearch Service index name)",
    "esTypeName": "(intended OpenSearch Service type name)",
    "rawData": "(base64-encoded document data)"
}
Splunk
When Kinesis Data Firehose sends data to Splunk, it waits for an acknowledgment from Splunk. If
an error occurs, or the acknowledgment doesn’t arrive within the acknowledgment timeout period,
Kinesis Data Firehose starts the retry duration counter. It keeps retrying until the retry duration
expires. After that, Kinesis Data Firehose considers it a data delivery failure and backs up the data to
your Amazon S3 bucket.
Every time Kinesis Data Firehose sends data to Splunk, whether it's the initial attempt or a retry, it
restarts the acknowledgement timeout counter. It then waits for an acknowledgement to arrive from
Splunk. Even if the retry duration expires, Kinesis Data Firehose still waits for the acknowledgment
until it receives it or the acknowledgement timeout is reached. If the acknowledgment times out,
Kinesis Data Firehose checks to determine whether there's time left in the retry counter. If there is
time left, it retries again and repeats the logic until it receives an acknowledgment or determines
that the retry time has expired.
A failure to receive an acknowledgement isn't the only type of data delivery error that can occur. For
information about the other types of data delivery errors, see Splunk Data Delivery Errors. Any data
delivery error triggers the retry logic if your retry duration is greater than 0.
{
    "attemptsMade": 0,
    "arrivalTimestamp": 1506035354675,
    "errorCode": "Splunk.AckTimeout",
    "errorMessage": "Did not receive an acknowledgement from HEC before the HEC acknowledgement timeout expired. Despite the acknowledgement timeout, it's possible the data was indexed successfully in Splunk. Kinesis Firehose backs up in Amazon S3 data for which the acknowledgement timeout expired.",
    "attemptEndingTimestamp": 13626284715507,
    "rawData": "MiAyNTE2MjAyNzIyMDkgZW5pLTA1ZjMyMmQ1IDIxOC45Mi4xODguMjE0IDE3Mi4xNi4xLjE2NyAyNTIzMyAxNDMzIDYgMSA0M",
    "EventId": "49577193928114147339600778471082492393164139877200035842.0"
}
HTTP endpoint destination
When Kinesis Data Firehose sends data to an HTTP endpoint destination, it waits for a response from
this destination. If an error occurs, or the response doesn’t arrive within the response timeout period,
Kinesis Data Firehose starts the retry duration counter. It keeps retrying until the retry duration
expires. After that, Kinesis Data Firehose considers it a data delivery failure and backs up the data to
your Amazon S3 bucket.
Every time Kinesis Data Firehose sends data to an HTTP endpoint destination, whether it's the initial
attempt or a retry, it restarts the response timeout counter. It then waits for a response to arrive
from the HTTP endpoint destination. Even if the retry duration expires, Kinesis Data Firehose still
waits for the response until it receives it or the response timeout is reached. If the response times
out, Kinesis Data Firehose checks to determine whether there's time left in the retry counter. If there
is time left, it retries again and repeats the logic until it receives a response or determines that the
retry time has expired.
A failure to receive a response isn't the only type of data delivery error that can occur. For
information about the other types of data delivery errors, see HTTP Endpoint Data Delivery Errors.
{
    "attemptsMade": 5,
    "arrivalTimestamp": 1594265943615,
    "errorCode": "HttpEndpoint.DestinationException",
    "errorMessage": "Received the following response from the endpoint destination. {"requestId": "109777ac-8f9b-4082-8e8d-b4f12b5fc17b", "timestamp": 1594266081268, "errorMessage": "Unauthorized"}",
    "attemptEndingTimestamp": 1594266081318,
    "rawData": "c2FtcGxlIHJhdyBkYXRh",
    "subsequenceNumber": 0,
    "dataId": "49607357361271740811418664280693044274821622880012337186.0"
}
Index Rotation for the OpenSearch Service Destination
Depending on the rotation option you choose, Kinesis Data Firehose appends a portion of the UTC arrival
timestamp to your specified index name. It rotates the appended timestamp accordingly. The following
example shows the resulting index name in OpenSearch Service for each index rotation option, where
the specified index name is myindex and the arrival timestamp is 2016-02-25T13:00:00Z.
RotationPeriod IndexName
NoRotation myindex
OneHour myindex-2016-02-25-13
OneDay myindex-2016-02-25
OneWeek myindex-2016-w08
OneMonth myindex-2016-02
Note
With the OneWeek option, Kinesis Data Firehose auto-creates indexes using the format of <YEAR>-w<WEEK NUMBER> (for example, 2020-w33), where the week number is calculated using UTC time and according to US conventions.
Delivery Across AWS Accounts and Across AWS Regions for HTTP Endpoint Destinations
Kinesis Data Firehose also supports data delivery to HTTP endpoint destinations across AWS Regions. You
can deliver data from a delivery stream in one AWS Region to an HTTP endpoint in another AWS Region.
You can also deliver data from a delivery stream to an HTTP endpoint destination outside of AWS
Regions, for example to your own on-premises server, by setting the HTTP endpoint URL to your desired
destination. For these scenarios, additional data transfer charges are added to your delivery costs. For
more information, see the Data Transfer section in the "On-Demand Pricing" page.
Duplicated Records
Kinesis Data Firehose uses at-least-once semantics for data delivery. In some circumstances, such as
when data delivery times out, delivery retries by Kinesis Data Firehose might introduce duplicates if the
original data-delivery request eventually goes through. This applies to all destination types that Kinesis
Data Firehose supports.
Monitoring with CloudWatch Metrics
• Amazon CloudWatch metrics (p. 79)— Kinesis Data Firehose sends Amazon CloudWatch custom
metrics with detailed monitoring for each delivery stream.
• Amazon CloudWatch Logs (p. 93)— Kinesis Data Firehose sends CloudWatch custom logs with
detailed monitoring for each delivery stream.
• Kinesis Agent (p. 99)— Kinesis Agent publishes custom CloudWatch metrics to help assess whether
the agent is working as expected.
• API logging and history (p. 100)— Kinesis Data Firehose uses AWS CloudTrail to log API calls and
store the data in an Amazon S3 bucket, and to maintain API call history.
The metrics that you configure for your Kinesis Data Firehose delivery streams and agents are
automatically collected and pushed to CloudWatch every five minutes. Metrics are archived for two
weeks; after that period, the data is discarded.
The metrics collected for Kinesis Data Firehose delivery streams are free of charge. For information about
Kinesis agent metrics, see Monitoring Kinesis Agent Health (p. 99).
Topics
• Dynamic Partitioning CloudWatch Metrics (p. 80)
• Data Delivery CloudWatch Metrics (p. 80)
• Data Ingestion Metrics (p. 85)
• API-Level CloudWatch Metrics (p. 88)
• Data Transformation CloudWatch Metrics (p. 90)
• Format Conversion CloudWatch Metrics (p. 90)
• Server-Side Encryption (SSE) CloudWatch Metrics (p. 91)
• Dimensions for Kinesis Data Firehose (p. 91)
• Kinesis Data Firehose Usage Metrics (p. 91)
• Accessing CloudWatch Metrics for Kinesis Data Firehose (p. 92)
• Best Practices with CloudWatch Alarms (p. 93)
• Monitoring Kinesis Data Firehose Using CloudWatch Logs (p. 93)
• Monitoring Kinesis Agent Health (p. 99)
• Logging Kinesis Data Firehose API Calls with AWS CloudTrail (p. 100)
Dynamic Partitioning CloudWatch Metrics
Metric Description
PartitionCountExceeded This metric indicates if you are exceeding the partition count limit. It emits 1 or 0 based on whether the limit is breached or not.
Units: Count
Data Delivery CloudWatch Metrics
Metric Description
DeliveryToS3.DataFreshness The age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the S3 bucket. Kinesis Data Firehose emits this metric only when you enable backup for all documents.
Units: Seconds
Metric Description
DeliveryToS3.DataFreshness The age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the S3 bucket.
Units: Seconds
BackupToS3.DataFreshness Age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the Amazon S3 bucket for backup. Kinesis Data Firehose emits this metric when backup to Amazon S3 is enabled.
Units: Seconds
Delivery to Amazon S3
The metrics in the following table are related to delivery to Amazon S3 when it is the main destination of the delivery stream.
Metric Description
DeliveryToS3.DataFreshness The age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the S3 bucket.
Units: Seconds
BackupToS3.DataFreshness Age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the Amazon S3 bucket for backup. Kinesis Data Firehose emits this metric when backup is enabled (which is only possible when data transformation is also enabled).
Units: Seconds
Delivery to Splunk
Metric Description
DeliveryToSplunk.DataFreshness Age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to Splunk.
Units: Seconds
DeliveryToSplunk.Success The sum of the successfully indexed records over the sum of records that were attempted.
Units: Count
BackupToS3.DataFreshness Age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the Amazon S3 bucket for backup. Kinesis Data Firehose emits this metric when the delivery stream is configured to back up all documents.
Units: Seconds
Data Ingestion Metrics
Metric Description
DeliveryToHttpEndpoint.Success The sum of all successful data delivery requests to the HTTP endpoint.
Units: Count
Metric Description
DataReadFromKinesisStream.Bytes When the data source is a Kinesis data stream, this metric indicates the number of bytes read from that data stream. This number includes rereads due to failovers.
Units: Bytes
DeliveryToS3.DataFreshness The age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to the S3 bucket.
Units: Seconds
Metric Description
DeliveryToSplunk.DataFreshness Age (from getting into Kinesis Data Firehose to now) of the oldest record in Kinesis Data Firehose. Any record older than this age has been delivered to Splunk.
Units: Seconds
DeliveryToSplunk.Success The sum of the successfully indexed records over the sum of records that were attempted.
Units: Count
KinesisMillisBehindLatest When the data source is a Kinesis data stream, this metric indicates the number of milliseconds that the last read record is behind the newest record in the Kinesis data stream.
Units: Milliseconds
API-Level CloudWatch Metrics
Data Transformation CloudWatch Metrics
Metric Description
ExecuteProcessing.Duration The time it takes for each Lambda function invocation performed by Kinesis Data Firehose.
Units: Milliseconds
ExecuteProcessing.Success The sum of the successful Lambda function invocations over the sum of the total Lambda function invocations.
SucceedProcessing.Records The number of successfully processed records over the specified time period.
Units: Count
SucceedProcessing.Bytes The number of successfully processed bytes over the specified time period.
Units: Bytes
Server-Side Encryption (SSE) CloudWatch Metrics
Kinesis Data Firehose Usage Metrics
Service quota usage metrics are in the AWS/Usage namespace and are collected every minute.
Currently, the only metric name in this namespace that CloudWatch publishes is ResourceCount. This
metric is published with the dimensions Service, Class, Type, and Resource.
The following dimensions are used to refine the usage metrics that are published by Kinesis Data
Firehose.
Dimension Description
Resource The name of the AWS resource. Currently, when the Service
dimension is Firehose, the only valid value for Resource is
DeliveryStreams.
Best Practices with CloudWatch Alarms
We recommend creating CloudWatch alarms on the following data freshness metrics, as illustrated in the sketch after this list:
• DeliveryToS3.DataFreshness
• DeliveryToSplunk.DataFreshness
• DeliveryToAmazonOpenSearchService.DataFreshness
For information about troubleshooting when alarms go to the ALARM state, see
Troubleshooting (p. 123).
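The following is a minimal boto3 sketch of such an alarm on DeliveryToS3.DataFreshness; the delivery stream name, threshold, and SNS topic are placeholders that depend on your own latency expectations.

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the oldest undelivered record is more than 15 minutes old.
cloudwatch.put_metric_alarm(
    AlarmName='my-delivery-stream-data-freshness',
    Namespace='AWS/Firehose',
    MetricName='DeliveryToS3.DataFreshness',
    Dimensions=[{'Name': 'DeliveryStreamName', 'Value': 'my-delivery-stream'}],
    Statistic='Maximum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=900,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:my-alerts-topic'],
)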
Monitoring Kinesis Data Firehose Using CloudWatch Logs
If you enable Kinesis Data Firehose error logging in the Kinesis Data Firehose console, a log group and
corresponding log streams are created for the delivery stream on your behalf. The format of the log
group name is /aws/kinesisfirehose/delivery-stream-name, where delivery-stream-name
is the name of the corresponding delivery stream. The log stream name is S3Delivery, RedshiftDelivery,
or AmazonOpenSearchServiceDelivery, depending on the delivery destination. Lambda invocation
errors for data transformation are also logged to the log stream used for data delivery errors.
For example, if you create a delivery stream "MyStream" with Amazon Redshift as the destination and
enable Kinesis Data Firehose error logging, the following are created on your behalf: a log group named
/aws/kinesisfirehose/MyStream and two log streams named S3Delivery and RedshiftDelivery.
In this example, the S3Delivery log stream is used for logging errors related to delivery failure to the
intermediate S3 bucket. The RedshiftDelivery log stream is used for logging errors related to Lambda
invocation failure and delivery failure to your Amazon Redshift cluster.
You can enable Kinesis Data Firehose error logging through the AWS CLI, the API, or AWS
CloudFormation using the CloudWatchLoggingOptions configuration. To do so, create a log
group and a log stream in advance. We recommend reserving that log group and log stream for
Kinesis Data Firehose error logging exclusively. Also ensure that the associated IAM policy has
"logs:PutLogEvents" permission. For more information, see Controlling Access with Amazon Kinesis
Data Firehose (p. 39).
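As an illustration, the following minimal boto3 sketch turns on error logging for an existing delivery stream with an Amazon S3 destination; the stream name, version ID, destination ID, and log names are placeholders.

import boto3

firehose = boto3.client('firehose')

firehose.update_destination(
    DeliveryStreamName='MyStream',
    CurrentDeliveryStreamVersionId='1',
    DestinationId='destinationId-000000000001',
    ExtendedS3DestinationUpdate={
        # The log group and log stream must already exist.
        'CloudWatchLoggingOptions': {
            'Enabled': True,
            'LogGroupName': '/aws/kinesisfirehose/MyStream',
            'LogStreamName': 'S3Delivery',
        },
    },
)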
Note that Kinesis Data Firehose does not guarantee that all delivery error logs are sent to CloudWatch
Logs. In circumstances where delivery failure rate is high, Kinesis Data Firehose samples delivery error
logs before sending them to CloudWatch Logs.
There is a nominal charge for error logs sent to CloudWatch Logs. For more information, see Amazon
CloudWatch Pricing.
Contents
• Data Delivery Errors (p. 94)
• Lambda Invocation Errors (p. 98)
• Accessing CloudWatch Logs for Kinesis Data Firehose (p. 99)
Errors
• Amazon S3 Data Delivery Errors (p. 94)
• Amazon Redshift Data Delivery Errors (p. 95)
• Splunk Data Delivery Errors (p. 96)
• HTTPS Endpoint Data Delivery Errors (p. 97)
• Amazon OpenSearch Service Data Delivery Errors (p. 98)
"The provided AWS KMS key was not found. If you are using what you
S3.KMS.NotFoundException
believe to be a valid AWS KMS key with the correct role, check if there is a
problem with the account to which the AWS KMS key is attached."
"The KMS request per second limit was exceeded while attempting to
S3.KMS.RequestLimitExceeded
encrypt S3 objects. Increase the request per second limit."
For more information, see Limits in the AWS Key Management Service
Developer Guide.
S3.AccessDenied "Access was denied. Ensure that the trust policy for the provided IAM role
allows Kinesis Data Firehose to assume the role, and the access policy allows
access to the S3 bucket."
S3.AccountProblem "There is a problem with your AWS account that prevents the operation from
completing successfully. Contact AWS Support."
S3.AllAccessDisabled"Access to the account provided has been disabled. Contact AWS Support."
S3.InvalidPayer "Access to the account provided has been disabled. Contact AWS Support."
S3.NotSignedUp "The account is not signed up for Amazon S3. Sign the account up or use a different account."
S3.NoSuchBucket "The specified bucket does not exist. Create the bucket or use a different bucket that does exist."
S3.MethodNotAllowed "The specified method is not allowed against this resource. Modify the bucket’s policy to allow the correct Amazon S3 operation permissions."
InternalError "An internal error occurred while attempting to deliver data. Delivery will be retried; if the error persists, then it will be reported to AWS for resolution."
Amazon Redshift Data Delivery Errors
Redshift.TableNotFound "The table to which to load data was not found. Ensure that the specified table exists."
Redshift.AuthenticationFailed "The provided user name and password failed authentication. Provide a valid user name and password."
Redshift.AccessDenied "Access was denied. Ensure that the trust policy for the provided IAM role allows Kinesis Data Firehose to assume the role."
Redshift.S3BucketAccessDenied "The COPY command was unable to access the S3 bucket. Ensure that the access policy for the provided IAM role allows access to the S3 bucket."
Redshift.DataLoadFailed "Loading data into the table failed. Check STL_LOAD_ERRORS system table for details."
Redshift.ColumnNotFound "A column in the COPY command does not exist in the table. Specify a valid column name."
For more information, see the Amazon Redshift COPY command in the Amazon Redshift Database Developer Guide.
"The connection to the specified Amazon Redshift cluster failed. Ensure that
Redshift.ConnectionFailed
security settings allow Kinesis Data Firehose connections, that the cluster or
database specified in the Amazon Redshift destination configuration or JDBC
URL is correct, and that the cluster is available."
"Amazon Redshift attempted to use the wrong region endpoint for accessing
Redshift.IncorrectOrMissingRegion
the S3 bucket. Either specify a correct region value in the COPY command
options or ensure that the S3 bucket is in the same region as the Amazon
Redshift database."
"The provided jsonpaths file is not in a supported JSON format. Retry the
Redshift.IncorrectJsonPathsFile
command."
"The user does not have permissions to load data into the table. Check the
Redshift.InsufficientPrivilege
Amazon Redshift user permissions for the INSERT privilege."
"The query cannot be executed because the system is in resize mode. Try the
Redshift.ReadOnlyCluster
query again later."
Redshift.DiskFull "Data could not be loaded because the disk is full. Increase the capacity of
the Amazon Redshift cluster or delete unused data to free disk space."
InternalError "An internal error occurred while attempting to deliver data. Delivery will be
retried; if the error persists, then it will be reported to AWS for resolution."
"If you have a proxy (ELB or other) between Kinesis Data Firehose and the
Splunk.ProxyWithoutStickySessions
HEC node, you must enable sticky sessions to support HEC ACKs."
Splunk.DisabledToken"The HEC token is disabled. Enable the token to allow data delivery to
Splunk."
Splunk.InvalidToken "The HEC token is invalid. Update Kinesis Data Firehose with a valid HEC
token."
"The data is not formatted correctly. To see how to properly format data for
Splunk.InvalidDataFormat
Raw or Event HEC endpoints, see Splunk Event Data."
Splunk.InvalidIndex "The HEC token or input is configured with an invalid index. Check your
index configuration and try again."
Splunk.ServerError "Data delivery to Splunk failed due to a server error from the HEC node.
Kinesis Data Firehose will retry sending the data if the retry duration in your
Splunk.DisabledAck "Indexer acknowledgement is disabled for the HEC token. Enable indexer acknowledgement and try again. For more info, see Enable indexer acknowledgement."
Splunk.AckTimeout "Did not receive an acknowledgement from HEC before the HEC acknowledgement timeout expired. Despite the acknowledgement timeout, it's possible the data was indexed successfully in Splunk. Kinesis Data Firehose backs up in Amazon S3 data for which the acknowledgement timeout expired."
Splunk.ConnectionTimeout "The connection to Splunk timed out. This might be a transient error and the request will be retried. Kinesis Data Firehose backs up the data to Amazon S3 if all retries fail."
Splunk.InvalidEndpoint "Could not connect to the HEC endpoint. Make sure that the HEC endpoint URL is valid and reachable from Kinesis Data Firehose."
Splunk.SSLUnverified "Could not connect to the HEC endpoint. The host does not match the certificate provided by the peer. Make sure that the certificate and the host are valid."
Splunk.SSLHandshake "Could not connect to the HEC endpoint. Make sure that the certificate and the host are valid."
HTTPS Endpoint Data Delivery Errors
HttpEndpoint.RequestTimeout "The delivery timed out before a response was received and will be retried. If this error persists, contact the AWS Firehose service team."
HttpEndpoint.ResponseTooLarge "The response received from the endpoint is too large. Contact the owner of the endpoint to resolve this issue."
HttpEndpoint.InvalidResponseFromDestination "The response received from the specified endpoint is invalid. Contact the owner of the endpoint to resolve the issue."
"Unable to maintain connection with the endpoint. Contact the owner of the
HttpEndpoint.ConnectionReset
endpoint to resolve this issue."
"Trouble maintaining connection with the endpoint. Please reach out to the
HttpEndpoint.ConnectionReset
owner of the endpoint."
"Access was denied. Ensure that the trust policy for the provided IAM role
Lambda.AssumeRoleAccessDenied
allows Kinesis Data Firehose to assume the role."
"Access was denied. Ensure that the access policy allows access to the
Lambda.InvokeAccessDenied
Lambda function."
"There was an error parsing returned records from the Lambda function.
Lambda.JsonProcessingException
Ensure that the returned records follow the status model required by Kinesis
Data Firehose."
For more information, see Data Transformation and Status Model (p. 58).
For more information, see AWS Lambda Limits in the AWS Lambda Developer
Guide.
"Multiple records were returned with the same record ID. Ensure that the
Lambda.DuplicatedRecordId
Lambda function returns unique record IDs for each record."
For more information, see Data Transformation and Status Model (p. 58).
"One or more record IDs were not returned. Ensure that the Lambda function
Lambda.MissingRecordId
returns all received record IDs."
For more information, see Data Transformation and Status Model (p. 58).
"The specified Lambda function does not exist. Use a different function that
Lambda.ResourceNotFound
does exist."
"AWS Lambda was not able to set up the VPC access for the Lambda
Lambda.SubnetIPAddressLimitReachedException
function because one or more configured subnets have no available IP
addresses. Increase the IP address limit."
For more information, see Amazon VPC Limits - VPC and Subnets in the
Amazon VPC User Guide.
"AWS Lambda was not able to create an Elastic Network Interface (ENI) in
Lambda.ENILimitReachedException
the VPC, specified as part of the Lambda function configuration, because
the limit for network interfaces has been reached. Increase the network
interface limit."
For more information, see Amazon VPC Limits - Network Interfaces in the
Amazon VPC User Guide.
Accessing CloudWatch Logs for Kinesis Data Firehose
1. Sign in to the AWS Management Console and open the Kinesis console at https://
console.aws.amazon.com/kinesis.
2. Choose Data Firehose in the navigation pane.
3. On the navigation bar, choose an AWS Region.
4. Choose a delivery stream name to go to the delivery stream details page.
5. Choose Error Log to view a list of error logs related to data delivery failure.
Monitoring Kinesis Agent Health
Metrics such as number of records and bytes sent are useful to understand the rate at which the agent
is submitting data to the Kinesis Data Firehose delivery stream. When these metrics fall below expected
thresholds by some percentage or drop to zero, it could indicate configuration issues, network errors, or
agent health issues. Metrics such as on-host CPU and memory consumption and agent error counters
indicate data producer resource usage, and provide insights into potential configuration or host errors.
Finally, the agent also logs service exceptions to help investigate agent issues.
The agent metrics are reported in the region specified in the agent configuration setting
cloudwatch.endpoint. For more information, see Agent Configuration Settings (p. 29).
CloudWatch metrics published from multiple Kinesis Agents are aggregated or combined.
There is a nominal charge for metrics emitted from Kinesis Agent, which are enabled by default. For
more information, see Amazon CloudWatch Pricing.
Metric Description
BytesSent The number of bytes sent to the Kinesis Data Firehose delivery stream over the specified time period.
Units: Bytes
RecordSendAttempts The number of records attempted (either first time, or as a retry) in a call to PutRecordBatch over the specified time period.
Units: Count
Logging Kinesis Data Firehose API Calls with AWS CloudTrail
To learn more about CloudTrail, including how to configure and enable it, see the AWS CloudTrail User
Guide.
For an ongoing record of events in your AWS account, including events for Kinesis Data Firehose, create
a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create
a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see the following:
Kinesis Data Firehose supports logging the following actions as events in CloudTrail log files:
• CreateDeliveryStream
• DeleteDeliveryStream
• DescribeDeliveryStream
• ListDeliveryStreams
• ListTagsForDeliveryStream
• TagDeliveryStream
• StartDeliveryStreamEncryption
• StopDeliveryStreamEncryption
• UntagDeliveryStream
• UpdateDestination
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
The following example shows a CloudTrail log entry that demonstrates the CreateDeliveryStream,
DescribeDeliveryStream, ListDeliveryStreams, UpdateDestination, and
DeleteDeliveryStream actions.
{
"Records":[
{
"eventVersion":"1.02",
"userIdentity":{
"type":"IAMUser",
"principalId":"AKIAIOSFODNN7EXAMPLE",
"arn":"arn:aws:iam::111122223333:user/CloudTrail_Test_User",
"accountId":"111122223333",
"accessKeyId":"AKIAI44QH8DHBEXAMPLE",
"userName":"CloudTrail_Test_User"
},
"eventTime":"2016-02-24T18:08:22Z",
"eventSource":"firehose.amazonaws.com",
"eventName":"CreateDeliveryStream",
"awsRegion":"us-east-1",
"sourceIPAddress":"127.0.0.1",
"userAgent":"aws-internal/3",
"requestParameters":{
"deliveryStreamName":"TestRedshiftStream",
"redshiftDestinationConfiguration":{
"s3Configuration":{
"compressionFormat":"GZIP",
"prefix":"prefix",
"bucketARN":"arn:aws:s3:::firehose-cloudtrail-test-bucket",
"roleARN":"arn:aws:iam::111122223333:role/Firehose",
"bufferingHints":{
"sizeInMBs":3,
"intervalInSeconds":900
},
"encryptionConfiguration":{
"kMSEncryptionConfig":{
"aWSKMSKeyARN":"arn:aws:kms:us-east-1:key"
}
}
},
"clusterJDBCURL":"jdbc:redshift://example.abc123.us-
west-2.redshift.amazonaws.com:5439/dev",
"copyCommand":{
"copyOptions":"copyOptions",
"dataTableName":"dataTable"
},
"password":"",
"username":"",
"roleARN":"arn:aws:iam::111122223333:role/Firehose"
}
},
"responseElements":{
"deliveryStreamARN":"arn:aws:firehose:us-east-1:111122223333:deliverystream/
TestRedshiftStream"
},
"requestID":"958abf6a-db21-11e5-bb88-91ae9617edf5",
"eventID":"875d2d68-476c-4ad5-bbc6-d02872cfc884",
"eventType":"AwsApiCall",
"recipientAccountId":"111122223333"
},
{
"eventVersion":"1.02",
"userIdentity":{
"type":"IAMUser",
"principalId":"AKIAIOSFODNN7EXAMPLE",
"arn":"arn:aws:iam::111122223333:user/CloudTrail_Test_User",
"accountId":"111122223333",
"accessKeyId":"AKIAI44QH8DHBEXAMPLE",
"userName":"CloudTrail_Test_User"
},
"eventTime":"2016-02-24T18:08:54Z",
"eventSource":"firehose.amazonaws.com",
"eventName":"DescribeDeliveryStream",
"awsRegion":"us-east-1",
"sourceIPAddress":"127.0.0.1",
"userAgent":"aws-internal/3",
"requestParameters":{
"deliveryStreamName":"TestRedshiftStream"
},
"responseElements":null,
"requestID":"aa6ea5ed-db21-11e5-bb88-91ae9617edf5",
"eventID":"d9b285d8-d690-4d5c-b9fe-d1ad5ab03f14",
"eventType":"AwsApiCall",
"recipientAccountId":"111122223333"
},
{
"eventVersion":"1.02",
"userIdentity":{
"type":"IAMUser",
"principalId":"AKIAIOSFODNN7EXAMPLE",
"arn":"arn:aws:iam::111122223333:user/CloudTrail_Test_User",
"accountId":"111122223333",
"accessKeyId":"AKIAI44QH8DHBEXAMPLE",
"userName":"CloudTrail_Test_User"
},
"eventTime":"2016-02-24T18:10:00Z",
"eventSource":"firehose.amazonaws.com",
"eventName":"ListDeliveryStreams",
"awsRegion":"us-east-1",
"sourceIPAddress":"127.0.0.1",
"userAgent":"aws-internal/3",
"requestParameters":{
"limit":10
},
"responseElements":null,
"requestID":"d1bf7f86-db21-11e5-bb88-91ae9617edf5",
"eventID":"67f63c74-4335-48c0-9004-4ba35ce00128",
"eventType":"AwsApiCall",
"recipientAccountId":"111122223333"
},
{
"eventVersion":"1.02",
"userIdentity":{
"type":"IAMUser",
"principalId":"AKIAIOSFODNN7EXAMPLE",
"arn":"arn:aws:iam::111122223333:user/CloudTrail_Test_User",
"accountId":"111122223333",
"accessKeyId":"AKIAI44QH8DHBEXAMPLE",
"userName":"CloudTrail_Test_User"
},
"eventTime":"2016-02-24T18:10:09Z",
"eventSource":"firehose.amazonaws.com",
"eventName":"UpdateDestination",
"awsRegion":"us-east-1",
"sourceIPAddress":"127.0.0.1",
"userAgent":"aws-internal/3",
"requestParameters":{
"destinationId":"destinationId-000000000001",
"deliveryStreamName":"TestRedshiftStream",
"currentDeliveryStreamVersionId":"1",
"redshiftDestinationUpdate":{
"roleARN":"arn:aws:iam::111122223333:role/Firehose",
"clusterJDBCURL":"jdbc:redshift://example.abc123.us-
west-2.redshift.amazonaws.com:5439/dev",
"password":"",
"username":"",
"copyCommand":{
"copyOptions":"copyOptions",
"dataTableName":"dataTable"
},
"s3Update":{
"bucketARN":"arn:aws:s3:::firehose-cloudtrail-test-bucket-update",
"roleARN":"arn:aws:iam::111122223333:role/Firehose",
"compressionFormat":"GZIP",
"bufferingHints":{
"sizeInMBs":3,
"intervalInSeconds":900
},
"encryptionConfiguration":{
"kMSEncryptionConfig":{
"aWSKMSKeyARN":"arn:aws:kms:us-east-1:key"
}
},
"prefix":"arn:aws:s3:::firehose-cloudtrail-test-bucket"
}
}
},
"responseElements":null,
"requestID":"d549428d-db21-11e5-bb88-91ae9617edf5",
"eventID":"1cb21e0b-416a-415d-bbf9-769b152a6585",
"eventType":"AwsApiCall",
"recipientAccountId":"111122223333"
},
{
"eventVersion":"1.02",
"userIdentity":{
"type":"IAMUser",
"principalId":"AKIAIOSFODNN7EXAMPLE",
"arn":"arn:aws:iam::111122223333:user/CloudTrail_Test_User",
"accountId":"111122223333",
"accessKeyId":"AKIAI44QH8DHBEXAMPLE",
"userName":"CloudTrail_Test_User"
},
"eventTime":"2016-02-24T18:10:12Z",
"eventSource":"firehose.amazonaws.com",
"eventName":"DeleteDeliveryStream",
"awsRegion":"us-east-1",
"sourceIPAddress":"127.0.0.1",
"userAgent":"aws-internal/3",
"requestParameters":{
"deliveryStreamName":"TestRedshiftStream"
},
"responseElements":null,
"requestID":"d85968c1-db21-11e5-bb88-91ae9617edf5",
"eventID":"dd46bb98-b4e9-42ff-a6af-32d57e636ad1",
"eventType":"AwsApiCall",
"recipientAccountId":"111122223333"
}
]
}
Custom Prefixes for Amazon S3 Objects
You can use expressions of the following forms in your custom prefix: !{namespace:value}, where
namespace can be one of the following, as explained in the following sections.
• firehose
• timestamp
• partitionKeyFromQuery
• partitionKeyFromLambda
If a prefix ends with a slash, it appears as a folder in the Amazon S3 bucket. For more information, see
Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
The timestamp namespace
When evaluating timestamps, Kinesis Data Firehose uses the approximate arrival timestamp of the oldest
record that's contained in the Amazon S3 object being written.
If you use the timestamp namespace more than once in the same prefix expression, every instance
evaluates to the same instant in time.
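For example, assuming an illustrative prefix of myPrefix/!{timestamp:yyyy/MM/dd}/ (the myPrefix portion is a placeholder), an object whose oldest record arrived on August 9, 2021 (UTC) is delivered under the following evaluated prefix:
myPrefix/2021/08/09/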
partitionKeyFromLambda and partitionKeyFromQuery namespaces
For dynamic partitioning, you must use the following expression format in your S3 bucket
prefix: !{namespace:value}, where namespace can be either partitionKeyFromQuery or
partitionKeyFromLambda, or both. If you are using inline parsing to create the partitioning keys for
your source data, you must specify an S3 bucket prefix value that consists of expressions specified in the
following format: "partitionKeyFromQuery:keyID". If you are using an AWS Lambda function to
create partitioning keys for your source data, you must specify an S3 bucket prefix value that consists
of expressions specified in the following format: "partitionKeyFromLambda:keyID". For more
information, see the "Choose Amazon S3 for Your Destination" in Creating an Amazon Kinesis Data
Firehose Delivery Stream.
Semantic rules
The following rules apply to Prefix and ErrorOutputPrefix expressions.
• For the timestamp namespace, any character that isn't in single quotes is evaluated. In other words,
any string escaped with single quotes in the value field is taken literally.
• If you specify a prefix that doesn't contain a timestamp namespace expression, Kinesis Data Firehose
appends the expression !{timestamp:yyyy/MM/dd/HH/} to the value in the Prefix field.
• The sequence !{ can only appear in !{namespace:value} expressions.
• ErrorOutputPrefix can be null only if Prefix contains no expressions. In this case, Prefix
evaluates to <specified-prefix>yyyy/MM/DDD/HH/ and ErrorOutputPrefix evaluates to
<specified-prefix><error-output-type>YYYY/MM/DDD/HH/. DDD represents the day of the
year.
• If you specify an expression for ErrorOutputPrefix, you must include at least one instance of !
{firehose:error-output-type}.
• Prefix can't contain !{firehose:error-output-type}.
• Neither Prefix nor ErrorOutputPrefix can be greater than 512 characters after they're evaluated.
• If the destination is Amazon Redshift, Prefix must not contain expressions and
ErrorOutputPrefix must be null.
• When the destination is Amazon OpenSearch Service or Splunk, and no ErrorOutputPrefix is
specified, Kinesis Data Firehose uses the Prefix field for failed records.
• When the destination is Amazon S3, the Prefix and ErrorOutputPrefix in the Amazon S3
destination configuration are used for successful records and failed records, respectively. If you use the
AWS CLI or the API, you can use ExtendedS3DestinationConfiguration to specify an Amazon S3
backup configuration with its own Prefix and ErrorOutputPrefix.
• When you use the AWS Management Console and set the destination to Amazon S3, Kinesis Data
Firehose uses the Prefix and ErrorOutputPrefix in the destination configuration for successful
records and failed records, respectively. If you specify a prefix but no error prefix, Kinesis Data Firehose
automatically sets the error prefix to !{firehose:error-output-type}/.
• When you use ExtendedS3DestinationConfiguration with the AWS CLI, the API, or AWS
CloudFormation, if you specify a S3BackupConfiguration, Kinesis Data Firehose doesn't provide a
default ErrorOutputPrefix.
• You cannot use partitionKeyFromLambda and partitionKeyFromQuery namespaces when
creating ErrorOutputPrefix expressions.
Example prefixes
Prefix and ErrorOutputPrefix examples
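The following pairs are illustrative; the evaluated dates come from the approximate arrival timestamp of
the records, and processing-failed is one of the possible error output types (it indicates a Lambda
transformation failure).

Prefix: logs/!{timestamp:yyyy/MM/dd}/
ErrorOutputPrefix: errors/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/
Successful objects are written under keys such as logs/2021/08/27/, while records that fail Lambda
processing on the same day are written under errors/processing-failed/2021/08/27/.

Prefix: myPrefix/
ErrorOutputPrefix: (not specified)
Because the prefix contains no expressions, Kinesis Data Firehose appends the default timestamp
expression, so objects are written under keys such as myPrefix/2021/08/27/05/.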
Interface VPC endpoints (AWS
PrivateLink) for Kinesis Data Firehose
The following example shows how you can set up an AWS Lambda function in a VPC and create a VPC
endpoint to allow the function to communicate securely with the Kinesis Data Firehose service. In this
example, you use a policy that allows the Lambda function to list the delivery streams in the current
Region but not to describe any delivery stream.
1. Sign in to the AWS Management Console and open the Amazon VPC console at https://
console.aws.amazon.com/vpc/.
2. In the VPC Dashboard choose Endpoints.
3. Choose Create Endpoint.
4. In the list of service names, choose com.amazonaws.your_region.kinesis-firehose.
5. Choose the VPC and one or more subnets in which to create the endpoint.
6. Choose one or more security groups to associate with the endpoint.
7. For Policy, choose Custom and paste the following policy:
{
    "Statement": [
        {
            "Sid": "Allow-only-specific-PrivateAPIs",
            "Principal": "*",
            "Action": [
                "firehose:ListDeliveryStreams"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Deny-specific-PrivateAPIs",
            "Principal": "*",
            "Action": [
                "firehose:DescribeDeliveryStream"
            ],
            "Effect": "Deny",
            "Resource": [
                "*"
            ]
        }
    ]
}
You can then test the endpoint policy with a Lambda function that runs in the VPC, similar to the
following:

import json
import boto3
import os
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    REGION = os.environ['AWS_REGION']
    client = boto3.client('firehose', REGION)

    print("Calling list_delivery_streams with ListDeliveryStreams allowed policy.")
    delivery_stream_request = client.list_delivery_streams()
    print("Successfully returned list_delivery_streams request %s." % (
        delivery_stream_request
    ))

    describe_access_denied = False
    try:
        print("Calling describe_delivery_stream with DescribeDeliveryStream denied policy.")
        delivery_stream_info = client.describe_delivery_stream(
            DeliveryStreamName='test-describe-denied')
    except ClientError as e:
        error_code = e.response['Error']['Code']
        print("Caught %s." % (error_code))
        if error_code == 'AccessDeniedException':
            describe_access_denied = True

    if not describe_access_denied:
        raise RuntimeError("Expected AccessDeniedException from describe_delivery_stream.")
    else:
        print("Access denied test succeeded.")

If the endpoint policy is in effect, the call to describe_delivery_stream fails with an
AccessDeniedException and the function logs "Access denied test succeeded."
Availability
Interface VPC endpoints are currently supported within the following Regions:
• US East (Ohio)
• US East (N. Virginia)
• US West (N. California)
• US West (Oregon)
• Asia Pacific (Mumbai)
• Asia Pacific (Seoul)
• Asia Pacific (Singapore)
• Asia Pacific (Sydney)
Tagging Your Delivery Streams in Amazon Kinesis Data Firehose
Topics
• Tag Basics (p. 113)
• Tracking Costs Using Tagging (p. 113)
• Tag Restrictions (p. 114)
• Tagging Delivery Streams Using the Amazon Kinesis Data Firehose API (p. 114)
Tag Basics
You can use tags to categorize your Kinesis Data Firehose delivery streams. For example, you can
categorize delivery streams by purpose, owner, or environment. Because you define the key and value for
each tag, you can create a custom set of categories to meet your specific needs. For example, you might
define a set of tags that helps you track delivery streams by owner and associated application.
Tag Restrictions
The following restrictions apply to tags in Kinesis Data Firehose.
Basic restrictions
• Each tag key must be unique. If you add a tag with a key that's already in use, your new tag overwrites
the existing key-value pair.
• You can't start a tag key with aws: because this prefix is reserved for use by AWS. AWS creates tags
that begin with this prefix on your behalf, but you can't edit or delete them.
• Tag keys must be between 1 and 128 Unicode characters in length.
• Tag keys must consist of the following characters: Unicode letters, digits, white space, and the
following special characters: _ . / = + - @.
Tagging Delivery Streams Using the Amazon Kinesis Data Firehose API
You can use the Amazon Kinesis Data Firehose API to complete the following tasks:
• TagDeliveryStream
• ListTagsForDeliveryStream
• UntagDeliveryStream
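For example, with the AWS CLI you can add a tag to a delivery stream and then list its tags (the stream
name and tag values here are illustrative):

aws firehose tag-delivery-stream --delivery-stream-name MyDeliveryStream \
    --tags Key=Environment,Value=Production
aws firehose list-tags-for-delivery-stream --delivery-stream-name MyDeliveryStream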
Tutorial: Sending VPC Flow Logs to Splunk Using Amazon Kinesis Data Firehose
The following diagram shows the flow of data that is demonstrated in this tutorial.
As the diagram shows, first you send the Amazon VPC flow logs to Amazon CloudWatch. Then from
CloudWatch, the data goes to a Kinesis Data Firehose delivery stream. Kinesis Data Firehose then invokes
an AWS Lambda function to decompress the data, and sends the decompressed log data to Splunk.
Prerequisites
Before you begin, ensure that you have the following prerequisites:
• AWS account — If you don't have an AWS account, create one at http://aws.amazon.com. For more
information, see Setting Up for Amazon Kinesis Data Firehose (p. 4).
• AWS CLI — Parts of this tutorial require that you use the AWS Command Line Interface (AWS CLI).
To install the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line
Interface User Guide.
• HEC token — In your Splunk deployment, set up an HTTP Event Collector (HEC) token with the source
type aws:cloudwatchlogs:vpcflow. For more information, see Installation and configuration
overview for the Splunk Add-on for Amazon Kinesis Firehose in the Splunk documentation.
Topics
• Step 1: Send Log Data from Amazon VPC to Amazon CloudWatch (p. 116)
• Step 2: Create a Kinesis Data Firehose Delivery Stream with Splunk as a Destination (p. 118)
• Step 3: Send the Data from Amazon CloudWatch to Kinesis Data Firehose (p. 121)
• Step 4: Check the Results in Splunk and in Kinesis Data Firehose (p. 122)
Step 1: Send Log Data from Amazon VPC to Amazon CloudWatch
To create a CloudWatch log group to receive your Amazon VPC flow logs
1. Sign in to the AWS Management Console and open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Log groups.
3. Choose Actions, and then choose Create log group.
4. Enter the name VPCtoSplunkLogGroup, and choose Create log group.
8. In the new window that appears, keep IAM Role set to Create a new IAM Role. In the Role Name
box, enter VPCtoSplunkWritetoCWRole. Then choose Allow.
9. Return to the Create flow log browser tab, and refresh the IAM role* box. Then choose
VPCtoSplunkWritetoCWRole in the list.
10. Choose Create, and then choose Close.
11. Back on the Amazon VPC dashboard, choose Your VPCs in the navigation pane. Then select the
check box next to your VPC.
12. Scroll down and choose the Flow Logs tab, and look for the flow log that you created in the
preceding steps. Ensure that its status is Active. If it is not, review the previous steps.
Proceed to Step 2: Create a Kinesis Data Firehose Delivery Stream with Splunk as a Destination (p. 118).
Step 2: Create a Kinesis Data Firehose Delivery Stream with Splunk as a Destination
The logs that CloudWatch sends to the delivery stream are in a compressed format. However, Kinesis
Data Firehose can't send compressed logs to Splunk. Therefore, when you create the delivery stream
in the following procedure, you enable data transformation and configure an AWS Lambda function to
uncompress the log data. Kinesis Data Firehose then sends the uncompressed data to Splunk.
7. On the AWS Lambda console, for the function name, enter VPCtoSplunkLambda.
8. In the description text under Execution role, choose the IAM console link to create a custom role.
This opens the AWS Identity and Access Management (IAM) console.
9. In the IAM console, choose Lambda.
10. Choose Next: Permissions.
11. Choose Create policy.
12. Choose the JSON tab and replace the existing JSON with the following. Be sure to replace the your-
region and your-aws-account-id placeholders with your AWS Region code and account ID.
Don't include any hyphens or dashes in the account ID. For a list of AWS Region codes, see AWS
Regions and Endpoints.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "firehose:PutRecordBatch"
            ],
            "Resource": [
                "arn:aws:firehose:your-region:your-aws-account-id:deliverystream/VPCtoSplunkStream"
            ]
        }
    ]
}
This policy allows the Lambda function to put data back into the delivery stream by invoking the
PutRecordBatch operation. This step is needed because a Lambda function can only return up to 6
MiB of data every time Kinesis Data Firehose invokes it. If the size of the uncompressed data exceeds
6 MiB, the function invokes PutRecordBatch to put some of the data back into the delivery stream
for future processing.
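A minimal sketch of this re-ingestion pattern looks like the following; it is illustrative only (the stream
name comes from the function's configuration, and production code should also retry entries flagged in
FailedPutCount):

import boto3

firehose = boto3.client('firehose')

def reingest_records(stream_name, records):
    # Put data that could not be returned inline back into the delivery stream.
    # PutRecordBatch accepts at most 500 records (or 4 MiB) per call.
    for start in range(0, len(records), 500):
        batch = [{'Data': data} for data in records[start:start + 500]]
        response = firehose.put_record_batch(DeliveryStreamName=stream_name, Records=batch)
        if response['FailedPutCount'] > 0:
            # Production code should retry the entries reported in RequestResponses.
            raise RuntimeError('%d records failed to re-ingest' % response['FailedPutCount'])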
13. Back in the Create role window, refresh the list of policies, then choose VPCtoSplunkLambdaPolicy
by selecting the box to its left.
14. Choose Next: Tags.
15. Choose Next: Review.
16. For Role Name, enter VPCtoSplunkLambdaRole, then choose Create role.
17. Back in the Lambda console, refresh the list of existing roles, then select
VPCtoSplunkLambdaRole.
18. Scroll down and choose Create function.
19. In the Lambda function pane, scroll down to the Basic settings section, and increase the timeout to
3 minutes.
20. Scroll up and choose Save.
21. Back in the Choose Lambda blueprint dialog box, choose Close.
22. On the delivery stream creation page, under the Transform source records with AWS Lambda
section, choose the refresh button. Then choose VPCtoSplunkLambda in the list of functions.
23. Scroll down and choose Next.
24. For Destination*, choose Splunk.
25. For Splunk cluster endpoint, see the information at Configure Amazon Kinesis Firehose to send
data to the Splunk platform in the Splunk documentation.
26. Keep Splunk endpoint type set to Raw endpoint.
27. Enter the value (and not the name) of your Splunk HTTP Event Collector (HEC) token.
28. For S3 backup mode*, choose Backup all events.
29. Choose an existing Amazon S3 bucket (or create a new one if you want), and choose Next.
30. On the Configure settings page, scroll down to the IAM role section, and choose Create new or
choose.
31. In the IAM role list, choose Create a new IAM role. For Role Name, enter
VPCtoSplunkLambdaFirehoseRole, and then choose Allow.
32. Choose Next, and review the configuration that you chose for the delivery stream. Then choose
Create delivery stream.
Proceed to Step 3: Send the Data from Amazon CloudWatch to Kinesis Data Firehose (p. 121).
Step 3: Send the Data from Amazon CloudWatch to Kinesis Data Firehose
In this procedure, you use the AWS Command Line Interface (AWS CLI) to create a CloudWatch Logs
subscription that sends log events to your delivery stream.
1. Save the following trust policy to a local file, and name the file
VPCtoSplunkCWtoFHTrustPolicy.json. Be sure to replace the your-region placeholder with
your AWS Region code.
{
    "Statement": {
        "Effect": "Allow",
        "Principal": { "Service": "logs.your-region.amazonaws.com" },
        "Action": "sts:AssumeRole"
    }
}
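2. Create the IAM role from the trust policy. A typical AWS CLI command looks like the following (the
role name matches the role referenced in the access policy in the next step):

aws iam create-role --role-name VPCtoSplunkCWtoFHRole \
    --assume-role-policy-document file://VPCtoSplunkCWtoFHTrustPolicy.json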
3. Save the following access policy to a local file, and name the file
VPCtoSplunkCWtoFHAccessPolicy.json. Be sure to replace the your-region and your-aws-
account-id placeholders with your AWS Region code and account ID.
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["firehose:*"],
            "Resource": ["arn:aws:firehose:your-region:your-aws-account-id:deliverystream/VPCtoSplunkStream"]
        },
        {
            "Effect": "Allow",
            "Action": ["iam:PassRole"],
            "Resource": ["arn:aws:iam::your-aws-account-id:role/VPCtoSplunkCWtoFHRole"]
        }
    ]
}
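4. Attach the access policy to the role. A typical AWS CLI command looks like the following (the policy
name is illustrative):

aws iam put-role-policy --role-name VPCtoSplunkCWtoFHRole \
    --policy-name VPCtoSplunkCWtoFHAccessPolicy \
    --policy-document file://VPCtoSplunkCWtoFHAccessPolicy.json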
5. Replace the your-region and your-aws-account-id placeholders in the following AWS CLI
command with your AWS Region code and account ID, and then run the command.
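A representative command looks like the following; the filter name is arbitrary, and the other names
match the resources created earlier in this tutorial:

aws logs put-subscription-filter \
    --log-group-name "VPCtoSplunkLogGroup" \
    --filter-name "VPCtoSplunkSubscriptionFilter" \
    --filter-pattern "" \
    --destination-arn "arn:aws:firehose:your-region:your-aws-account-id:deliverystream/VPCtoSplunkStream" \
    --role-arn "arn:aws:iam::your-aws-account-id:role/VPCtoSplunkCWtoFHRole"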
Proceed to Step 4: Check the Results in Splunk and in Kinesis Data Firehose (p. 122).
Step 4: Check the Results in Splunk and in Kinesis Data Firehose
Important
After you verify your results, delete any AWS resources that you don't need to keep, so as not to
incur ongoing charges.
Troubleshooting Amazon Kinesis Data Firehose
If the data source is a Kinesis data stream, Kinesis Data Firehose retries the following operations
indefinitely: DescribeStream, GetRecords, and GetShardIterator.
If the delivery stream uses DirectPut, check the IncomingBytes and IncomingRecords metrics
to see if there's incoming traffic. If you are using the PutRecord or PutRecordBatch, make sure you
catch exceptions and retry. We recommend a retry policy with exponential back-off with jitter and
several retries. Also, if you use the PutRecordBatch API, make sure your code checks the value of
FailedPutCount in the response even when the API call succeeds.
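For example, with the AWS SDK for Python (boto3), a caller might resend only the entries that failed;
this sketch is illustrative (the stream name, back-off values, and retry count are placeholders):

import time
import boto3

firehose = boto3.client('firehose')

def put_with_retries(stream_name, payloads, attempts=5):
    # Send records with PutRecordBatch and resend only the entries that failed.
    entries = [{'Data': p} for p in payloads]
    for attempt in range(attempts):
        response = firehose.put_record_batch(DeliveryStreamName=stream_name, Records=entries)
        if response['FailedPutCount'] == 0:
            return
        # RequestResponses is ordered like the request; failed entries carry an ErrorCode.
        entries = [entry for entry, result in zip(entries, response['RequestResponses'])
                   if 'ErrorCode' in result]
        time.sleep((2 ** attempt) * 0.1)  # add jitter in production code
    raise RuntimeError('%d records still failing after %d attempts' % (len(entries), attempts))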
If the delivery stream uses a Kinesis data stream as its source, check the IncomingBytes
and IncomingRecords metrics for the source data stream. Additionally, ensure that the
DataReadFromKinesisStream.Bytes and DataReadFromKinesisStream.Records metrics are
being emitted for the delivery stream.
For information about tracking delivery errors using CloudWatch, see the section called “Monitoring with
CloudWatch Logs” (p. 93).
Issues
• Data Not Delivered to Amazon S3 (p. 123)
• Data Not Delivered to Amazon Redshift (p. 124)
• Data Not Delivered to Amazon OpenSearch Service (p. 125)
• Data Not Delivered to Splunk (p. 125)
• Delivery Stream Not Available as a Target for CloudWatch Logs, CloudWatch Events, or AWS IoT
Action (p. 126)
• Data Freshness Metric Increasing or Not Emitted (p. 126)
• Record Format Conversion to Apache Parquet Fails (p. 127)
• No Data at Destination Despite Good Metrics (p. 128)
• Troubleshooting HTTP Endpoints (p. 128)
Data Not Delivered to Amazon S3
• Check the Kinesis Data Firehose IncomingBytes and IncomingRecords metrics to make sure that
data is sent to your Kinesis Data Firehose delivery stream successfully. For more information, see
Monitoring Kinesis Data Firehose Using CloudWatch Metrics (p. 79).
• If data transformation with Lambda is enabled, check the Kinesis Data Firehose
ExecuteProcessingSuccess metric to make sure that Kinesis Data Firehose has tried to invoke
your Lambda function. For more information, see Monitoring Kinesis Data Firehose Using CloudWatch
Metrics (p. 79).
• Check the Kinesis Data Firehose DeliveryToS3.Success metric to make sure that Kinesis Data
Firehose has tried putting data to your Amazon S3 bucket. For more information, see Monitoring
Kinesis Data Firehose Using CloudWatch Metrics (p. 79).
• Enable error logging if it is not already enabled, and check error logs for delivery failure. For more
information, see Monitoring Kinesis Data Firehose Using CloudWatch Logs (p. 93).
• Make sure that the Amazon S3 bucket that is specified in your Kinesis Data Firehose delivery stream
still exists.
• If data transformation with Lambda is enabled, make sure that the Lambda function that is specified in
your delivery stream still exists.
• Make sure that the IAM role that is specified in your Kinesis Data Firehose delivery stream has access to
your S3 bucket and your Lambda function (if data transformation is enabled). For more information,
see Grant Kinesis Data Firehose Access to an Amazon S3 Destination (p. 41).
• If you're using data transformation, make sure that your Lambda function never returns responses
whose payload size exceeds 6 MB. For more information, see Amazon Kinesis Data Firehose Data
Transformation.
Data Not Delivered to Amazon Redshift
Data is delivered to your S3 bucket before loading into Amazon Redshift. If the data was not delivered to
your S3 bucket, see Data Not Delivered to Amazon S3 (p. 123).
• Check the Kinesis Data Firehose DeliveryToRedshift.Success metric to make sure that Kinesis
Data Firehose has tried to copy data from your S3 bucket to the Amazon Redshift cluster. For more
information, see Monitoring Kinesis Data Firehose Using CloudWatch Metrics (p. 79).
• Enable error logging if it is not already enabled, and check error logs for delivery failure. For more
information, see Monitoring Kinesis Data Firehose Using CloudWatch Logs (p. 93).
• Check the Amazon Redshift STL_CONNECTION_LOG table to see if Kinesis Data Firehose can make
successful connections. In this table, you should be able to see connections and their status based
on a user name. For more information, see STL_CONNECTION_LOG in the Amazon Redshift Database
Developer Guide.
• If the previous check shows that connections are being established, check the Amazon Redshift
STL_LOAD_ERRORS table to verify the reason for the COPY failure. For more information, see
STL_LOAD_ERRORS in the Amazon Redshift Database Developer Guide.
• Make sure that the Amazon Redshift configuration in your Kinesis Data Firehose delivery stream is
accurate and valid.
• Make sure that the IAM role that is specified in your Kinesis Data Firehose delivery stream can
access the S3 bucket that Amazon Redshift copies data from, and also the Lambda function for data
transformation (if data transformation is enabled). For more information, see Grant Kinesis Data
Firehose Access to an Amazon S3 Destination (p. 41).
• If your Amazon Redshift cluster is in a virtual private cloud (VPC), make sure that the cluster allows
access from Kinesis Data Firehose IP addresses. For more information, see Grant Kinesis Data Firehose
Access to an Amazon Redshift Destination (p. 43).
Data Not Delivered to Amazon OpenSearch Service
Data can be backed up to your Amazon S3 bucket concurrently. If data was not delivered to your S3
bucket, see Data Not Delivered to Amazon S3 (p. 123).
• Check the Kinesis Data Firehose IncomingBytes and IncomingRecords metrics to make sure that
data is sent to your Kinesis Data Firehose delivery stream successfully. For more information, see
Monitoring Kinesis Data Firehose Using CloudWatch Metrics (p. 79).
• If data transformation with Lambda is enabled, check the Kinesis Data Firehose
ExecuteProcessingSuccess metric to make sure that Kinesis Data Firehose has tried to invoke
your Lambda function. For more information, see Monitoring Kinesis Data Firehose Using CloudWatch
Metrics (p. 79).
• Check the Kinesis Data Firehose DeliveryToAmazonOpenSearchService.Success metric to make
sure that Kinesis Data Firehose has tried to index data to the OpenSearch Service cluster. For more
information, see Monitoring Kinesis Data Firehose Using CloudWatch Metrics (p. 79).
• Enable error logging if it is not already enabled, and check error logs for delivery failure. For more
information, see Monitoring Kinesis Data Firehose Using CloudWatch Logs (p. 93).
• Make sure that the OpenSearch Service configuration in your delivery stream is accurate and valid.
• If data transformation with Lambda is enabled, make sure that the Lambda function that is specified in
your delivery stream still exists.
• Make sure that the IAM role that is specified in your delivery stream can access your OpenSearch
Service cluster and Lambda function (if data transformation is enabled). For more information, see
Grant Kinesis Data Firehose Access to a Public OpenSearch Service Destination (p. 45).
• If you're using data transformation, make sure that your Lambda function never returns responses
whose payload size exceeds 6 MB. For more information, see Amazon Kinesis Data Firehose Data
Transformation.
Data Not Delivered to Splunk
• If your Splunk platform is in a VPC, make sure that Kinesis Data Firehose can access it. For more
information, see Access to Splunk in VPC.
• If you use an AWS load balancer, make sure that it is a Classic Load Balancer. Kinesis Data Firehose
does not support Application Load Balancers or Network Load Balancers. Also, enable duration-based
sticky sessions with cookie expiration disabled. For information about how to do this, see Duration-
Based Session Stickiness.
• Review the Splunk platform requirements. The Splunk add-on for Kinesis Data Firehose requires
Splunk platform version 6.6.X or later. For more information, see Splunk Add-on for Amazon Kinesis
Firehose.
• If you have a proxy (Elastic Load Balancing or other) between Kinesis Data Firehose and the HTTP
Event Collector (HEC) node, enable sticky sessions to support HEC acknowledgements (ACKs).
• Make sure that you are using a valid HEC token.
• Ensure that the HEC token is enabled. See Enable and disable Event Collector tokens.
• Check whether the data that you're sending to Splunk is formatted correctly. For more information,
see Format events for HTTP Event Collector.
• Make sure that the HEC token and input event are configured with a valid index.
• When an upload to Splunk fails due to a server error from the HEC node, the request is automatically
retried. If all retries fail, the data gets backed up to Amazon S3. Check if your data appears in Amazon
S3, which is an indication of such a failure.
• Make sure that you enabled indexer acknowledgment on your HEC token. For more information, see
Enable indexer acknowledgement.
• Increase the value of HECAcknowledgmentTimeoutInSeconds in the Splunk destination
configuration of your Kinesis Data Firehose delivery stream.
• Increase the value of DurationInSeconds under RetryOptions in the Splunk destination
configuration of your Kinesis Data Firehose delivery stream.
• Check your HEC health.
• If you're using data transformation, make sure that your Lambda function never returns responses
whose payload size exceeds 6 MB. For more information, see Amazon Kinesis Data Firehose Data
Transformation.
• Make sure that the Splunk parameter named ackIdleCleanup is set to true. It is false by default. To
set this parameter to true, do the following:
• For a managed Splunk Cloud deployment, submit a case using the Splunk support portal. In this
case, ask Splunk support to enable the HTTP event collector, set ackIdleCleanup to true in
inputs.conf, and create or modify a load balancer to use with this add-on.
• For a distributed Splunk Enterprise deployment, set the ackIdleCleanup parameter to true
in the inputs.conf file. For *nix users, this file is located under $SPLUNK_HOME/etc/apps/
splunk_httpinput/local/. For Windows users, it is under %SPLUNK_HOME%\etc\apps
\splunk_httpinput\local\.
• For a single-instance Splunk Enterprise deployment, set the ackIdleCleanup parameter to
true in the inputs.conf file. For *nix users, this file is located under $SPLUNK_HOME/etc/
apps/splunk_httpinput/local/. For Windows users, it is under %SPLUNK_HOME%\etc\apps
\splunk_httpinput\local\.
• See Troubleshoot the Splunk Add-on for Amazon Kinesis Firehose.
Data Freshness Metric Increasing or Not Emitted
If you enable backup for all events or all documents, monitor two separate data-freshness metrics: one
for the main destination and one for the backup.
If the data-freshness metric isn't being emitted, this means that there is no active delivery for the
delivery stream. This happens when data delivery is completely blocked or when there's no incoming
data.
If the data-freshness metric is constantly increasing, this means that data delivery is falling behind. This
can happen for one of the following reasons.
• The destination can't handle the rate of delivery. If Kinesis Data Firehose encounters transient errors
due to high traffic, then the delivery might fall behind. This can happen for destinations other than
Amazon S3 (it can happen for OpenSearch Service, Amazon Redshift, or Splunk). Ensure that your
destination has enough capacity to handle the incoming traffic.
• The destination is slow. Data delivery might fall behind if Kinesis Data Firehose encounters high
latency. Monitor the destination's latency metric.
• The Lambda function is slow. This might lead to a data delivery rate that is less than the data ingestion
rate for the delivery stream. If possible, improve the efficiency of the Lambda function. For instance,
if the function does network IO, use multiple threads or asynchronous IO to increase parallelism. Also,
consider increasing the memory size of the Lambda function so that the CPU allocation can increase
accordingly. This might lead to faster Lambda invocations. For information about configuring Lambda
functions, see Configuring AWS Lambda Functions.
• There are failures during data delivery. For information about how to monitor errors using Amazon
CloudWatch Logs, see the section called “Monitoring with CloudWatch Logs” (p. 93).
• If the data source of the delivery stream is a Kinesis data stream, throttling might be happening. Check
the ThrottledGetRecords, ThrottledGetShardIterator, and ThrottledDescribeStream
metrics. If there are multiple consumers attached to the Kinesis data stream, consider the following:
• If the ThrottledGetRecords and ThrottledGetShardIterator metrics are high, we
recommend you increase the number of shards provisioned for the data stream.
• If the ThrottledDescribeStream is high, we recommend you add the kinesis:listshards
permission to the role configured in KinesisStreamSourceConfiguration.
• Low buffering hints for the destination. This might increase the number of round trips that Kinesis
Data Firehose needs to make to the destination, which might cause delivery to fall behind. Consider
increasing the value of the buffering hints. For more information, see BufferingHints.
• A high retry duration might cause delivery to fall behind when the errors are frequent. Consider
reducing the retry duration. Also, monitor the errors and try to reduce them. For information about
how to monitor errors using Amazon CloudWatch Logs, see the section called “Monitoring with
CloudWatch Logs” (p. 93).
• If the destination is Splunk and DeliveryToSplunk.DataFreshness is high but
DeliveryToSplunk.Success looks good, the Splunk cluster might be busy. Free the Splunk cluster
if possible. Alternatively, contact AWS Support and request an increase in the number of channels that
Kinesis Data Firehose is using to communicate with the Splunk cluster.
Record Format Conversion to Apache Parquet Fails
When the AWS Glue crawler indexes the DynamoDB set data types (StringSet, NumberSet, and
BinarySet), it stores them in the data catalog as SET<STRING>, SET<BIGINT>, and SET<BINARY>,
respectively. However, for Kinesis Data Firehose to convert the data records to the Apache Parquet
format, it requires Apache Hive data types. Because the set types aren't valid Apache Hive data types,
conversion fails. To get conversion to work, update the data catalog with Apache Hive data types. You
can do that by changing set to array in the data catalog.
To change one or more data types from set to array in an AWS Glue data catalog
1. Sign in to the AWS Management Console and open the AWS Glue console at https://
console.aws.amazon.com/glue/.
2. In the left pane, under the Data catalog heading, choose Tables.
3. In the list of tables, choose the name of the table where you need to modify one or more data types.
This takes you to the details page for the table.
4. Choose the Edit schema button in the top right corner of the details page.
5. In the Data type column choose the first set data type.
6. In the Column type drop-down list, change the type from set to array.
7. In the ArraySchema field, enter array<string>, array<int>, or array<binary>, depending on
the appropriate type of data for your scenario.
8. Choose Update.
9. Repeat the previous steps to convert other set types to array types.
10. Choose Save.
Troubleshooting HTTP Endpoints
CloudWatch Logs
We highly recommend that you enable CloudWatch logging for Kinesis Data Firehose. Logs are published
only when there are errors delivering to your destination.
Destination Exceptions
ErrorCode: HttpEndpoint.DestinationException
{
    "deliveryStreamARN": "arn:aws:firehose:us-east-1:123456789012:deliverystream/ronald-test",
    "destination": "custom.firehose.endpoint.com...",
    "deliveryStreamVersionId": 1,
    "message": "The following response was received from the endpoint destination. 413: {\"requestId\": \"43b8e724-dbac-4510-adb7-ef211c6044b9\", \"timestamp\": 1598556019164, \"errorMessage\": \"Payload too large\"}",
    "errorCode": "HttpEndpoint.DestinationException",
    "processor": "arn:aws:lambda:us-east-1:379522611494:function:httpLambdaProcessing"
}
Destination exceptions indicate that Firehose is able to establish a connection to your endpoint and
make an HTTP request, but did not receive a 200 response code. 2xx responses that are not 200s will
also result in a destination exception. Kinesis Data Firehose logs the response code and a truncated
response payload received from the configured endpoint to CloudWatch Logs. Because Kinesis Data
Firehose logs the response code and payload without modification or interpretation, it is up to the
endpoint to provide the exact reason why it rejected Kinesis Data Firehose's HTTP delivery request. The
following are the most common troubleshooting recommendations for these exceptions:
• 400: Indicates that you are sending a bad request due to a misconfiguration of your Kinesis Data
Firehose delivery stream. Make sure that you have the correct URL, common attributes, content
encoding, access key, and buffering hints for your destination. See the destination-specific
documentation for the required configuration.
• 401: Indicates that the access key you configured for your delivery stream is incorrect or missing.
• 403: Indicates that the access key you configured for your delivery stream does not have permissions
to deliver data to the configured endpoint.
• 413: Indicates that the request payload that Kinesis Data Firehose sends to the endpoint is too
large for the endpoint to handle. Try lowering the buffering hint to the recommended size for your
destination.
• 429: Indicates that Kinesis Data Firehose is sending requests at a greater rate than the destination
can handle. Fine-tune your buffering hints by increasing the buffering time or the buffering size (while
staying within your destination's limits).
• 5xx: Indicates that there is a problem with the destination. The Kinesis Data Firehose service is still
working properly.
Important
While these are common troubleshooting recommendations, specific endpoints might have other
reasons for returning these response codes, so follow the endpoint-specific recommendations first.
Invalid Response
ErrorCode: HttpEndpoint.InvalidResponseFromDestination
{
    "deliveryStreamARN": "arn:aws:firehose:us-east-1:123456789012:deliverystream/ronald-test",
    "destination": "custom.firehose.endpoint.com...",
    "deliveryStreamVersionId": 1,
    "message": "The response received from the specified endpoint is invalid. Contact the owner of the endpoint to resolve the issue. Response for request 2de9e8e9-7296-47b0-bea6-9f17b133d847 is not recognized as valid JSON or has unexpected fields. Raw response received: 200 {\"requestId\": null}",
    "errorCode": "HttpEndpoint.InvalidResponseFromDestination",
    "processor": "arn:aws:lambda:us-east-1:379522611494:function:httpLambdaProcessing"
}
Invalid response exceptions indicate that Kinesis Data Firehose received an invalid response from
the endpoint destination. The response must conform to the response specifications or Kinesis Data
Firehose will consider the delivery attempt a failure and will redeliver the same data until the configured
retry duration is exceeded. Kinesis Data Firehose treats responses that do not follow the response
specifications as failures even if the response has a 200 status. If you are developing a Kinesis Data
Firehose compatible endpoint, follow the response specifications to ensure data is successfully delivered.
Below are some of the common types of invalid responses and how to fix them:
• Invalid JSON or Unexpected Fields: Indicates that the response cannot be properly deserialized as
JSON or has unexpected fields. Ensure that the response is not content-encoded.
• Missing RequestId: Indicates that the response does not contain a requestId.
• RequestId does not match: Indicates that the requestId in the response does not match the outgoing
requestId.
• Missing Timestamp: Indicates that the response does not contain a timestamp field. The timestamp
field must be a number and not a string.
• Missing Content-Type Header: Indicates that the response does not contain a “content-type:
application/json” header. No other content-type is accepted.
Important
Kinesis Data Firehose can only deliver data to endpoints that follow the Firehose request and
response specifications. If you are configuring a third-party service as your destination, ensure
that you are using the correct Kinesis Data Firehose compatible endpoint, which is likely different
from the service's public ingestion endpoint. For example, Datadog's Kinesis Data Firehose
endpoint is https://aws-kinesis-http-intake.logs.datadoghq.com/ while its public
endpoint is https://api.datadoghq.com/.
• Error Code: HttpEndpoint.RequestTimeout - Indicates that the endpoint took longer than 3 minutes
to respond. If you are the owner of the destination, decrease the response time of the destination
endpoint. If you are not the owner of the destination, contact the owner and ask if anything can be
done to lower the response time (for example, decrease the buffering hint so that less data is processed
per request).
• Error Code: HttpEndpoint.ResponseTooLarge - Indicates that the response is too large. The response
must be less than 1 MiB including headers.
• Error Code: HttpEndpoint.ConnectionFailed - Indicates a connection could not be established with
the configured endpoint. This could be due to a typo in the configured url, the endpoint not being
accessible to Kinesis Data Firehose, or the endpoint taking too long to respond to the connection
request.
• Error Code: HttpEndpoint.ConnectionReset - Indicates a connection was made but reset or
prematurely closed by the endpoint.
• Error Code: HttpEndpoint.SSLHandshakeFailure - Indicates an SSL handshake could not be
successfully completed with the configured endpoint.
Amazon Kinesis Data Firehose Quota
• When dynamic partitioning on a delivery stream is enabled, there is a limit of 500 active partitions
that can be created for that delivery stream. You can use the Amazon Kinesis Data Firehose Limits form
to request an increase of this quota.
• When dynamic partitioning on a delivery stream is enabled, a max throughput of 25 MB per second is
supported for each active partition. This is a hard limit.
• By default, each account can have up to 50 Kinesis Data Firehose delivery streams per Region. If you
exceed this number, a call to CreateDeliveryStream results in a LimitExceededException
exception. To increase this quota, you can use Service Quotas if it's available in your Region. For
information about using Service Quotas, see Requesting a Quota Increase. If Service Quotas isn't
available in your region, you can use the Amazon Kinesis Data Firehose Limits form to request an
increase.
• When Direct PUT is configured as the data source, each Kinesis Data Firehose delivery stream provides
the following combined quota for PutRecord and PutRecordBatch requests:
• For US East (N. Virginia), US West (Oregon), and Europe (Ireland): 500,000 records/second, 2,000
requests/second, and 5 MiB/second.
• For US East (Ohio), US West (N. California), AWS GovCloud (US-East), AWS GovCloud (US-West),
Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia
Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (London), Europe
(Paris), Europe (Stockholm), Middle East (Bahrain), South America (São Paulo), Africa (Cape Town),
and Europe (Milan): 100,000 records/second, 1,000 requests/second, and 1 MiB/second.
To request an increase in quota, use the Amazon Kinesis Data Firehose Limits form. The three quotas
scale proportionally. For example, if you increase the throughput quota in US East (N. Virginia), US
West (Oregon), or Europe (Ireland) to 10 MiB/second, the other two quotas increase to 4,000 requests/
second and 1,000,000 records/second.
Important
If the increased quota is much higher than the running traffic, it causes small delivery batches
to destinations. This is inefficient and can result in higher costs at the destination services.
Be sure to increase the quota only to match current running traffic, and increase the quota
further if traffic increases.
Important
Note that smaller data records can lead to higher costs. Kinesis Data Firehose ingestion pricing
is based on the number of data records you send to the service, times the size of each record
rounded up to the nearest 5KB (5120 bytes). So, for the same volume of incoming data
(bytes), if there is a greater number of incoming records, the cost incurred would be higher.
For example, if the total incoming data volume is 5MiB, sending 5MiB of data over 5,000
records costs more compared to sending the same amount of data using 1,000 records. For
more information, see Kinesis Data Firehose in the AWS Calculator.
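To make the rounding concrete: 5 MiB sent as 5,000 records is roughly 1 KB per record, and each
record is billed as 5 KB, or about 25,000 KB of billable ingestion in total; the same 5 MiB sent as
1,000 records is roughly 5.2 KB per record, billed as 10 KB each, or about 10,000 KB in total.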
Note
When Kinesis Data Streams is configured as the data source, this quota doesn't apply, and
Kinesis Data Firehose scales up and down with no limit.
• Each Kinesis Data Firehose delivery stream stores data records for up to 24 hours in case the delivery
destination is unavailable and if the source is DirectPut. If the source is Kinesis Data Streams (KDS) and
the destination is unavailable, then the data will be retained based on your KDS configuration.
• The maximum size of a record sent to Kinesis Data Firehose, before base64-encoding, is 1,000 KiB.
• The PutRecordBatch operation can take up to 500 records per call or 4 MiB per call, whichever is
smaller. This quota cannot be changed.
• The following operations can provide up to five invocations per second (this is a hard limit):
CreateDeliveryStream, DeleteDeliveryStream, DescribeDeliveryStream,
ListDeliveryStreams, UpdateDestination, TagDeliveryStream, UntagDeliveryStream,
ListTagsForDeliveryStream, StartDeliveryStreamEncryption,
StopDeliveryStreamEncryption.
• The buffer sizes hints range from 1 MiB to 128 MiB for Amazon S3 delivery. For Amazon OpenSearch
Service (OpenSearch Service) delivery, they range from 1 MiB to 100 MiB. For AWS Lambda processing,
you can set a buffering hint between 1 MiB and 3 MiB using the BufferSizeInMBs processor
parameter. The size threshold is applied to the buffer before compression. These options are treated as
hints. Kinesis Data Firehose might choose to use different values when it is optimal.
• The buffer interval hints range from 60 seconds to 900 seconds.
• For delivery from Kinesis Data Firehose to Amazon Redshift, only publicly accessible Amazon Redshift
clusters are supported.
• The retry duration range is from 0 seconds to 7,200 seconds for Amazon Redshift and OpenSearch
Service delivery.
• Kinesis Data Firehose supports Elasticsearch versions 1.5, 2.3, 5.1, 5.3, 5.5, 5.6, as well as all 6.* and 7.*
versions and Amazon OpenSearch Service 1.x and later.
• When the destination is Amazon S3, Amazon Redshift, or OpenSearch Service, Kinesis Data Firehose
allows up to 5 outstanding Lambda invocations per shard. For Splunk, the quota is 10 outstanding
Lambda invocations per shard.
• You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.
Appendix - HTTP Endpoint Delivery Request and Response Specifications
Topics
• Request Format (p. 133)
• Response Format (p. 136)
• Examples (p. 137)
Request Format
Path and URL Parameters
These are configured directly by you as part of a single URL field. Kinesis Data Firehose sends them
as configured without modification. Only HTTPS destinations are supported. URL restrictions are
applied during delivery-stream configuration.
Note
Currently, only port 443 is supported for HTTP endpoint data delivery.
HTTP Headers - X-Amz-Firehose-Protocol-Version
This header is used to indicate the version of the request/response formats. Currently the only
version is 1.0.
HTTP Headers - X-Amz-Firehose-Request-Id
The value of this header is an opaque GUID that can be used for debugging and deduplication
purposes. Endpoint implementations should log the value of this header if possible, for both
successful and unsuccessful requests. The request ID is kept the same between multiple attempts of
the same request.
HTTP Headers - Content-Encoding
A Kinesis Data Firehose delivery stream can be configured to use GZIP to compress the body when
sending requests. When this compression is enabled, the value of the Content-Encoding header is
set to gzip, as per standard practice. If compression is not enabled, the Content-Encoding header is
absent altogether.
HTTP Headers - X-Amz-Firehose-Source-Arn
The ARN of the Kinesis Data Firehose delivery stream represented in ASCII string format. The ARN
encodes region, AWS account ID and the stream name. For example, arn:aws:firehose:us-
east-1:123456789:deliverystream/testStream.
HTTP Headers - X-Amz-Firehose-Access-Key
This header carries an API key or other credentials. You can create or update the API key (also
known as the authorization token) when you create or update your delivery stream. Kinesis Data
Firehose restricts the size of the access key to 4096 bytes. Kinesis Data Firehose does not attempt to
interpret this key in any way. The configured key is copied verbatim into the value of this header.
The contents can be arbitrary and can potentially represent a JWT token or an ACCESS_KEY. If an
endpoint requires multi-field credentials (for example, username and password), the values of all
of the fields should be stored together within a single access-key in a format that the endpoint
understands (JSON or CSV). This field can be base-64 encoded if the original contents are binary.
Kinesis Data Firehose does not modifiy and/or encode the configured value and uses the contents as
is.
HTTP Headers - X-Amz-Firehose-Common-Attributes
This header carries the common attributes (metadata) that pertain to the entire request, and/or to
all records within the request. These are configured directly by you when creating a delivery stream.
The value of this attribute is encoded as a JSON object with the following schema:
"$schema": http://json-schema.org/draft-07/schema#
properties:
commonAttributes:
type: object
minProperties: 0
maxProperties: 50
patternProperties:
"^.{1,256}$":
type: string
minLength: 0
maxLength: 1024
Here's an example:
"commonAttributes": {
"deployment -context": "pre-prod-gamma",
"device-types": ""
}
Body - Maximum Size
You configure the maximum body size; it can be at most 64 MiB, before compression.
Body - Schema
The body carries a single JSON document with the following JSON Schema (written in YAML):
"$schema": http://json-schema.org/draft-07/schema#
title: FirehoseCustomHttpsEndpointRequest
description: >
The request body that the Firehose service sends to
custom HTTPS endpoints.
type: object
properties:
requestId:
description: >
Same as the value in the X-Amz-Firehose-Request-Id header,
duplicated here for convenience.
type: string
timestamp:
description: >
The timestamp (milliseconds since epoch) at which the Firehose
server generated this request.
type: integer
records:
description: >
The actual records of the Delivery Stream, carrying
the customer data.
type: array
minItems: 1
maxItems: 10000
items:
type: object
properties:
data:
description: >
The data of this record, in Base64. Note that empty
records are permitted in Firehose. The maximum allowed
size of the data, before Base64 encoding, is 1024000
bytes; the maximum length of this field is therefore
1365336 chars.
type: string
minLength: 0
maxLength: 1365336
required:
- requestId
- records
Here's an example:
{
    "requestId": "ed4acda5-034f-9f42-bba1-f29aea6d7d8f",
    "timestamp": 1578090901599,
    "records": [
        {
            "data": "aGVsbG8="
        },
        {
            "data": "aGVsbG8gd29ybGQ="
        }
    ]
}
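An endpoint that receives this request must Base64-decode each data value before processing it. For
example, in Python (request_body stands in for the raw JSON body shown above):

import base64
import json

body = json.loads(request_body)                 # request_body: the raw JSON request body
for record in body["records"]:
    payload = base64.b64decode(record["data"])  # b"hello", then b"hello world"
    print(payload)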
Response Format
Default Behavior on Error
If a response fails to conform to the requirements below, the Kinesis Firehose server treats it as
though it had a 500 status code with no body.
Status Code
The HTTP status code MUST be in the 2XX, 4XX, or 5XX range.
The Kinesis Data Firehose server does NOT follow redirects (3XX status codes). Only response
code 200 is considered a successful delivery of the records to the HTTP endpoint. Response code 413
(size exceeded) is considered a permanent failure, and the record batch is not sent to the error bucket,
if one is configured. All other response codes are considered retriable errors and are subject to the
back-off retry algorithm explained later.
Headers - Content Type
"$schema": http://json-schema.org/draft-07/schema#
title: FirehoseCustomHttpsEndpointResponse
description: >
The response body that the Firehose service sends to
custom HTTPS endpoints.
type: object
properties:
requestId:
description: >
Must match the requestId in the request.
type: string
timestamp:
description: >
The timestamp (milliseconds since epoch) at which the
server processed this request.
type: integer
errorMessage:
description: >
For failed requests, a message explaining the failure.
If a request fails after exhausting all retries, the last
Instance of the error message is copied to error output
S3 bucket if configured.
type: string
minLength: 0
maxLength: 8192
required:
- requestId
136
Amazon Kinesis Data Firehose Developer Guide
Examples
- timestamp
Here's an example:
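The values below are illustrative; the requestId must echo the value from the request, the timestamp
is in milliseconds since epoch, and errorMessage appears only for failed requests.

{
    "requestId": "ed4acda5-034f-9f42-bba1-f29aea6d7d8f",
    "timestamp": 1578090903599,
    "errorMessage": "Unable to deliver records due to unknown error."
}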
In all error cases, the Kinesis Data Firehose server reattempts delivery of the same batch of records
using an exponential back-off algorithm. The retries start from an initial back-off time (1 second)
with a jitter factor of 15%, and each subsequent retry is backed off using the formula
(initial-backoff-time * (multiplier(2) ^ retry_count)) with added jitter. The back-off time is capped at
a maximum interval of 2 minutes. For example, on the n-th retry the back-off time is
MIN(120 seconds, (1 * 2^n) * random(0.85, 1.15)).
The parameters specified in the previous equation are subject to change. Refer to the AWS Firehose
documentation for the exact initial back-off time, maximum back-off time, multiplier, and jitter
percentages used in the exponential back-off algorithm.
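As a sketch, the back-off computation described above can be written as follows (the initial time,
multiplier, cap, and jitter factor are the documented defaults and, as noted, are subject to change):

import random

def backoff_seconds(retry_count, initial=1.0, multiplier=2.0, max_backoff=120.0, jitter=0.15):
    # Exponential growth with +/-15% jitter, capped at the maximum back-off interval.
    base = initial * (multiplier ** retry_count)
    jittered = base * random.uniform(1.0 - jitter, 1.0 + jitter)
    return min(max_backoff, jittered)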
In each subsequent retry attempt the access key and/or destination to which records are delivered
might change based on updated configuration of the delivery stream. Kinesis Data Firehose service
uses the same request-id across retries in a best-effort manner. This last feature can be used for
deduplication purpose by the HTTP end point server. If the request is still not delivered after the
maximum time allowed (based on delivery stream configuration) the batch of records can optionally
be delivered to an error bucket based on stream configuration.
Examples
Example of a CWLog sourced request:
{
    "requestId": "ed4acda5-034f-9f42-bba1-f29aea6d7d8f",
    "timestamp": 1578090901599,
    "records": [
        {
            "data": {
                "messageType": "DATA_MESSAGE",
                "owner": "123456789012",
                "logGroup": "log_group_name",
                "logStream": "log_stream_name",
                "subscriptionFilters": [
                    "subscription_filter_name"
                ],
                "logEvents": [
                    {
                        "id": "0123456789012345678901234567890123456789012345",
                        "timestamp": 1510109208016,
                        "message": "log message 1"
                    },
                    {
                        "id": "0123456789012345678901234567890123456789012345",
                        "timestamp": 1510109208017,
                        "message": "log message 2"
                    }
                ]
            }
        }
    ]
}
Document History
The following list describes the important changes to the Amazon Kinesis Data Firehose
documentation, with the most recent changes listed first.

Added a topic on custom prefixes (December 20, 2018)
Added a topic about the expressions that you can use when building a custom prefix for data that is
delivered to Amazon S3. See Custom Amazon S3 Prefixes (p. 105).

Added new Kinesis Data Firehose tutorial (October 30, 2018)
Added a tutorial that demonstrates how to send Amazon VPC flow logs to Splunk through Kinesis Data
Firehose. See Tutorial: Sending VPC Flow Logs to Splunk Using Amazon Kinesis Data Firehose (p. 115).

Added four new Kinesis Data Firehose Regions (June 27, 2018)
Added Paris, Mumbai, Sao Paulo, and London. For more information, see Amazon Kinesis Data Firehose
Quota (p. 131).

Added two new Kinesis Data Firehose Regions (June 13, 2018)
Added Seoul and Montreal. For more information, see Amazon Kinesis Data Firehose Quota (p. 131).

New Kinesis Streams as Source feature (August 18, 2017)
Added Kinesis Streams as a potential source for records for a Firehose delivery stream. For more
information, see Source, Destination, and Name (p. 5).

Update to console documentation (July 19, 2017)
The delivery stream creation wizard was updated. For more information, see Creating an Amazon Kinesis
Data Firehose Delivery Stream (p. 5).

New data transformation (December 19, 2016)
You can configure Kinesis Data Firehose to transform your data before data delivery. For more
information, see Amazon Kinesis Data Firehose Data Transformation (p. 58).

New Amazon Redshift COPY retry (May 18, 2016)
You can configure Kinesis Data Firehose to retry a COPY command to your Amazon Redshift cluster if it
fails. For more information, see Creating an Amazon Kinesis Data Firehose Delivery Stream (p. 5),
Amazon Kinesis Data Firehose Data Delivery (p. 73), and Amazon Kinesis Data Firehose Quota (p. 131).

New Kinesis Data Firehose destination, Amazon OpenSearch Service (April 19, 2016)
You can create a delivery stream with Amazon OpenSearch Service as the destination. For more
information, see Creating an Amazon Kinesis Data Firehose Delivery Stream (p. 5), Amazon Kinesis Data
Firehose Data Delivery (p. 73), and Grant Kinesis Data Firehose Access to a Public OpenSearch Service
Destination (p. 45).

New enhanced CloudWatch metrics and troubleshooting features (April 19, 2016)
Updated Monitoring Amazon Kinesis Data Firehose (p. 79) and Troubleshooting Amazon Kinesis Data
Firehose (p. 123).

New enhanced Kinesis agent (April 11, 2016)
Updated Writing to Kinesis Data Firehose Using Kinesis Agent (p. 25).

New Kinesis agents (October 2, 2015)
Added Writing to Kinesis Data Firehose Using Kinesis Agent (p. 25).

Initial release (October 4, 2015)
Initial release of the Amazon Kinesis Data Firehose Developer Guide.
AWS glossary
For the latest AWS terminology, see the AWS glossary in the AWS General Reference.