`administration/configuring-fluent-bit/classic-mode/upstream-servers.md` (+1)
@@ -5,6 +5,7 @@ It's common that Fluent Bit [output plugins](../../pipeline/outputs/) aims to co
An _Upstream_ defines a set of nodes that will be targeted by an output plugin; by the nature of the implementation, an output plugin **must** support the _Upstream_ feature. The following plugin\(s\) have _Upstream_ support:
`pipeline/outputs/elasticsearch.md` (+85 -42)
@@ -8,54 +8,58 @@ The **es** output plugin, allows to ingest your records into an [Elasticsearch](
## Configuration Parameters
| Key | Description | Default | Overridable in NODE section of [Upstream](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md) configuration |
| :--- | :--- | :--- | :--- |
| Host | IP address or hostname of the target Elasticsearch instance | 127.0.0.1 | Yes; the default does not apply in the NODE section of an Upstream configuration, which **requires** a host to be specified |
| Port | TCP port of the target Elasticsearch instance | 9200 | Yes; the default does not apply in the NODE section of an Upstream configuration, which **requires** a port to be specified |
| Path | Elasticsearch accepts new data on the HTTP query path "/\_bulk". It is also possible to serve Elasticsearch behind a reverse proxy on a subpath. This option defines such a path on the Fluent Bit side; it simply adds a path prefix to the indexing HTTP POST URI. | Empty string | Yes |
| compress | Set the payload compression mechanism. The only available option is 'gzip'. || Yes |
| Buffer\_Size | Specify the buffer size used to read the response from the Elasticsearch HTTP service. This option is useful for debugging purposes when reading full responses is required; note that the response size grows depending on the number of records inserted. To use an _unlimited_ amount of memory, set this value to **False**; otherwise the value must follow the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. | 4KB | Yes |
| Pipeline | Newer versions of Elasticsearch allow setting up filters called pipelines. This option defines which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. || Yes |
| AWS\_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off | Yes |
| AWS\_Region | Specify the AWS region for Amazon OpenSearch Service || Yes |
| AWS\_STS\_Endpoint | Specify the custom STS endpoint to be used with the STS API for Amazon OpenSearch Service || Yes |
| AWS\_Role\_ARN | AWS IAM Role to assume in order to put records to your Amazon cluster || Yes |
| AWS\_External\_ID | External ID for the AWS IAM Role specified with `aws_role_arn` || Yes |
| AWS\_Service\_Name | Service name to be used in the AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to `aoss`. See the [FAQ](opensearch.md#faq) section on Amazon OpenSearch Serverless for more information. | es | Yes |
| Cloud\_ID | If you are using Elastic's Elasticsearch Service you can specify the cloud\_id of the running cluster. The Cloud ID string has the format `<deployment_name>:<base64_info>`. Once decoded, the `base64_info` string has the format `<deployment_region>$<elasticsearch_hostname>$<kibana_hostname>`. || No |
| Cloud\_Auth | Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud || Yes |
| HTTP\_Passwd | Password for the user defined in HTTP\_User || Yes |
| Index | Index name | fluent-bit | Yes |
| Type | Type name | \_doc | Yes |
| Logstash\_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off | Yes |
| Logstash\_Prefix | When Logstash\_Format is enabled, the Index name is composed using a prefix and the date, e.g. if Logstash\_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended corresponds to the date when the data is being generated. | logstash | Yes |
| Logstash\_Prefix\_Key | When included, the value of this key in the record will be evaluated as a key reference and overrides Logstash\_Prefix for index generation. If the key/value is not found in the record then the Logstash\_Prefix option will act as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). || Yes |
| Logstash\_Prefix\_Separator | Set a separator between logstash_prefix and date. | - | Yes |
| Logstash\_DateFormat | Time format \(based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html)\) to generate the second part of the Index name. | %Y.%m.%d | Yes |
| Time\_Key | When Logstash\_Format is enabled, each record will get a new timestamp field. The Time\_Key property defines the name of that field. | @timestamp | Yes |
| Time\_Key\_Format | When Logstash\_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S | Yes |
| Time\_Key\_Nanos | When Logstash\_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off | Yes |
| Include\_Tag\_Key | When enabled, it appends the Tag name to the record. | Off | Yes |
| Tag\_Key | When Include\_Tag\_Key is enabled, this property defines the key name for the tag. | \_flb-key | Yes |
| Generate\_ID | When enabled, generate `_id` for outgoing records. This prevents duplicate records when retrying ES. | Off | Yes |
| Id\_Key | If set, `_id` will be the value of this key from the incoming record and the `Generate_ID` option is ignored. || Yes |
| Write\_Operation | The write\_operation can be any of: create (default), index, update, upsert. | create | Yes |
| Replace\_Dots | When enabled, replace field name dots with underscores, as required by Elasticsearch 2.0-2.3. | Off | Yes |
| Trace\_Output | Print all Elasticsearch API request payloads to stdout \(for diagnostics only\) | Off | Yes |
| Trace\_Error | If Elasticsearch returns an error, print the Elasticsearch API request and response \(for diagnostics only\) | Off | Yes |
| Current\_Time\_Index | Use current time for index generation instead of the message record | Off | Yes |
| Suppress\_Type\_Name | When enabled, mapping types are removed and the `Type` option is ignored. Types are deprecated in APIs in [v7.0](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html). This option is for v7.0 or later. | Off | Yes |
| Workers | Enables dedicated thread(s) for this output. The default value applies since version 1.8.13; for previous versions it is 0. | 2 | No |
| Upstream | If the plugin will connect to an _Upstream_ instead of a simple host, this property defines the absolute path of the Upstream configuration file; for more details refer to the [Upstream Servers](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md) documentation section. || No |
> The parameters _index_ and _type_ can be confusing if you are new to Elastic; if you have used a common relational database before, they can be compared to the _database_ and _table_ concepts. Also see [the FAQ below](elasticsearch.md#faq).
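As a quick, hypothetical illustration of how a few of these keys fit together (the address and prefix below are placeholders, not defaults), a classic-mode output section could look like:

```text
[OUTPUT]
    Name                es
    Match               *
    # Placeholder address and port; adjust to your cluster
    Host                192.168.2.3
    Port                9200
    # Write to daily indices named mydata-YYYY.MM.DD
    Logstash_Format     On
    Logstash_Prefix     mydata
    Logstash_DateFormat %Y.%m.%d
    Replace_Dots        On
    # Mapping types are deprecated since Elasticsearch 7.0
    Suppress_Type_Name  On
```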
### TLS / SSL
The Elasticsearch output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the [TLS/SSL](tcp-and-tls.md) section.
### AWS Sigv4 Authentication and Upstream Servers
The `http_proxy`, `no_proxy` and TLS parameters used for AWS Sigv4 Authentication - that is, for the plugin's connection to AWS when generating the authentication signature - are never picked up from the NODE section of the [Upstream](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md) configuration.
TLS parameters for the plugin's connection to Elasticsearch **can** be overridden in the NODE section of the Upstream configuration (even if AWS authentication is used).
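As a sketch of this behaviour, a hypothetical Upstream file could set TLS options per node, while the Sigv4-related proxy and TLS options remain in the plugin's `[OUTPUT]` section (the endpoint name below is a placeholder):

```text
[UPSTREAM]
    name aws-es

[NODE]
    name node-1
    # Placeholder Amazon OpenSearch endpoint
    host vpc-demo-abc123.us-east-1.es.amazonaws.com
    port 443
    # TLS settings for the plugin -> Elasticsearch connection, overridden per node
    tls On
    tls.verify On
```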
### write\_operation
The write\_operation can be any of: `create` (default), `index`, `update` or `upsert`.
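As an illustrative sketch (not taken from the official examples), switching to `upsert` keyed on a field assumed to exist in each record could look like this; update and upsert act on an existing `_id`, which is why `Id_Key` is set:

```text
[OUTPUT]
    Name            es
    Match           *
    Host            192.168.2.3
    Port            9200
    Index           my_index
    Write_Operation upsert
    # Assumed record field carrying a unique identifier, used as the document _id
    Id_Key          request_id
```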
@@ -99,7 +103,7 @@ $ fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
In your main configuration file, append the following _Input_ & _Output_ sections. You can visualize this configuration [here](https://link.calyptia.com/qhq).
```text
[INPUT]
    Name cpu
    Tag cpu
```
@@ -115,6 +119,45 @@ In your main configuration file append the following _Input_ & _Output_ sections
### Configuration File with Upstream
In your main configuration file, append the following _Input_ & _Output_ sections.
```text
[INPUT]
    Name cpu
    Tag cpu

[OUTPUT]
    Name es
    Match *
    Upstream ./upstream.conf
    Index my_index
    Type my_type
```
Your [Upstream Servers](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md) configuration file can look like:
```text
[UPSTREAM]
    name es-balancing

[NODE]
    name node-1
    host localhost
    port 9201

[NODE]
    name node-2
    host localhost
    port 9202

[NODE]
    name node-3
    host localhost
    port 9203
```
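Assuming the two files above are saved as `fluent-bit.conf` and `upstream.conf` (hypothetical file names) in the directory Fluent Bit is started from, so that the relative path in `Upstream ./upstream.conf` resolves, the pipeline can be run with:

```text
$ fluent-bit -c fluent-bit.conf
```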
## About Elasticsearch field names
Some input plugins may generate messages where the field names contain dots. Since Elasticsearch 2.0 this is no longer allowed, so the current **es** plugin replaces them with an underscore, e.g.:
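A sketch of that transformation, using a hypothetical CPU metric key:

```text
# Incoming record
{"cpu0.p_cpu": 17.0}

# Record as submitted by the es plugin
{"cpu0_p_cpu": 17.0}
```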