[docs] Clean up cross-repo links #17190

Merged · 1 commit · Mar 3, 2025
2 changes: 1 addition & 1 deletion docs/extend/codec-new-plugin.md

````diff
@@ -402,7 +402,7 @@ With these both defined, the install process will search for the required jar fi
 
 ## Document your plugin [_document_your_plugin_2]
 
-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://docs/reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
 
 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.
````
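Every hunk in this PR makes the same one-segment change: dropping the redundant `docs/` path segment that follows a cross-repo scheme such as `logstash-docs://` or `beats://`. A docs tree can be audited for any remaining old-form links with a one-liner along these lines (a sketch, not part of the PR; a `docs/` source root is assumed):

```shell
# List every remaining old-form cross-repo link (scheme://docs/...)
# as file:line:match, or report that none are left.
# A sketch only; assumes the Markdown sources live under docs/.
grep -rnE '[a-z][a-z-]*://docs/' docs/ || echo "no stale cross-repo links"
```

Regular `https://` URLs are untouched because the pattern requires a literal `docs/` segment immediately after `://`, which ordinary web links (e.g. `https://www.elastic.co/...`) do not have.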
2 changes: 1 addition & 1 deletion docs/extend/filter-new-plugin.md

````diff
@@ -403,7 +403,7 @@ With these both defined, the install process will search for the required jar fi
 
 ## Document your plugin [_document_your_plugin_3]
 
-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://docs/reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
 
 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.
````
2 changes: 1 addition & 1 deletion docs/extend/input-new-plugin.md

````diff
@@ -443,7 +443,7 @@ With these both defined, the install process will search for the required jar fi
 
 ## Document your plugin [_document_your_plugin]
 
-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://docs/reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
 
 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.
````
2 changes: 1 addition & 1 deletion docs/extend/output-new-plugin.md

````diff
@@ -360,7 +360,7 @@ With these both defined, the install process will search for the required jar fi
 
 ## Document your plugin [_document_your_plugin_4]
 
-Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://docs/reference/integration-plugins.md).
+Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs://reference/integration-plugins.md).
 
 See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.
````
10 changes: 5 additions & 5 deletions docs/extend/plugin-doc.md

````diff
@@ -7,7 +7,7 @@ mapped_pages:
 
 Documentation is a required component of your plugin. Quality documentation with good examples contributes to the adoption of your plugin.
 
-The documentation that you write for your plugin will be generated and published in the [Logstash Reference](/reference/index.md) and the [Logstash Versioned Plugin Reference](logstash-docs://docs/reference/integration-plugins.md).
+The documentation that you write for your plugin will be generated and published in the [Logstash Reference](/reference/index.md) and the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md).
 
 ::::{admonition} Plugin listing in {{ls}} Reference
 :class: note
@@ -26,7 +26,7 @@ Documentation belongs in a single file called *docs/index.asciidoc*. It belongs
 
 ## Heading IDs [heading-ids]
 
-Format heading anchors with variables that can support generated IDs. This approach creates unique IDs when the [Logstash Versioned Plugin Reference](logstash-docs://docs/reference/integration-plugins.md) is built. Unique heading IDs are required to avoid duplication over multiple versions of a plugin.
+Format heading anchors with variables that can support generated IDs. This approach creates unique IDs when the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md) is built. Unique heading IDs are required to avoid duplication over multiple versions of a plugin.
 
 **Example**
 
@@ -39,7 +39,7 @@ Instead, use variables to define it:
 ==== Configuration models
 ```
 
-If you hardcode an ID, the [Logstash Versioned Plugin Reference](logstash-docs://docs/reference/integration-plugins.md) builds correctly the first time. The second time the doc build runs, the ID is flagged as a duplicate, and the build fails.
+If you hardcode an ID, the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md) builds correctly the first time. The second time the doc build runs, the ID is flagged as a duplicate, and the build fails.
 
 
 ## Link formats [link-format]
@@ -136,7 +136,7 @@ match => {
 
 ## Where’s my doc? [_wheres_my_doc]
 
-Plugin documentation goes through several steps before it gets published in the [Logstash Versioned Plugin Reference](logstash-docs://docs/reference/integration-plugins.md) and the [Logstash Reference](/reference/index.md).
+Plugin documentation goes through several steps before it gets published in the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md) and the [Logstash Reference](/reference/index.md).
 
 Here’s an overview of the workflow:
 
@@ -145,7 +145,7 @@
 * Wait for the continuous integration build to complete successfully.
 * Publish the plugin to [https://rubygems.org](https://rubygems.org).
 * A script detects the new or changed version, and picks up the `index.asciidoc` file for inclusion in the doc build.
-* The documentation for your new plugin is published in the [Logstash Versioned Plugin Reference](logstash-docs://docs/reference/integration-plugins.md).
+* The documentation for your new plugin is published in the [Logstash Versioned Plugin Reference](logstash-docs://reference/integration-plugins.md).
 
 We’re not done yet.
````
6 changes: 3 additions & 3 deletions docs/reference/advanced-pipeline.md

````diff
@@ -22,7 +22,7 @@ In a typical use case, Filebeat runs on a separate machine from the machine runn
 
 The default Logstash installation includes the [`Beats input`](/reference/plugins-inputs-beats.md) plugin. The Beats input plugin enables Logstash to receive events from the Elastic Beats framework, which means that any Beat written to work with the Beats framework, such as Packetbeat and Metricbeat, can also send event data to Logstash.
 
-To install Filebeat on your data source machine, download the appropriate package from the Filebeat [product page](https://www.elastic.co/downloads/beats/filebeat). You can also refer to [Filebeat quick start](beats://docs/reference/filebeat/filebeat-installation-configuration.md) for additional installation instructions.
+To install Filebeat on your data source machine, download the appropriate package from the Filebeat [product page](https://www.elastic.co/downloads/beats/filebeat). You can also refer to [Filebeat quick start](beats://reference/filebeat/filebeat-installation-configuration.md) for additional installation instructions.
 
 After installing Filebeat, you need to configure it. Open the `filebeat.yml` file located in your Filebeat installation directory, and replace the contents with the following lines. Make sure `paths` points to the example Apache log file, `logstash-tutorial.log`, that you downloaded earlier:
 
@@ -49,7 +49,7 @@ sudo ./filebeat -e -c filebeat.yml -d "publish"
 ```
 
 ::::{note}
-If you run Filebeat as root, you need to change ownership of the configuration file (see [Config File Ownership and Permissions](beats://docs/reference/libbeat/config-file-permissions.md) in the *Beats Platform Reference*).
+If you run Filebeat as root, you need to change ownership of the configuration file (see [Config File Ownership and Permissions](beats://reference/libbeat/config-file-permissions.md) in the *Beats Platform Reference*).
 ::::
 
 
@@ -605,7 +605,7 @@ If you are using Kibana to visualize your data, you can also explore the Filebea
 :alt: Discovering Filebeat data in Kibana
 :::
 
-See the [Filebeat quick start docs](beats://docs/reference/filebeat/filebeat-installation-configuration.md) for info about loading the Kibana index pattern for Filebeat.
+See the [Filebeat quick start docs](beats://reference/filebeat/filebeat-installation-configuration.md) for info about loading the Kibana index pattern for Filebeat.
 
 You’ve successfully created a pipeline that uses Filebeat to take Apache web logs as input, parses those logs to create specific, named fields from the logs, and writes the parsed data to an Elasticsearch cluster. Next, you learn how to create a pipeline that uses multiple input and output plugins.
````
2 changes: 1 addition & 1 deletion docs/reference/dashboard-monitoring-with-elastic-agent.md

````diff
@@ -42,7 +42,7 @@ monitoring.cluster_uuid: PRODUCTION_ES_CLUSTER_UUID
 ::::{dropdown} Create a monitoring user (standalone agent only)
 :name: create-user-db
 
-Create a user on the production cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://docs/reference/elasticsearch/roles.md).
+Create a user on the production cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://reference/elasticsearch/roles.md).
 
 ::::
````
6 changes: 3 additions & 3 deletions docs/reference/deploying-scaling-logstash.md

````diff
@@ -12,7 +12,7 @@ The goal of this document is to highlight the most common architecture patterns
 
 ## Getting Started [deploying-getting-started]
 
-For first time users, if you simply want to tail a log file to grasp the power of the Elastic Stack, we recommend trying [Filebeat Modules](beats://docs/reference/filebeat/filebeat-modules-overview.md). Filebeat Modules enable you to quickly collect, parse, and index popular log types and view pre-built Kibana dashboards within minutes. [Metricbeat Modules](beats://docs/reference/metricbeat/metricbeat-modules.md) provide a similar experience, but with metrics data. In this context, Beats will ship data directly to Elasticsearch where [Ingest Nodes](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) will process and index your data.
+For first time users, if you simply want to tail a log file to grasp the power of the Elastic Stack, we recommend trying [Filebeat Modules](beats://reference/filebeat/filebeat-modules-overview.md). Filebeat Modules enable you to quickly collect, parse, and index popular log types and view pre-built Kibana dashboards within minutes. [Metricbeat Modules](beats://reference/metricbeat/metricbeat-modules.md) provide a similar experience, but with metrics data. In this context, Beats will ship data directly to Elasticsearch where [Ingest Nodes](docs-content://manage-data/ingest/transform-enrich/ingest-pipelines.md) will process and index your data.
 
 :::{image} ../images/deploy1.png
 :alt: deploy1
@@ -56,7 +56,7 @@ Enabling persistent queues is strongly recommended, and these architecture chara
 
 Logstash is horizontally scalable and can form groups of nodes running the same pipeline. Logstash’s adaptive buffering capabilities will facilitate smooth streaming even through variable throughput loads. If the Logstash layer becomes an ingestion bottleneck, simply add more nodes to scale out. Here are a few general recommendations:
 
-* Beats should [load balance](beats://docs/reference/filebeat/elasticsearch-output.md#_loadbalance) across a group of Logstash nodes.
+* Beats should [load balance](beats://reference/filebeat/elasticsearch-output.md#_loadbalance) across a group of Logstash nodes.
 * A minimum of two Logstash nodes are recommended for high availability.
 * It’s common to deploy just one Beats input per Logstash node, but multiple Beats inputs can also be deployed per Logstash node to expose independent endpoints for different data sources.
 
@@ -82,7 +82,7 @@ Logstash will commonly extract fields with [grok](/reference/plugins-filters-gro
 
 Enterprise-grade security is available across the entire delivery chain.
 
-* Wire encryption is recommended for both the transport from [Beats to Logstash](beats://docs/reference/filebeat/configuring-ssl-logstash.md) and from [Logstash to Elasticsearch](/reference/secure-connection.md).
+* Wire encryption is recommended for both the transport from [Beats to Logstash](beats://reference/filebeat/configuring-ssl-logstash.md) and from [Logstash to Elasticsearch](/reference/secure-connection.md).
 * There’s a wealth of security options when communicating with Elasticsearch including basic authentication, TLS, PKI, LDAP, AD, and other custom realms. To enable Elasticsearch security, see [Secure a cluster](docs-content://deploy-manage/security.md).
````
2 changes: 1 addition & 1 deletion docs/reference/ecs-ls.md

````diff
@@ -5,7 +5,7 @@ mapped_pages:
 
 # ECS in Logstash [ecs-ls]
 
-The [Elastic Common Schema (ECS)][Elastic Common Schema (ECS)](ecs://docs/reference/index.md)) is an open source specification, developed with support from the Elastic user community. ECS defines a common set of fields to be used for storing event data, such as logs and metrics, in {{es}}. With ECS, users can normalize event data to better analyze, visualize, and correlate the data represented in their events.
+The [Elastic Common Schema (ECS)][Elastic Common Schema (ECS)](ecs://reference/index.md)) is an open source specification, developed with support from the Elastic user community. ECS defines a common set of fields to be used for storing event data, such as logs and metrics, in {{es}}. With ECS, users can normalize event data to better analyze, visualize, and correlate the data represented in their events.
 
 ## ECS compatibility [ecs-compatibility]
````
2 changes: 1 addition & 1 deletion docs/reference/ls-to-ls-http.md

````diff
@@ -67,7 +67,7 @@ It is important that you secure the communication between Logstash instances. Us
 1. Create a certificate authority (CA) in order to sign the certificates that you plan to use between Logstash instances. Creating a correct SSL/TLS infrastructure is outside the scope of this document.
 
 ::::{tip}
-We recommend you use the [elasticsearch-certutil](elasticsearch://docs/reference/elasticsearch/command-line-tools/certutil.md) tool to generate your certificates.
+We recommend you use the [elasticsearch-certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) tool to generate your certificates.
 ::::
 
 2. Configure the downstream (receiving) Logstash to use SSL. Add these settings to the HTTP Input configuration:
````
2 changes: 1 addition & 1 deletion docs/reference/ls-to-ls-native.md

````diff
@@ -62,7 +62,7 @@ It is important that you secure the communication between Logstash instances. Us
 1. Create a certificate authority (CA) in order to sign the certificates that you plan to use between Logstash instances. Creating a correct SSL/TLS infrastructure is outside the scope of this document.
 
 ::::{tip}
-We recommend you use the [elasticsearch-certutil](elasticsearch://docs/reference/elasticsearch/command-line-tools/certutil.md) tool to generate your certificates.
+We recommend you use the [elasticsearch-certutil](elasticsearch://reference/elasticsearch/command-line-tools/certutil.md) tool to generate your certificates.
 ::::
 
 2. Configure the downstream (receiving) Logstash to use SSL. Add these settings to the Logstash input configuration:
````
2 changes: 1 addition & 1 deletion docs/reference/monitoring-with-elastic-agent.md

````diff
@@ -50,7 +50,7 @@ monitoring.cluster_uuid: PRODUCTION_ES_CLUSTER_UUID
 ::::{dropdown} Create a monitoring user (standalone agent only)
 :name: create-user-ea
 
-Create a user on the production cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://docs/reference/elasticsearch/roles.md).
+Create a user on the production cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://reference/elasticsearch/roles.md).
 
 ::::
````
14 changes: 7 additions & 7 deletions docs/reference/monitoring-with-metricbeat.md

````diff
@@ -44,7 +44,7 @@ Refer to [{{es}} cluster stats page](https://www.elastic.co/docs/api/doc/elastic
 
 ## Install and configure {{metricbeat}} [configure-metricbeat]
 
-1. [Install {{metricbeat}}](beats://docs/reference/metricbeat/metricbeat-installation-configuration.md) on the same server as {{ls}}.
+1. [Install {{metricbeat}}](beats://reference/metricbeat/metricbeat-installation-configuration.md) on the same server as {{ls}}.
 2. Enable the `logstash-xpack` module in {{metricbeat}}.<br>
 
 To enable the default configuration in the {{metricbeat}} `modules.d` directory, run:
@@ -67,7 +67,7 @@ Refer to [{{es}} cluster stats page](https://www.elastic.co/docs/api/doc/elastic
 PS > .\metricbeat.exe modules enable logstash-xpack
 ```
 
-For more information, see [Specify which modules to run](beats://docs/reference/metricbeat/configuration-metricbeat.md) and [beat module](beats://docs/reference/metricbeat/metricbeat-module-beat.md).
+For more information, see [Specify which modules to run](beats://reference/metricbeat/configuration-metricbeat.md) and [beat module](beats://reference/metricbeat/metricbeat-module-beat.md).
 
 3. Configure the `logstash-xpack` module in {{metricbeat}}.<br>
 
@@ -97,12 +97,12 @@ Refer to [{{es}} cluster stats page](https://www.elastic.co/docs/api/doc/elastic
 
 **Elastic security.** The Elastic {{security-features}} are enabled by default. You must provide a user ID and password so that {{metricbeat}} can collect metrics successfully:
 
-1. Create a user on the production cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://docs/reference/elasticsearch/roles.md).
+1. Create a user on the production cluster that has the `remote_monitoring_collector` [built-in role](elasticsearch://reference/elasticsearch/roles.md).
 2. Add the `username` and `password` settings to the module configuration file (`logstash-xpack.yml`).
 
 4. Optional: Disable the system module in the {{metricbeat}}.
 
-By default, the [system module](beats://docs/reference/metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Stack Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command:
+By default, the [system module](beats://reference/metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Stack Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command:
 
 ```sh
 metricbeat modules disable system
@@ -140,17 +140,17 @@ Refer to [{{es}} cluster stats page](https://www.elastic.co/docs/api/doc/elastic
 
 **Elastic security.** The Elastic {{security-features}} are enabled by default. You must provide a user ID and password so that {{metricbeat}} can send metrics successfully:
 
-1. Create a user on the monitoring cluster that has the `remote_monitoring_agent` [built-in role](elasticsearch://docs/reference/elasticsearch/roles.md). Alternatively, use the `remote_monitoring_user` [built-in user](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).
+1. Create a user on the monitoring cluster that has the `remote_monitoring_agent` [built-in role](elasticsearch://reference/elasticsearch/roles.md). Alternatively, use the `remote_monitoring_user` [built-in user](docs-content://deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).
 
 ::::{tip}
 If you’re using index lifecycle management, the remote monitoring user requires additional privileges to create and read indices. For more information, see `<<feature-roles>>`.
 ::::
 
 2. Add the `username` and `password` settings to the {{es}} output information in the {{metricbeat}} configuration file.
 
-For more information about these configuration options, see [Configure the {{es}} output](beats://docs/reference/metricbeat/elasticsearch-output.md).
+For more information about these configuration options, see [Configure the {{es}} output](beats://reference/metricbeat/elasticsearch-output.md).
 
-6. [Start {{metricbeat}}](beats://docs/reference/metricbeat/metricbeat-starting.md) to begin collecting monitoring data.
+6. [Start {{metricbeat}}](beats://reference/metricbeat/metricbeat-starting.md) to begin collecting monitoring data.
 7. [View the monitoring data in {{kib}}](docs-content://deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md).
 
 Your monitoring setup is complete.
````
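Taken together, the hunks in this PR are a mechanical rewrite of `scheme://docs/…` links to `scheme://…`. On a working copy, a change of this shape can be reproduced with a sed pass roughly like the following (a sketch only — GNU sed's `-i` is assumed, `docs/` is the assumed source root, and the resulting diff should be reviewed before committing):

```shell
# Drop the redundant "docs/" segment after any cross-repo scheme, e.g.
#   logstash-docs://docs/reference/...  ->  logstash-docs://reference/...
# A sketch; assumes GNU sed and that the Markdown sources live under docs/.
grep -rlE '[a-z][a-z-]*://docs/' docs/ | while read -r f; do
  sed -i -E 's#([a-z][a-z-]*)://docs/#\1://#g' "$f"
done
```

The capture group keeps the scheme (`logstash-docs`, `beats`, `elasticsearch`, `ecs`, …) intact while only the `docs/` segment is removed, and schemes without that segment, such as `docs-content://`, are left alone.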