docs: fix typos in documentation #55182

Closed
2 changes: 1 addition & 1 deletion docs/cloud/managing-airbyte-cloud/manage-data-residency.md
@@ -4,7 +4,7 @@ products: cloud

# Setting data residency

In Airbyte Cloud, you can set the default data residency for your workspace and also set the the data residency for individual connections, which can help you comply with data localization requirements.
In Airbyte Cloud, you can set the default data residency for your workspace and also set the data residency for individual connections, which can help you comply with data localization requirements.

## Choose your workspace default data residency

2 changes: 1 addition & 1 deletion docs/connector-development/cdk-python/basic-concepts.md
@@ -24,7 +24,7 @@ As a quick recap, the Airbyte Specification requires an Airbyte Source to support

A core concept discussed here is the **Source**.

The Source contains one or more **Streams** \(or **Airbyte Streams**\). A **Stream** is the other concept key to understanding how Airbyte models the data syncing process. A **Stream** models the logical data groups that make up the larger **Source**. If the **Source** is a RDMS, each **Stream** is a table. In a REST API setting, each **Stream** corresponds to one resource within the API. e.g. a **Stripe Source** would have have one **Stream** for `Transactions`, one for `Charges` and so on.
The Source contains one or more **Streams** \(or **Airbyte Streams**\). A **Stream** is the other concept key to understanding how Airbyte models the data syncing process. A **Stream** models the logical data groups that make up the larger **Source**. If the **Source** is a RDMS, each **Stream** is a table. In a REST API setting, each **Stream** corresponds to one resource within the API. e.g. a **Stripe Source** would have one **Stream** for `Transactions`, one for `Charges` and so on.

## The `Source` class

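To make the Source/Stream relationship above concrete, here is a minimal sketch against the Python CDK (hedged: the endpoint, stream name, and response shape are invented for illustration, not taken from a real connector):

```python
from typing import Any, Iterable, List, Mapping, Optional, Tuple

import requests
from airbyte_cdk.sources import AbstractSource
from airbyte_cdk.sources.streams import Stream
from airbyte_cdk.sources.streams.http import HttpStream


class Charges(HttpStream):
    """One Stream per API resource — here, a hypothetical /charges endpoint."""

    url_base = "https://api.example.com/v1/"
    primary_key = "id"

    def path(self, **kwargs) -> str:
        return "charges"

    def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
        return None  # single page, to keep the sketch short

    def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping[str, Any]]:
        yield from response.json().get("data", [])


class ExampleSource(AbstractSource):
    """The Source validates the config and declares the Streams it contains."""

    def check_connection(self, logger, config) -> Tuple[bool, Optional[Any]]:
        return True, None  # a real connector would probe the API here

    def streams(self, config: Mapping[str, Any]) -> List[Stream]:
        return [Charges()]
```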
@@ -892,7 +892,7 @@ complete_oauth_server_output_specification:
</details>

#### Case A: OAuth Flow returns the `access_token` only
When the `access_token` is the only key expected after the successfull `OAuth2.0` authentication
When the `access_token` is the only key expected after the successful `OAuth2.0` authentication

<details>
<summary>Example Declarative OAuth Specification</summary>
@@ -975,7 +975,7 @@ oauth_config_specification:


#### Case B: OAuth Flow returns the `refresh_token` only
When the `refresh_token` is the only key expected after the successfull `OAuth2.0` authentication
When the `refresh_token` is the only key expected after the successful `OAuth2.0` authentication

<details>
<summary>Example Declarative OAuth Specification</summary>
@@ -1015,7 +1015,7 @@ oauth_config_specification:
</details>

#### Case C: OAuth Flow returns the `access_token` and the `refresh_token`
When the the `access_token` and the `refresh_token` are the only keys expected after the successfull `OAuth2.0` authentication
When the `access_token` and the `refresh_token` are the only keys expected after the successful `OAuth2.0` authentication
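As a sketch of the shape Case C's mapping tends to take (hedged: patterned on the `complete_oauth_server_output_specification` block referenced earlier; the `credentials` path is an assumption, not taken from this doc):

```yaml
complete_oauth_server_output_specification:
  required:
    - access_token
    - refresh_token
  properties:
    access_token:
      type: string
      path_in_connector_config:
        - credentials
        - access_token
    refresh_token:
      type: string
      path_in_connector_config:
        - credentials
        - refresh_token
```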

<details>
<summary>Example Declarative OAuth Specification</summary>
2 changes: 1 addition & 1 deletion docs/enterprise-setup/upgrading-from-community.md
@@ -19,7 +19,7 @@ These instructions are for you if:
You must first update to the latest Open Source Community release. We assume you are running the following steps from the root of the `airbytehq/airbyte-platform` cloned repo.

1. Determine your current helm release name by running `helm list`. This will now be referred to as `[RELEASE_NAME]` for the rest of this guide.
2. Upgrade to the latest Open Source Community release. The output will now be refered to as `[RELEASE_VERSION]` for the rest of this guide:
2. Upgrade to the latest Open Source Community release. The output will now be referred to as `[RELEASE_VERSION]` for the rest of this guide:

```sh
helm upgrade [RELEASE_NAME] airbyte/airbyte
```
2 changes: 1 addition & 1 deletion docs/integrations/destinations/duckdb-migrations.md
@@ -10,4 +10,4 @@ MotherDuck users will need to log into the MotherDuck UI at https://app.motherduck.com/

This version updates the DuckDB libraries from `v0.8.1` to `v0.9.1`. Note that DuckDB `0.9.x` is not backwards compatible with prior versions of DuckDB. Please see the [DuckDB 0.9.0 release notes](https://github.com/duckdb/duckdb/releases/tag/v0.9.0) for more information and for upgrade instructions.

MotherDuck users will need to log into the MotherDuck UI at https://app.motherduck.com/ and click "Start Upgrade". The upgrade prompt will automatically appear the next time the user logs in. If the prompt does not appear, then your database has been upgraded automatically, and in this case you are ready to to use the latest version of the connector.
MotherDuck users will need to log into the MotherDuck UI at https://app.motherduck.com/ and click "Start Upgrade". The upgrade prompt will automatically appear the next time the user logs in. If the prompt does not appear, then your database has been upgraded automatically, and in this case you are ready to use the latest version of the connector.
2 changes: 1 addition & 1 deletion docs/integrations/destinations/duckdb.md
@@ -32,7 +32,7 @@ We do not recommend providing your API token in the `md:` connection string, as

### Authenticating to MotherDuck

For authentication, you can can provide your [MotherDuck Service Credential](https://motherduck.com/docs/authenticating-to-motherduck/#syntax) as the `motherduck_api_key` configuration option.
For authentication, you can provide your [MotherDuck Service Credential](https://motherduck.com/docs/authenticating-to-motherduck/#syntax) as the `motherduck_api_key` configuration option.
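For reference, a sketch of the equivalent connection in plain Python (hedged: the database name is a placeholder, and `motherduck_token` is the DuckDB-level setting that the connector's `motherduck_api_key` option maps to):

```python
import duckdb

# "md:" routes the connection to MotherDuck; the service credential is passed
# via config rather than embedded in the connection string.
con = duckdb.connect("md:my_db", config={"motherduck_token": "<SERVICE_CREDENTIAL>"})
print(con.sql("SELECT 42").fetchall())
```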

### Sync Overview

2 changes: 1 addition & 1 deletion docs/integrations/destinations/firebolt.md
@@ -89,7 +89,7 @@ Firebolt. Each table will contain 3 columns:
- `_airbyte_emitted_at`: a timestamp representing when the event was pulled from the data source.
The column type in Firebolt is `TIMESTAMP`.
- `_airbyte_data`: a json blob representing the event data. The column type in Firebolt is `VARCHAR`
but can be be parsed with JSON functions.
but can be parsed with JSON functions.
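For example, a field can be pulled out of that blob along these lines (a sketch only: the raw table and field names are illustrative, and the `JSON_EXTRACT` signature is an assumption — check the JSON-function reference for your Firebolt version):

```sql
-- Read one attribute out of the _airbyte_data VARCHAR blob.
SELECT
  _airbyte_emitted_at,
  JSON_EXTRACT(_airbyte_data, '/email', 'TEXT') AS email
FROM _airbyte_raw_users;
```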

## Changelog

2 changes: 1 addition & 1 deletion docs/integrations/destinations/motherduck.md
@@ -73,7 +73,7 @@ This connector is primarily designed to work with MotherDuck and local DuckDB files
| Version | Date | Pull Request | Subject |
| :------ | :--------- | :------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------- |
| 0.1.18 | 2025-03-01 | [54737](https://github.com/airbytehq/airbyte/pull/54737) | Update airbyte-cdk to ^6.0.0 in destination-motherduck |
| 0.1.17 | 2024-12-26 | [50425](https://github.com/airbytehq/airbyte/pull/50425) | Fix bug overwrite write method not not saving all batches |
| 0.1.17 | 2024-12-26 | [50425](https://github.com/airbytehq/airbyte/pull/50425) | Fix bug overwrite write method not saving all batches |
| 0.1.16 | 2024-12-06 | [48562](https://github.com/airbytehq/airbyte/pull/48562) | Improved handling of config parameters during SQL engine creation. |
| 0.1.15 | 2024-11-07 | [48405](https://github.com/airbytehq/airbyte/pull/48405) | Updated docs and hovertext for schema, api key, and database name. |
| 0.1.14 | 2024-10-30 | [48006](https://github.com/airbytehq/airbyte/pull/48006) | Fix bug in \_flush_buffer, explicitly register dataframe before inserting |
4 changes: 2 additions & 2 deletions docs/integrations/destinations/oracle.md
@@ -85,7 +85,7 @@ Airbyte has the ability to connect to the Oracle source with 3 network connectivity options:
1. `Unencrypted` the connection will be made using the TCP protocol. In this case, all data over the network will be transmitted in unencrypted form.
2. `Native network encryption` gives you the ability to encrypt database connections, without the configuration overhead of TCP / IP and SSL / TLS and without the need to open and listen on different ports. In this case, the _SQLNET.ENCRYPTION_CLIENT_
option will always be set as _REQUIRED_ by default: The client or server will only accept encrypted traffic, but the user has the opportunity to choose an `Encryption algorithm` according to the security policies he needs.
3. `TLS Encrypted` (verify certificate) - if this option is selected, data transfer will be transfered using the TLS protocol, taking into account the handshake procedure and certificate verification. To use this option, insert the content of the certificate issued by the server into the `SSL PEM file` field
3. `TLS Encrypted` (verify certificate) - if this option is selected, data transfer will be transferred using the TLS protocol, taking into account the handshake procedure and certificate verification. To use this option, insert the content of the certificate issued by the server into the `SSL PEM file` field
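For option 2, the client-side behavior described above is driven by `sqlnet.ora`; a sketch (the algorithm list is an example — pick one per your security policy):

```
# sqlnet.ora (client side) — illustrative fragment, not a complete file
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
```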

## Changelog

@@ -114,4 +114,4 @@ Airbyte has the ability to connect to the Oracle source with 3 network connectivity options:
| 0.1.3 | 2021-07-21 | [\#3555](https://github.com/airbytehq/airbyte/pull/3555) | Partial Success in BufferedStreamConsumer |
| 0.1.2 | 2021-07-20 | [\#4874](https://github.com/airbytehq/airbyte/pull/4874) | Require `sid` instead of `database` in connector specification |

</details>
</details>
2 changes: 1 addition & 1 deletion docs/integrations/destinations/vectara.md
@@ -22,7 +22,7 @@ The Vectara destination connector supports Full Refresh Overwrite, Full Refresh

All streams will be output into a corpus in Vectara whose name must be specified in the config.

Note that there are no restrictions in naming the Vectara corpus and if a corpus with the specified name is not found, a new corpus with that name will be created. Also, if multiple corpora exists with the same name, an error will be returned as Airbyte will be unable to determine the prefered corpus.
Note that there are no restrictions in naming the Vectara corpus and if a corpus with the specified name is not found, a new corpus with that name will be created. Also, if multiple corpora exists with the same name, an error will be returned as Airbyte will be unable to determine the preferred corpus.

### Features

2 changes: 1 addition & 1 deletion docs/integrations/sources/amazon-sqs.md
@@ -21,7 +21,7 @@ The Amazon SQS source syncs the SQS API, refer: https://docs.aws.amazon.com/AWSS

### Supported Streams

This Source is capable of syncing the following core Action that would be recieved as streams for sync:
This Source is capable of syncing the following core Action that would be received as streams for sync:

- [RecieveMessage](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html)
- [QueueAttributes](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html)
2 changes: 1 addition & 1 deletion docs/integrations/sources/confluence.md
@@ -87,7 +87,7 @@ The Confluence connector should not run into Confluence API limitations under normal usage
| 0.2.9 | 2024-07-06 | [41013](https://github.com/airbytehq/airbyte/pull/41013) | Update dependencies |
| 0.2.8 | 2024-06-25 | [40436](https://github.com/airbytehq/airbyte/pull/40436) | Update dependencies |
| 0.2.7 | 2024-06-22 | [40115](https://github.com/airbytehq/airbyte/pull/40115) | Update dependencies |
| 0.2.6 | 2024-06-15 | [39495](https://github.com/airbytehq/airbyte/pull/39495) | Fix parameters as comma seperated single string |
| 0.2.6 | 2024-06-15 | [39495](https://github.com/airbytehq/airbyte/pull/39495) | Fix parameters as comma separated single string |
| 0.2.5 | 2024-06-06 | [39261](https://github.com/airbytehq/airbyte/pull/39261) | [autopull] Upgrade base image to v1.2.2 |
| 0.2.4 | 2024-05-14 | [38137](https://github.com/airbytehq/airbyte/pull/38137) | Make connector compatible with the builder |
| 0.2.3 | 2024-04-19 | [37143](https://github.com/airbytehq/airbyte/pull/37143) | Upgrade to CDK 0.80.0 and manage dependencies with Poetry. |
2 changes: 1 addition & 1 deletion docs/integrations/sources/hubspot.md
@@ -227,7 +227,7 @@ The HubSpot source connector supports the following streams:
- Fewer than 10,000 records are being synced
- **EngagementsAll** if either of these criteria are not met.

Because of this, the `engagements` stream can be slow to sync if it hasn't synced within the last 30 days and/or is generating large volumes of new data. To accomodate for this limitation, we recommend scheduling more frequent syncs.
Because of this, the `engagements` stream can be slow to sync if it hasn't synced within the last 30 days and/or is generating large volumes of new data. To accommodate for this limitation, we recommend scheduling more frequent syncs.

### Notes on the `Forms` and `Form Submissions` stream

2 changes: 1 addition & 1 deletion docs/integrations/sources/mailchimp-migrations.md
@@ -51,7 +51,7 @@ Depending on the destination type, you may not be prompted to reset your data

- The `._links` field, which contained non-user-relevant Mailchimp metadata, has been removed from all streams.
- All instances of datetime fields have had their type changed from `string` to airbyte-type `timestamp-with-timezone`. This change should ensure greater precision and consistency in how datetime information is represented and processed by destinations.
- The Mailchimp API returns many fields without data as empty strings. To accomodate the above changes, empty strings are now converted to null values:
- The Mailchimp API returns many fields without data as empty strings. To accommodate the above changes, empty strings are now converted to null values:

```md
{"id": "record_id", "last_opened": ""} -> {"id": "record_id", "last_opened": null}
```
2 changes: 1 addition & 1 deletion docs/integrations/sources/oracle.md
@@ -126,7 +126,7 @@ If you do not see a type in this list, assume that it is coerced into a string.

Airbyte has the ability to connect to the Oracle source with 3 network connectivity options:

1.`Unencrypted` the connection will be made using the TCP protocol. In this case, all data over the network will be transmitted in unencrypted form. 2.`Native network encryption` gives you the ability to encrypt database connections, without the configuration overhead of TCP / IP and SSL / TLS and without the need to open and listen on different ports. In this case, the _SQLNET.ENCRYPTION_CLIENT_ option will always be set as _REQUIRED_ by default: The client or server will only accept encrypted traffic, but the user has the opportunity to choose an `Encryption algorithm` according to the security policies he needs. 3.`TLS Encrypted` \(verify certificate\) - if this option is selected, data transfer will be transfered using the TLS protocol, taking into account the handshake procedure and certificate verification. To use this option, insert the content of the certificate issued by the server into the `SSL PEM file` field
1.`Unencrypted` the connection will be made using the TCP protocol. In this case, all data over the network will be transmitted in unencrypted form. 2.`Native network encryption` gives you the ability to encrypt database connections, without the configuration overhead of TCP / IP and SSL / TLS and without the need to open and listen on different ports. In this case, the _SQLNET.ENCRYPTION_CLIENT_ option will always be set as _REQUIRED_ by default: The client or server will only accept encrypted traffic, but the user has the opportunity to choose an `Encryption algorithm` according to the security policies he needs. 3.`TLS Encrypted` \(verify certificate\) - if this option is selected, data transfer will be transferred using the TLS protocol, taking into account the handshake procedure and certificate verification. To use this option, insert the content of the certificate issued by the server into the `SSL PEM file` field

## Changelog

2 changes: 1 addition & 1 deletion docs/integrations/sources/paypal-transaction.md
@@ -43,7 +43,7 @@ After creating your account you will be able to get your `Client ID` and `Secret`

By default, syncs are run with a slice period of 7 days. If you see errors with the message `Result set size is greater than the maximum limit` or an error code like `RESULTSET_TOO_LARGE`:

- Try lower the the size of the slice period in your optional parameters in your connection configuration.
- Try lower the size of the slice period in your optional parameters in your connection configuration.
- You can try to lower the scheduling sync window in case a day slice period is not enough. Lowering the sync period it may help avoid reaching the 10K limit.

:::
@@ -37,7 +37,7 @@ There are some notable shortcomings associated with the Xmin replication method:
- Schema changes are not supported automatically for CDC sources. Reset and resync data if you make a schema change.
- The records produced by `DELETE` statements only contain primary keys. All other data fields are unset.
- Log-based replication only works for master instances of Postgres. CDC cannot be run from a read-replica of your primary database.
- An Airbyte database source using CDC replication can only be used with a single Airbyte destination. This is due to how Postgres CDC is implemented - each destination would recieve only part of the data available in the replication slot.
- An Airbyte database source using CDC replication can only be used with a single Airbyte destination. This is due to how Postgres CDC is implemented - each destination would receive only part of the data available in the replication slot.
- Using logical replication increases disk space used on the database server. The additional data is stored until it is consumed.
- Set frequent syncs for CDC to ensure that the data doesn't fill up your disk space.
- If you stop syncing a CDC-configured Postgres instance with Airbyte, delete the replication slot. Otherwise, it may fill up your disk space.
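Both disk-space caveats above can be acted on directly in Postgres; a sketch (the slot name `airbyte_slot` is a placeholder — use the slot your source was configured with):

```sql
-- How much WAL each replication slot is forcing the server to retain.
SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- After decommissioning a CDC source, drop its slot so the WAL can be reclaimed.
SELECT pg_drop_replication_slot('airbyte_slot');
```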
2 changes: 1 addition & 1 deletion docs/integrations/sources/ringcentral.md
@@ -4,7 +4,7 @@ This page contains the setup guide and reference information for the [RingCentra

## Prerequisites

Auth Token (which acts as bearer token), account id and extension id are mandate for this connector to work, Account token could be recieved by following (Bearer ref - https://developers.ringcentral.com/api-reference/authentication), and account_id and extension id could be seen at response to basic api call to an endpoint with ~ operator. Example- (https://platform.devtest.ringcentral.com/restapi/v1.0/account/~/extension/~/business-hours)
Auth Token (which acts as bearer token), account id and extension id are mandate for this connector to work, Account token could be received by following (Bearer ref - https://developers.ringcentral.com/api-reference/authentication), and account_id and extension id could be seen at response to basic api call to an endpoint with ~ operator. Example- (https://platform.devtest.ringcentral.com/restapi/v1.0/account/~/extension/~/business-hours)
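In practice, that lookup is a single authenticated request (a sketch; `$RC_TOKEN` stands in for the bearer token described above):

```sh
# The ~ operator resolves to the calling account/extension; the concrete
# account_id and extension id appear in the response of a basic call like this.
curl -H "Authorization: Bearer $RC_TOKEN" \
  "https://platform.devtest.ringcentral.com/restapi/v1.0/account/~/extension/~/business-hours"
```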

## Setup guide

2 changes: 1 addition & 1 deletion docs/integrations/sources/zendesk-support.md
@@ -229,7 +229,7 @@ The Zendesk connector ideally should not run into Zendesk API limitations under normal usage
| 2.6.5 | 2024-05-23 | [38607](https://github.com/airbytehq/airbyte/pull/38607) | Migrate to cursor based pagination in stream `Organization memberships` |
| 2.6.4 | 2024-05-20 | [38310](https://github.com/airbytehq/airbyte/pull/38310) | Fix record filter for `Ticket Metrics` stream |
| 2.6.3 | 2024-05-02 | [36669](https://github.com/airbytehq/airbyte/pull/36669) | Schema descriptions |
| 2.6.2 | 2024-02-05 | [37761](https://github.com/airbytehq/airbyte/pull/37761) | Add stop condition for `Ticket Audits` when recieved old records; Ignore 403 and 404 status codes. |
| 2.6.2 | 2024-02-05 | [37761](https://github.com/airbytehq/airbyte/pull/37761) | Add stop condition for `Ticket Audits` when received old records; Ignore 403 and 404 status codes. |
| 2.6.1 | 2024-04-30 | [37723](https://github.com/airbytehq/airbyte/pull/37723) | Add %Y-%m-%dT%H:%M:%S%z to cursor_datetime_formats |
| 2.6.0 | 2024-04-29 | [36823](https://github.com/airbytehq/airbyte/pull/36823) | Migrate to low code; Add new stream `Ticket Activities` |
| 2.5.0 | 2024-04-25 | [36388](https://github.com/airbytehq/airbyte/pull/36388) | Fix data type of field in `Tickets` stream schema stream. |
2 changes: 1 addition & 1 deletion docs/operator-guides/upgrading-airbyte.md
@@ -54,5 +54,5 @@ When deployed this way, you'll upgrade by modifying the `values.yaml` file. If y
Run `abctl local install` to upgrade to the latest version of Airbyte. If you'd like to ensure you're running the latest version of Airbyte, you can check the value of the Helm Chart's app version by running `abctl local status`.

:::note
Occasionally, `abctl` itself will need to be updated. Do that by running `brew update abctl`. This is seperate from upgrading Airbyte and only upgrades the command line tool.
Occasionally, `abctl` itself will need to be updated. Do that by running `brew update abctl`. This is separate from upgrading Airbyte and only upgrades the command line tool.
:::