misc: address badwords complaints
bagder committed Aug 23, 2024
1 parent c2d5cdb commit 456a032
Showing 20 changed files with 69 additions and 68 deletions.
4 changes: 2 additions & 2 deletions build/autotools.md
@@ -60,8 +60,8 @@ One of the differences between linking with a static library compared to
linking with a shared one is in how shared libraries handle their own
dependencies while static ones do not. In order to link with library `xyz` as
a shared library, it is basically a matter of adding `-lxyz` to the linker
-command line no matter which other libraries `xyz` itself was built to
-use. But, if that `xyz` is instead a static library we also need to specify
+command line no matter which other libraries `xyz` itself was built to use.
+However, if that `xyz` is instead a static library we also need to specify
each dependency of `xyz` on the linker command line. curl's configure cannot
keep up with or know all possible dependencies for all the libraries it can be
made to build with, so users wanting to build with static libs mostly need to
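As a rough illustration of the difference this hunk describes (the library names here are hypothetical, not from the curl docs), linking an application against a shared `libxyz` versus a static one could look something like this:

    # shared libxyz: its own dependencies are resolved automatically
    cc app.c -o app -lxyz

    # static libxyz: its dependencies (assumed here to be libabc and libz)
    # must also be listed explicitly on the linker command line
    cc app.c -o app -lxyz -labc -lz

Which extra `-l` flags are needed depends entirely on how that particular `libxyz` was built, which is why configure cannot know the list up front.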
5 changes: 4 additions & 1 deletion cmdline/copyas.md
@@ -27,7 +27,10 @@ Chromium)._

## From Safari

-In Safari, the "development" menu is not visible until you go into **preferences->Advanced** and enable it. But once you have done that, you can select **Show web inspector** in that development menu and get to see a new console pop up that is similar to the development tools of Firefox and Chrome.
+In Safari, the "development" menu is not visible until you go into
+**preferences->Advanced** and enable it. Once you have done that, you can
+select **Show web inspector** in that development menu and get to see a new
+console pop up that is similar to the development tools of Firefox and Chrome.

Select the network tab, reload the webpage and then you can right click the
particular resources that you want to fetch with curl, as if you did it with
41 changes: 20 additions & 21 deletions cmdline/exitcode.md
@@ -41,18 +41,17 @@ A basic Unix shell script could look like something like this:
not enabled or was explicitly disabled at build-time. To make curl able
to do this, you probably need another build of libcurl.

-5. Couldn't resolve proxy. The address of the given proxy host could not be
+5. Could not resolve proxy. The address of the given proxy host could not be
resolved. Either the given proxy name is just wrong, or the DNS server is
-misbehaving and does not know about this name when it should or perhaps
-even the system you run curl on is misconfigured so that it does not
-find/use the correct DNS server.
+misbehaving and does not know about this name when it should or perhaps even
+the system you run curl on is misconfigured so that it does not find/use the
+correct DNS server.

-6. Couldn't resolve host. The given remote host's address was not
-resolved. The address of the given server could not be resolved. Either
-the given hostname is just wrong, or the DNS server is misbehaving and
-does not know about this name when it should or perhaps even the system you
-run curl on is misconfigured so that it does not find/use the correct DNS
-server.
+6. Could not resolve host. The given remote host's address was not resolved.
+The address of the given server could not be resolved. Either the given
+hostname is just wrong, or the DNS server is misbehaving and does not know
+about this name when it should or perhaps even the system you run curl on is
+misconfigured so that it does not find/use the correct DNS server.

7. Failed to connect to host. curl managed to get an IP address to the
machine and it tried to set up a TCP connection to the host but
@@ -103,15 +102,15 @@ A basic Unix shell script could look like something like this:
passive mode. You might be able to work-around this problem by using PORT
instead, with the `--ftp-port` option.

-15. FTP cannot get host. Couldn't use the host IP address we got in the
+15. FTP cannot get host. Could not use the host IP address we got in the
227-line. This is most likely an internal error.

16. HTTP/2 error. A problem was detected in the HTTP2 framing layer. This is
somewhat generic and can be one out of several problems, see the error
message for details.

-17. FTP could not set binary. Couldn't change transfer method to binary. This
-server is broken. curl needs to set the transfer to the correct mode
+17. FTP could not set binary. Could not change transfer method to binary.
+This server is broken. curl needs to set the transfer to the correct mode
before it is started as otherwise the transfer cannot work.

18. Partial file. Only a part of the file was transferred. When the transfer
@@ -199,9 +198,9 @@ A basic Unix shell script could look like something like this:
asking to resume a transfer that then ends up not possible to do, this
error can get returned. For FILE, FTP or SFTP.

-37. Couldn't read the given file when using the FILE:// scheme. Failed to
-open the file. The file could be non-existing or is it a permission
-problem perhaps?
+37. Could not read the given file when using the FILE:// scheme. Failed to
+open the file. The file could be non-existing or is it a permission problem
+perhaps?

38. LDAP cannot bind. LDAP "bind" operation failed, which is a necessary step
in the LDAP operation and thus this means the LDAP query could not be
@@ -287,12 +286,12 @@ A basic Unix shell script could look like something like this:
57. **Not used**

58. Problem with the local certificate. The client certificate had a problem
-so it could not be used. Permissions? The wrong pass phrase?
+so it could not be used. Permissions? The wrong passphrase?

-59. Couldn't use the specified SSL cipher. The cipher names need to be
-specified exactly and they are also unfortunately specific to the
-particular TLS backend curl has been built to use. For the current list
-of support ciphers and how to write them, see the online docs at
+59. Could not use the specified SSL cipher. The cipher names need to be
+specified exactly and they are also unfortunately specific to the particular
+TLS backend curl has been built to use. For the current list of support
+ciphers and how to write them, see the online docs at
[https://curl.se/docs/ssl-ciphers.html](https://curl.se/docs/ssl-ciphers.html).

60. Peer certificate cannot be authenticated with known CA certificates. This
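The hunks above show only part of the exit code list, but they are what a wrapper script would test. A minimal sketch, with a made-up URL and only a few of the codes handled:

    #!/bin/sh
    # check curl's exit code and act on a few of the values listed above
    curl -sS -o /dev/null "https://example.com/"
    rc=$?
    case $rc in
      0)  echo "transfer ok" ;;
      6)  echo "could not resolve host" ;;
      7)  echo "failed to connect to host" ;;
      28) echo "operation timed out" ;;
      *)  echo "curl failed with exit code $rc" ;;
    esac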
4 changes: 2 additions & 2 deletions cmdline/urls/ftptype.md
@@ -13,8 +13,8 @@ ASCII could then be made with:

curl "ftp://example.com/foo;type=A"

-And while curl defaults to binary transfers for FTP, the URL format allows you
-to also specify the binary type with type=I:
+curl defaults to binary transfers for FTP, but the URL format allows you to
+specify the binary type with `type=I`:

curl "ftp://example.com/foo;type=I"

4 changes: 2 additions & 2 deletions cmdline/urls/globbing.md
@@ -64,8 +64,8 @@ Or download all the images of a chess board, indexed by two coordinates ranged

curl -O "http://example.com/chess-[0-7]x[0-7].jpg"

-And you can, of course, mix ranges and series. Get a week's worth of logs for
-both the web server and the mail server:
+You can, of course, mix ranges and series. Get a week's worth of logs for both
+the web server and the mail server:

curl -O "http://example.com/{web,mail}-log[0-6].txt"

7 changes: 3 additions & 4 deletions helpers/sharing.md
@@ -62,10 +62,9 @@ run its own thread and transfer data, but you still want the different
transfers to share data. Then you need to set the mutex callbacks.

If you do not use threading and you *know* you access the shared object in a
-serial one-at-a-time manner you do not need to set any locks. But if there is
-ever more than one transfer that access share object at a time, it needs to
-get mutex callbacks setup to prevent data destruction and possibly even
-crashes.
+serial one-at-a-time manner you do not need to set any locks. If there is ever
+more than one transfer that access share object at a time, it needs to get
+mutex callbacks setup to prevent data destruction and possibly even crashes.

Since libcurl itself does not know how to lock things or even what threading
model you are using, you must make sure to do mutex locks that only allows one
2 changes: 1 addition & 1 deletion http/post/multipart.md
@@ -63,7 +63,7 @@ submitted. The particular boundary you see in this example has the random part
`d74496d66958873e` but you, of course, get something different when you run
curl (or when you submit such a form with a browser).

-So after that initial set of headers follows the request body
+After that initial set of headers follows the request body

--------------------------d74496d66958873e
Content-Disposition: form-data; name="person"
2 changes: 1 addition & 1 deletion http/put.md
@@ -11,7 +11,7 @@ identifies the resource and you point out the local file to put there:

curl -T localfile http://example.com/new/resource/file

-`-T` implies a PUT and tell curl which file to send off. But the similarities
+`-T` implies a PUT and tell curl which file to send off. The similarities
between POST and PUT also allows you to send a PUT with a string by using the
regular curl POST mechanism using `-d` but asking for it to use a PUT instead:

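A sketch of what that `-d` plus PUT combination can look like, reusing the example URL from this section (the string is made up):

    curl -d "string to send" -X PUT http://example.com/new/resource/file

Here `-X PUT` overrides the POST method that `-d` would otherwise imply.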
8 changes: 4 additions & 4 deletions http/redirects.md
@@ -107,10 +107,10 @@ a particular site, but since an HTTP redirect might move away to a different
host curl limits what it sends away to other hosts than the original within
the same transfer.

-So if you want the credentials to also get sent to the following hostnames
-even though they are not the same as the original—presumably because you trust
-them and know that there is no harm in doing that—you can tell curl that it is
-fine to do so by using the `--location-trusted` option.
+If you want the credentials to also get sent to the following hostnames even
+though they are not the same as the original—presumably because you trust them
+and know that there is no harm in doing that—you can tell curl that it is fine
+to do so by using the `--location-trusted` option.

# Non-HTTP redirects

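A sketch of the difference discussed in this hunk (hostname and credentials are made up): with plain `-L` the credentials stay with the original host, while `--location-trusted` also sends them to hosts the redirects lead to.

    # credentials only go to the original host, even if redirected elsewhere
    curl -L -u alice:secret https://start.example.com/

    # credentials are also sent to the hosts the redirects point at
    curl --location-trusted -u alice:secret https://start.example.com/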
10 changes: 5 additions & 5 deletions http/response.md
@@ -92,11 +92,11 @@ in fact any other compression algorithm that curl understands) by using

A less common feature used with transfer encoding is compression.

-Compression in itself is common. Over time the dominant and web compatible
-way to do compression for HTTP has become to use `Content-Encoding` as
-described in the section above. But HTTP was originally intended and specified
-to allow transparent compression as a transfer encoding, and curl supports
-this feature.
+Compression in itself is common. Over time the dominant and web compatible way
+to do compression for HTTP has become to use `Content-Encoding` as described
+in the section above. HTTP was originally intended and specified to allow
+transparent compression as a transfer encoding, and curl supports this
+feature.

The client then simply asks the server to do compression transfer encoding and
if acceptable, it responds with a header indicating that it does and curl then
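As a sketch of the two approaches this hunk contrasts (URL made up): `--compressed` asks for a compressed content encoding, while `--tr-encoding` asks for compressed transfer encoding.

    # ask the server for a compressed Content-Encoding response
    curl --compressed https://example.com/

    # ask the server for compressed transfer encoding instead
    curl --tr-encoding https://example.com/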
2 changes: 1 addition & 1 deletion install/container.md
@@ -42,7 +42,7 @@ Invoke curl with `podman`:

alias -s curl='podman run -it --rm docker.io/curlimages/curl'

-And simply invoke `curl www.example.com` to make a request
+Simply invoke `curl www.example.com` to make a request

## Running curl in kubernetes

2 changes: 1 addition & 1 deletion install/linux.md
@@ -100,7 +100,7 @@ instead of `zypper`. To install the curl command-line utility:

transactional-update pkg install curl

-And to install the libcurl development package:
+To install the libcurl development package:

transactional-update pkg install libcurl-devel

12 changes: 6 additions & 6 deletions libcurl/globalinit.md
@@ -11,10 +11,10 @@ global state so you should only call it once, and once your program is
completely done using libcurl you can call `curl_global_cleanup()` to
free and clean up the associated global resources the init call allocated.

-libcurl is built to handle the situation where you skip the `curl_global_init()` call, but
-it does so by calling it itself instead (if you did not do it before any actual
-file transfer starts) and it then uses its own defaults. But beware that it is
-still not thread safe even then, so it might cause some "interesting" side
-effects for you. It is much better to call curl_global_init() yourself in a
-controlled manner.
+libcurl is built to handle the situation where you skip the
+`curl_global_init()` call, but it does so by calling it itself instead (if you
+did not do it before any actual file transfer starts) and it then uses its own
+defaults. Beware that it is still not thread safe even then, so it might cause
+some "interesting" side effects for you. It is much better to call
+curl_global_init() yourself in a controlled manner.

10 changes: 5 additions & 5 deletions project/comm.md
@@ -19,11 +19,11 @@ debugging or whatever.
In this day, mailing lists may be considered the old style of communication —
no fancy web forums or similar. Using a mailing list is therefore becoming an
art that is not practiced everywhere and may be a bit strange and unusual to
-you. But fear not. It is just about sending emails to an address that then
-sends that email out to all the subscribers. Our mailing lists have at most a
-few thousand subscribers. If you are mailing for the first time, it might be
-good to read a few old mails first to get to learn the culture and what's
-considered good practice.
+you. It is just about sending emails to an address that then sends that email
+out to all the subscribers. Our mailing lists have at most a few thousand
+subscribers. If you are mailing for the first time, it might be good to read a
+few old mails first to get to learn the culture and what's considered good
+practice.

The mailing lists and the bug tracker have changed hosting providers a few
times and there are reasons to suspect it might happen again in the future. It
2 changes: 1 addition & 1 deletion protocols/http.md
@@ -43,7 +43,7 @@ A server always responds to an HTTP request unless something is wrong.

## The URL converted to a request

-So when an HTTP client is given a URL to operate on, that URL is then used,
+When an HTTP client is given a URL to operate on, that URL is then used,
picked apart and those parts are used in various places in the outgoing
request to the server. Let's take an example URL:

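As a sketch of that conversion (URL made up, and only the most basic request headers shown): a URL such as `http://www.example.com/path/to/file` turns into an outgoing request roughly like this, with the path going into the request line and the hostname into the `Host:` header.

    GET /path/to/file HTTP/1.1
    Host: www.example.com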
4 changes: 2 additions & 2 deletions transfers/conn/keepalive.md
@@ -2,8 +2,8 @@

Once a TCP connection has been established, that connection is defined to be
valid until one side closes it. Once the connection has entered the connected
-state, it will remain connected indefinitely. But, in reality, the connection
-will not last indefinitely. Many firewalls or NAT systems close connections if
+state, it will remain connected indefinitely. In reality, the connection will
+not last indefinitely. Many firewalls or NAT systems close connections if
there has been no activity in some time period. The Keep Alive signal can be
used to refrain intermediate hosts from closing idle connection due to
inactivity.
2 changes: 1 addition & 1 deletion transfers/drive/multi-socket.md
@@ -89,7 +89,7 @@ registered:

### timer_callback

-The application is in control and waits for socket activity. But even without
+The application is in control and waits for socket activity. Even without
socket activity there are things libcurl needs to do. Timeout things, calling
the progress callback, starting over a retry or failing a transfer that takes
too long, etc. To make that work, the application must also make sure to
4 changes: 2 additions & 2 deletions transfers/drive/multi.md
@@ -90,8 +90,8 @@ codes*):
Both these loops let you use one or more file descriptors of your own on which
to wait, like if you read from your own sockets or a pipe or similar.

-And again, you can add and remove easy handles to the multi handle at any
-point during the looping. Removing a handle mid-transfer aborts that transfer.
+Again: you can add and remove easy handles to the multi handle at any point
+during the looping. Removing a handle mid-transfer aborts that transfer.

## When is a single transfer done?

8 changes: 4 additions & 4 deletions usingcurl/connections/keepalive.md
@@ -18,10 +18,10 @@ frames" back and forth when it would otherwise be totally idle. It helps idle
connections to detect breakage even when no traffic is moving over it, and
helps intermediate systems not consider the connection dead.

-curl uses TCP keepalive by default for the reasons mentioned here. But there
-might be times when you want to *disable* keepalive or you may want to change
-the interval between the TCP "pings" (curl defaults to 60 seconds). You can
-switch off keepalive with:
+curl uses TCP keepalive by default for the reasons mentioned here. There might
+be times when you want to *disable* keepalive or you may want to change the
+interval between the TCP "pings" (curl defaults to 60 seconds). You can switch
+off keepalive with:

curl --no-keepalive https://example.com/

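For the other case mentioned in this hunk, changing the interval rather than disabling keepalive altogether, a sketch (the 300 second value is arbitrary):

    curl --keepalive-time 300 https://example.com/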
4 changes: 2 additions & 2 deletions usingcurl/connections/name.md
@@ -80,7 +80,7 @@ want to send a test request to one specific server out of the load balanced
set (`load1.example.com` for example) you can instruct curl to do that.

You *can* still use `--resolve` to accomplish this if you know the specific IP
-address of load1. But without having to first resolve and fix the IP address
-separately, you can tell curl:
+address of load1. Without having to first resolve and fix the IP address
+separately, you can tell curl:
separately, you can tell curl:

curl --connect-to www.example.com:80:load1.example.com:80 \
@@ -110,6 +110,6 @@ end of the DNS communication to a specific IP address and with
use for its DNS requests.

These `--dns-*` options are advanced and are only meant for people who know
-what they are doing and understand what these options do. But they offer
+what they are doing and understand what these options do. They offer
customizable DNS name resolution operations.
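For the `--resolve` alternative mentioned in the first hunk of this file (usable when the IP address of load1 is already known), a sketch with a made-up address:

    curl --resolve www.example.com:80:192.0.2.1 http://www.example.com/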
